Commit 8a67200b authored by David Johnson

Add Docker documentation.

Also refactor VM stuff a little, and add a small Xen section.
parent a1372771
......@@ -6,7 +6,7 @@
@section[#:tag "disk-images"]{Disk Images}
Most disk images in @(tb) are stored and distributed in the
@hyperlink["http://www.flux.utah.edu/paper/hibler-atc03"]{Frisbee} disk
image format. They are stored at block level, meaning that, in theory,
any filesystem can be used. In practice, Frisbee's filesystem-aware
......
......@@ -155,13 +155,32 @@ belong to. Currently, the only such permission is the ability to make a profile
visible only to the owning project. We expect to introduce more
project-specific permissions features in the future.
@section[#:tag "physical-machines"]{Physical Machines}
Users of @(tb) may get exclusive, root-level control over @italic{physical
machines}. When allocated this way, no layers of virtualization or indirection
get in the way of performance, and users can be sure that no other
users have access to the machines at the same time. This is an ideal situation
for @seclink["repeatable-research"]{repeatable research}.
Physical machines are @seclink["disk-images"]{re-imaged} between users, so you
can be sure that your physical machines don't have any state left around from
the previous user. You can find descriptions of the
hardware in @(tb)'s clusters in the @seclink["hardware"]{hardware} chapter.
@apt-only{
Physical machines are relatively scarce, and getting access to large numbers of
them, or holding them for a long time, may require
@seclink["getting-help"]{contacting @(tb) staff}.
}
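In geni-lib terms, requesting a physical machine is brief; a minimal sketch (not from this commit — the hardware type name is illustrative, see the hardware chapter for the types actually available):

```python
"""A sketch of requesting a single physical machine."""
import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
# Exclusive, root-level control of one physical machine.
node = request.RawPC("node")
# Optionally pin to a specific hardware type (illustrative name).
node.hardware_type = "d430"
portal.context.printRequestRSpec()
```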
@section[#:tag "virtual-machines"]{Virtual Machines and Containers}
@apt-only{
The default node type in @(tb) is a @italic{virtual machine}, or VM. VMs in
@(tb) are currently implemented on
@hyperlink["https://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-460.html"]{Xen
4.6} using
@hyperlink["http://wiki.xenproject.org/wiki/Paravirtualization_(PV)"]{paravirtualization}.
Users have full root access with their VMs via @tt{sudo}.
......@@ -180,29 +199,52 @@ project-specific permissions features in the future.
}
@clab-only{
While @(tb) does have the ability to provision virtual machines
(using the Xen hypervisor) and containers (using Docker), we expect that the dominant use of @(tb) is
that users will provision @seclink["physical-machines"]{physical machines}.
Users (or the cloud software stacks that they run) may build their own
virtual machines on these physical nodes using whatever hypervisor they
wish. However, if your experiment could still benefit from virtual
machines or containers (e.g. to form a scalable pool of clients issuing
requests to your cloud software stack), you can find more detail in
@seclink["virtual-machines-advanced"]{the advanced topics section}.
}
@not-apt-clab-only{
To support experiments that must scale to large numbers of nodes, @(tb)
provides @italic{virtual nodes}. A @(tb) virtual node is a virtual
machine or container running on top of a regular operating system. If an
experiment's per-node CPU, memory and network requirements are modest,
using virtual nodes allows the experiment to scale to tens or
hundreds of times as many nodes as there
are available physical machines in @(tb). Virtual nodes are also useful
for prototyping experiments and debugging code without tying up
significant amounts of physical resources.
@(tb) virtual nodes are based on the Xen hypervisor or Docker
containers. With some limitations, virtual nodes can act in any role
that a normal @(tb) node can: edge node, router, traffic generator,
etc. You can run startup commands, remotely login over ssh, run software
as root, use common networking tools like tcpdump or traceroute, modify
routing tables, capture and load custom images, and reboot. You can
construct arbitrary topologies of links and LANs mixing virtual and real
nodes.
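The mixed topologies described above can be sketched in geni-lib roughly as follows (a sketch, not from this commit; node and interface names are illustrative):

```python
"""A sketch of a link mixing a physical node and a Xen virtual node."""
import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
# One physical machine and one virtual node in the same topology.
real = request.RawPC("real")
virt = request.XenVM("virt")
# Join them with a link, mixing real and virtual nodes.
link = request.Link("link")
link.addInterface(real.addInterface("if1"))
link.addInterface(virt.addInterface("if1"))
portal.context.printRequestRSpec()
```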
@(tb) supports the use of native Docker images (which use a different
format than other @(tb) images). You can use external,
publicly-available images or Dockerfiles, or you can automatically
create and use @seclink["docker-augmentation"]{augmented disk
images}, which are external Docker images that are automatically
repackaged with the @(tb) software and its dependencies, so that all
@(tb) features can be supported inside the container.
Virtual nodes in @(tb) are hosted on either @italic{dedicated} or
@italic{shared} physical machines. In dedicated mode, you may login to
the physical machines hosting your VMs; in shared mode, no one else has
access to your VMs, but there are other users on the same hardware whose
activities may affect the performance of your VMs.
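For the dedicated case, a sketch (assuming the same geni-lib API as the other examples in this chapter) that places a container on a dedicated, user-accessible host:

```python
"""A sketch of a Docker container on a dedicated (non-shared) host."""
import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# Dedicated mode: the hosting physical machine belongs to this
# experiment only, and you may also log in to the host itself.
node.exclusive = True
portal.context.printRequestRSpec()
```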
To learn how to allocate and configure virtual nodes, see
@seclink["virtual-machines-advanced"]{the advanced topics section}.
}
......@@ -50,6 +50,7 @@ control system can be found on CloudLab's @hyperlink[(apturl
@include-section["basic-concepts.scrbl"]
@include-section["reservations.scrbl"]
@include-section["geni-lib.scrbl"]
@include-section["virtual-machines.scrbl"]
@include-section["advanced-topics.scrbl"]
@include-section["hardware.scrbl"]
@include-section["planned.scrbl"]
......
"""An example of a Docker container that mounts a remote blockstore."""
import geni.portal as portal
import geni.rspec.pg as rspec
import geni.rspec.igext as ig
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# Create an interface on the container to connect to the blockstore host.
myintf = node.addInterface("ifbs0")
# Create the blockstore host.
bsnode = ig.RemoteBlockstore("bsnode","/mnt/blockstore")
# Add the blockstore host to the request.
request.addResource(bsnode)
# Map your remote blockstore to the blockstore host.
bsnode.dataset = \
    "urn:publicid:IDN+emulab.net:emulab-ops+ltdataset+johnsond-bs-foo"
bsnode.readonly = False
# Connect the blockstore host to the container.
bslink = request.Link("bslink")
bslink.addInterface(myintf)
bslink.addInterface(bsnode.interface)
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_dockerfile = "https://github.com/docker-library/httpd/raw/38842a5d4cdd44ff4888e8540c0da99009790d01/2.4/Dockerfile"
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
portal.context.printRequestRSpec()
"""An example of constructing a profile with ten Docker containers in a LAN.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a LAN to put containers into.
lan = request.LAN("lan")
# Create ten Docker containers.
for i in range(0,10):
    node = request.DockerContainer("node-%d" % (i))
    # Create an interface.
    iface = node.addInterface("if1")
    # Add the interface to the LAN.
    lan.addInterface(iface)
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image, and customizing its remote access."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
node.docker_ssh_style = "exec"
node.docker_exec_shell = "/bin/bash"
portal.context.printRequestRSpec()
"""An example of a Docker container running a standard, augmented system image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//docker-ubuntu16-std"
portal.context.printRequestRSpec()
"""An example of a Docker container that mounts a remote blockstore."""
import geni.portal as portal
import geni.rspec.pg as rspec
import geni.rspec.igext as ig
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
bs = node.Blockstore("temp-bs","/mnt/tmp")
bs.size = "8GB"
bs.placement = "any"
portal.context.printRequestRSpec()
"""An example of constructing a profile with 20 Docker containers in a LAN,
divided across two container hosts.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import the Emulab specific extensions.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a LAN to put containers into.
lan = request.LAN("lan")
# Create two container hosts, each with ten Docker containers.
for j in range(0,2):
    # Create a container host.
    host = request.RawPC("host-%d" % (j))
    # Select a specific hardware type for the container host.
    host.hardware_type = "d430"
    for i in range(0,10):
        # Create a container.
        node = request.DockerContainer("node-%d-%d" % (j,i))
        # Create an interface.
        iface = node.addInterface("if1")
        # Add the interface to the LAN.
        lan.addInterface(iface)
        # Set this container to be instantiated on the host created in
        # the outer loop.
        node.InstantiateOn(host.client_id)
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM in HVM mode.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import Emulab-specific extensions so we can set node attributes.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Set the XEN_FORCE_HVM custom node attribute to 1 to enable HVM mode:
node.Attribute('XEN_FORCE_HVM','1')
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Docker container.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a Docker container.
node = request.DockerContainer("node")
# Request a container hosted on a shared container host; you will not
# have access to the underlying physical host, and your container will
# not be privileged. Note that if there are no shared hosts available,
# your experiment will be assigned a physical machine to host your container.
node.exclusive = False
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Request a specific number of VCPUs.
node.cores = 4
# Request a specific amount of memory (in MB).
node.ram = 4096
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM with extra fs space.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import Emulab-specific extensions so we can set node attributes.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Set XEN_EXTRAFS to request 8GB of extra space in the 4th partition.
node.Attribute('XEN_EXTRAFS','8')
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
......@@ -86,6 +86,12 @@
(define (wireless-only . stuff)
  (apt-vs-clab #:pnet stuff #:powder stuff))
(define (not-apt-clab-only . stuff)
  (case (tb-mode)
    ('apt "")
    ('clab "")
    (else stuff)))
(define apt-base-url
  (case (tb-mode)
    ('apt "https://www.aptlab.net/")
......
......@@ -6,7 +6,7 @@
#:date (date->string (current-date))]{The Emulab Manual}
@author[
"Eric Eide" "Robert Ricci" "Jacobus (Kobus) Van der Merwe" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler"
"Eric Eide" "Robert Ricci" "Jacobus (Kobus) Van der Merwe" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler" "David Johnson"
]
@;{
......@@ -44,6 +44,7 @@ you can apply to start a new project.
@include-section["emulab-transition.scrbl"]
@include-section["reservations.scrbl"]
@include-section["geni-lib.scrbl"]
@include-section["virtual-machines.scrbl"]
@include-section["advanced-topics.scrbl"]
@include-section["emulab-hardware.scrbl"]
@include-section["planned.scrbl"]
......