Commit 8a67200b authored by David Johnson's avatar David Johnson

Add Docker documentation.

Also refactor VM stuff a little, and add a small Xen section.
parent a1372771
......@@ -6,7 +6,7 @@
@section[#:tag "disk-images"]{Disk Images}
Disk images in @(tb) are stored and distributed in the
Most disk images in @(tb) are stored and distributed in the
@hyperlink["http://www.flux.utah.edu/paper/hibler-atc03"]{Frisbee} disk
image format. They are stored at block level, meaning that, in theory,
any filesystem can be used. In practice, Frisbee's filesystem-aware
......
......@@ -155,13 +155,32 @@ belong to. Currently, the only such permission is the ability to make a profile
visible only to the owning project. We expect to introduce more
project-specific permissions features in the future.
@section[#:tag "virtual-machines"]{Virtual Machines}
@section[#:tag "physical-machines"]{Physical Machines}
Users of @(tb) may get exclusive, root-level control over @italic{physical
machines}. When allocated this way, no layers of virtualization or indirection
get in the way of performance, and users can be sure that no other
users have access to the machines at the same time. This is an ideal situation
for @seclink["repeatable-research"]{repeatable research}.
Physical machines are @seclink["disk-images"]{re-imaged} between users, so you
can be sure that your physical machines don't have any state left around from
the previous user. You can find descriptions of the
hardware in @(tb)'s clusters in the @seclink["hardware"]{hardware} chapter.
@apt-only{
Physical machines are relatively scarce, and getting access to large numbers of
them, or holding them for a long time, may require
@seclink["getting-help"]{contacting @(tb) staff}.
}
@section[#:tag "virtual-machines"]{Virtual Machines and Containers}
@apt-only{
The default node type in @(tb) is a @italic{virtual machine}, or VM. VMs in
@(tb) are currently implemented on
@hyperlink["http://blog.xen.org/index.php/2013/07/09/xen-4-3-0-released/"]{Xen
4.3} using
@hyperlink["https://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-460.html"]{Xen
4.6} using
@hyperlink["http://wiki.xenproject.org/wiki/Paravirtualization_(PV)"]{paravirtualization}.
Users have full root access within their VMs via @tt{sudo}.
......@@ -180,29 +199,52 @@ project-specific permissions features in the future.
}
@clab-only{
While @(tb) does have the ability to provision virtual machines itself
(using the Xen hypervisor), we expect that the dominant use of @(tb) is
While @(tb) does have the ability to provision virtual machines
(using the Xen hypervisor) and containers (using Docker), we expect that the dominant use of @(tb) is
that users will provision @seclink["physical-machines"]{physical machines}.
Users (or the cloud software stacks that they run) may build their own
virtual machines on these physical nodes using whatever hypervisor they
wish.
wish. However, if your experiment could still benefit from the use of virtual
machines or containers (e.g., to form a scalable pool of clients issuing
requests to your cloud software stack), you can find more detail in
@seclink["virtual-machines-advanced"]{the advanced topics section}.
}
@section[#:tag "physical-machines"]{Physical Machines}
Users of @(tb) may get exclusive, root-level control over @italic{physical
machines}. When allocated this way, no layers of virtualization or indirection
get in the way of performance, and users can be sure that no other
users have access to the machines at the same time. This is an ideal situation
for @seclink["repeatable-research"]{repeatable research}.
Physical machines are @seclink["disk-images"]{re-imaged} between users, so you
can be sure that your physical machines don't have any state left around from
the previous user. You can find descriptions of the
hardware in @(tb)'s clusters in the @seclink["hardware"]{hardware} chapter.
@apt-only{
Physical machines are relatively scarce, and getting access to large numbers of
them, or holding them for a long time, may require
@seclink["getting-help"]{contacting @(tb) staff}.
}
@not-apt-clab-only{
To support experiments that must scale to large numbers of nodes, @(tb)
provides @italic{virtual nodes}. A @(tb) virtual node is a virtual
machine or container running on top of a regular operating system. If an
experiment's per-node CPU, memory, and network requirements are modest,
the use of virtual nodes allows an experiment to scale to tens or
hundreds of times as many nodes as there are available physical machines
in @(tb). Virtual nodes are also useful for prototyping experiments and
debugging code without tying up significant amounts of physical
resources.
@(tb) virtual nodes are based on the Xen hypervisor or Docker
containers. With some limitations, virtual nodes can act in any role
that a normal @(tb) node can: edge node, router, traffic generator,
etc. You can run startup commands, remotely login over ssh, run software
as root, use common networking tools like tcpdump or traceroute, modify
routing tables, capture and load custom images, and reboot. You can
construct arbitrary topologies of links and LANs mixing virtual and real
nodes.
@(tb) supports the use of native Docker images (which use a different
format than other @(tb) images). You can use external, publicly-available
images or Dockerfiles directly, or you can have @(tb) automatically
create @seclink["docker-augmentation"]{augmented disk images}: external
Docker images that are automatically repackaged with the @(tb) software
and its dependencies, so that all @(tb) features can be supported inside
the container.
Virtual nodes in @(tb) are hosted on either @italic{dedicated} or
@italic{shared} physical machines. In dedicated mode, you may login to
the physical machines hosting your VMs; in shared mode, no one else has
access to your VMs, but there are other users on the same hardware whose
activities may affect the performance of your VMs.
To learn how to allocate and configure virtual nodes, see
@seclink["virtual-machines-advanced"]{the advanced topics section}.
}
......@@ -50,6 +50,7 @@ control system can be found on CloudLab's @hyperlink[(apturl
@include-section["basic-concepts.scrbl"]
@include-section["reservations.scrbl"]
@include-section["geni-lib.scrbl"]
@include-section["virtual-machines.scrbl"]
@include-section["advanced-topics.scrbl"]
@include-section["hardware.scrbl"]
@include-section["planned.scrbl"]
......
"""An example of a Docker container that mounts a remote blockstore."""
import geni.portal as portal
import geni.rspec.pg as rspec
import geni.rspec.igext as ig
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# Create an interface on the container to connect it to the link to the
# blockstore host.
myintf = node.addInterface("ifbs0")
# Create the blockstore host.
bsnode = ig.RemoteBlockstore("bsnode","/mnt/blockstore")
# Map your remote blockstore to the blockstore host.
bsnode.dataset = \
    "urn:publicid:IDN+emulab.net:emulab-ops+ltdataset+johnsond-bs-foo"
bsnode.readonly = False
# Connect the blockstore host to the container.
bslink = rspec.Link("bslink")
bslink.addInterface(myintf)
bslink.addInterface(bsnode.interface)
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_dockerfile = "https://github.com/docker-library/httpd/raw/38842a5d4cdd44ff4888e8540c0da99009790d01/2.4/Dockerfile"
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
portal.context.printRequestRSpec()
"""An example of constructing a profile with ten Docker containers in a LAN.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a LAN to put containers into.
lan = request.LAN("lan")
# Create ten Docker containers.
for i in range(0,10):
    node = request.DockerContainer("node-%d" % (i))
    # Create an interface.
    iface = node.addInterface("if1")
    # Add the interface to the LAN.
    lan.addInterface(iface)
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of a Docker container running an external, unmodified image, and customizing its remote access."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
node.docker_ssh_style = "exec"
node.docker_exec_shell = "/bin/bash"
portal.context.printRequestRSpec()
"""An example of a Docker container running a standard, augmented system image."""
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//docker-ubuntu16-std"
portal.context.printRequestRSpec()
"""An example of a Docker container that mounts a remote blockstore."""
import geni.portal as portal
import geni.rspec.pg as rspec
import geni.rspec.igext as ig
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
bs = node.Blockstore("temp-bs","/mnt/tmp")
bs.size = "8GB"
bs.placement "any"
portal.context.printRequestRSpec()
"""An example of constructing a profile with 20 Docker containers in a LAN,
divided across two container hosts.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import the Emulab specific extensions.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a LAN to put containers into.
lan = request.LAN("lan")
# Create two container hosts, each with ten Docker containers.
for j in range(0,2):
    # Create a container host.
    host = request.RawPC("host-%d" % (j))
    # Select a specific hardware type for the container host.
    host.hardware_type = "d430"
    for i in range(0,10):
        # Create a container.
        node = request.DockerContainer("node-%d-%d" % (j,i))
        # Create an interface.
        iface = node.addInterface("if1")
        # Add the interface to the LAN.
        lan.addInterface(iface)
        # Set this container to be instantiated on the host created in
        # the outer loop.
        node.InstantiateOn(host.client_id)
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM in HVM mode.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import Emulab-specific extensions so we can set node attributes.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Set the XEN_FORCE_HVM custom node attribute to 1 to enable HVM mode:
node.Attribute('XEN_FORCE_HVM','1')
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Docker container.
Instructions: Wait for the profile instance to start, and then log in to
the container via the ssh port specified below. By default, your
container will run a standard Ubuntu image with the Emulab software
preinstalled.
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a Docker container.
node = request.DockerContainer("node")
# Request a container hosted on a shared container host; you will not
# have access to the underlying physical host, and your container will
# not be privileged. Note that if there are no shared hosts available,
# your experiment will be assigned a physical machine to host your container.
node.exclusive = False
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Request a specific number of VCPUs.
node.cores = 4
# Request a specific amount of memory (in GB).
node.ram = 4096
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""An example of constructing a profile with a single Xen VM with extra fs space.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Import Emulab-specific extensions so we can set node attributes.
import geni.rspec.emulab as emulab
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
# Set the XEN_EXTRAFS to request 8GB of extra space in the 4th partition.
node.Attribute('XEN_EXTRAFS','8')
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
......@@ -86,6 +86,12 @@
(define (wireless-only . stuff)
(apt-vs-clab #:pnet stuff #:powder stuff))
(define (not-apt-clab-only . stuff)
  (case (tb-mode)
    ('apt "")
    ('clab "")
    (else stuff)))
(define apt-base-url
(case (tb-mode)
('apt "https://www.aptlab.net/")
......
......@@ -6,7 +6,7 @@
#:date (date->string (current-date))]{The Emulab Manual}
@author[
"Eric Eide" "Robert Ricci" "Jacobus (Kobus) Van der Merwe" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler"
"Eric Eide" "Robert Ricci" "Jacobus (Kobus) Van der Merwe" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler" "David Johnson"
]
@;{
......@@ -44,6 +44,7 @@ you can apply to start a new project.
@include-section["emulab-transition.scrbl"]
@include-section["reservations.scrbl"]
@include-section["geni-lib.scrbl"]
@include-section["virtual-machines.scrbl"]
@include-section["advanced-topics.scrbl"]
@include-section["emulab-hardware.scrbl"]
@include-section["planned.scrbl"]
......
#lang scribble/manual
@(require "defs.rkt")
@title[#:tag "virtual-machines-advanced" #:style main-style #:version apt-version]{Virtual Machines and Containers}
A @(tb) virtual node is a virtual machine or container running on top of
a regular operating system. @(tb) virtual nodes are based on the
@seclink["xen-virtual-machines"]{Xen hypervisor} or on
@seclink["docker-containers"]{Docker containers}. Both types of
virtualization allow groups of processes to be isolated from each other
while running on the same physical machine. @(tb) virtual nodes provide
isolation of the filesystem, process, network, and account
namespaces. Thus, each virtual node has its own private filesystem,
process hierarchy, network interfaces and IP addresses, and set of users
and groups. This level of virtualization allows unmodified applications
to run as though they were on a real machine. Virtual network interfaces
support an arbitrary number of virtual network links. These links may be
individually shaped according to user-specified link parameters, and may
be multiplexed over physical links or used to connect to virtual nodes
within a single physical node.
There are a few specific differences between virtual and physical nodes.
First, @(tb) physical nodes have a routable, public IPv4 address
allowing direct remote access (unless the @(tb) installation has been
configured to use unroutable control network IP addresses, which is very
rare). However, virtual nodes are assigned control network IP addresses
on a private network (typically the @tt{172.16/12} subnet) and are
remotely accessible over ssh via DNAT (destination network-address
translation) to the physical host's public control network IP address,
to a high-numbered port. Depending on local configuration, it may be
possible to @seclink["public-ip-access"]{request routable IP addresses}
for specific virtual nodes to enable direct remote access. Note that
virtual nodes are always able to access the public Internet via SNAT
(source network-address translation; nearly identical to masquerading).
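The private control range mentioned above can be checked with a few lines of plain Python (this is an illustration using the standard @tt{ipaddress} module, not part of geni-lib or the @(tb) software):

```python
import ipaddress

# The typical private control subnet for virtual nodes (172.16/12).
CONTROL_NET = ipaddress.ip_network("172.16.0.0/12")

def is_virtual_control_address(addr):
    """Return True if addr falls in the 172.16/12 control range."""
    return ipaddress.ip_address(addr) in CONTROL_NET

print(is_virtual_control_address("172.16.5.2"))   # True: private control net
print(is_virtual_control_address("155.98.36.1"))  # False: routable address
```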
Second, virtual nodes and their virtual network interfaces are connected
by virtual links built atop physical links and physical interfaces. The
virtualization of a physical device/link decreases the fidelity of the
network emulation. Moreover, several virtual links may share the same
physical links via multiplexing. Individual links are isolated at layer
2, but they are not isolated in terms of performance. If you request a
specific bandwidth for a given set of links, our resource mapper will
ensure that if multiple virtual links are mapped to a single physical
link, the sum of the bandwidths of the virtual links will not exceed the
capacity of the physical link (unless you also specify that this
constraint can be ignored by setting the @tt{best_effort} link parameter to
@tt{True}). For example, no more than ten 1Gbps virtual links can be
mapped to a 10Gbps physical link.
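The bandwidth rule above can be sketched in plain Python; this is only an illustration of the constraint, not @(tb)'s actual resource-mapping code:

```python
def can_multiplex(virtual_bw_gbps, physical_capacity_gbps, best_effort=False):
    """Check whether a set of virtual links fits on one physical link.

    Unless best_effort is set, the sum of the virtual link bandwidths
    must not exceed the physical link's capacity.
    """
    if best_effort:
        return True
    return sum(virtual_bw_gbps) <= physical_capacity_gbps

# Ten 1 Gbps virtual links fit on a 10 Gbps physical link...
print(can_multiplex([1] * 10, 10))                    # True
# ...but an eleventh does not, unless best_effort is set.
print(can_multiplex([1] * 11, 10))                    # False
print(can_multiplex([1] * 11, 10, best_effort=True))  # True
```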
Finally, when you allocate virtual nodes, you can specify the amount of
CPU and RAM (and, for Xen VMs, virtual disk space) each node will be
allocated. @(tb)'s resource assigner will not oversubscribe these quantities.
@section[#:tag "xen-virtual-machines"]{Xen VMs}
These examples show the basics of allocating Xen VMs:
@seclink["geni-lib-example-single-vm"]{a single Xen VM node},
@seclink["geni-lib-example-two-vm-lan"]{two Xen VMs in a LAN},
@seclink["geni-lib-example-single-vm-sized"]{a Xen VM with custom disk size}.
In the sections below, we discuss advanced Xen VM allocation features.
@subsection[#:tag "xen-cores-ram"]{Controlling CPU and Memory}
You can control the number of cores and the amount of memory allocated
to each VM by setting the @tt{cores} and @tt{ram} instance variables of
a @tt{XenVM} object, as shown in the following example:
@code-sample["geni-lib-xen-cores-ram.py"]
@subsection[#:tag "xen-extrafs"]{Controlling Disk Space}
Each Xen VM is given enough disk space to hold the requested image.
Most @(tb) images are built with a 16 GB root partition, typically with
about 25% of the disk space used by the operating system. If the
remaining space is not enough for your needs, you can request additional
disk space by setting a @tt{XEN_EXTRAFS} node attribute, as shown in the
following example.
@code-sample["geni-lib-xen-extrafs.py"]
The value of this attribute is in GB. As with @(tb) physical nodes, the extra
disk space will appear in the fourth partition of your VM's disk. You can
turn this extra space into a usable file system by logging into your
VM and doing:
@codeblock{
mynode> sudo mkdir /dirname
mynode> sudo /usr/local/etc/emulab/mkextrafs.pl /dirname
}
where @tt{dirname} is the directory on which you want your newly
formatted file system mounted.
@code-sample["geni-lib-xen-cores-ram.py"]
@subsection[#:tag "xen-hvm"]{Setting HVM Mode}
By default, all Xen VMs are @hyperlink["https://wiki.xen.org/wiki/Paravirtualization_(PV)"]{paravirtualized}.
If you need @hyperlink["https://wiki.xen.org/wiki/Xen_Project_Software_Overview#HVM_and_its_variants_.28x86.29"]{hardware virtualization}
instead, you must set a @tt{XEN_FORCE_HVM} node attribute, as shown in
this example:
@code-sample["geni-lib-single-hvm.py"]
You can set this attribute only for dedicated-mode VMs. Shared VMs are
available only in paravirtualized mode.
@section[#:tag "docker-containers"]{Docker Containers}
@(tb) supports experiments that use Docker containers as virtual nodes.
In this section, we first describe how to build simple profiles that
create Docker containers, and then demonstrate more advanced features.
The @(tb)-Docker container integration has been designed to enable easy
image onboarding, and to allow users to continue to work naturally with
the standard Docker API or CLI. However, because @(tb) is itself an
orchestration engine, it does not support any of the Docker orchestration
tools or platforms, such as Docker Swarm.
You can request a @(tb) Docker container in a geni-lib script like this:
@codeblock{
import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
}
You can use the returned @tt{node} object
(a @(geni-lib "geni.rspec.igext.DockerContainer" "DockerContainer")
instance) similarly to other kinds of node objects, like
@(geni-lib "geni.rspec.pg.RawPC" "RawPC") or
@(geni-lib "geni.rspec.igext.XenVM" "XenVM"). However, Docker nodes have
@seclink["docker-member-variables"]{several custom member variables} you can
set to control their behavior and Docker-specific features. We
demonstrate the usage of these member variables in the following subsections
and @seclink["docker-member-variables"]{summarize them at the end of this section}.
@subsection[#:tag "docker-basic-examples"]{Basic Examples}
@code-sample["geni-lib-single-shared-container.py"]
It is easy to extend this profile slightly to allocate 10 containers in
a LAN, keeping them in @seclink["docker-shared-mode"]{shared
mode}. Note that in this case, the
@(geni-lib "geni.rspec.pg.Node.exclusive" "exclusive")
member variable is not specified, and it defaults to @tt{False}:
@code-sample["geni-lib-docker-lan.py"]
Here is a more complex profile that creates 20 containers, binds 10 of
them to a physical host machine of a particular type, and binds the
other 10 to a second machine of the same type:
@code-sample["geni-lib-docker-vhost-lan.py"]
@subsection[#:tag "docker-disk-images"]{Disk Images}
Docker containers use a different
@hyperlink["https://docs.docker.com/engine/reference/builder/"]{disk
image format} than @seclink["disk-images"]{@(tb) physical machines or
Xen virtual machines}, which means that you cannot use the same images
on both a container and a raw PC. However, @(tb) supports native Docker
images in several modes and workflows. @(tb) hosts a private Docker
registry, and the standard @(tb) image-deployment and -capture
mechanisms support capturing container disk images into it. @(tb) also
supports the use of externally hosted, unmodified Docker images and
Dockerfiles for image onboarding and dynamic image creation. Finally,
since some @(tb) features require in-container support (e.g., user
accounts, SSH pubkeys, syslog, scripted program execution), we also
provide an optional @seclink["docker-augmentation"]{automated process},
called augmentation, through which an external image can be customized
with the @(tb) software and dependencies.
@(tb) supports both augmented and unmodified Docker images, but some
features require augmentation (e.g., that the @(tb) client-side software
is installed and running in the container). Unmodified images support
these @(tb) features: network links, link shaping, remote access, remote
storage (e.g. remote block stores), and image capture. Unmodified
images do not support user accounts, SSH pubkeys, or scripted program
execution.
@;{
First, @(tb) hosts a private Docker registry, and the standard @(tb) image-loading and -capturing mechanisms have been modified to use it, so you can create custom images based on our standard Docker system images. When you capture an image, it will be stored in our private registry.so you can load and store custom
images in it
and import external Docker images from other registries. You can
use unmodified external images, or you can tell @(tb) to
@seclink["docker-augmentation"]{@italic{augment} them} automatically
with the Emulab software so that they can more broadly support the @(tb)
feature set (unaugmented images do support some @(tb) features; see
below for more detail). Finally, you can modify and capture new
versions of your disk images using the standard @(tb) image-capturing
process. Or, you can use manual @tt{docker commit} invocations on the
physical host node to capture your own images, and manually push them to
other registries. Finally, you can use your @(tb) credentials to login
to the @(tb) private Docker registry, and pull (download) your images
for use elsewhere.
}
@(tb)'s disk image naming and versioning scheme is slightly different
from Docker's content-addressable model. A @(tb) disk image is
identified by a project and name tuple (typically encoded as a URN), or
by a UUID, as well as a version number that starts at @tt{0}. Each time
you capture a new version of an image, the image's version number is
incremented by one. @(tb) does not support the use of arbitrary
alphanumeric tags to identify image versions, as Docker does.
Thus, when you capture an @(tb) disk image of a @(tb) Docker container
and give it a name, the local @(tb) registry will contain an image
(repository) of that name within the project (and group, which is nearly
always the same as the project name) in which your experiment was
created; the full image name is thus
@tt{<project-name>/<group-name>/<image-name>}. The tags within that
repository correspond to the integer version numbers of the @(tb) disk
image. For example, if you have created an @(tb) image named
@tt{docker-my-research} in project @tt{myproject}, have created three
versions (@tt{0}, @tt{1}, @tt{2}), and want to pull the latest version
(@tt{2}) to your computer, you could run this command:
@codeblock{
docker pull ops.emulab.net:5080/myproject/myproject/docker-my-research:2
}
You will be prompted for a username and password; use your @(tb) credentials.
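The mapping from @(tb)'s project/name/version scheme to a Docker image reference can be sketched as follows; this is a plain-Python illustration, with the registry host and port taken from the example above:

```python
def docker_image_ref(registry, project, group, image, version):
    """Build the registry reference for a captured Docker image.

    The repository is <project>/<group>/<image> (the group is nearly
    always the project name), and the tag is the integer version number.
    """
    return "%s/%s/%s/%s:%d" % (registry, project, group, image, version)

# The example from the text: version 2 of docker-my-research in myproject.
print(docker_image_ref("ops.emulab.net:5080",
                       "myproject", "myproject", "docker-my-research", 2))
# ops.emulab.net:5080/myproject/myproject/docker-my-research:2
```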
The following code fragment creates a Docker container that uses a
standard @(tb) Docker disk image, @tt{docker-ubuntu16-std}. This image
is based on the
@hyperlink["https://hub.docker.com/_/ubuntu/"]{@tt{ubuntu:16.04}} Docker
image, with the Emulab client software installed (meaning it is
@seclink["docker-augmentation"]{augmented}) along with dependencies and
other utilities.
@code-sample["geni-lib-docker-stdimage.py"]
@subsection[#:tag "docker-ext-images"]{External Images}
@(tb) supports the use of publicly accessible Docker images in other
registries. It does not currently support username/password access to
images. By default, if you simply specify a repository and tag, as in
the example below, @(tb) assumes the image is in the standard Docker
registry; but you can instead specify a complete URL pointing to a
different registry.
@code-sample["geni-lib-docker-extimage.py"]
By default, @(tb) assumes that an external, non-augmented image does not run
its own @tt{sshd} to support remote login. Instead, it facilitates
remote access to a container by running an alternate @tt{sshd} on the
container host and executing a shell (by default @tt{/bin/sh}) in the
container associated with a specific port (the port in the @tt{ssh} URL
shown on your experiment's page). See
@seclink["docker-remote-access"]{the section on remote access} below for
more detail.
@subsection[#:tag "docker-dockerfiles"]{Dockerfiles}
You can also create images dynamically (at experiment runtime) by
specifying a @tt{Dockerfile} for each container. Note that if multiple
containers hosted on the same physical machine reference the same
@tt{Dockerfile}, the image will be built only once.
Here is a simple example of a @tt{Dockerfile} that builds @tt{httpd}
from source:
@code-sample["geni-lib-docker-dockerfile.py"]
You should not assume you have access to the image build environment
(you do only if your containers are running in
@seclink["docker-shared-mode"]{dedicated mode}); test your
@tt{Dockerfile} on your local machine first to ensure it works.