David Johnson authored
The Docker VM server-side goo is mostly identical to Xen, with slightly different handling for parent images. We also support loading external Docker images (i.e., those without a real imageid in our DB); in that case, the user has to set a specific stub image and some extra per-vnode metadata (a URI that points to a Docker registry/image repo/tag), and the Docker clientside handles the rest.

Emulab Docker images map to an Emulab imageid:version pretty seamlessly. For instance, the Emulab `emulab-ops/docker-foo-bar:1` image would map to `<local-registry-URI>/emulab-ops/emulab-ops/docker-foo-bar:1`; the general mapping is `<local-registry-URI>/pid/gid/imagename:version` (a rough sketch of this mapping appears at the end of this note). Docker repository names are lowercase-only, so we handle that for the user, but I would prefer that users use lowercase Emulab imagenames for all Docker images; that will help us. That is not enforced in the code; it will appear in the documentation, and we'll see.

Full Docker imaging relies on several other libraries (https://gitlab.flux.utah.edu/emulab/pydockerauth, https://gitlab.flux.utah.edu/emulab/docker-registry-py). Each Emulab-based cluster must currently run its own private registry to support image loading/capture (note, however, that if capture is unnecessary, users can use the external-images path instead). The pydockerauth library is a JWT token server that runs out of boss's Apache and implements authn/authz for the per-Emulab Docker registry (probably running on ops, but it could be anywhere) that stores images and arbitrates upload/download access. For instance, nodes in an experiment securely pull images using their pid/eid eventkey, and the pydockerauth Emulab authz module knows which images the node is allowed to pull (e.g., sched_reloads, the current image the node is running, etc.). Real users can also pull images via user/pass, or bogus user/pass plus an Emulab SSL cert. GENI credential-based authn/authz was way too much work, sadly. There are other authn/authz paths (e.g., for admins, temp tokens for secure operations) as well.

As far as Docker image distribution in the federation goes, we use the same model as for regular ndz images. Remote images are pulled into the local cluster's Docker registry on demand from their source cluster via admin token auth (note that all clusters in the federation have read-only access to the entire registries of every other cluster in the federation, so they can pull images). Emulab imageid handling is the same as in the existing ndz case. For instance, image versions are lazily imported, on demand, so local version numbers may not match the source cluster's version numbers. This will potentially be a bigger problem in the Docker universe, where users expect to be able to reference any image version at any time, anywhere. But that is of course handleable with some ex post facto synchronization flag day, at least for the Docker images.

The big new thing supporting native Docker image usage is the guts of a refactor of the utils/image* scripts into a new library, libimageops; this is necessary because Docker images are stored in their own registry using their own custom protocols, so they are not amenable to our file-based storage. Note: the utils/image* scripts currently call out to libimageops *only if* the image format is docker; all other images continue on the old paths in utils/image*, which remain intact or are minimally changed to support libimageops. libimageops->New is the factory-style mechanism to get a libimageops instance that works for your image format or node type, as sketched below.
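A minimal sketch of that calling pattern, assuming a format-keyed argument to New; the selector key, the argument names, and the CreateImage arguments shown here are illustrative, not the real convention:

    use libimageops;

    # Pick the right libimageops subclass for this image; using a
    # "format" key as the selector is an assumption, the real code may
    # key off the image format or node type differently.
    my $imageops = libimageops->New("format" => "docker");

    # Logical operations are then method calls on the handle; the
    # arguments shown are placeholders.
    my $rc = $imageops->CreateImage($image, $node);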
Once you have a libimageops instance, you can invoke the normal logical image operations (CreateImage, ImageValidate, ImageRelease, et al.). I didn't do every single operation (for instance, I haven't yet dealt with image_import beyond essentially generalizing DownLoadImage by image format).

Finally, each libimageops is stateless; another design would have been some statefulness for more complicated operations. You will see that CreateImage, for instance, is written in a helper-subclass style that blurs some statefulness; however, that was the best match for the existing body of code. We can revisit it later if the current argument-passing convention isn't loved.

There are a couple of outstanding issues. Part of the security model here is that some utils/image* scripts are setuid, so direct libimageops library calls are not possible from a non-setuid context for some operations. This is non-trivial to resolve, and it might not be worth resolving any time soon. Also, some of the scripts write meaningful, traditional content to stdout/stderr, which creates a tension for direct library calls that is not entirely resolved yet. Not hard, just only partly resolved.

Note that tbsetup/libimageops_ndz.pm.in is still incomplete; it needs imagevalidate support. Thus I have not even featurized this yet; I will get to that as I have cycles.
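Here is the rough sketch of the imageid-to-repository mapping promised above. Only the `<local-registry-URI>/pid/gid/imagename:version` layout and the lowercase-only rule come from the scheme described earlier; the helper name, its arguments, and the example registry host are made up for illustration:

    # Hypothetical helper showing how an Emulab pid/gid/imagename:version
    # could be turned into a Docker repository reference.
    sub docker_repo_path($$$$$)
    {
        my ($registry, $pid, $gid, $imagename, $version) = @_;

        # Docker repository names are lowercase-only, so fold case here;
        # the tag is just the Emulab image version number.
        return lc("$registry/$pid/$gid/$imagename") . ":$version";
    }

    # emulab-ops/docker-foo-bar:1 becomes
    # <local-registry-URI>/emulab-ops/emulab-ops/docker-foo-bar:1
    print docker_repo_path("docker.example-emulab.net:5000",
                           "emulab-ops", "emulab-ops", "docker-foo-bar", 1) . "\n";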
APT_Rspec.pm.in
#!/usr/bin/perl -wT
#
# Copyright (c) 2007-2018 University of Utah and the Flux Group.
#
# {{{EMULAB-LICENSE
#
# This file is part of the Emulab network testbed software.
#
# This file is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at
# your option) any later version.
#
# This file is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this file. If not, see <http://www.gnu.org/licenses/>.
#
# }}}
#
package APT_Rspec;
use strict;
use Data::Dumper;
use Scalar::Util qw(blessed);
use HTML::Entities;
use Carp;
use Exporter;
use vars qw(@ISA @EXPORT);
@ISA = "Exporter";
@EXPORT = qw ( );
# Must come after package declaration!
use emdb;
use GeniXML;
use GeniHRN;
# Configure variables
my $TB = "@prefix@";
my $OURDOMAIN = "@OURDOMAIN@";
# This is a global instead of a class variable.
my $verbose = 0;
# Protos;
sub CompareHashes($$$);
sub CompareLists($$$);
#
# Parse an rspec into nice perl things.
#
sub new($$;$$)
{
    my ($class, $rspecfile, $permissive, $verbose_mode) = @_;
    my %namespaces = ();
    my $rspec;
    if ($rspecfile =~ m{<.*?>}s) {
        $rspec = GeniXML::Parse($rspecfile);
    }
    else {
        $rspec = GeniXML::ParseFile($rspecfile);
    }
    if (! defined($rspec)) {
        fatal("Could not parse rspec");
    }