- 06 Jun, 2018 1 commit
-
-
David Johnson authored
-
- 04 Jun, 2018 1 commit
-
-
David Johnson authored
The Docker VM server-side goo is mostly identical to Xen, with slightly different handling for parent images. We also support loading external Docker images (i.e. those without a real imageid in our DB); in that case, the user has to set a specific stub image and some extra per-vnode metadata (a URI that points to a Docker registry/image repo/tag), and the Docker clientside handles the rest.

Emulab Docker images map to an Emulab imageid:version pretty seamlessly. For instance, the Emulab `emulab-ops/docker-foo-bar:1` image would map to `<local-registry-URI>/emulab-ops/emulab-ops/docker-foo-bar:1`; the mapping is `<local-registry-URI>/pid/gid/imagename:version`. Docker repository names are lowercase-only, so we handle that for the user, but I would prefer that users use lowercase Emulab imagenames for all Docker images; that will help us. That is not enforced in the code; it will appear in the documentation, and we'll see.
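A minimal sketch of that naming rule, assuming a hypothetical registry host (the real prefix is the cluster's local registry URI):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the mapping described above:
#   <local-registry-URI>/pid/gid/imagename:version
# lowercased, since Docker repository names are lowercase-only.
# The registry host is a made-up example, not a real Emulab registry.
sub docker_image_name {
    my ($registry, $pid, $gid, $imagename, $version) = @_;
    return lc("$registry/$pid/$gid/$imagename") . ":$version";
}

# Prints: registry.example.net/emulab-ops/emulab-ops/docker-foo-bar:1
print docker_image_name("registry.example.net",
                        "emulab-ops", "emulab-ops", "docker-foo-bar", 1), "\n";
```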
Full Docker imaging relies on several other libraries (https://gitlab.flux.utah.edu/emulab/pydockerauth, https://gitlab.flux.utah.edu/emulab/docker-registry-py). Each Emulab-based cluster must currently run its own private registry to support image loading/capture (note, however, that if capture is unnecessary, users can use the external-images path instead). The pydockerauth library is a JWT token server that runs out of boss's Apache and implements authn/authz for the per-Emulab Docker registry (probably running on ops, but it could be anywhere) that stores images and arbitrates upload/download access. For instance, nodes in an experiment securely pull images using their pid/eid eventkey, and the pydockerauth Emulab authz module knows which images the node is allowed to pull (i.e. sched_reloads, the current image the node is running, etc.). Real users can also pull images via user/pass, or bogus user/pass plus an Emulab SSL cert. GENI credential-based authn/authz was way too much work, sadly. There are other authn/authz paths (i.e. for admins, temp tokens for secure operations) as well.

As far as Docker image distribution in the federation goes, we use the same model as for regular ndz images. Remote images are pulled into the local cluster's Docker registry on demand from their source cluster via admin token auth (note that all clusters in the federation have read-only access to the entire registries of any other cluster in the federation, so they can pull images). Emulab imageid handling is the same as in the existing ndz case. For instance, image versions are lazily imported, on demand, so local version numbers may not match the remote source cluster's version numbers. This will potentially be a bigger problem in the Docker universe: Docker users expect to be able to reference any image version at any time, anywhere. But that is of course handleable with some ex post facto synchronization flag day, at least for the Docker images.

The big new thing supporting native Docker image usage is the guts of a refactor of the utils/image* scripts into a new library, libimageops. This is necessary to support Docker images, which are stored in their own registry using their own custom protocols and thus are not amenable to our file-based storage. Note: the utils/image* scripts currently call out to libimageops *only if* the image format is docker; all other images continue on the old paths in utils/image*, which all remain intact, or are minorly changed to support libimageops. libimageops->New is the factory-style mechanism to get a libimageops instance that works for your image format or node type. Once you have a libimageops instance, you can invoke the normal logical image operations (CreateImage, ImageValidate, ImageRelease, et al.). I didn't do every single operation (for instance, I haven't yet dealt with image_import beyond essentially generalizing DownLoadImage by image format).

Finally, each libimageops is stateless; another design would have been some statefulness for more complicated operations. You will see that CreateImage, for instance, is written in a helper-subclass style that blurs some statefulness; however, it was the best match for the existing body of code. We can revisit that later if the current argument-passing convention isn't loved.

There are a couple of outstanding issues. Part of the security model here is that some utils/image* scripts are setuid, so direct libimageops library calls are not possible from a non-setuid context for some operations. This is non-trivial to resolve, and might not be worthwhile to resolve any time soon. Also, some of the scripts write meaningful, traditional content to stdout/stderr, which creates a tension for direct library calls that is not entirely resolved yet. Not hard, just only partly resolved.

Note that tbsetup/libimageops_ndz.pm.in is still incomplete; it needs imagevalidate support. Thus, I have not even featurized this yet; I will get to that as I have cycles.
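As a rough, self-contained sketch of the factory-style, stateless shape described above (not the Emulab source; the docker backend's package name is assumed by analogy to libimageops_ndz, and the package internals are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# New() picks a backend package by image format and returns a blessed
# instance; callers then invoke the logical operations on it.
package libimageops;
sub New {
    my ($class, %args) = @_;
    my $format = $args{"format"} // "ndz";
    my $impl   = $format eq "docker" ? "libimageops_docker" : "libimageops_ndz";
    return bless { "format" => $format }, $impl;
}

package libimageops_ndz;
our @ISA = ("libimageops");
sub CreateImage {
    my ($self, $image) = @_;
    print "capture $image to file-based ndz storage\n";
}

package libimageops_docker;
our @ISA = ("libimageops");
sub CreateImage {
    my ($self, $image) = @_;
    print "commit $image and push it to the cluster's Docker registry\n";
}

package main;
my $ops = libimageops->New("format" => "docker");
$ops->CreateImage("emulab-ops/docker-foo-bar:1");
```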
-
- 12 Feb, 2018 1 commit
-
-
Leigh Stoller authored
-
- 25 Jan, 2018 1 commit
-
-
Leigh Stoller authored
munge it, and made it too long.
-
- 23 Dec, 2017 1 commit
-
-
Mike Hibler authored
-
- 26 Jul, 2017 1 commit
-
-
Mike Hibler authored
-
- 24 Jul, 2017 1 commit
-
-
Leigh Stoller authored
sites can work around being offline.
-
- 02 Jun, 2017 1 commit
-
-
Leigh Stoller authored
-
- 26 Apr, 2017 1 commit
-
-
Leigh Stoller authored
the amount of data we try to pull down, to avoid sucking the entire web into a log file.
-
- 06 Mar, 2017 1 commit
-
-
Leigh Stoller authored
cause dumpdescriptor at the other cluster is returning a bogus XML file.
-
- 21 Feb, 2017 1 commit
-
-
Leigh Stoller authored
initial import of an image. Normally this is done in create_image, but we need to do it here as well.
-
- 18 Oct, 2016 1 commit
-
-
Leigh Stoller authored
be coming from a testbed with image versioning on, and so its name will change (the version number).
-
- 06 Oct, 2016 1 commit
-
-
Leigh Stoller authored
OSinfo and Image into a single object for the benefit of the perl code. The database tables have not changed though.
-
- 29 Aug, 2016 1 commit
-
-
Leigh Stoller authored
not happy with that part yet.
-
- 11 Feb, 2016 1 commit
-
-
Leigh Stoller authored
image_import by URL to get initial images.
-
- 15 Oct, 2015 1 commit
-
-
Leigh Stoller authored
import as the updater does not work in this case, so use the creator, which is always going to have local groups (since it is geniuser or a real user). I need to think about why we do not do this; it dates back to the original stuff we did for the Probe cluster, but we are now using nonlocal users and projects in a different way for the Cloudlab portal.
-
- 12 Oct, 2015 1 commit
-
-
Leigh Stoller authored
user doing the image snapshot. At the image origin cluster, use this URN to set the updater URN for the new image version, and if that URN is for a real local user, also set updater/updater_idx accordingly. Lastly, the image import *must* be done in the context of a user in the project of the image, so fall back to using the image creator.
-
- 21 Aug, 2015 1 commit
-
-
Leigh Stoller authored
-
- 07 Jul, 2015 1 commit
-
-
Leigh Stoller authored
-
- 19 Jun, 2015 1 commit
-
-
Leigh Stoller authored
is just like importing images (by using a URL instead of a URN), which makes sense since image-backed datasets are just images with a flag set. Key differences:
1. You cannot snapshot a new version of the dataset on a cluster it has been imported to. The snapshot has to be done where the dataset was created initially. This is slightly inconvenient and will perhaps confuse users, but it is far less confusing than the datasets getting out of sync.
2. No image versioning of datasets. We can add that later if we want to.
-
- 18 May, 2015 1 commit
-
-
Leigh Stoller authored
types in the images/default_typelist sitevar.
-
- 15 May, 2015 2 commits
-
-
Leigh Stoller authored
-
Leigh Stoller authored
Soon, we will have images with both full images and deltas for the same image version. To make this possible, the image path will now be a directory instead of a file, and all of the version files (ndz, sig, sha1, delta) will reside in the directory. A new config variable IMAGEDIRECTORIES turns this on; there is also a check for the ImageDirectories feature. This is applied only when a brand new image is created; a clone version of the image inherits the path it started with. Yes, you can have a mix of directory-based and file-based image descriptors.

When it is time to convert all images over, there is a script called imagetodir that will go through all image descriptors, create the directory, move/rename all the files, and update the descriptors. Ultimately, we will not support file-based image paths.

I also added versioning to the image metadata descriptors so that going forward, old clients can handle a descriptor from a new server.
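A tiny sketch of the directory-based layout, assuming illustrative paths and file names (the exact names imagetodir produces may differ):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

# With IMAGEDIRECTORIES on, the image path is a directory and the
# per-version files live inside it. The suffixes come from the commit
# message; the path and file names here are made up for illustration.
my $imagedir  = "/usr/testbed/images/FOO-BAR";
my $imagename = "FOO-BAR";

for my $suffix ("ndz", "sig", "sha1", "delta") {
    print File::Spec->catfile($imagedir, "$imagename.$suffix"), "\n";
}
```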
-
- 07 May, 2015 1 commit
-
-
Leigh Stoller authored
but only surfaced as a problem on Clemson. Odd.
-
- 24 Apr, 2015 2 commits
-
-
Leigh Stoller authored
Also fix bug in how imagevalidate is invoked, which resulted in the hash being computed for the previous version of the image, not the new version.
-
Leigh Stoller authored
descriptor does not indicate that one exists (but I have a plan for this). So for now, we try to fetch it and if it fails, we ignore the error. Mike says we can build the sig file offline if we have to.
-
- 01 Apr, 2015 1 commit
-
-
Mike Hibler authored
-
- 17 Mar, 2015 1 commit
-
-
Leigh Stoller authored
-
- 10 Mar, 2015 1 commit
-
-
Leigh Stoller authored
-
- 30 Jan, 2015 1 commit
-
-
Leigh Stoller authored
-
- 22 Jan, 2015 1 commit
-
-
Leigh Stoller authored
-
- 21 Jan, 2015 1 commit
-
-
Leigh Stoller authored
Update image_import to handle image refresh more easily with the -r option. Had this in my devel tree for a long time; time to try it out for real.
-
- 07 Jan, 2015 1 commit
-
-
Leigh Stoller authored
can be set later via the web interface.
-
- 02 Dec, 2014 1 commit
-
-
Leigh Stoller authored
-
- 05 Nov, 2014 1 commit
-
-
Leigh Stoller authored
-
- 04 Nov, 2014 1 commit
-
-
Leigh Stoller authored
Mark image update time when importing; more important to know when the image was brought in than when it was updated on the remote site.
-
- 08 Oct, 2014 1 commit
-
-
Leigh Stoller authored
import image, create a new version of the image but with a null parent pointer to indicate a new image not based on the previous or any other local image. I know I said I wanted to just delete the current image but decided I really didn't like that idea.
-
- 12 Sep, 2014 1 commit
-
-
Leigh Stoller authored
-
- 25 Aug, 2014 1 commit
-
-
Leigh Stoller authored
-
- 09 May, 2014 1 commit
-
-
Mike Hibler authored
This should be run whenever an image is created or updated, and possibly periodically over existing images. It makes sure that various image metadata fields are up to date:

* hash: the SHA1 hash of the image. This field has been around for awhile and was previously maintained by "imagehash".
* size: the size of the image file.
* range: the sector range covered by the uncompressed image data.
* mtime: modification time of the image. This is the "updated" datetime field in the DB. Its intent was always to track the update time of the image, but it wasn't always exact (create-image would update this with the current time at the start of the image capture process).

Documentation? Umm...the usage message is comprehensive! It sports a variety of useful options, but the basics are:

* imagevalidate -p <image> ...
  Print current DB metadata for the indicated images. <image> can either be a <pid>/<imagename> string or the numeric imageid.
* imagevalidate <image> ...
  Check the mtime, size, hash, and image range of the image file and compare them to the values in the DB. Whine for ones which are out of date.
* imagevalidate -u <image> ...
  Compare and then update DB metadata fields that are out of date.

Fixed a variety of scripts that either used imagehash or computed the SHA1 hash directly to now use imagevalidate.
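Illustrative only (not the imagevalidate source): how a script might recompute the two simplest of those fields, the SHA1 hash and the file size, for comparison against the DB values; the real tool also derives the covered sector range and checks the mtime.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA;

# Recompute hash and size for an image file; these are the kinds of
# values imagevalidate compares against (and with -u, writes back to)
# the DB metadata. Sketch only; the sector-range check is omitted.
my $file = shift @ARGV or die "usage: $0 <image.ndz>\n";

my $hash = Digest::SHA->new("sha1")->addfile($file)->hexdigest();
my $size = -s $file;

print "hash: $hash\n";
print "size: $size\n";
```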
-