  1. 08 Oct, 2018 1 commit
  2. 01 Oct, 2018 2 commits
  3. 28 Sep, 2018 6 commits
  4. 26 Sep, 2018 1 commit
  5. 21 Sep, 2018 2 commits
  6. 17 Sep, 2018 2 commits
  7. 04 Sep, 2018 2 commits
  8. 13 Aug, 2018 1 commit
  9. 09 Aug, 2018 1 commit
  10. 08 Aug, 2018 2 commits
    • Left this out of previous commit. · ef517168 (Leigh Stoller)
    • Big set of changes for deferred/scheduled/offline aggregates: · 6f17de73 (Leigh Stoller)
      * I started out to add just deferred aggregates: those that are
        offline when an experiment starts (and are marked in the
        apt_aggregates table as being deferrable). When an aggregate is
        offline, we add an entry to the new apt_deferred_aggregates table
        and periodically retry to start the missing slivers. To accomplish
        this, I split create_instance into two scripts: the first creates
        the instance in the DB, and the second (create_slivers) creates the
        slivers for the instance. The daemon calls create_slivers for any
        instances in the deferred table until all deferred aggregates are
        resolved (see the sketch after this message).
      
        On the UI side, there are various changes to allow experiments to
        be partially created. For example, we used to wait until we had all
        the manifests before showing the topology; now we show the topology
        on the first manifest and add the others as they come in. Various
        parts of the UI had to change to deal with missing aggregates; I am
        sure I did not get them all.
      
      * And then once I had that, I realized that "scheduled" experiments
        were an "easy" addition, since they are just a degenerate case of
        deferred. For this I added some new slots to the tables to hold the
        scheduled start time, and a started stamp so we can distinguish
        between the time an experiment was created and the time it was
        actually started. Lots of data.
      
        On the UI side, there is a new fourth step on the instantiate page
        that gives the user a choice of an immediate or a scheduled start.
        I moved the experiment duration to this step. I was originally
        going to add a calendar choice for termination, but I did not want
        to change the existing 16-hour max duration policy yet.
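
      A minimal sketch of the daemon's retry loop described above, assuming
      a DBI handle and guessing at the apt_deferred_aggregates schema and
      script paths (the real daemon may differ):

          #!/usr/bin/perl -w
          use strict;
          use DBI;

          # Connect to the local DB; DSN and credentials are placeholders.
          my $dbh = DBI->connect("DBI:mysql:database=tbdb", "user", "pass",
                                 { RaiseError => 1 });

          # Each row marks an instance that still has deferred aggregates.
          my $query = $dbh->prepare(
              "select distinct uuid from apt_deferred_aggregates");
          $query->execute();

          while (my ($uuid) = $query->fetchrow_array()) {
              # Retry the missing slivers for this instance; on success
              # the corresponding deferred entries go away.
              system("/usr/testbed/bin/create_slivers $uuid");
          }
          $dbh->disconnect();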
  11. 07 Aug, 2018 2 commits
  12. 30 Jul, 2018 2 commits
  13. 16 Jul, 2018 2 commits
    • Image handling changes: · fe8cc493 (Leigh Stoller)
      1. The primary change is to the Create Image modal; we now allow
         users to optionally specify a description for the image. This
         needed to be plumbed through all the way to the GeniCM
         CreateImage() API. Since the modal is getting kinda overloaded, I
         rearranged things a bit and changed the argument checking and
         error handling. I think this is the limit of what we want to do
         on this modal; we need a better UI in the future.
      
      2. Of course, if we let users set descriptions, let's show them on
         the image listing page. While I was there, I made the list look
         more like the classic image list: show the image name and
         project, and put the URN in a tooltip, since in general the URN
         is noisy to look at.
      
      3. And while I was messing with the image listing, I noticed that we
         were not deleting profiles as we said we would. The problem is
         that when we form the image list, we know which profile versions
         can be deleted, but when the user actually clicks to delete, I
         was trying to regenerate that decision without asking the cluster
         for the info again. So instead, just pass the version list
         through from the web UI.
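
      A hedged sketch of the kind of argument checking item 1 refers to;
      the names and limits here are illustrative, not the actual GeniCM
      code:

          # Hypothetical check for the optional image description.
          sub CheckDescription($)
          {
              my ($description) = @_;

              return undef
                  if (!defined($description)); # Optional; absence is fine.
              return "Description contains illegal characters"
                  if ($description =~ /[\x00-\x1f\x7f]/);
              return "Description is too long"
                  if (length($description) > 256);
              return undef;                    # undef means no error.
          }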
    • Add ReadFile() convenience function. · fc8b83bb (Leigh Stoller)
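      The commit does not show the body; a typical Perl slurp helper of
      this shape might look like the following (an assumption, not the
      actual code):

          # Read an entire file and return its contents, or undef on error.
          sub ReadFile($)
          {
              my ($filename) = @_;

              open(my $fh, "<", $filename)
                  or return undef;
              local $/;             # Slurp mode: read the file in one go.
              my $contents = <$fh>;
              close($fh);
              return $contents;
          }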
  14. 09 Jul, 2018 1 commit
    • Various bits of support for issue #408: · b7fb16a8 (Leigh Stoller)
      * Add the portal URL to the existing emulab extension that tells the
        CM that the CreateSliver() is coming from the Portal. Always send
        this info, not just for the Emulab Portal.
      
      * Stash that info in the geni slice data structure so we can add links
        back to the portal status page for current slices.
      
      * Add routines to generate a portal URL for the history entries, since
        we will not have those links for historical slices. Add links back to
        the portal on the showslice and slice history pages.
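
      A sketch of the sort of URL-generating routine described above; the
      path and parameter names are guesses, not the Portal's actual
      layout:

          # Hypothetical helper: form a Portal status-page link for a
          # slice, given the stashed portal URL and the instance uuid.
          sub PortalStatusURL($$)
          {
              my ($portalurl, $uuid) = @_;

              return "$portalurl/status.php?uuid=$uuid";
          }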
  15. 03 Jul, 2018 1 commit
  16. 18 Jun, 2018 2 commits
  17. 12 Jun, 2018 1 commit
  18. 04 Jun, 2018 2 commits
    • Docker server-side core, esp. new libimageops support for Docker images. · 66366489 (David Johnson)
      The Docker VM server-side goo is mostly identical to Xen, with
      slightly different handling for parent images.  We also support
      loading external Docker images (i.e. those without a real imageid in
      our DB); in that case, the user has to set a specific stub image and
      some extra per-vnode metadata (a URI that points to a Docker
      registry/image repo/tag); the Docker clientside handles the rest.
      
      Emulab Docker images map to an Emulab imageid:version pretty
      seamlessly.  For instance, the Emulab `emulab-ops/docker-foo-bar:1`
      image would map to
      `<local-registry-URI>/emulab-ops/emulab-ops/docker-foo-bar:1`; the
      mapping is `<local-registry-URI>/pid/gid/imagename:version`.  Docker
      repository names are lowercase-only, so we handle that for the user,
      but I would prefer that users use lowercase Emulab imagenames for
      all Docker images; that will help us.  That is not enforced in the
      code; it will appear in the documentation, and we'll see.
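
      A concrete sketch of that mapping (the real code lives in the image
      library and may differ):

          # Map an Emulab pid/gid/imagename:version to the corresponding
          # Docker repository name in the local registry.
          sub DockerRepoName($$$$$)
          {
              my ($registry, $pid, $gid, $imagename, $version) = @_;

              # Docker repository names are lowercase-only; fold case on
              # the user's behalf.
              my $repo = lc("$pid/$gid/$imagename");
              return "$registry/$repo:$version";
          }

          # DockerRepoName($registry, "emulab-ops", "emulab-ops",
          #                "docker-foo-bar", 1) yields
          # "$registry/emulab-ops/emulab-ops/docker-foo-bar:1"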
      
      Full Docker imaging relies on several other libraries
      (https://gitlab.flux.utah.edu/emulab/pydockerauth,
      https://gitlab.flux.utah.edu/emulab/docker-registry-py).  Each
      Emulab-based cluster must currently run its own private registry to
      support image loading/capture (note however that if capture is
      unnecessary, users can use the external images path instead).  The
      pydockerauth library is a JWT token server that runs out of boss's
      Apache and implements authn/authz for the per-Emulab Docker registry
      (probably running on ops, but could be anywhere) that stores images and
      arbitrates upload/download access.  For instance, nodes in an experiment
      securely pull images using their pid/eid eventkey; and the pydockerauth
      emulab authz module knows which images the node is allowed to pull
      (e.g. sched_reloads, the current image the node is running, etc.).
      Real users can also pull images via user/pass, or bogus user/pass +
      Emulab SSL cert.  GENI credential-based authn/z was way too much
      work, sadly.  There are other authn/z paths (e.g. for admins, temp
      tokens for secure operations) as well.
      
      As far as Docker image distribution in the federation goes, we use
      the same model as for regular ndz images.  Remote images are pulled
      into the local cluster's Docker registry on demand from their source
      cluster via admin token auth (note that all clusters in the
      federation have read-only access to the entire registries of every
      other cluster in the federation, so they can pull images).  Emulab
      imageid handling is the same as in the existing ndz case.  For
      instance, image versions are lazily imported on demand; local
      version numbers may not match the remote
      image source cluster's version numbers.  This will potentially be a
      bigger problem in the Docker universe; Docker users expect to be able to
      reference any image version at any time anywhere.  But that is of course
      handleable with some ex post facto synchronization flag day, at least
      for the Docker images.
      
      The big new thing supporting native Docker image usage is the guts
      of a refactor of the utils/image* scripts into a new library,
      libimageops; this is necessary to support Docker images, which are
      stored in their own registry using their own custom protocols and so
      are not amenable to our file-based storage.  Note: the utils/image*
      scripts currently call out to libimageops *only if* the image format
      is docker; all other images continue on the old paths in
      utils/image*, which all remain intact or are minorly changed to
      support libimageops.
      
      libimageops->New is the factory-style mechanism to get a libimageops
      instance that works for your image format or node type.  Once you
      have an instance, you can invoke the normal logical image operations
      (CreateImage, ImageValidate, ImageRelease, et al.).  I didn't do
      every single operation (for instance, I haven't yet dealt with
      image_import beyond essentially generalizing DownLoadImage by image
      format).  Finally, each libimageops is stateless; another design
      would have been some statefulness for more complicated operations.
      You will see that CreateImage, for instance, is written in a
      helper-subclass style that blurs some statefulness; however, it was
      the best match for the existing body of code.  We can revisit that
      later if the current argument-passing convention isn't loved.
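
      A minimal sketch of the factory style: only libimageops->New and the
      operation names come from this message; the package layout is an
      illustrative guess:

          package libimageops_ndz;
          sub CreateImage { my ($self, %args) = @_; print "ndz path\n"; }

          package libimageops_docker;
          sub CreateImage { my ($self, %args) = @_; print "docker path\n"; }

          package libimageops;
          use strict;

          sub New
          {
              my ($class, %args) = @_;

              # Dispatch on image format; only docker diverges from the
              # file-based ndz paths today.
              my $impl = (($args{"format"} || "ndz") eq "docker"
                          ? "libimageops_docker" : "libimageops_ndz");
              return bless({}, $impl);
          }

          package main;
          my $ops = libimageops->New("format" => "docker");
          $ops->CreateImage("imageid" => "emulab-ops/docker-foo-bar");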
      
      There are a couple of outstanding issues.  Part of the security
      model here is that some utils/image* scripts are setuid, so direct
      libimageops library calls are not possible from a non-setuid context
      for some operations.  This is non-trivial to resolve and might not
      be worth resolving any time soon.  Also, some of the scripts write
      meaningful, traditional content to stdout/stderr, and this creates a
      tension for direct library calls that is not entirely resolved yet.
      Not hard, just only partly resolved.
      
      Note that tbsetup/libimageops_ndz.pm.in is still incomplete; it needs
      imagevalidate support.  Thus, I have not even featurized this yet; I
      will get to that as I have cycles.
    • bbf42391 (Leigh Stoller)
  19. 01 Jun, 2018 1 commit
  20. 31 May, 2018 3 commits
  21. 30 May, 2018 3 commits
    • Several backend/RPC changes for reservations: · 8266ae51 (Leigh Stoller)
      1. Return the current set of reservations (if any) for a user when
         getting the max extension (piggybacking on that call to reduce
         overhead).
      
      2. Add an RPC to get the reservation history for a user (all past
         reservations that were approved).

         Aside: the reservation_history table was not being updated
         properly; only expired reservations were saved, not deleted (but
         used) reservations, so we lost a lot of history. We could
         regenerate some of it from the history tables I added at the
         Portal for Dmitry, but I am not sure it is worth the trouble.
      
      3. The main content of this commit is that, for both of the lists
         above, we also return the experiment usage history for the
         project and the user who created the reservation. This takes the
         form of a timeline of allocation changes, so that we can graph
         node usage against the reservation bounds and show graphically
         how well utilized the reservation is.
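
      A sketch of the timeline idea in item 3: turn a list of allocation
      deltas into running totals that can be graphed against the
      reservation bounds (field names are illustrative, not the actual
      RPC format):

          use strict;

          # [ unix timestamp, change in allocated node count ] pairs.
          my @changes   = ([ 1527638400, 10 ],
                           [ 1527642000, -4 ],
                           [ 1527645600,  2 ]);
          my $allocated = 0;
          my @timeline  = ();

          foreach my $ref (sort { $a->[0] <=> $b->[0] } @changes) {
              my ($stamp, $delta) = @$ref;
              $allocated += $delta;
              push(@timeline, { "t" => $stamp, "allocated" => $allocated });
          }
          # @timeline now holds one point per allocation change.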
    • b338ccaf (Leigh Stoller)