1. 12 Oct, 2016 4 commits
  2. 11 Oct, 2016 2 commits
      Let experimenters customize prepare, and interface and hosts file setup. · dd4c67d0
      David Johnson authored
      The prepare script now supports pre and post hooks.  It runs all hooks
      in rc order, from the DYNRUNDIR/prepare.pre.d and BINDIR/prepare.pre.d
      dirs (rc order in this case is the BSD order, or my version of it ---
      any file prefixed with a number is run in numeric order; other files are
      run sorted alphabetically following numeric files).  Post hooks are in
      prepare.post.d, and are run at the end of prepare.
(DYNRUNDIR is always /var/run/emulab.  STATICRUNDIR is usually
/etc/emulab/run but could be /etc/testbed/run, depending on the
clientside installation.)
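The "rc order" rule above (numeric-prefixed hook files first, in numeric order, then the rest alphabetically) can be sketched as a sort key; this is an illustration, not the actual prepare-script code:

```python
import re

def rc_order(filenames):
    """Sort hook-script names: files with a numeric prefix run first,
    in numeric order; remaining files follow alphabetically."""
    def key(name):
        m = re.match(r'^(\d+)', name)
        if m:
            return (0, int(m.group(1)), name)  # numeric-prefixed files first
        return (1, 0, name)                    # then plain names, alphabetical
    return sorted(filenames, key=key)

print(rc_order(["zz-last", "10-net", "2-early", "alpha", "30-late"]))
# -> ['2-early', '10-net', '30-late', 'alpha', 'zz-last']
```

Note that "2-early" sorts before "10-net" because the prefixes compare numerically, not as strings.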
We now allow users to override our default interface configuration --
and if they do and tell us about it by writing a file named
interface-done-$mac in either $DYNRUNDIR or $STATICRUNDIR, we will not
attempt to configure that interface, and will assume they have done it!
If they are nice to us and write
        $iface $ipaddr $mac
      into the file, we will parse that and put it into the @ifacemap and
      %mac2iface structures in doboot().  We do *not* attempt to provide them
      the ifconfig info in env vars or anything; they have to grok our
      ifconfig file format, in all its potential glory.
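A minimal sketch (not the actual clientside code, which is Perl) of parsing the optional "$iface $ipaddr $mac" line from an interface-done-$mac file, to feed structures like the @ifacemap and %mac2iface mentioned above:

```python
def parse_interface_done(contents):
    """Return (iface, ipaddr, mac) if the file holds the three
    whitespace-separated fields described above, else None."""
    fields = contents.split()
    if len(fields) != 3:
        return None
    iface, ipaddr, mac = fields
    return iface, ipaddr, mac.lower()  # normalize the MAC for keying

print(parse_interface_done("eth2 10.1.1.5 00:1B:21:3A:4F:02"))
# -> ('eth2', '10.1.1.5', '00:1b:21:3a:4f:02')
```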
We read the hosts.head file(s) from /etc, DYNRUNDIR, and STATICRUNDIR,
and prepend them to our Emulab hosts content.  Then we append the
content of the hosts.tail file(s) from /etc, DYNRUNDIR, and STATICRUNDIR
--- and the result becomes the new /etc/hosts file.
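The assembly order described above (hosts.head fragments, then the Emulab-generated entries, then hosts.tail fragments) can be sketched as follows; the helper name and argument shapes are hypothetical:

```python
import os

def build_hosts(head_dirs, emulab_entries, tail_dirs):
    """Concatenate hosts.head fragments, the generated Emulab entries,
    and hosts.tail fragments, in that order, skipping missing files."""
    parts = []
    for d in head_dirs:                        # e.g. /etc, DYNRUNDIR, STATICRUNDIR
        path = os.path.join(d, "hosts.head")
        if os.path.exists(path):
            parts.append(open(path).read())
    parts.append(emulab_entries)               # the Emulab-generated host lines
    for d in tail_dirs:
        path = os.path.join(d, "hosts.tail")
        if os.path.exists(path):
            parts.append(open(path).read())
    return "".join(parts)
```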
      getmanifest() has become getrcmanifest() to avoid confusion with the
      GENI manifest.  Also, it now supports local manifests embedded in the
      filesystem from $DYNRUNDIR and $STATICRUNDIR (priority is manifest from
exp, then DYNRUNDIR, then STATICRUNDIR).  All manifests are read and
applied.  Local manifests may also reference local files instead of blob
      ids, of course.  It is important to support local manifests so that
      experimenters can hook our services by default in the disk image.
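The priority rule above (experiment manifest first, then $DYNRUNDIR, then $STATICRUNDIR, with all existing manifests applied) amounts to filtering a fixed-order candidate list; the file name "rcmanifest" and defaults here are illustrative assumptions:

```python
import os

def rcmanifests(exp_manifest, dynrundir="/var/run/emulab",
                staticrundir="/etc/emulab/run"):
    """Return the manifests to apply, in priority order: the one
    delivered with the experiment, then the local ones embedded in
    the filesystem.  All that exist get applied."""
    candidates = [
        exp_manifest,
        os.path.join(dynrundir, "rcmanifest"),
        os.path.join(staticrundir, "rcmanifest"),
    ]
    return [p for p in candidates if p and os.path.exists(p)]
```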
      Add Linux mkextrafs support for GPT partitions. · 73e5be91
      David Johnson authored
This mostly involves handling GPT GUID types.  Oh, we *do* use sfdisk to
set the partition type for GPT disks.  We stopped doing it for MBR disks
because sfdisk was observed to whack the BSD disklabel when setting the
partition type.  But I assume we'll never have a BSD-partitioned disk in
a GPT table, given our current partitioning.
      (I also added a few optional (off by default) partprobes to deal with
      some funny behavior when testing (i.e., setting part types from 0 to X
      to 0, over and over).  The kernel is currently kind of funny.  It
      creates /dev entries for block devices that have part type 0 at boot; it
creates /dev entries for block devs that you haven't edited when you
simply run partprobe (or leaves them intact); but if you make a type
change from 0 to X and back to 0, partprobe /dev/foo does *not* create
      the device.  I'm sure this behavior has to do with the limits the kernel
      will accept for making changes to a disk with mounted partitions; but it
      is nonetheless strange.  Anyway, this is optional because on some
      kernels, at least, a forced partprobe will result in any 0-typed
      partitions not showing up in /dev, which is not very helpful.  So it's
      there if it's helpful during testing, I guess.)
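Setting a GPT partition's type GUID with sfdisk, as described above, looks roughly like the sketch below. The device, partition number, GUID constant, and function name are illustrative; `sfdisk --part-type` is the util-linux subcommand for this:

```python
import subprocess

# GUID for a Linux filesystem partition (illustrative default).
LINUX_FS_GUID = "0FC63DAF-8483-4772-8E79-3D69D8477DE4"

def set_gpt_part_type(device, partno, guid=LINUX_FS_GUID, run=subprocess.run):
    """Invoke e.g.: sfdisk --part-type /dev/sdb 4 <GUID>.
    The `run` hook is injectable so the command can be tested
    without touching a real disk."""
    return run(["sfdisk", "--part-type", device, str(partno), guid],
               check=True)
```

An optional follow-up `partprobe <device>` would match the commit's (off by default) rereads, with the kernel caveats noted above.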
  3. 10 Oct, 2016 4 commits
  4. 07 Oct, 2016 3 commits
  5. 06 Oct, 2016 4 commits
  6. 05 Oct, 2016 4 commits
  7. 04 Oct, 2016 4 commits
  8. 03 Oct, 2016 7 commits
  9. 29 Sep, 2016 5 commits
      Performance improvements to the vnode startup path. · 87eed168
      Mike Hibler authored
The biggest improvement happened on day one when I took out the 20 second sleep
      between vnode starts in bootvnodes. That appears to have been an artifact of
      an older time and an older Xen. Or, someone smarter than me saw the potential
      of getting bogged down for, oh say three weeks, trying to micro-optimize the
      process and instead just went for the conservative fix!
      Following day one, the ensuing couple of weeks was a long strange trip to
      find the maximum number of simultaneous vnode creations that could be done
      without failure. In that time I tried a lot of things, generated a lot of
      graphs, produced and tweaked a lot of new constants, and in the end, wound
      up with the same two magic numbers (3 and 5) that were in the original code!
      To distinguish myself, I added a third magic number (1, the loneliest of
      them all).
      All I can say is that now, the choice of 3 or 5 (or 1), is based on more
      solid evidence than before. Previously it was 5 if you had a thin-provisioning
      LVM, 3 otherwise. Now it is based more directly on host resources, as
      described in a long comment in the code, the important part of which is:
       # if (dom0 physical RAM < 1GB) MAX = 1;
       # if (any swap activity) MAX = 1;
       #    This captures pc3000s/other old machines and overloaded (RAM) machines.
       # if (# physical CPUs <= 2) MAX = 3;
       # if (# physical spindles == 1) MAX = 3;
       # if (dom0 physical RAM <= 2GB) MAX = 3;
       #    This captures d710s, Apt r320, and Cloudlab m510s. We may need to
       #    reconsider the latter since its single drive is an NVMe device.
       #    But first we have to get Xen working with them (UEFI issues)...
       # else MAX = 5;
      In my defense, I did fix some bugs and stuff too (and did I mention
      the cool graphs?) See comments in the code and gitlab emulab/emulab-devel
      issue #148.
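The heuristic quoted above transcribes directly into a function; the thresholds are taken verbatim from the comment, while the function and argument names are mine:

```python
def max_simultaneous_vnodes(ram_mb, swapping, ncpus, nspindles):
    """Pick the max number of simultaneous vnode creations for a
    Xen dom0, per the heuristic in the commit message."""
    if ram_mb < 1024 or swapping:
        return 1   # pc3000s/other old machines, or RAM-overloaded hosts
    if ncpus <= 2 or nspindles == 1 or ram_mb <= 2048:
        return 3   # d710s, Apt r320, Cloudlab m510s
    return 5

print(max_simultaneous_vnodes(ram_mb=4096, swapping=False, ncpus=8, nspindles=2))
# -> 5
```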
      Fix the wording of a warning message. · ee854767
      Mike Hibler authored
      Machinery for supporting multiple RO/RW clones of a dataset in one experiment. · 72fb6763
      Mike Hibler authored
      Mostly ptopgen/libvtop changes to get things through assign.
Added a new virt_blockstore_attribute, 'prereserve', that can be applied
to a RW clone to pre-allocate the full amount of space allocated to the
volume being cloned.  This is instead of the default "sparse" clone,
which could fail at an inopportune time if the containing pool runs out
of space.
      But it doesn't work yet.
Everything is there in the front end to do the necessary capacity checks
and allocations of space, but then I discovered that ZFS doesn't readily
support a non-sparse clone!  You can do this, I think, by tweaking the
"refreservation" attribute of the volume after it is created, but that
would have to be done behind the back of FreeNAS, and I would have to do
some more testing before I am willing to go there.
      So for now, all clones are sparse and no one is charged for their usage.
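The "tweak the reservation after cloning" idea floated above would amount to the two zfs(8) commands below. This is only a sketch of the untested approach the commit describes (and, as noted, it would go behind FreeNAS's back); the dataset names and helper are hypothetical:

```python
def nonsparse_clone_cmds(snapshot, clone, volsize):
    """Return the zfs commands that would make `clone` fully
    pre-reserved (non-sparse): clone the snapshot, then set
    refreservation to the full volume size."""
    return [
        ["zfs", "clone", snapshot, clone],
        ["zfs", "set", f"refreservation={volsize}", clone],
    ]
```

For example, `nonsparse_clone_cmds("pool/vol@snap", "pool/rwclone", "10G")` yields the clone command followed by the refreservation tweak.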
      Make sure persistent blockstores are not "sparse" volumes. · bf41a4ce
      Mike Hibler authored
      Also, use new API call to create zvols.
  10. 28 Sep, 2016 3 commits