29 Sep, 2016 4 commits
    •
      Performance improvements to the vnode startup path. · 87eed168
      Mike Hibler authored
      The biggest improvement happened on day one when I took out the 20-second sleep
      between vnode starts in bootvnodes. That appears to have been an artifact of
      an older time and an older Xen. Or, someone smarter than me saw the potential
      of getting bogged down for, oh say three weeks, trying to micro-optimize the
      process and instead just went for the conservative fix!
      
      Following day one, the ensuing couple of weeks was a long strange trip to
      find the maximum number of simultaneous vnode creations that could be done
      without failure. In that time I tried a lot of things, generated a lot of
      graphs, produced and tweaked a lot of new constants, and in the end, wound
      up with the same two magic numbers (3 and 5) that were in the original code!
      To distinguish myself, I added a third magic number (1, the loneliest of
      them all).
      
      All I can say is that now, the choice of 3 or 5 (or 1) is based on more
      solid evidence than before. Previously it was 5 if you had a thin-provisioned
      LVM, 3 otherwise. Now it is based more directly on host resources, as
      described in a long comment in the code, the important part of which is:
      
       #
       # if (dom0 physical RAM < 1GB) MAX = 1;
       # if (any swap activity) MAX = 1;
       #
       #    This captures pc3000s/other old machines and overloaded (RAM) machines.
       #
       # if (# physical CPUs <= 2) MAX = 3;
       # if (# physical spindles == 1) MAX = 3;
       # if (dom0 physical RAM <= 2GB) MAX = 3;
       #
       #    This captures d710s, Apt r320, and Cloudlab m510s. We may need to
       #    reconsider the latter since its single drive is an NVMe device.
       #    But first we have to get Xen working with them (UEFI issues)...
       #
       # else MAX = 5;
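
      For illustration, here is that heuristic as a minimal, runnable sketch. The
      function and parameter names are mine, not the code's; the real logic lives
      in the vnode startup path:

       def max_simultaneous_vnode_creations(dom0_ram_gb, swap_active,
                                            ncpus, nspindles):
           # Old or memory-overloaded machines (e.g., pc3000s): fully serialize.
           if dom0_ram_gb < 1 or swap_active:
               return 1
           # Modest hosts (e.g., d710s, Apt r320, CloudLab m510s).
           if ncpus <= 2 or nspindles == 1 or dom0_ram_gb <= 2:
               return 3
           # Everything else.
           return 5

       # A host with 2GB of dom0 RAM and no swap activity lands in the middle tier.
       print(max_simultaneous_vnode_creations(2, False, 8, 2))  # -> 3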
      
      In my defense, I did fix some bugs and stuff too (and did I mention
      the cool graphs?) See comments in the code and gitlab emulab/emulab-devel
      issue #148.
    •
      Fix the wording of a warning message. · ee854767
      Mike Hibler authored
    •
      Machinery for supporting multiple RO/RW clones of a dataset in one experiment. · 72fb6763
      Mike Hibler authored
      Mostly ptopgen/libvtop changes to get things through assign.
      
      Added a new virt_blockstore_attribute, 'prereserve', that can be applied to
      a RW clone to pre-allocate the full size of the volume being cloned. This is
      instead of the default "sparse" clone, which could run out of space at an
      inopportune time if the containing pool fills up. But it doesn't work yet.
      
      Everything is there in the front end to do the necessary capacity checks and
      space allocations, but then I discovered that ZFS doesn't readily support
      a non-sparse clone! You can do this, I think, by tweaking the "refreservation"
      attribute of the volume after it is created, but that would have to be done
      behind the back of FreeNAS and I would have to do some more testing before I
      am willing to go there.
      
      So for now, all clones are sparse and no one is charged for their usage.
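
      For reference, a minimal sketch of the refreservation idea, assuming direct
      zfs(8) access on the storage host (which is precisely the "behind the back
      of FreeNAS" part; the dataset names here are made up):

       import subprocess

       def nonsparse_clone(snapshot, clone, volsize):
           # The clone is born sparse; ZFS has no direct "non-sparse clone" flag.
           subprocess.run(["zfs", "clone", snapshot, clone], check=True)
           # Reserving the clone's full size afterwards makes the pool account
           # for it up front, so it cannot fail later for lack of space.
           subprocess.run(["zfs", "set", "refreservation=" + volsize, clone],
                          check=True)

       nonsparse_clone("pool/dataset@snap", "pool/rwclone", "10G")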
    •
      Make sure persistent blockstores are not "sparse" volumes. · bf41a4ce
      Mike Hibler authored
      Also, use new API call to create zvols.
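
      As a point of reference, the sparse/non-sparse distinction comes down to
      this at the zfs(8) level (a hypothetical sketch; the commit itself goes
      through the FreeNAS API, whose call I won't guess at):

       import subprocess

       def create_zvol(name, size, sparse=False):
           cmd = ["zfs", "create"]
           if sparse:
               # -s omits the refreservation, i.e., a thin/"sparse" volume.
               cmd.append("-s")
           cmd += ["-V", size, name]
           subprocess.run(cmd, check=True)

       # Persistent blockstores get the default: a fully reserved zvol.
       create_zvol("pool/pbs0", "100G")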