All new accounts created on GitLab now require administrator approval. If you invite any collaborators, please let Flux staff know so they can approve the accounts.

  1. 12 Sep, 2019 7 commits
  2. 10 Sep, 2019 3 commits
  3. 09 Sep, 2019 1 commit
  4. 08 Sep, 2019 1 commit
  5. 05 Sep, 2019 2 commits
  6. 04 Sep, 2019 1 commit
    • Make vnodesetup and mkvnode consistent in their use of signals. · 62865d37
      Mike Hibler authored
      The meanings of USR1 and HUP were reversed between the two.
      In particular, HUP meant "destroy vnode" to mkvnode instead of USR1.
      This had a particularly bad side-effect since HUPs tend to get flung
      around willy-nilly when the physical machine goes down.
      This caused a vnode to get destroyed when we rebooted a shared vnode
      host. See emulab-devel issue 521 for details.
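A minimal sketch (in Python, for illustration; the real vnodesetup and mkvnode are not Python) of the kind of shared signal-to-action table that keeps the two programs consistent. The "reboot" meaning for HUP is an assumption here; the commit only states that USR1, not HUP, should mean "destroy vnode":

```python
import signal

# Hypothetical signal-to-action map shared by both programs. The point of
# the fix is that vnodesetup and mkvnode agree on it: a stray HUP (common
# when the physical host goes down) must not mean "destroy vnode".
ACTIONS = {
    signal.SIGHUP: "reboot",    # benign; HUPs get flung around willy-nilly
    signal.SIGUSR1: "destroy",  # explicit teardown only
}

dispatched = []

def handle(signum, frame=None):
    # Look up the agreed-upon meaning rather than hard-coding it per program.
    dispatched.append(ACTIONS.get(signal.Signals(signum), "ignore"))

# Both programs install the same handler table.
signal.signal(signal.SIGHUP, handle)
signal.signal(signal.SIGUSR1, handle)
```

With one shared table, reversing the meanings in one program and not the other becomes impossible by construction.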
  7. 03 Sep, 2019 1 commit
  8. 29 Aug, 2019 3 commits
  9. 28 Aug, 2019 1 commit
  10. 27 Aug, 2019 4 commits
  11. 26 Aug, 2019 3 commits
  12. 22 Aug, 2019 4 commits
  13. 21 Aug, 2019 3 commits
    • Scaling work on blockstore setup and teardown along with a couple of bug fixes. · eb0ff3b8
      Mike Hibler authored
      Previously we were failing on an experiment with 40 blockstores. Now we can
      handle at least 75. We still fail at 100 due to client-side timeouts, but that
      would be addressed with a longer timeout (which involves making new images).
      Attacked this on a number of fronts.
      On the infrastructure side:
      - Batch destruction calls. We were making one ssh call per blockstore
        to tear these down, leaving a lot of dead time. Now we batch them up
        in groups of 10 per call just like creation. They still serialize on
        the One True Lock, but switching is much faster.
      - Don't create new snapshots after destroying a clone. This was a bug.
        If the use of a persistent blockstore was read-write, we forced a new
        snapshot even if it was a RW clone. This resulted in two calls from
        boss to the storage server and two API calls: one to destroy the old
        snapshot and one to create the new one.
      - Increase the timeout on first attach to iSCSI. One True Lock in
        action again. In the case where the storage server has a lot of
        blockstores to create, they would serialize with each blockstore
        taking 8-10 seconds to create. Meanwhile the node attaching to the
        blockstore would timeout after two minutes in the "login" call.
        Normally we would not hit this as the server would probably only
        be setting up 1-3 blockstores and the nodes would most likely first
        need to load an image and do a bunch of other boot-time operations
        before attempting the login. There is now a loop around the iSCSI
        login operation that will try up to five times (10 minutes total)
        before giving up. This is completely arbitrary, but making it much
        longer will lead to triggering the node reboot timeout anyway.
      - Cache the results of the libfreenas freenasVolumeList call.
        The call can take a second or more as it can make up to three API
        calls plus a ZFS CLI call. On blockstore VM creation, we were calling
        this twice through different paths. Now the second call will use
        the cached results. The cache is invalidated whenever we drop the
        global lock or make a POST-style API call (that might change the
        returned values).
      - Get rid of gratuitous synchronization. There was a stub vnode function
        on the creation path that was grabbing the lock, doing nothing, and then
        freeing the lock. This caused all the vnodes to pile up and then be
        released to pile up again.
      - Properly identify all the clones of a snapshot so that they all get
        torn down correctly. The ZFS get command we were using to read the
        "clones" property of a snapshot will return at most 1024 bytes of
        property value. When the property is a comma-separated list of ZFS
        names, you hit that limit with about 50-60 clones (given our naming
        conventions). Now we have to do a get of every volume and look at the
        "origin" property, which identifies any snapshot the volume is
        associated with.
      - Properly synchronize overlapping destruction/setup. We call snmpit to
        remove switch VLANs before we start tearing down nodes. This potentially
        allows the VLAN tags to become free for reuse by other blockstore
        experiments before we have torn down the old vnodes (and their VLAN
        devices) on the storage server. This was creating chaos on the server.
        Now we identify this situation and stall any new creations until the
        previous VLAN devices go away. Again, this is an arbitrary
        wait/timeout (10 minutes now) and can still fail. But this only comes
        into play if a new blockstore allocation comes immediately on the heels
        of a large deallocation.
      - Wait longer to get the One True Lock during teardown. Failure to get
        the lock at the beginning of the teardown process would result in all
        the iSCSI and ZFS state getting left behind, but all the vnode state
        being removed. Hence, a great deal of manual cleanup on the server
        was required. The solution? You guessed it, another arbitrary timeout,
        longer than before.
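The batched-destruction change above is plain chunking: one ssh round-trip per group of 10 blockstores instead of one per blockstore. A Python sketch with hypothetical names (the actual teardown command line is elided; the batch size of 10 is from the text):

```python
def batched(items, size=10):
    """Yield successive groups of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def teardown_all(blockstores, run):
    """Issue one (hypothetical) teardown invocation per group of 10
    blockstores, instead of one ssh call per blockstore. The groups still
    serialize on the One True Lock, but switching between them is fast."""
    for group in batched(blockstores):
        run(group)  # e.g. a single ssh call naming all 10 blockstores
```

For 25 blockstores this makes 3 calls (10, 10, 5) rather than 25.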
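The iSCSI login change amounts to a bounded retry loop. A sketch in Python with a hypothetical `op` callable standing in for the login (the five-attempt count and two-minute per-try timeout are from the text):

```python
def retry(op, attempts=5, per_try_timeout=120):
    """Run `op` up to `attempts` times, re-raising only the final failure.

    With a two-minute timeout per try, five tries bound the total wait at
    about 10 minutes; much longer would just trigger the node reboot
    timeout anyway.
    """
    for i in range(1, attempts + 1):
        try:
            return op(timeout=per_try_timeout)
        except TimeoutError:
            if i == attempts:
                raise  # out of attempts: give up for real
```

This keeps a node from failing its boot just because the storage server happened to be grinding through a long, serialized creation queue.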
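The freenasVolumeList caching is memoization with explicit invalidation whenever the global lock is dropped or a POST-style API call might change the result. A Python sketch with hypothetical names:

```python
class VolumeListCache:
    """Memoize one expensive volume-list call (up to three API calls plus
    a ZFS CLI call per fetch). The second lookup on the same creation path
    hits the cache instead of refetching."""

    def __init__(self, fetch):
        self._fetch = fetch   # e.g. the real freenasVolumeList
        self._cached = None

    def list(self):
        if self._cached is None:
            self._cached = self._fetch()
        return self._cached

    def invalidate(self):
        # Call on releasing the global lock or after any POST-style API
        # call, since either may change the volume list.
        self._cached = None
```

The invalidation hooks are the important part: correctness depends on clearing the cache at every point where the underlying state can change.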
    • Shorten the interval at which we check for vnodesetup termination. · ac31c487
      Mike Hibler authored
      When we kill a vnode, we invoke a new instance of vnodesetup which
      signals the real instance and then waits for it to die. Every 15 seconds
      it checks for death and resignals if it is still alive. Fifteen seconds
      is very coarse-grained and could lead to unnecessary delay for anything
      waiting above. Now we check every 5 seconds, while still only resignalling
      every 15 seconds.
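The decoupled check/resignal intervals above can be sketched as a single polling loop (Python for illustration; function names are hypothetical, and the injectable `sleep` is only there so the loop can be exercised without real delays):

```python
import time

def wait_for_death(is_alive, resignal, check_every=5, resignal_every=15,
                   sleep=time.sleep):
    """Poll for the real vnodesetup instance's death every `check_every`
    seconds, but only re-send the kill signal on `resignal_every`-second
    boundaries, so frequent polling does not mean frequent signalling."""
    elapsed = 0
    while is_alive():
        sleep(check_every)
        elapsed += check_every
        if elapsed % resignal_every == 0 and is_alive():
            resignal()
```

With the defaults this notices death within 5 seconds of it happening while keeping the old 15-second resignal cadence.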
  14. 20 Aug, 2019 1 commit
  15. 19 Aug, 2019 5 commits