1. 10 Mar, 2015 5 commits
  2. 05 Mar, 2015 1 commit
  3. 25 Feb, 2015 1 commit
  4. 04 Feb, 2015 3 commits
  5. 31 Jan, 2015 1 commit
  6. 28 Jan, 2015 1 commit
    • Implement "plan 1" for dataset sharing: "ephemeral RO snapshots". · 7aefdaa1
      Mike Hibler authored
      You can now simultaneously RW and RO map a dataset because all the RO
      mappings use copies (clones) of a snapshot. Only a single RW mapping
      is allowed, of course.
      
      When the RW mapping swaps out, it automatically creates a new snapshot.
      So there is currently no user control over when a version of the dataset
      is "published"; it just happens every time you swap out an experiment with
      a RW mapping.
      
      A new RW mapping does not affect current RO mappings, of course, as they
      continue to use whatever snapshot they were created with. New RO mappings
      will get the most recent snapshot, which we currently track in the DB via
      the per-lease attribute "last_snapshot".
      
      You can also now declare a lease to be "exclusive use" by setting the
      "exclusive_use" lease attribute (via modlease). This means that it follows
      the old semantics of only one mapping at a time, whether it be RO or RW.
      This is an alternative to the "simultaneous_ro_datasets" sitevar which
      enforces the old behavior globally. Primarily, I put this attribute in to
      prevent an unexpected failure in the snapshot/clone path from wreaking
      havoc over time. I don't know if there is any value in exposing this to
      the user.
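      Under the hood this is plain ZFS snapshot/clone semantics. A minimal
      Python sketch of the flow described above (the zfs(8) commands are real,
      but the helper layout and the db handle used to track the
      "last_snapshot" attribute are hypothetical, not the actual Emulab code):

      ```python
      import subprocess
      import time

      def zfs(*args):
          """Run a zfs(8) command, raising on failure."""
          subprocess.run(("zfs",) + args, check=True)

      def publish_snapshot(dataset, db):
          """On RW swapout: snapshot the dataset and record it as the
          lease's "last_snapshot" attribute (db is a hypothetical handle)."""
          snapname = "%s@%d" % (dataset, int(time.time()))
          zfs("snapshot", snapname)
          db.set_lease_attribute(dataset, "last_snapshot", snapname)

      def map_read_only(dataset, clone_name, db):
          """A new RO mapping clones the most recently published snapshot,
          so later RW activity cannot disturb it."""
          snapname = db.get_lease_attribute(dataset, "last_snapshot")
          zfs("clone", snapname, clone_name)
          return clone_name
      ```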
  7. 27 Jan, 2015 2 commits
  8. 26 Jan, 2015 3 commits
  9. 22 Jan, 2015 3 commits
  10. 18 Jan, 2015 1 commit
    • Change tiplines urlstamp to be an expiration time for the urlhash. · a40fb744
      Mike Hibler authored
      Previously it was the creation stamp for the hash. By making it the
      expiration time, we can do different times for different nodes.
      
      Note that there is no serious compatibility issue with re-purposing
      the DB field. It is almost always zero (since hashes are only valid
      for 5 minutes), and if it isn't zero when the new code is installed,
      the hash will just immediately become invalid. So what? Big deal!
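      The semantic change is small: instead of storing the creation time and
      adding a fixed lifetime at check time, the stored value is the deadline
      itself, so different nodes can be issued different lifetimes. A
      hypothetical sketch of the two validity checks, assuming the 5-minute
      default mentioned above:

      ```python
      import time

      HASH_LIFETIME = 5 * 60  # seconds; the 5-minute validity noted above

      def valid_old(urlstamp):
          """Old scheme: urlstamp was the creation time, so every
          hash got the same fixed lifetime."""
          return time.time() < urlstamp + HASH_LIFETIME

      def valid_new(urlstamp):
          """New scheme: urlstamp is the expiration time itself, so
          different nodes can carry different deadlines."""
          return time.time() < urlstamp

      def make_urlstamp(lifetime=HASH_LIFETIME):
          """Issue an expiration stamp for a fresh urlhash."""
          return int(time.time()) + lifetime
      ```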
  11. 12 Jan, 2015 3 commits
  12. 09 Jan, 2015 2 commits
  13. 08 Jan, 2015 2 commits
    • Backend support for simultaneous read-only dataset access. · 9b6e1a59
      Kirk Webb authored
      Any number of users/experiments can mount a given dataset (given that
      they have permission) in read-only mode.  Attempts to mount RW will
      fail if the dataset is currently in use. Attempts to mount RO while
      the dataset is in RW use are also prohibited.
      
      Under the hood, iSCSI lease exports (targets) are now managed per-lease
      instead of per-experiment.  The set of authorized initiators (based
      on network) is manipulated as consumers come and go.  When the last
      consumer goes, the export is torn down. Likewise, if there are no
      current consumers, a new consumer will cause an iSCSI export to be
      created for the lease.
      
      Also included in this commit is a small tweak to implicit lease permissions.
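      The per-lease export lifecycle amounts to reference-counting consumers
      and keeping the authorized-initiator set current. A rough Python sketch
      of that lifecycle (the class and the backend wrapper around the actual
      iSCSI tooling are invented for illustration):

      ```python
      class LeaseExport:
          """One iSCSI target (export) for a lease; it exists only
          while there is at least one consumer."""

          def __init__(self, lease_id, backend):
              self.lease_id = lease_id
              self.backend = backend      # hypothetical iSCSI tooling wrapper
              self.initiators = set()     # authorized initiator networks

          def add_consumer(self, initiator_net):
              # First consumer: create the export for this lease.
              if not self.initiators:
                  self.backend.create_target(self.lease_id)
              self.initiators.add(initiator_net)
              self.backend.set_initiators(self.lease_id, self.initiators)

          def remove_consumer(self, initiator_net):
              self.initiators.discard(initiator_net)
              if self.initiators:
                  self.backend.set_initiators(self.lease_id, self.initiators)
              else:
                  # Last consumer gone: tear the export down.
                  self.backend.destroy_target(self.lease_id)
      ```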
    • Leigh B Stoller · 55388928
  14. 03 Jan, 2015 1 commit
  15. 29 Dec, 2014 1 commit
  16. 14 Dec, 2014 2 commits
  17. 06 Dec, 2014 1 commit
  18. 05 Dec, 2014 1 commit
    • Support dynamically created NFS-root filesystems for admin MFS. · f36bcfab
      Mike Hibler authored
      Significant hackery involved. Similar to exports_setup, there is a boss-side
      script and an ops-side script to handle creation and destruction of the ZFS
      clones that are used for the NFS filesystem. The rest was all about when to
      invoke said scripts.
      
      Creation is easy: we just do a clone whenever TBAdminMfsSelect is called
      to "turn on" node admin mode. Destruction is not so simple. If we destroyed
      the clone on the corresponding TBAdminMfsSelect "off" call, then we could
      yank the filesystem out from under the node if it was still running in the
      MFS (e.g., "node_admin -n off node"). While that would probably be okay in
      most uses, where at worst we would have to apod or power cycle the node, we
      try to do better. TBAdminMfsSelect "off" instead just renames the clone
      (to "<nodeid>-DEAD") so that it stays available if the node is running on
      it at the time, but ensures that it will not get accidentally used by any
      future boot. We check for, and d...
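      The rename-instead-of-destroy trick can be sketched as follows; the pool
      path and helper names are assumptions, not the actual boss/ops scripts:

      ```python
      import subprocess

      POOL = "z/admin-mfs"   # hypothetical ZFS path for per-node NFS roots

      def zfs(*args):
          """Run a zfs(8) command, raising on failure."""
          subprocess.run(("zfs",) + args, check=True)

      def mfs_select_on(node_id, golden_snapshot):
          """Turn on admin mode: clone a fresh NFS-root filesystem
          for the node from a golden snapshot."""
          zfs("clone", golden_snapshot, "%s/%s" % (POOL, node_id))

      def mfs_select_off(node_id):
          """Turn off admin mode. Renaming instead of destroying means a
          node still running in the MFS keeps its filesystem, while the
          -DEAD suffix ensures no future boot will pick it up."""
          zfs("rename",
              "%s/%s" % (POOL, node_id),
              "%s/%s-DEAD" % (POOL, node_id))
      ```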
  19. 03 Dec, 2014 1 commit
  20. 02 Dec, 2014 1 commit
  21. 01 Dec, 2014 1 commit
  22. 25 Nov, 2014 3 commits