1. 04 Mar, 2015 1 commit
  2. 23 Feb, 2015 1 commit
  3. 19 Feb, 2015 1 commit
  4. 17 Feb, 2015 1 commit
      Major overhaul to support thin snapshot volumes and also fixup locking. · a9e75f33
      Mike Hibler authored
      A "thin volume" is one in which storage allocation is done on demand; i.e.,
      space is not pre-allocated, hence the "thin" part. If thin snapshots and
      the associated base volume are all part of a "thin pool", then all snapshots
      and the base share blocks from that pool. If there are N snapshots of the
      base, and none have written a particular block, then there is only one copy
      of that block in the pool that everyone shares.
      
      Anyway, we now create a global thin pool in which the thin snapshots can be
      created. We currently allocate up to 75% of the available space in the VG
      to the pool (note: space allocated to the thin pool IS statically allocated).
      The other 25% is for Things That Will Not Be Shared and as fallback in case
      something on the thin volume path fails. That is, we can disable thin
      volume creation and go back to the standard path.
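The pool/volume split described above can be sketched with stock LVM commands. This is a hypothetical illustration of the scheme, not the actual code path; the VG and pool names are assumptions.

```shell
# Assumed VG name for illustration only.
VG=xen-vg

# Statically allocate 75% of the VG's free extents to a thin pool;
# the remaining 25% stays outside the pool for non-shared LVs and
# as fallback if the thin path has to be disabled.
lvcreate --extents 75%FREE --thinpool ${VG}/thinpool

# Thin volumes carved from the pool allocate blocks only on demand,
# so the --virtualsize can exceed what is physically committed.
lvcreate --thin --virtualsize 20G --name example-thin-vol ${VG}/thinpool
```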
      
      Images are still downloaded and saved in compressed form in individual
      LVs. These LVs are not allocated from the pool since they are TTWNBS.
      
      When the first vnode comes along that needs an image, we imageunzip the
      compressed version to create a "golden disk" LV in the pool. That first
      node and all subsequent nodes get thin snapshots of that volume.
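The golden-disk flow might look roughly like the following. The LV names and the imageunzip invocation are illustrative assumptions, not the actual names used in libvnode_xen.pm.

```shell
VG=xen-vg

# One time, on first use of an image: decompress the saved image LV
# into a thin "golden disk" LV inside the pool.
lvcreate --thin --virtualsize 20G --name example-image-golden ${VG}/thinpool
imageunzip /dev/${VG}/example-image /dev/${VG}/example-image-golden

# Per vnode: each vnode gets a writable thin snapshot of the golden
# disk; unwritten blocks are shared with the origin and all siblings.
lvcreate --snapshot --name vnode1-disk ${VG}/example-image-golden
```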
      
      When the last vnode that uses a golden disk goes away we...well,
      do nothing. Unless $REAP_GDS (linux/xen/libvnode_xen.pm) is set non-zero,
      in which case we reap the golden disk. We always leave the compressed
      image LV around. Leigh says he is going to write a daemon to GC all these
      things when we start to run short of VG space...
      
This speedup in creating vnodes that share an image turned up some
more race conditions, particularly around iptables. I closed a couple more
holes (in particular, ensuring that we lock iptables when setting up
enet interfaces as we do for the cnet interface) and added some optional
lock debug logging (turned off right now).
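The kind of iptables serialization described above can be sketched with flock; the lock file path and the example rule are assumptions, not the actual code.

```shell
# Serialize concurrent iptables updates across vnode setups: take an
# exclusive lock on a shared lock file before touching the tables.
(
    flock -x 200   # blocks until we hold the exclusive lock
    iptables -A FORWARD -i vif1.0 -j ACCEPT   # illustrative rule
) 200>/var/run/iptables.lock
```

Without such a lock, two vnodes booting at once can interleave their iptables invocations and lose or duplicate rules.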
      
      Timestamped those messages and a variety of other important messages
      so that we could merge (important parts of) the assorted logfiles and
      get a sequential picture of what happened:
      
          grep TIMESTAMP *.log | sort +2
      
      (Think of it as Weir lite!)
  5. 06 Aug, 2014 1 commit
  6. 28 Jul, 2014 1 commit
  7. 09 May, 2014 1 commit
  8. 31 Mar, 2014 1 commit
  9. 21 Jan, 2014 1 commit
  10. 31 Dec, 2013 1 commit
  11. 18 Dec, 2013 1 commit
  12. 16 Dec, 2013 2 commits
  13. 18 Nov, 2013 1 commit
  14. 06 Nov, 2013 1 commit
  15. 19 Sep, 2013 1 commit
  16. 22 Aug, 2013 1 commit
  17. 24 Jul, 2013 1 commit
  18. 28 Jun, 2013 1 commit
      Changes for *remote* XEN shared nodes, as on the I2 pcpg nodes. · 4c091c96
      Leigh B Stoller authored
Since the pcpg-i2 nodes are so very flaky, let's try something that
does not require them to be rebooted or imaged!
      
      The key change is that on these remote nodes, we do not bridge the
      physical control interface to the VM control interfaces. There is no
      point since there are no routable IPs we can use, nor is there a
      192.168 network that would be useful.
      
However, we still want to give the VMs their 192.168 addresses, we
still want multiple VMs on the same host to talk to each other, and we
still want the VMs to be able to access the outside world with NAT.
So we still create the xenbr0 bridge and give it the router address
(192.168.0.1). Any traffic heading out will be NATed as normal, and
you can ssh into a VM using the physical host's IP and the per-VM
sshd port number.
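A minimal sketch of that setup, assuming eth0 is the physical control interface and 30522 is an example per-VM sshd port (both illustrative, not the actual values):

```shell
# Bridge for the VMs' control interfaces only -- the physical control
# interface is deliberately NOT attached on these remote nodes.
brctl addbr xenbr0
ip addr add 192.168.0.1/16 dev xenbr0   # host acts as the VMs' router
ip link set xenbr0 up

# NAT any VM traffic heading out via the physical interface:
iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth0 -j MASQUERADE

# Forward a per-VM port on the physical host IP to that VM's sshd:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 30522 \
    -j DNAT --to-destination 192.168.0.2:22
```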
  19. 31 May, 2013 1 commit
  20. 14 May, 2013 1 commit
  21. 30 Jan, 2013 1 commit
      Refactor generic vnode setup code a bit for OS independence · f7c51ea6
      Kirk Webb authored
      In order to hook in via the "generic vnode" path for setting up
      blockstores under FreeNAS, I've done a bit of shuffling in order to
      make things more OS-independent and reusable.
      
      * mkvnode.pl
      
Moved to clientside/tmcc/common.  OS-dependent bits (really only some
iptables stuff) abstracted and moved to tmcc/linux/libvnode.pm.
      
      * libvnode.pm
      
      Moved generic vnode stuff to a new module.  Moved miscellaneous
      utility functions to a new module.  Left OS-specific stuff.  Not
      really sure if what is left should be merged into libsetup/liblocsetup
      or left here - deferring this decision for now.
      
      * libgenvnode.pm
      
      New module currently containing generic vnode stuff.  Currently, the
      VNODE_* predicates are here.
      
      * libutil.pm
      
      New module containing miscellaneous utility functions (fatal,
      mysystem, mysystem2, setState, etc.)
      
      Files referencing libvnode.pm have been updated, as have the relevant
      Makefiles.
  22. 28 Nov, 2012 1 commit
  23. 28 Sep, 2012 1 commit
  24. 25 Sep, 2012 1 commit