1. 04 Mar, 2015 2 commits
  2. 03 Mar, 2015 1 commit
    • Numerous fixes to vnode code based on testing. · ee834259
      Mike Hibler authored
      Create a modest partition 4 (1G) since we need some space for local FSes.
      Rename many of the LVM routines to follow a common naming scheme.
      Make extra effort to remove partitions on an LV (see the sketch below);
          something about a FreeBSD VM makes kpartx forget one of its partitions.
      Handle deltas on whole-disk images; some code was missing for this case.
      Make sure the multi-image case works; the golden image was being named
          incorrectly.
      Make sure the extra FS case works; for the golden image case we were using
          the golden image and ignoring what was specified for the extra FS.
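
      A minimal sketch of that extra cleanup effort (VG and volume names here
      are made up, and this is not the actual library routine): ask kpartx to
      tear down the LV's partition mappings, then sweep up any mapping it
      left behind.

          sub remove_lv_partitions {
              my ($vg, $lv) = @_;          # e.g. ("xen-vg", "pcvm1-1"), hypothetical
              my $lvdev = "/dev/$vg/$lv";

              # First pass: let kpartx remove the partition mappings itself.
              system("kpartx -dv $lvdev");

              # kpartx sometimes forgets one (seen with a FreeBSD VM), so look
              # for leftovers in /dev/mapper.  Note that device-mapper doubles
              # dashes in names ("xen-vg" becomes "xen--vg").
              (my $dmvg = $vg) =~ s/-/--/g;
              (my $dmlv = $lv) =~ s/-/--/g;
              foreach my $map (glob("/dev/mapper/$dmvg-$dmlv*")) {
                  my ($name) = $map =~ m,([^/]+)$,;
                  next if ($name eq "$dmvg-$dmlv");   # skip the LV itself
                  system("dmsetup remove $name");
              }
          }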
  3. 24 Feb, 2015 1 commit
  4. 23 Feb, 2015 1 commit
  5. 20 Feb, 2015 1 commit
  6. 19 Feb, 2015 3 commits
  7. 17 Feb, 2015 1 commit
    • Major overhaul to support thin snapshot volumes and also fixup locking. · a9e75f33
      Mike Hibler authored
      A "thin volume" is one in which storage allocation is done on demand; i.e.,
      space is not pre-allocated, hence the "thin" part. If thin snapshots and
      the associated base volume are all part of a "thin pool", then all snapshots
      and the base share blocks from that pool. If there are N snapshots of the
      base, and none have written a particular block, then there is only one copy
      of that block in the pool that everyone shares.
      
      Anyway, we now create a global thin pool in which the thin snapshots can be
      created. We currently allocate up to 75% of the available space in the VG
      to the pool (note: space allocated to the thin pool IS statically allocated).
      The other 25% is for Things That Will Not Be Shared and as fallback in case
      something on the thin volume path fails. That is, we can disable thin
      volume creation and go back to the standard path.
      
      Images are still downloaded and saved in compressed form in individual
      LVs. These LVs are not allocated from the pool since they are TTWNBS.
      
      When the first vnode comes along that needs an image, we imageunzip the
      compressed version to create a "golden disk" LV in the pool. That first
      node and all subsequent nodes get thin snapshots of that volume.
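
      In LVM terms the flow looks roughly like the sketch below (VG, pool,
      and volume names are made up, and this is not the actual library code):

          my $VG = "xen-vg";     # hypothetical VG name

          # One-time setup: put 75% of the VG into a thin pool; the other 25%
          # stays outside the pool for non-shared volumes and as a fallback.
          system("lvcreate -l 75%VG -T $VG/thinpool");

          # First vnode that needs the image: create a thin "golden disk"
          # volume in the pool and imageunzip the compressed image LV into it.
          system("lvcreate -V 20g -T $VG/thinpool -n golden-image");
          system("imageunzip /dev/$VG/image-compressed /dev/$VG/golden-image");

          # Every vnode (including the first) gets a thin snapshot of the
          # golden disk; unwritten blocks are shared by all snapshots in the
          # pool.  (Recent LVM may also want -kn/-K to activate the snapshot.)
          system("lvcreate -s -n pcvm1-1.disk $VG/golden-image");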
      
      When the last vnode that uses a golden disk goes away we...well,
      do nothing. Unless $REAP_GDS (linux/xen/libvnode_xen.pm) is set non-zero,
      in which case we reap the golden disk. We always leave the compressed
      image LV around. Leigh says he is going to write a daemon to GC all these
      things when we start to run short of VG space...
      
      This speedup for creating vnodes that share an image turned up some
      more race conditions, particularly around iptables. I closed a couple more
      holes (in particular, ensuring that we lock iptables when setting up
      enet interfaces as we do for the cnet interface) and added some optional
      lock debug logging (turned off right now).
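
      The locking itself is nothing fancy; a generic sketch (the lock file
      name is made up, and the real code uses its own locking helpers):

          use Fcntl qw(:flock);

          sub run_iptables_locked {
              my (@args) = @_;
              # Serialize all iptables changes across concurrent vnode setups.
              open(my $lock, ">", "/var/run/iptables.lock") or die "lock: $!";
              flock($lock, LOCK_EX);
              my $rv = system("iptables @args");
              flock($lock, LOCK_UN);
              close($lock);
              return $rv;
          }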
      
      Timestamped those messages and a variety of other important messages
      so that we could merge (important parts of) the assorted logfiles and
      get a sequential picture of what happened:
      
          grep TIMESTAMP *.log | sort +2
      
      (Think of it as Weir lite!)
  8. 01 Feb, 2015 1 commit
  9. 30 Jan, 2015 1 commit
    • Preliminary "golden image" support using thin volumes in LVM. · 660b8e45
      Mike Hibler authored
      Disabled for now. This is a checkpoint. This version still downloads
      the compressed image into a volume and imageunzips into another volume.
      The difference is that only one client does the imageunzip and then
      everyone makes a snapshot of that.
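
      A rough sketch of that scheme using ordinary (non-thin) LVM snapshots,
      assuming the caller already holds a per-image lock so that only one
      client does the imageunzip (names are illustrative):

          my $VG = "xen-vg";
          if (system("lvs $VG/golden-image >/dev/null 2>&1") != 0) {
              # First client: unzip the downloaded compressed image into a
              # newly created golden volume.
              system("lvcreate -L 20g -n golden-image $VG");
              system("imageunzip /dev/$VG/image-compressed /dev/$VG/golden-image");
          }
          # Every client then works from a snapshot of the golden volume.
          system("lvcreate -s -L 4g -n pcvm1-1.disk $VG/golden-image");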
      
      On to getting rid of the initial download of the compressed image...
  10. 02 Dec, 2014 1 commit
  11. 07 Nov, 2014 1 commit
    • The latest in logic to have findSpareDisks not use the system disk. · 2eab9b24
      Mike Hibler authored
      If an available partition device (e.g., the 4th partition on the system disk)
      represents less than 5% of the spare space we have found, ignore it.
      
      This will allow us to continue to use the 4th partition on the system
      disk of the d710s (450GB or so) and the second disk (250GB), but not use
      the 2nd partition (3GB), which would make us thrash about on the system
      disk even more than usual.
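
      Illustrating the rule with the d710 numbers (a standalone sketch, not
      the actual findSpareDisks code; device names and sizes are examples):

          # Spare space found, in MiB.
          my %spare = ("sda4" => 450 * 1024,   # 4th partition on system disk
                       "sdb"  => 250 * 1024,   # whole second disk
                       "sda2" =>   3 * 1024);  # little 2nd partition
          my $total = 0;
          $total += $_ foreach (values %spare);

          foreach my $dev (keys %spare) {
              next unless ($dev =~ /\d$/);     # only partition devices
              delete $spare{$dev} if ($spare{$dev} < 0.05 * $total);
          }
          # sda2 is ~0.4% of the ~703GB total and is dropped; sda4 (~64%)
          # and the whole disk sdb are kept.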
      
      Mostly this is for the new HP server boxes, so it doesn't pick up the 10GB
      left over on the (virtual) system disk when we have 21TB available on the
      second (virtual) disk.
      
      Another hack til blockstores rule the world...
  12. 02 Oct, 2014 1 commit
  13. 14 Aug, 2014 1 commit
  14. 31 Jul, 2014 1 commit
  15. 25 Jul, 2014 1 commit
  16. 20 Jun, 2014 1 commit
  17. 26 May, 2014 1 commit
  18. 16 May, 2014 1 commit
    • Add a variable to control what space is included in the xen_vg VG. · 2fbfc5dd
      Mike Hibler authored
      Setting LVM_FULLDISKONLY effectively tells it not to use extra
      space on the system disk when constructing the VG used for vnode disk
      creation. Using the system disk can affect performance for everybody,
      especially if the VG includes multiple little partitions on it. However,
      this option is turned off by default since, on our d710 nodes, most of
      the extra space is on the system disk. This setting should be plumbed
      through to the user somehow so they can choose.
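
      Conceptually the knob just filters system-disk partitions out of the
      device list before the VG is built; a hand-wavy sketch (the real
      variable lives in the vnode setup code, and the device test here is
      only illustrative):

          my $LVM_FULLDISKONLY = 0;     # off by default

          # Spare space found by findSpareDisks (example devices).
          my @devs = ("sda4", "sdb");
          if ($LVM_FULLDISKONLY) {
              # Keep only whole unused disks; drop system-disk partitions.
              @devs = grep { $_ !~ /\d$/ } @devs;
          }
          system("pvcreate /dev/$_") foreach (@devs);
          system("vgcreate xen_vg " . join(" ", map { "/dev/$_" } @devs));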
  19. 09 May, 2014 4 commits
  20. 08 May, 2014 1 commit
    • Support for MBR v3 images in Xen VMs. · ef69e78a
      Mike Hibler authored
      If the Xen dom0 base image is properly (1M) aligned and the VM images
      are properly aligned as well, we should avoid or at least minimize any
      anomalous effects due to mismatches along the path involving the guest OS,
      guest disk layout, LVM LVs, LVM PVs, and the underlying physical disks.
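
      Here "properly (1M) aligned" means partition starts fall on 1 MiB
      boundaries; with 512-byte sectors that is any multiple of 2048
      sectors, e.g.:

          my $SECTOR_SIZE = 512;
          my $ALIGN       = 1024 * 1024;   # 1 MiB
          my $start       = 2048;          # example start sector of a partition
          my $aligned     = (($start * $SECTOR_SIZE) % $ALIGN) == 0;
          print "partition start $start is ", ($aligned ? "" : "NOT "),
                "1MiB aligned\n";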
      
      Also, random change to add a min size parameter to the findSpareDisks
      function. I was going to use this to avoid sucking up the little 3GB
      unused partition in MBR 3, but then decided against it.
  21. 01 May, 2014 2 commits
  22. 30 Apr, 2014 1 commit
  23. 23 Apr, 2014 1 commit
  24. 22 Apr, 2014 1 commit
  25. 14 Apr, 2014 1 commit
  26. 08 Apr, 2014 1 commit
  27. 03 Apr, 2014 1 commit
  28. 31 Mar, 2014 1 commit
  29. 14 Mar, 2014 1 commit
  30. 07 Mar, 2014 3 commits
  31. 27 Feb, 2014 1 commit