1. 23 Jul, 2013 2 commits
  2. 11 Jul, 2013 2 commits
  3. 28 Jun, 2013 1 commit
    • Changes for *remote* XEN shared nodes, as on the I2 pcpg nodes. · 4c091c96
      Leigh B Stoller authored
      Since the pcpg-i2 nodes are so very flaky, let's try something that
      does not require them to be rebooted or imaged!

      The key change is that on these remote nodes, we do not bridge the
      physical control interface to the VM control interfaces. There is no
      point, since there are no routable IPs we can use, nor is there a
      192.168 network that would be useful.

      However, we still want to give the VMs their 192.168 addresses, we
      still want multiple VMs on the same host to be able to talk to each
      other, and we still want the VMs to be able to reach the outside
      world with NAT. So we still create the xenbr0 bridge and give it the
      router address (192.168.0.1). Any traffic heading out is NAT'd as
      normal, and you can ssh into a VM using the physical host's IP and
      the per-VM sshd port number. A rough sketch of this arrangement
      appears below.
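      A minimal sketch of the host-side setup described above, assuming
      hypothetical interface names, addresses, and port numbers (this is
      the generic bridge/NAT/DNAT pattern, not Emulab's actual client
      code):

      #!/usr/bin/env python3
      # Sketch: private 192.168 bridge for the VM control interfaces, NAT
      # out the (unbridged) physical control interface, and expose each
      # VM's sshd on a per-VM port of the physical host.
      import subprocess

      def run(cmd):
          subprocess.run(cmd, check=True)

      PHYS_IF = "eth0"            # physical control interface (not bridged)
      BRIDGE  = "xenbr0"          # bridge for the VM control interfaces
      ROUTER  = "192.168.0.1/24"  # the "router" address lives on the bridge

      def setup_bridge():
          run(["ip", "link", "add", BRIDGE, "type", "bridge"])
          run(["ip", "addr", "add", ROUTER, "dev", BRIDGE])
          run(["ip", "link", "set", BRIDGE, "up"])
          # Outbound VM traffic is NAT'd as it leaves the physical interface.
          run(["iptables", "-t", "nat", "-A", "POSTROUTING",
               "-s", "192.168.0.0/24", "-o", PHYS_IF, "-j", "MASQUERADE"])

      def expose_ssh(vm_ip, host_port):
          # ssh to the physical host's IP on host_port reaches the VM's sshd.
          run(["iptables", "-t", "nat", "-A", "PREROUTING", "-i", PHYS_IF,
               "-p", "tcp", "--dport", str(host_port),
               "-j", "DNAT", "--to-destination", vm_ip + ":22"])

      if __name__ == "__main__":
          setup_bridge()
          expose_ssh("192.168.0.2", 30010)   # hypothetical per-VM sshd port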
  4. 20 Jun, 2013 1 commit
    • A big set of changes to how we create XEN guest disks. · fbc26aea
      Leigh B Stoller authored
      Prior to this commit, XEN guest disks were a single partition with
      no MBR, the bits dumped straight into the LVM volume. That made a
      snapshot of a XEN node look completely different than a physical
      disk image, especially if users wanted more disk space (mkextrafs)
      inside the guest, then wanted to take a snapshot of that and run it
      on a physical node (which was not possible).

      With these changes, guests now use the same MBR layout as our
      version two MBR, which makes them interchangeable with physical disk
      images. In fact, the goal is to be able to switch back and forth as
      needed, based on physical resource availability. The layout can be
      verified with an ordinary MBR dump, sketched below.
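      Since a guest volume now carries a real MBR, its partition table can
      be inspected like a physical disk's. A small illustrative reader
      (not Emulab's imaging tools), taking a hypothetical volume path:

      #!/usr/bin/env python3
      # Dump the four primary MBR partition entries of a disk or volume.
      import struct, sys

      def read_mbr(path):
          with open(path, "rb") as f:
              sector = f.read(512)
          if sector[510:512] != b"\x55\xaa":
              raise ValueError("no MBR signature found")
          parts = []
          for i in range(4):
              entry = sector[446 + 16 * i: 446 + 16 * (i + 1)]
              boot, ptype, start, size = struct.unpack("<B3xB3xII", entry)
              if ptype != 0:
                  parts.append((i + 1, ptype, start, size))
          return parts

      if __name__ == "__main__":
          # e.g. ./dumpmbr.py /dev/xen-vg/vm1.disk   (hypothetical name)
          for num, ptype, start, size in read_mbr(sys.argv[1]):
              print("p%d: type=0x%02x start=%d sectors=%d"
                    % (num, ptype, start, size))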
  5. 12 Jun, 2013 1 commit
  6. 06 Jun, 2013 2 commits
  7. 31 May, 2013 5 commits
    • Minor fix to ramdisk fixing code. · 895a7e94
      Leigh B Stoller authored
    • Checkpoint! · 4957bc04
      Leigh B Stoller authored
      Make use of (localized) pygrub to find the boot kernel and ramdisk
      inside the guest. For BSD guests, we mount the FS and look for a Xen
      kernel, then the standard kernel. We fall back to the old method if
      we cannot find a kernel.

      Localize guests just like we do with slicefix.

      For Ubuntu kernels, we have to fix the ramdisk to load the xen block
      driver (which is in the ramdisk, but not used).

      Use IFBs for BW capping in dom0. Guests can still do their own link
      shaping, but we cap bandwidth outside the guest; a sketch of the
      technique follows below.

      A bunch of non-functional GRE tunnel stuff based on the openvz code.
      This is all going to come out and be replaced with openvswitch, but
      I want it in the repo.
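      A rough sketch of the dom0-side IFB capping mentioned above, using
      made-up vif/ifb names and rates (this is the standard tc ingress
      redirect technique, not Emulab's actual scripts):

      #!/usr/bin/env python3
      # Cap a guest's outbound bandwidth from dom0: traffic the guest sends
      # shows up as ingress on its vif, so redirect it to an IFB device and
      # shape it there with a token bucket filter.
      import subprocess

      def run(cmd):
          subprocess.run(cmd.split(), check=True)

      def cap_guest(vif, ifb, rate):
          run("modprobe ifb")
          run(f"ip link set {ifb} up")
          run(f"tc qdisc add dev {vif} handle ffff: ingress")
          run(f"tc filter add dev {vif} parent ffff: protocol ip u32 "
              f"match u32 0 0 action mirred egress redirect dev {ifb}")
          # The guest can still shape its own links; this cap sits outside it.
          run(f"tc qdisc add dev {ifb} root tbf "
              f"rate {rate} burst 32kbit latency 400ms")

      if __name__ == "__main__":
          cap_guest("vif5.0", "ifb0", "50mbit")   # hypothetical names/rate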
    • Do not NAT traffic to jail network. · cc1d620c
      Leigh B Stoller authored
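      If the outbound NAT is an iptables MASQUERADE rule, the usual way to
      express this is an exemption inserted ahead of it; a minimal sketch,
      where both subnets are placeholders:

      import subprocess

      # Accept VM traffic bound for the (placeholder) jail network before
      # the MASQUERADE rule gets a chance to rewrite it.
      subprocess.run(["iptables", "-t", "nat", "-I", "POSTROUTING", "1",
                      "-s", "192.168.0.0/24", "-d", "172.16.0.0/12",
                      "-j", "ACCEPT"], check=True)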
    • Localized and fixed version of pygrub that can handle submenus and our whacky slice images. · 3c4ba2e9
      Leigh B Stoller authored
    • Do not default XEN guest images to "packages". · 3b352486
      Leigh B Stoller authored
      Let's make the default a single slice image, since we can now pull
      the kernel (and ramdisk) out of the guest filesystem (using pygrub
      for Linux, or by just mounting BSD filesystems). This is a lot
      faster and easier to deal with. I added an option to the newimage
      page so that people can set this, but in general we need a better
      way to guess that we need it. Always set for EC2 images. A
      hypothetical config illustrating the pygrub approach follows below.
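      For a Linux guest this boils down to a config that boots through
      pygrub instead of naming a packaged kernel and ramdisk on the host.
      A hypothetical xm.conf fragment (all names and paths invented; xm
      config files use Python syntax):

      # Single-slice guest image: pygrub pulls the kernel and ramdisk out
      # of the guest filesystem, so no "packaged" kernel/ramdisk files are
      # needed on the host.
      name       = "pcvm5-1"
      memory     = 512
      bootloader = "/usr/bin/pygrub"    # instead of kernel= / ramdisk= lines
      disk       = ["phy:/dev/xen-vg/pcvm5-1.disk,xvda,w"]
      vif        = ["mac=00:16:3e:00:00:01,bridge=xenbr0"]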
  8. 22 May, 2013 1 commit
  9. 14 May, 2013 6 commits
  10. 07 Apr, 2013 1 commit
  11. 30 Jan, 2013 1 commit
    • Refactor generic vnode setup code a bit for OS independence · f7c51ea6
      Kirk Webb authored
      In order to hook in via the "generic vnode" path for setting up
      blockstores under FreeNAS, I've done a bit of shuffling in order to
      make things more OS-independent and reusable.
      
      * mkvnode.pl
      
      Moved to clientside/tmcc/common.  OS-dependent bits (really only some
      IPtables stuff) abstracted, and moved to tmcc/linux/libvnode.pm.
      
      * libvnode.pm
      
      Moved generic vnode stuff to a new module.  Moved miscellaneous
      utility functions to a new module.  Left OS-specific stuff.  Not
      really sure if what is left should be merged into libsetup/liblocsetup
      or left here - deferring this decision for now.
      
      * libgenvnode.pm
      
      New module containing generic vnode stuff.  Currently, the VNODE_*
      predicates live here.
      
      * libutil.pm
      
      New module containing miscellaneous utility functions (fatal,
      mysystem, mysystem2, setState, etc.)
      
      Files referencing libvnode.pm have been updated, as have the relevant
      Makefiles.
  12. 02 Jan, 2013 1 commit
  13. 19 Dec, 2012 2 commits
  14. 12 Dec, 2012 1 commit
  15. 10 Dec, 2012 1 commit
  16. 28 Nov, 2012 2 commits
  17. 27 Nov, 2012 2 commits
  18. 03 Oct, 2012 3 commits
  19. 28 Sep, 2012 4 commits
  20. 25 Sep, 2012 1 commit
    • Changes to support XEN shared nodes and guest snapshots. · 2489c09b
      Leigh B Stoller authored
      Snapshots are done a little differently than with openvz, of course,
      since there are potentially multiple disk partitions and a kernel.
      The basic operation is:

      1. Fire off reboot_prepare from boss. Changes to reboot_prepare
         result in the guest "halting" instead of rebooting.

      2. Fire off the create-image client script, which takes imagezips
         of all of the disks (except the swap partition) and grabs a copy
         of the kernel. A new xm.conf file is written, the directory is
         tarred, and then we imagezip that bundle for upload (a rough
         sketch of this packaging step follows below).

      3. When booting a guest, we now look for guest images that are
         packaged in this way, although we still support the older method
         for backwards compatibility. All of the disks are restored, and a
         new xm.conf is created that points to the new kernel.
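      A rough sketch of the packaging in step 2, assuming imagezip is run
      with plain "device outfile" arguments; the names and paths are
      invented, and the real create-image script does considerably more:

      #!/usr/bin/env python3
      # Bundle a halted guest's disks, a copy of its kernel, and a freshly
      # written xm.conf into a tar file that can then be imagezipped and
      # uploaded.
      import os, subprocess, tarfile

      def run(cmd):
          subprocess.run(cmd, check=True)

      def snapshot_guest(vm, disks, kernel, workdir):
          os.makedirs(workdir, exist_ok=True)
          # imagezip every disk except swap (callers simply leave swap out).
          for name, dev in disks.items():
              run(["imagezip", dev, os.path.join(workdir, name + ".ndz")])
          # Grab the kernel and write a (heavily simplified) xm.conf.
          run(["cp", kernel, os.path.join(workdir, "kernel")])
          with open(os.path.join(workdir, "xm.conf"), "w") as f:
              f.write(f'name = "{vm}"\nkernel = "kernel"\n')
          # Tar the directory; the real script then imagezips this bundle.
          with tarfile.open(workdir + ".tar", "w") as tar:
              tar.add(workdir, arcname=vm)

      if __name__ == "__main__":
          snapshot_guest("pcvm5-1",
                         {"xvda": "/dev/xen-vg/pcvm5-1.disk"},  # hypothetical
                         "/boot/vmlinuz-guest", "/tmp/pcvm5-1.snap")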