1. 22 Dec, 2011 1 commit
  2. 15 Dec, 2011 3 commits
  3. 13 Dec, 2011 4 commits
  4. 12 Dec, 2011 1 commit
  5. 06 Dec, 2011 1 commit
  6. 05 Dec, 2011 2 commits
      Fixup the last commit so newer-style linux shaping with netem works. · 72fb9e3a
      David Johnson authored
      I didn't know this initially, but it turns out that with newer netems,
      you can't add another netem qdisc instance (nor an htb instance)
      inside another netem instance.  The Linux maintainers removed
      "classful" qdisc support from the netem qdisc (which made it
      possible for another qdisc to be "nested" inside a netem qdisc)
      because 1) netem couldn't even nest an instance of itself inside
      itself (which isn't strictly necessary for us, because we can do
      both delay and plr in one netem instance); 2) apparently
      non-work-conserving qdiscs already didn't work inside netem (a
      work-conserving qdisc is one that always has a packet ready when its
      underlying device is ready to transmit a packet -- thus, a
      bandwidth-shaping qdisc that might not have a packet ready because
      it's slowing down the send rate is non-work-conserving); and 3) to
      support code cleanups.
      
      So -- what this means for us is that by using modern netem, we are now
      doing bandwidth shaping first, then plr and delay.  With our old
      custom kernel modules, we were doing plr, delay, then bandwidth.
      
      I talked this strategy over with Jon (because adding classful support
      back to netem is nontrivial and defeats the point of trying to use
      what's in the kernel directly without patching it more), and we believe
      it's ok to do -- first because it doesn't always change the shaped rate
      from the old way we used to do things, and second because using these
      params *in tandem* to do link shaping is kind of a poor man's way
      of actually modeling real link behavior -- a la Flexlab.
      
      So we'll just document it for users, call it beta for now, and test
      it against the old way and BSD.  If it looks reasonable, we'll stick
      with it; otherwise we'll look at reviving the old style.
  7. 02 Dec, 2011 13 commits
  8. 01 Dec, 2011 2 commits
  9. 30 Nov, 2011 7 commits
  10. 29 Nov, 2011 5 commits
      Fix bug that was causing reserved vlan tags to be left behind, causing
      snmpit to fail at seemingly random times. · 235db86c
      Leigh Stoller authored
      Also add an update script to delete the stale tags.
      Support using Linux netem modules for delay and loss shaping. · 35f1deaa
      David Johnson authored
      ... instead of using our custom kernel modules.  I got tired of
      pulling our patches forward and adapting to the packet sched API
      changes in the kernel!  netem is more advanced than our stuff,
      anyway, and should do a fine job.
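      As a rough sketch of what using netem in place of the custom modules
      looks like (interface and parameter values here are hypothetical), a
      single netem instance imposes both delay and loss:

```shell
# One netem instance handles both delay and packet loss rate.
tc qdisc add dev eth0 root netem delay 30ms loss 0.5%
# It can later be adjusted or removed:
#   tc qdisc change dev eth0 root netem delay 60ms loss 1%
#   tc qdisc del dev eth0 root
```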
      Lots of changes: debug; macvlans; details below. · fdf97b51
      David Johnson authored
      I added debug options for each LVM and vzctl call; you can toggle
      LVM debugging on by touching /vz/.lvmdebug, /vz.save/.lvmdebug, or
      /.lvmdebug, and vzctl debugging by touching /vz/.vzdebug,
      /vz.save/.vzdebug, or /.vzdebug.  I also added dates to debug
      timestamps for debugging longer-term shared node problems.
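      For example, to turn debugging on for a vhost (flag files as listed
      above):

```shell
touch /vz/.lvmdebug   # log each LVM call
touch /vz/.vzdebug    # log each vzctl call
```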
      
      I added support for using macvlan devices instead of openvz veths
      for experiment interfaces.  Basically, you can add macvlan devices
      atop any other ethernet device to "virtualize" it using fake mac
      addresses.  We use them like this: if the virtual link/lan needs to
      leave the vhost on a phys device or vlan device, we attach the macvlan
      devices to the appropriate real device.  If the virtlan is completely
      internal to the vhost, we create a dummy ethernet device and attach
      the macvlan devices to that.
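      A minimal sketch of the two cases (device names are hypothetical):
      a macvlan atop a real device for links that leave the vhost, and one
      atop a dummy device for a purely internal virtlan:

```shell
# Link/lan leaves the vhost: attach the macvlan to the real device.
ip link add link eth2 name mv0 type macvlan mode bridge

# Virtlan is internal to the vhost: create a dummy ethernet device
# and attach the macvlan devices to it instead.
ip link add dummy0 type dummy
ip link add link dummy0 name mv1 type macvlan mode bridge
```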
      
      The difference between macvlan devices and veths is that macvlan
      devices are created only in the root context, and are moved into
      the container context when the vnodes boot.  There is no "root
      context" half -- the device is fully in the container's network
      namespace.  BUT, the underlying device is in the root network
      namespace.
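      Handing the device to a container looks roughly like this (the device
      name and container PID are hypothetical):

```shell
# The macvlan device is created in the root context, then moved
# wholesale into the container's network namespace when the vnode
# boots; no "root half" remains -- only the underlying device stays
# in the root namespace.
ip link set mv0 netns 1234
```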
      
      We use macvlans in "bridge" mode, so that when one macvlan device sends
      a packet, the device driver checks any other macvlan devices attached
      to the underlying physical, vlan, or dummy device, and delivers the packet
      accordingly.  The difference between this fake bridge and a real bridge
      is that the macvlan driver knows the mac of each attached interface,
      and does not have to do any learning whatsoever.  I haven't looked at
      the code, but it should be a very, very simple, fast, and zero-copy
      transmit from one macvlan device onto another.
      
      This is essentially the same as the planetlab shortbridge, but since
      I haven't looked at the code, I can't say that there aren't more
      opportunities to optimize.  Still, this should hopefully be faster
      than openvz veths.
      
      Oh, and I also added support for using Linux tc's netem modules
      for doing delay and loss shaping, instead of using our custom
      kernel modules.  I got tired of pulling our patches forward and
      adapting to the packet sched API changes in the kernel!  netem is
      more advanced than our stuff, anyway, and should do a fine job.
  11. 28 Nov, 2011 1 commit