1. 28 Aug, 2013 1 commit
  2. 27 Aug, 2013 1 commit
    • Another Kludge for returning mounts to VMs. What a pain. Here are the details, so they are recorded someplace. · f1249179
      Leigh B Stoller authored
      
      The Racks do not have a real 172 router for the "jail" network.
      This is a mild pain, and one possibility would be to make each
      physical node the router, so that each set of VMs uses its own
      router, thus spreading the load.
      
      Well, that does not work because we use bridge mode on the physical
      host, and so the packets leave the node before they have a chance to
      go through the routing code. Yes, there is something called a
      brouter via ebtables, but I could not make that work after a lot of
      trying and tearing my hair out.
      
      So the next (not so) best thing is to make the control node the
      router by sticking an alias on xenbr0 for 172.16.0.1. Fine, that
      works, although performance could suffer.
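
      For concreteness, the alias amounts to something like the sketch
      below; the /12 prefix and the use of iproute2 are assumptions for
      illustration, not the literal setup code:

          # Hedged sketch: give the control node's xenbr0 bridge the
          # 172.16.0.1 "router" alias for the jail network. The prefix
          # length here is an assumption.
          import subprocess

          subprocess.run(
              ["ip", "addr", "add", "172.16.0.1/12", "dev", "xenbr0"],
              check=True)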
      
      But what about NFS traffic to ops? It would be really silly to send
      that through the routing code on the control node, just to end up
      bridging into the ops VM. So I figured I would optimize that by
      changing domounts to return mounts that reference ops' address on
      the jail network. And in fact this worked fine, but only for shared
      nodes.
      
      But it failed for exclusive VMs! In this case, we add a SNAT rule on
      the physical host that changes the source IP to be that of the
      physical host so that users cannot spoof a VM on a shared node and
      mount an NFS filesystem they should not have access to. In fact, it
      failed for UDP mounts but not for TCP mounts. When I looked at the
      traffic with tcpdump, it appeared that return TCP traffic from ops was
      using its jail IP, but return UDP traffic was using the public IP.
      This confuses SNAT and so the packets never get back into the VM.
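
      For reference, the spoof guard is conceptually a SNAT rule along
      these lines; the chain, addresses, and single-rule form are
      illustrative assumptions, not the literal rule we install:

          # Hedged sketch: on an exclusive physical host, rewrite the
          # source of jail-net traffic so it leaves with the host's own
          # IP, preventing jail-IP spoofing. Addresses are examples.
          import subprocess

          PHYS_IP = "155.98.36.100"   # this host's public IP (example)
          JAIL_NET = "172.16.0.0/12"  # the jail network (example)

          subprocess.run(
              ["iptables", "-t", "nat", "-A", "POSTROUTING",
               "-s", JAIL_NET, "-j", "SNAT", "--to-source", PHYS_IP],
              check=True)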
      
      So, this change basically looks at the sharing mode of the node: if
      it is shared we use the jail IP in the mounts, and if it is
      exclusive we use the public IP (and thus that traffic gets routed
      through the control node). This sucks, but I am worn down on this.
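
      For the record, the decision amounts to something like this
      (illustrative Python with made-up names and addresses; the real
      change lives in tmcd's mount-generation code):

          OPS_JAIL_IP = "172.16.0.2"      # ops on the jail net (example)
          OPS_PUBLIC_IP = "155.98.33.74"  # ops' public address (example)

          def mount_server_ip(sharing_mode):
              # Shared hosts have no SNAT rule, so the jail IP works and
              # keeps NFS traffic off the control-node router.
              if sharing_mode == "shared":
                  return OPS_JAIL_IP
              # Exclusive VMs sit behind SNAT, and UDP replies from ops
              # come back from the public IP; use it consistently and
              # eat the routing cost.
              return OPS_PUBLIC_IP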
  3. 16 Aug, 2013 1 commit
  4. 15 Aug, 2013 1 commit
    • Add tmcd support for the proposed "geni-get" GENI client side. · f1120a88
      Gary Wong authored
      This allows nodes in GENI slices to retrieve information about their
      sliver and slice via tmcc (or equivalent client-side support).  The
      set of queries available and their names were agreed upon in GEC 17
      sessions and subsequent discussions.
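
      As a usage sketch, a program on a sliver node might pull these
      values like so; the query names shown are illustrative of the
      agreed-upon set, not a definitive list:

          # Hedged sketch: wrap the geni-get client side and ask for a
          # couple of the agreed-upon queries. Query names here are
          # examples, not an exhaustive or authoritative list.
          import subprocess

          def geni_get(query):
              out = subprocess.run(["geni-get", query],
                                   capture_output=True, text=True,
                                   check=True)
              return out.stdout.strip()

          print(geni_get("slice_urn"))  # e.g. the URN naming the slice
          print(geni_get("manifest"))   # e.g. the sliver's manifest RSpec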
  5. 13 Aug, 2013 1 commit
  6. 23 Jul, 2013 18 commits
  7. 22 Jul, 2013 1 commit
  8. 01 Jul, 2013 1 commit
  9. 27 Jun, 2013 1 commit
  10. 17 Jun, 2013 1 commit
  11. 13 Jun, 2013 2 commits
  12. 04 Jun, 2013 1 commit
    • No longer return tunnel info to containers; just plain interfaces. · bd2964e2
      Leigh B Stoller authored
      Neither OpenVZ nor XEN containers can do anything with the tunnel
      info, since tunnels are created in the root context and all the
      container sees is an interface. We have a hack in the client side
      for OpenVZ, but rather than try to duplicate that hack for every
      XEN guest, let's do this the right way, and return plain ifconfig
      lines from tmcd and config them like any other interface. Since we
      rely on MAC addresses to do this, we now return MACs to the root
      context when it gets the tunnel info.
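
      (The client side then has to map a MAC from tmcd back to whatever
      name the kernel gave the interface; a minimal sketch of that
      lookup, assuming a Linux guest:)

          # Hedged sketch: find the interface whose hardware address
          # matches the MAC tmcd handed us, via sysfs.
          import os

          def ifname_by_mac(mac):
              mac = mac.lower()
              for dev in os.listdir("/sys/class/net"):
                  with open(f"/sys/class/net/{dev}/address") as f:
                      if f.read().strip().lower() == mac:
                          return dev
              return None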
      
      To do this we need to know the difference between the root context
      asking for info *about* the container, and the container asking for
      its *own* info. Since both XEN and OpenVZ containers are redirected
      through the tmcc proxy, I changed the protocol so tmcd can tell who is
      asking. This is imperfect, since we might someday want the container
      to bypass the proxy, but for now it will do.
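
      The shape of that protocol tweak is roughly the following; the tag
      and wire format here are invented for illustration, and the real
      proxy and tmcd speak their own protocol:

          # Hedged sketch: the proxy marks forwarded requests so tmcd
          # can distinguish a container asking for its own info from
          # the root context asking about the container.
          def proxy_forward(request, from_container):
              tag = "VIAPROXY" if from_container else "ROOTCTX"
              return f"{tag} {request}"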
      
      The other consideration is that a XEN container might have
      requested a public IP, in which case it could actually do all of
      the tunnel stuff itself. But then again, we have to worry about all
      of the guests being able to do tunnels, so the easiest thing is to
      just always do it in the root context for the container.
  13. 28 May, 2013 1 commit
    • Woeful genirack hack; return mounts on the 172 network to avoid going through the 172 phony router we have set up on the control node. · ce3d8572
      Leigh B Stoller authored
      This is silly to do for local traffic, but getting XEN guests not
      to do it turned into a pit that I didn't want to enter. We want
      this so that arplockdown works properly; the MAC address is really
      the client's, not a router's. Revisit later.
  14. 22 May, 2013 2 commits
  15. 15 May, 2013 1 commit
  16. 14 May, 2013 1 commit
    • Add new script to do arp lockdown on boss. · f5cc889a
      Leigh B Stoller authored
      The other version is only for the client side (subboss, ops) and
      does not work on a real boss. Also hooked into tbswap so that the
      ARP entries are updated during swapin/swapout. Also changed tmcd to
      return arp directives for all containers, not just on shared nodes.
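
      At its core, a lockdown amounts to pinning known IP/MAC pairs with
      permanent ARP entries, along these lines (pairs are examples, and
      the real script also handles updates on swapin/swapout):

          # Hedged sketch: install static ARP entries so a rogue guest
          # cannot claim another node's address. Pairs are examples.
          import subprocess

          ENTRIES = {
              "172.16.1.10": "00:16:3e:aa:bb:01",
              "172.16.1.11": "00:16:3e:aa:bb:02",
          }

          for ip, mac in ENTRIES.items():
              subprocess.run(["arp", "-s", ip, mac], check=True)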
  17. 10 May, 2013 1 commit
  18. 02 May, 2013 1 commit
  19. 01 May, 2013 1 commit
  20. 30 Apr, 2013 2 commits
    • Add complete local node storage support from parser down to tmcd. · dab52801
      Kirk Webb authored
      Doing this required adding columns to the virt and physical
      blockstores tables to mark the attributes that will be considered
      for mapping. Unmarked entries just flow through to the client side.
      
      This commit also introduces filesystem support in the form of
      passing through a mount point to the client side. It is left to the
      client to decide what filesystem and fs options to use to set up
      the space, including any logical volume aggregation required to
      support the request.
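
      The client-side choice this leaves open looks roughly like the
      sketch below; the device path, ext4 default, and helper name are
      illustrative assumptions, not the actual client code:

          # Hedged sketch: given the device that mapped to the
          # blockstore and the requested mount point, build a
          # filesystem and mount it.
          import os
          import subprocess

          def setup_blockstore(device, mountpoint, fstype="ext4"):
              subprocess.run(["mkfs", "-t", fstype, device], check=True)
              os.makedirs(mountpoint, exist_ok=True)
              subprocess.run(["mount", "-t", fstype, device, mountpoint],
                             check=True)

          setup_blockstore("/dev/emulab/blockstore0", "/mnt/data")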
    • Avoid redundant output in hwinfo command. · d468c60f
      Mike Hibler authored