1. 17 Jul, 2014 1 commit
    • Retroactively add a version number check before returning BOOTPART= · ea0d1fcc
      Mike Hibler authored
      From the comment:
           * "BOOTPART=" confuses the old rc.frisbee argument parsing
           * which looks for "PART=" with the RE ".*PART=" which will
           * match BOOTPART= instead. Thus an old script loading a
           * whole disk image (PART=0) winds up trying to load it in
           * partition 2 (BOOTPART=2). So we can pick one of two
           * versions, the one in effect when rc.frisbee changed its
           * argument parsing (v30, circa 6/28/2010) or the version
           * in effect when BOOTPART was added (v36, circa 6/13/2013).
           * We choose the latter.
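      To see the failure mode concretely, here is a minimal Python sketch
      of the old greedy-RE parsing (the key names come from the comment
      above; the sample line and values are illustrative):

          import re

          # Illustrative loadinfo-style response; the values are made up.
          line = "ADDR=10.0.0.1:3564 PART=0 BOOTPART=2"

          # Old rc.frisbee-style parsing: the greedy ".*" backtracks to the
          # LAST occurrence of "PART=", so BOOTPART=2 shadows PART=0 and a
          # whole-disk image gets loaded into partition 2.
          print(re.search(r".*PART=(\d+)", line).group(1))        # -> "2"

          # Requiring a word boundary picks up the right key instead.
          print(re.search(r"(?:^|\s)PART=(\d+)", line).group(1))  # -> "0"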
  2. 30 May, 2014 1 commit
  3. 07 May, 2014 1 commit
    • Introducing TMCD version 38! Returns additional "loadinfo" info. · 4a8604b1
      Mike Hibler authored
      New loadinfo returns:
      
      IMAGELOW, IMAGEHIGH: range of sectors covered by the image.
          This is NOT the same as what imageinfo or imagedump will show.
          For partition images, these low and high values are adjusted
          for the MBR offset of the partition in question. So when loading
          a Linux image, expect values like 6G and 12G. The intent here
          (not yet realized) is that these values will be used to construct
          an MBR/GPT on the fly, rather than using hardcoded magic MBR versions.
          You can get the uncompressed size of the image (in sectors) with
          (high - low + 1).
      
      IMAGESSIZE: the units of the low/high values.
          Always 512 right now, may be 4096 someday.
      
      IMAGERELOC: non-zero if the image can be placed at an offset other
          than IMAGELOW (i.e., it can be relocated). This may or may not
          prove useful for dynamic MBR construction...we will see.
      
      Probably didn't need to bump the version here, but I am playing it safe.
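      A rough Python sketch of how a client might consume the new keys
      (only the key names come from this commit; the values and the
      single-line format are fabricated for illustration):

          # Parse a space-separated KEY=value loadinfo response.
          line = "IMAGELOW=12582912 IMAGEHIGH=25165823 IMAGESSIZE=512 IMAGERELOC=1"
          info = dict(kv.split("=", 1) for kv in line.split())

          low, high = int(info["IMAGELOW"]), int(info["IMAGEHIGH"])
          ssize = int(info["IMAGESSIZE"])   # 512 now, maybe 4096 someday

          sectors = high - low + 1          # uncompressed size in sectors
          print(f"uncompressed size: {sectors * ssize} bytes")

          if info.get("IMAGERELOC", "0") != "0":
              print("image may be placed at an offset other than IMAGELOW")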
  4. 07 Apr, 2014 1 commit
  5. 03 Apr, 2014 2 commits
    • Increase the limit on certain geni-get response lengths (esp. manifests). · 21c48518
      Gary Wong authored
      The old limit (2K) was big enough that essentially any hand-written
      rspec would work fine, but also small enough that pretty much any manifest
      for a Flack-generated request rspec would fail.
    • Return the root password hash in the jailconfig call. · fddcd467
      Leigh B Stoller authored
      This allows dom0 to set the password of the guest at creation time, so
      that if something goes wrong, we can get in on the console. This also
      fixes an error where on a shared node, we were returning the password
      hash for the physical host. Return a per-node hash instead.
      
      Also abstract out the various places where we read from /dev/urandom.
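      For illustration, a Python sketch of generating such a per-node hash
      (not the actual Emulab code; Python's secrets module draws from the
      same OS randomness as /dev/urandom):

          import crypt, secrets, string

          # Random cleartext password plus a SHA-512 crypt hash; the hash
          # is the sort of thing a jailconfig-style call would hand to dom0
          # so the guest's root password can be set at creation time.
          alphabet = string.ascii_letters + string.digits
          password = "".join(secrets.choice(alphabet) for _ in range(12))
          print(crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512)))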
  6. 25 Mar, 2014 3 commits
    • Minor fix to previous revision. · 46cc4ef7
      Leigh B Stoller authored
    • Minor fix to previous revision. · ac13f646
      Leigh B Stoller authored
    • Server side of firewall support for XEN containers. · 2faea2f3
      Leigh B Stoller authored
      This differs from the current firewall support, which assumes a single
      firewall for an entire experiment, hosted on a dedicated physical
      node. At some point, it would be better to host the dedicated firewall
      inside a XEN container, but that is a project for another day (year).
      
      Instead, I added two sets of firewall rules to the default_firewall_rules
      table, one for dom0 and another for domU. These follow the current
      style setup of open, basic, closed, while elabinelab is ignored since it
      does not make sense for this yet.
      
      These two rule sets are independent: the dom0 rules can be applied to
      the physical host, and the domU rules to specific containers.
      
      My goal is that all shared nodes will get the dom0 closed rules (ssh
      from local boss only) to avoid the ssh attacks that all of the racks
      are seeing.
      
      DomU rules can be applied on a per-container (node) basis. As
      mentioned above this is quite different, and needed minor additions to
      the virt_nodes table to allow it.
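      As a sketch of the selection logic, something like the following
      (the (dom0|domU) x (open|basic|closed) structure is from this
      commit; the names and rules themselves are made up):

          # Hypothetical stand-in for the default_firewall_rules table.
          DEFAULT_FIREWALL_RULES = {
              ("dom0", "closed"): ["allow ssh from local boss only", "deny all"],
              ("dom0", "basic"):  ["allow ssh from anywhere", "deny all"],
              ("domU", "open"):   ["allow all"],
              # ... remaining (context, style) combinations elided
          }

          def rules_for(context, style):
              """Pick the rule set for a dom0 host or a domU container."""
              return DEFAULT_FIREWALL_RULES[(context, style)]

          print(rules_for("dom0", "closed"))   # what shared hosts would get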
  7. 10 Mar, 2014 1 commit
    • Support "no NFS mount" experiments. · 5446760e
      Mike Hibler authored
      We have had the mechanism implemented in the client for some time and
      available at the site level or, in special cases, at the node level.
      New NS command:
      
          tb-set-nonfs 1
      
      will ensure that no nodes in the experiment attempt to mount shared
      filesystems from ops (aka "fs"). In this case, a minimal homedir is
      created on each node with basic dotfiles and your .ssh keys. There will
      also be empty /proj, /share, etc. directories created.
      
      One additional mechanism that we have now is that we do not export filesystems
      from ops to those nodes. Previously, it was all client-side and you could
      mount the shared FSes if you wanted to. By prohibiting the export of these
      filesystems, the mechanism is more suitable for "security" experiments.
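      The client-side homedir fallback described above might look roughly
      like this sketch (paths and file names are illustrative, not the
      actual client code):

          import os, shutil

          def make_minimal_homedir(home, authorized_keys):
              """No-NFS fallback: a local homedir with basic dotfiles and
              ssh keys instead of an NFS-mounted one from ops."""
              os.makedirs(os.path.join(home, ".ssh"), mode=0o700, exist_ok=True)
              shutil.copy(authorized_keys,
                          os.path.join(home, ".ssh", "authorized_keys"))
              for dotfile in (".cshrc", ".profile"):      # basic dotfiles
                  open(os.path.join(home, dotfile), "a").close()
              for d in ("/proj", "/share"):               # empty stand-ins
                  os.makedirs(d, exist_ok=True)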
  8. 22 Jan, 2014 1 commit
    • Pass PERMS= param in storageconfig command. · 59b1c489
      Mike Hibler authored
      For persistent blockstores, the value is based on the "readonly"
      virt_blockstore_attributes attribute if it exists. The RO attribute
      is set by libvtop when an attempt is made to use a lease that is in
      the 'grace' state.
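      Illustratively, a client might honor the new parameter like this
      (the storageconfig line format here is a guess; only the PERMS=
      key is from this commit):

          line = "BSID=lease-123 CLASS=SAN PERMS=RO"   # fabricated example
          attrs = dict(kv.split("=", 1) for kv in line.split())

          # A lease in the 'grace' state arrives as PERMS=RO: mount read-only.
          mount_opts = "ro" if attrs.get("PERMS") == "RO" else "rw"
          print(f"mount -o {mount_opts} ...")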
  9. 10 Jan, 2014 2 commits
  10. 08 Jan, 2014 1 commit
  11. 18 Dec, 2013 1 commit
  12. 16 Dec, 2013 1 commit
  13. 11 Dec, 2013 1 commit
    • Pass PERSIST=1 when the blockstore is persistent. · 8ac5ad30
      Mike Hibler authored
      This is a bit hacky as noted in the comment:
      
                     * XXX we only put out the PERSIST flag if it is set.
                     * Since the client-side is stupid-picky about unknown
                     * attributes, this will cause an older client to fail
                     * when the attribute is passed. Believe it or not,
                     * that is a good thing! This will cause an older
                     * client to fail if presented with a persistent
                     * blockstore. If it did not fail, the client would
                     * proceed to unconditionally create a filesystem on
                     * the blockstore, wiping out what was previously
                     * there.
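      The protection works because the client parses strictly, along these
      lines (a Python sketch; the attribute names are illustrative):

          KNOWN_KEYS = {"BSID", "CLASS", "VOLNAME"}   # what an old client knows

          def parse_storage_line(line):
              """Stupid-picky parsing: any unknown KEY=value aborts, so an
              old client refuses a PERSIST=1 blockstore rather than
              unconditionally creating a filesystem on it."""
              attrs = {}
              for kv in line.split():
                  key, _, val = kv.partition("=")
                  if key not in KNOWN_KEYS:
                      raise SystemExit(f"unknown attribute {key}, giving up")
                  attrs[key] = val
              return attrs

          parse_storage_line("BSID=pb-1 PERSIST=1")   # old client: aborts here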
  14. 22 Nov, 2013 1 commit
  15. 07 Nov, 2013 1 commit
  16. 09 Sep, 2013 3 commits
  17. 28 Aug, 2013 1 commit
  18. 27 Aug, 2013 1 commit
    • Another Kludge for returning mounts to VMs. What a pain. Here · f1249179
      Leigh B Stoller authored
      are the details, so they are recorded someplace.
      
      The Racks do not have a real 172 router for the "jail" network.
      This is a mild pain, and one possibility would be to make the
      router be the physical node, so that each set of VMs is using its own
      router, thus spreading the load.
      
      Well, that does not work because we use bridge mode on the physical
      host, and so the packets leave the node before they have a chance to
      go through the routing code. Yes, ebtables does have something called
      a brouter, but I could not make that work after a lot of trying and
      tearing my hair out.
      
      So the next not so best thing is to make the control node be the
      router by sticking an alias on xenbr0 for 172.16.0.1. Fine, that works
      although performance could suffer.
      
      But what about NFS traffic to ops? It would be really silly to send
      that through the routing code on the control node, just to end up
      bridging into the ops VM. So I figured I would optimize that by
      changing domounts to return mounts that reference the ops address on the
      jail network. And in fact this worked fine, but only for shared
      nodes.
      
      But it failed for exclusive VMs! In this case, we add a SNAT rule on
      the physical host that changes the source IP to be that of the
      physical host so that users cannot spoof a VM on a shared node and
      mount an NFS filesystem they should not have access to. In fact, it
      failed for UDP mounts but not for TCP mounts. When I looked at the
      traffic with tcpdump, it appeared that return TCP traffic from ops was
      using its jail IP, but return UDP traffic was using the public IP.
      This confuses SNAT and so the packets never get back into the VM.
      
      So, this change basically looks at the sharing mode of the node: if
      it is shared we use the jail IP in the mounts, and if it is exclusive
      we use the public IP (and thus, that traffic gets routed through the
      control node). This sucks, but I am worn down on this.
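      The final policy boils down to something like this (a sketch; the
      addresses are made up):

          OPS_JAIL_IP = "172.16.0.2"       # ops on the jail network (illustrative)
          OPS_PUBLIC_IP = "198.51.100.34"  # ops public address (illustrative)

          def mount_server_for(node_is_shared):
              # Shared hosts reference ops on the jail network (no SNAT in
              # the way); exclusive VMs get the public IP and their NFS
              # traffic is routed through the control node.
              return OPS_JAIL_IP if node_is_shared else OPS_PUBLIC_IP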
  19. 16 Aug, 2013 1 commit
  20. 15 Aug, 2013 1 commit
    • Add tmcd support for the proposed "geni-get" GENI client side. · f1120a88
      Gary Wong authored
      This allows nodes in GENI slices to retrieve information about their
      sliver and slice via tmcc (or equivalent client-side support).  The
      set of queries available and their names were agreed upon in GEC 17
      sessions and subsequent discussions.
  21. 13 Aug, 2013 1 commit
  22. 22 Jul, 2013 1 commit
  23. 01 Jul, 2013 1 commit
  24. 27 Jun, 2013 1 commit
  25. 17 Jun, 2013 1 commit
  26. 13 Jun, 2013 2 commits
  27. 04 Jun, 2013 1 commit
    • No longer return tunnel info to containers; just plain interfaces. · bd2964e2
      Leigh B Stoller authored
      Neither OpenVZ nor XEN containers can do anything with the tunnel info,
      since tunnels are created in the root context and all the container
      sees is an interface. We have a hack in the client side for openvz,
      but rather than try to duplicate that hack for every XEN guest, let's
      do this the right way, and return plain ifconfig lines from tmcd and
      config them like any other interface. Since we rely on MAC addresses
      to do this, we now return MACs to the root context when it gets the
      tunnel info.
      
      To do this we need to know the difference between the root context
      asking for info *about* the container, and the container asking for
      its *own* info. Since both XEN and OpenVZ containers are redirected
      through the tmcc proxy, I changed the protocol so tmcd can tell who is
      asking. This is imperfect, since we might someday want the container
      to bypass the proxy, but for now it will do.
      
      The other consideration is that a XEN container might have requested a
      public IP, in which case it could actually do all of the tunnel stuff
      itself, but then again we have to worry about all of the guests being
      able to do tunnels, and so the easiest thing to do is just always do
      it in the root context for the container.
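      In sketch form, the server-side decision looks like this (the line
      formats are illustrative, not the exact tmcd output):

          def tunnel_config_lines(for_root_context, mac, ip):
              """The root context gets the tunnel info (now including the
              MAC so it can find the right interface); the container itself
              just gets an ordinary ifconfig line to configure like any
              other interface."""
              if for_root_context:
                  return [f"TUNNEL MAC={mac} ..."]        # root builds the tunnel
              return [f"IFCONFIG MAC={mac} IP={ip} ..."]  # guest sees a plain NIC

          # The tmcc proxy marks who is asking, so tmcd can tell them apart.
          print(tunnel_config_lines(False, "00:16:3e:aa:bb:cc", "10.10.1.2"))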
  28. 28 May, 2013 1 commit
    • Woeful genirack hack; return mounts on the 172 network to avoid going · ce3d8572
      Leigh B Stoller authored
      through the 172 phony router we have set up on the control node. This
      is silly to do for local traffic, but getting XEN guests to not do it,
      turned into a pit that I didn't want to enter. We want this so that
      arplockdown works properly; the MAC address is really the client's, not
      a router's. Revisit later.
  29. 22 May, 2013 2 commits
  30. 15 May, 2013 1 commit
  31. 14 May, 2013 1 commit
    • Add new script to do arp lockdown on boss. · f5cc889a
      Leigh B Stoller authored
      The other version is only for the client side (subboss, ops), but does
      not work on real boss. Also hooked into tbswap so that the ARP entries
      are updated during swapin/swapout. Also changed tmcd to return ARP
      directives for all containers, not just those on shared nodes.
  32. 10 May, 2013 1 commit