  1. 23 Aug, 2017 1 commit
    • Several changes: · a6cd8ee2
      Leigh B Stoller authored
      1. Get rid of direct queries to the wires and interfaces tables; use the
         library instead.
      
      2. Allow node:iface on the command line for ports (see the sketch below).
      
      3. Add a -i option to print out results in node:iface form. Eventually we
         want to flush the card,port output, but let's wait on that for a while.
      
      4. Switch from card,port to iface lookups.
      
      5. The DB change adds iface to the port_counters table; we no longer use
         card,port. Eventually we will flush those columns.
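
      For illustration, a minimal sketch of what the node:iface argument
      handling might look like; the Interface->LookupByIface() call is an
      assumption about the library API, not a confirmed signature:

          use lib "/usr/testbed/lib";
          use Interface;

          foreach my $arg (@ARGV) {
              next
                  if ($arg !~ /^([^:]+):(.+)$/);
              my ($nodeid, $iface) = ($1, $2);
              # Resolve via the library instead of querying the
              # wires/interfaces tables directly (assumed API).
              my $interface = Interface->LookupByIface($nodeid, $iface);
              die("No such interface: $arg\n")
                  if (!defined($interface));
          }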
  2. 26 Jul, 2017 2 commits
    • Changes to apt_announcements table: · 4408843a
      Leigh B Stoller authored
      1. Add a unique uuid to use as a shared lookup token with the web UI.
      
      2. Add pid_idx for targeting announcements to projects (issue #258); see
         the schema sketch below.
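
      For illustration, a minimal sketch of the corresponding schema change;
      the column types here are assumptions, not the actual DDL from the sql
      update scripts:

          use lib "/usr/testbed/lib";
          use emdb;

          # Assumed column types; the real update script may differ.
          DBQueryFatal("alter table apt_announcements ".
                       "add uuid varchar(40) not null default ''");
          DBQueryFatal("alter table apt_announcements ".
                       "add pid_idx mediumint(8) unsigned default null");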
    • Support for per-experiment root keypairs (Round 1). See issue #302. · c6150425
      Mike Hibler authored
      Provide automated setup of an ssh keypair enabling root to login without
      a password between nodes. The biggest challenge here is to get the private
      key onto nodes in such a way that a non-root user on those nodes cannot
      obtain it. Otherwise that user would be able to ssh as root to any node.
      This precludes simple distribution of the private key using tmcd/tmcc,
      since any user can run tmcc (tmcd authentication is based on the node, not
      the user).
      
      This version does a post-imaging "push" of the private key from boss using
      ssh. The key is pushed from tbswap after nodes are imaged but before the
      event system, and thus any user startup scripts, are started. We actually
      use "pssh" (really "pscp") to scale a bit better, so YOU MUST HAVE THE
      PSSH PACKAGE INSTALLED. Be sure to do a:
      
          pkg install -r Emulab pssh
      
      on your boss node. See the new utils/pushrootkeys.in script for more.
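
      For illustration, a minimal sketch of the kind of pscp invocation such a
      push might use; the paths and options here are assumptions, not what
      pushrootkeys actually does:

          # Hypothetical sketch only; see utils/pushrootkeys.in for the
          # real implementation. Pushes the per-experiment private key
          # to all nodes in parallel, as root.
          my $hostfile = "/tmp/pushkeys.$$.hosts";  # one target node per line
          my $keyfile  = "/tmp/pushkeys.$$.key";    # the experiment private key
          system("pscp -h $hostfile -l root -O StrictHostKeyChecking=no ".
                 "$keyfile /root/.ssh/id_rsa") == 0
              or die("Could not push the private key to all nodes\n");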
      
      The public key is distributed via the "tmcc localization" command which
      was already designed to handle adding multiple public keys to root's
      authorized_keys file on a node.
      
      This approach should be backward compatible with old images. I BUMPED THE
      VERSION NUMBER OF TMCD so that newer clients can also get back (via
      rc.localize) a list of keys and the names of the files they should be stashed
      in. This is used to allow us to pass along the SSL and SSH versions of the
      public key so that they can be placed in /root/.ssl/<node>.pub and
      /root/.ssh/id_rsa.pub respectively. Note that this step is not necessary for
      inter-node ssh to work.
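
      As a hedged illustration of that client-side handling (not the actual
      rc.localize code, and the entry layout here is assumed), the returned
      keys might be written out like this:

          # Hypothetical sketch; the real logic lives in the rc.localize
          # client script. Each entry is assumed to carry key material
          # plus the target file name returned by tmcd.
          my @keys = (
              { "file" => "/root/.ssh/id_rsa.pub",
                "data" => "ssh-rsa AAAA... root\n" },
          );
          foreach my $key (@keys) {
              open(my $fh, ">", $key->{'file'})
                  or die("Cannot write $key->{'file'}\n");
              print $fh $key->{'data'};
              close($fh);
              chmod(0600, $key->{'file'});
          }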
      
      Also passed along is an indication of whether the returned key is encrypted.
      This might be used in Round 2 if we securely implant a shared secret on every
      node at imaging time and then use that to encrypt the ssh private key such
      that we can return it via rc.localize. But the client-side script currently
      does not implement any decryption, so the client side would need to be
      changed again in the future.
      
      The per-experiment root keypair mechanism is currently exposed to the user
      via old-school NS experiments, through a new node "rootkey" method. To
      export the private key to "nodeA" and the public key to "nodeB" do:
      
          $nodeA rootkey private 1
          $nodeB rootkey public 1
      
      This enables an asymmetric relationship such that "nodeA" can ssh into
      "nodeB" as root but not vice-versa. For a symmetric relationship you would do:
      
          $nodeA rootkey private 1
          $nodeB rootkey private 1
          $nodeA rootkey public 1
          $nodeB rootkey public 1
      
      These user specifications will be overridden by hardwired Emulab restrictions.
      The current restrictions are that we do *not* distribute a root pubkey to
      tainted nodes (as it opens a path to root on a node where no one should be
      root) or any keys to firewall nodes, virtnode hosts, delay nodes, subbosses,
      storagehosts, etc., which are not really part of the user topology.
      
      For more on how we got here and what might happen in Round 2, see:
      
          #302
  3. 13 Jul, 2017 1 commit
    • Work on issue #302: · 92c8e4ba
      Leigh B Stoller authored
      Add new table experiment_keys to hold RSA priv/pub key pair and an SSH
      public key derived from the private key.
      
      The keys are initialized when the experiment is first created; I have not
      done anything to set the keys for existing experiments yet.
      
      But for testing, you can do this:
      
      	use lib "/usr/testbed/lib";
      	use Experiment;
      
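      	# Look up an existing experiment by pid/eid, then
      	# generate and store its keypair.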
      	my $experiment = Experiment->Lookup("testbed", "layers");
      	$experiment->GenerateKeys();
  4. 30 May, 2017 4 commits
    • Amend last commit. · 991986c5
      Leigh B Stoller authored
    • Rework how we store the sliver/slice status from the clusters: · e5d36e0d
      Leigh B Stoller authored
      In the beginning, the number and size of experiments were small, and so
      storing the entire slice/sliver status blob as json in the web task was
      fine, even though we had to lock tables to prevent races between the
      event updates and the local polling.
      
      But lately the size of those json blobs has gotten huge, and the lock is
      bogging things down; we cannot keep up with the number of events coming
      from all the clusters and get really far behind.
      
      So I have moved the status blobs out of the per-instance web task and
      into new tables, one per slice and one per node (sliver). This keeps
      the blobs very small and thus the lock time very short. So now we can
      keep up with the event stream.
      
      If we grow enough that this becomes a problem again, we can switch to
      innodb for the per-sliver table and do row locking instead of table
      locking, but I do not think that will happen.
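
      For illustration only, a sketch of the shape such a per-sliver table
      might take; the table and column names are hypothetical, since the
      commit does not show the actual schema:

          use lib "/usr/testbed/lib";
          use emdb;

          # Hypothetical schema sketch; not the actual table definition.
          DBQueryFatal("create table if not exists sliver_status ( ".
                       " uuid varchar(40) not null, ".
                       " node_id varchar(32) not null, ".
                       " status text, ".
                       " primary key (uuid, node_id))");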
    • Possible fix for issue #296 · 60e65004
      Leigh B Stoller authored