1. 30 Aug, 2017 1 commit
  2. 18 Aug, 2017 1 commit
  3. 27 Jul, 2017 1 commit
  4. 26 Jul, 2017 1 commit
      Support for per-experiment root keypairs (Round 1). See issue #302. · c6150425
      Mike Hibler authored
      Provide automated setup of an ssh keypair enabling root to login without
      a password between nodes. The biggest challenge here is to get the private
      key onto nodes in such a way that a non-root user on those nodes cannot
      obtain it. Otherwise that user would be able to ssh as root to any node.
      This precludes simple distribution of the private key using tmcd/tmcc as
      any user can do a tmcc (tmcd authentication is based on the node, not the
      user).
      
      This version does a post-imaging "push" of the private key from boss using
      ssh. The key is pushed from tbswap after nodes are imaged but before the
      event system, and thus any user startup scripts, are started. We actually
      use "pssh" (really "pscp") to scale a bit better, so YOU MUST HAVE THE
      PSSH PACKAGE INSTALLED. So be sure to do a:
      
          pkg install -r Emulab pssh
      
      on your boss node. See the new utils/pushrootkeys.in script for more.
      
      The public key is distributed via the "tmcc localization" command which
      was already designed to handle adding multiple public keys to root's
      authorized_keys file on a node.
      
      This approach should be backward compatible with old images. I BUMPED THE
      VERSION NUMBER OF TMCD so that newer clients can also get back (via
      rc.localize) a list of keys and the names of the files they should be stashed
      in. This is used to allow us to pass along the SSL and SSH versions of the
      public key so that they can be placed in /root/.ssl/<node>.pub and
      /root/.ssh/id_rsa.pub respectively. Note that this step is not necessary for
      inter-node ssh to work.
      
      Also passed along is an indication of whether the returned key is encrypted.
      This might be used in Round 2 if we securely implant a shared secret on every
      node at imaging time and then use that to encrypt the ssh private key such
      that we can return it via rc.localize. But the client side script currently
      does not implement any decryption, so it would need to be changed again in
      the future.
      
      The per-experiment root keypair mechanism is currently exposed to the user
      via old school NS experiments, by way of a new node "rootkey" method. To
      export the private key to "nodeA" and the public key to "nodeB" do:
      
          $nodeA rootkey private 1
          $nodeB rootkey public 1
      
      This enables an asymmetric relationship such that "nodeA" can ssh into
      "nodeB" as root but not vice-versa. For a symmetric relationship you would do:
      
          $nodeA rootkey private 1
          $nodeB rootkey private 1
          $nodeA rootkey public 1
          $nodeB rootkey public 1
      
      These user specifications will be overridden by hardwired Emulab restrictions.
      The current restrictions are that we do *not* distribute a root pubkey to
      tainted nodes (as it opens a path to root on a node where no one should be
      root) or any keys to firewall nodes, virtnode hosts, delay nodes, subbosses,
      storagehosts, etc. which are not really part of the user topology.
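
      Those hardwired restrictions can be sketched as follows. This is a
      hypothetical helper (the names eligible_for_keys, "role", and "tainted"
      are illustrative, not the actual Emulab code), assuming the check returns
      a pair of flags for the private and public key respectively:

```python
def eligible_for_keys(node):
    """Hypothetical filter mirroring the hardwired key restrictions.

    Returns (private_ok, public_ok); field names are illustrative.
    """
    # Infrastructure nodes are not part of the user topology: no keys at all.
    if node["role"] in ("firewall", "virthost", "delay", "subboss", "storagehost"):
        return (False, False)
    # Tainted nodes must not receive the root pubkey, since that would open
    # a path to root on a node where no one should be root.
    if node["tainted"]:
        return (True, False)
    return (True, True)
```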
      
      For more on how we got here and what might happen in Round 2, see:
      
          #302
  5. 31 May, 2017 1 commit
  6. 22 May, 2017 1 commit
  7. 01 Feb, 2017 1 commit
  8. 12 Oct, 2016 1 commit
  9. 06 Oct, 2016 1 commit
  10. 12 Sep, 2016 1 commit
  11. 10 Jun, 2016 1 commit
  12. 06 May, 2016 1 commit
      Add a node/node_type "cyclewhenoff" attribute. · c29cc790
      Mike Hibler authored
      This will be used by the power command to tell it to try to power on a
      machine that fails to "cycle". ipmitool (or IPMI) seems to fail by default
      if you try to cycle a powered-off node.
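
      The intended fallback behavior can be sketched like this; the controller
      interface here is a stand-in, not ipmitool's actual API, and the attribute
      lookup is illustrative:

```python
class FakeIPMI:
    """Stand-in for an IPMI controller: refuses to 'cycle' a powered-off node."""
    def __init__(self, powered_on):
        self.powered_on = powered_on
        self.actions = []
    def is_on(self, node):
        return self.powered_on
    def cycle(self, node):
        if not self.powered_on:
            return False            # IPMI's default failure mode
        self.actions.append("cycle")
        return True
    def power_on(self, node):
        self.actions.append("on")
        self.powered_on = True
        return True

def power_cycle(node, attrs, ipmi):
    # If the node is off and has cyclewhenoff set, fall back to a plain
    # power-on instead of a cycle that is doomed to fail.
    if not ipmi.is_on(node) and attrs.get("cyclewhenoff"):
        return ipmi.power_on(node)
    return ipmi.cycle(node)
```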
  13. 25 Apr, 2016 1 commit
  14. 04 Apr, 2016 2 commits
  15. 28 Mar, 2016 1 commit
  16. 22 Feb, 2016 1 commit
  17. 05 Feb, 2016 3 commits
  18. 04 Feb, 2016 1 commit
      Fix Node::HaveRoutableIPs. · f2e7e6f3
      Gary Wong authored
      It was checking the count of database rows (which would always have
      been 1), not the count of free addresses.
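
      The bug class is easy to reproduce in miniature. A COUNT(*) query always
      returns exactly one row, so testing the number of result rows is always
      true; the fix is to read the count value out of that row (the function
      names below are illustrative, not the actual code):

```python
def have_routable_ips_broken(rows):
    # Broken pattern: COUNT(*) returns exactly one row, so this is always true.
    return len(rows) > 0

def have_routable_ips_fixed(rows):
    # Fixed pattern: read the actual count of free addresses from the row.
    (count,) = rows[0]
    return count > 0
```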
  19. 03 Feb, 2016 1 commit
      Add support for multiple pre-reservations per project: · 103e0385
      Leigh Stoller authored
      When creating a pre-reserve, a new -n option specifies a name for the
      reservation, which defaults to "default". All other operations require an
      -n option to avoid messing with the wrong reservation. You are not allowed
      to reuse a reservation name in a project, of course. Priorities are
      probably more important now; we might want to change the default from 0 to
      something higher, and change all the current priorities.
      
      For bookkeeping, the nodes table now has a reservation_name slot that is
      set with the reserved_pid. This allows us to revoke the nodes associated
      with a specific reservation. Bonus feature is that when setting the
      reserved_pid via the web interface, we leave the reservation_name null, so
      those won't ever be revoked by the prereserve command line tool.
      
      New feature: when revoking a pre-reserve, we now look to see if nodes being
      revoked are free and can be assigned to other pre-reserves. We used to not
      do anything, and so had to wait until that node was allocated and released
      later, to see if it could move into a pre-reserve.
      
      Also a change required by node-specific reservations: when we free a node,
      we need to make sure we actually use that node, so we have to cycle through
      all reservations in priority order until it can be used. We did not need to
      do this before.
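
      The freed-node placement described above can be sketched as follows.
      The Resv class and place_freed_node are hypothetical shapes for
      illustration, not the actual prereserve code:

```python
class Resv:
    """Toy pre-reservation: a priority, a remaining count, and an optional
    explicit node list (None means any suitable node)."""
    def __init__(self, name, priority, needed, node_list=None):
        self.name = name
        self.priority = priority
        self.needed = needed
        self.node_list = node_list
        self.nodes = []
    def wants(self, node):
        return self.node_list is None or node in self.node_list
    def assign(self, node):
        self.nodes.append(node)
        self.needed -= 1

def place_freed_node(node, reservations):
    # Walk pre-reserves in priority order and give the freed node to the
    # first one that still needs it. Previously the node just went back to
    # the free pool and had to be allocated and released again before a
    # pre-reserve could pick it up.
    for resv in sorted(reservations, key=lambda r: -r.priority):
        if resv.needed > 0 and resv.wants(node):
            resv.assign(node)
            return resv
    return None
```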
  20. 01 Feb, 2016 1 commit
  21. 29 Jan, 2016 1 commit
      New syntax for pre-reserving specific nodes: · 6be50741
      Leigh Stoller authored
      	boss> wap perl prereserve lbsbox pcxxx pcyyy ...
      
      Overall pre-reserve handling is unchanged; if there is another higher
      priority type pre-reserve, it will be filled first. Moral: be sure to think
      about the priority argument, which you had to do anyway.
  22. 28 Jan, 2016 1 commit
  23. 27 Jan, 2016 1 commit
  24. 16 Dec, 2015 1 commit
  25. 16 Nov, 2015 1 commit
  26. 10 Nov, 2015 1 commit
  27. 27 Aug, 2015 1 commit
  28. 29 Jul, 2015 1 commit
  29. 13 Jul, 2015 1 commit
  30. 04 May, 2015 1 commit
  31. 13 Mar, 2015 2 commits
    • Gary Wong · 7a76cc98
    • Leigh Stoller's avatar
      Checkpoint various changes. · 0d09773b
      Leigh Stoller authored
      * Various UI tweaks for profile versioning.
      
      * Roll out profile versioning for all users.
      
      * Disable/Hide publishing for now.
      
      * Move profile/version URLs into a modal that is invoked by a new Share
        button, that explains things a little better.
      
      * Unify profile permissions between APT/Cloudlab. Users now see just two
        choices; project or anyone, where anyone includes guest users in the APT
        interface, for now.
      
      * Get rid of "List on the front page" checkbox, all public profiles will be
        listed, but red-dot can still set that bit.
      
      * Return the publicURL dynamically in the status blob, and set/show the
        sliver info button as soon as we get it.
      
      * Console password support; if the aggregate returns the console password,
        add an item to the context menu to show it.
      
      * Other stuff.
  32. 18 Jan, 2015 1 commit
      Change tiplines urlstamp to be an expiration time for the urlhash. · a40fb744
      Mike Hibler authored
      Previously it was the creation stamp for the hash. By making it the
      expiration time, we can do different times for different nodes.
      
      Note that there is no serious compatibility issue with re-purposing
      the DB field. It is almost always zero (since they are only valid
      for 5 minutes) and if it isn't zero when the new code is installed,
      the hash will just immediately become invalid. So what? Big deal!
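
      The two validity checks can be sketched side by side; function names and
      the 5-minute constant placement are illustrative:

```python
def hash_valid_old(now, urlstamp, lifetime=300):
    # Old scheme: urlstamp was the creation time of the hash, with a fixed
    # 5-minute lifetime baked into the check.
    return now - urlstamp < lifetime

def hash_valid_new(now, urlstamp):
    # New scheme: urlstamp is itself the expiration time, so different
    # nodes can be given different lifetimes at hash-creation time.
    return now < urlstamp
```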
  33. 05 Dec, 2014 1 commit
      Support dynamically created NFS-root filesystems for admin MFS. · f36bcfab
      Mike Hibler authored
      Significant hackery involved. Similar to exports_setup, there is a boss-side
      script and an ops-side script to handle creation and destruction of the ZFS
      clones that are used for the NFS filesystem. The rest was all about when to
      invoke said scripts.
      
      Creation is easy, we just do a clone whenever the TBAdminMfsSelect is called
      to "turn on" node admin mode. Destruction is not so simple. If we destroyed
      the clone on the corresponding TBAdminMfsSelect "off" call, then we could
      yank the filesystem out from under the node if it was still running in the
      MFS (e.g., "node_admin -n off node"). While that would probably be okay in
      most uses, where at worst we would have to apod or power cycle the node, we
      try to do better. TBAdminMfsSelect "off" instead just renames the clone
      (to "<nodeid>-DEAD") so that it stays available if the node is running on
      it at the time, but ensures that it will not get accidentally used by any
      future boot. We check for, and destroy, any previous versions for a node
      every time we invoke the nfsmfs_setup code for that node. We also destroy
      live or dead clones whenever we call nfree. This ensures that all MFSes
      get cleaned up at experiment swapout time.
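
      The clone lifecycle can be sketched as a small state machine. The FakeZFS
      class and function names below are hypothetical stand-ins loosely following
      TBAdminMfsSelect and nfree; only the "<nodeid>-DEAD" rename convention is
      from the actual change:

```python
class FakeZFS:
    """Stand-in for the boss/ops-side scripts that manage ZFS clones."""
    def __init__(self):
        self.datasets = set()
    def exists(self, name):
        return name in self.datasets
    def clone(self, name):
        self.datasets.add(name)
    def rename(self, old, new):
        self.datasets.discard(old)
        self.datasets.add(new)
    def destroy(self, name):
        self.datasets.discard(name)

def admin_mfs_on(node, zfs):
    # "Turn on" node admin mode: reap any previous dead clone for this
    # node, then create a fresh clone for the NFS-root filesystem.
    dead = node + "-DEAD"
    if zfs.exists(dead):
        zfs.destroy(dead)
    zfs.clone(node + "-admin")

def admin_mfs_off(node, zfs):
    # "Turn off": do NOT destroy -- the node may still be running on this
    # filesystem. Rename to <nodeid>-DEAD so it stays available now but
    # can never be accidentally used by a future boot.
    zfs.rename(node + "-admin", node + "-DEAD")

def nfree(node, zfs):
    # Experiment swapout: destroy live and dead clones unconditionally,
    # ensuring all MFSes get cleaned up.
    zfs.destroy(node + "-admin")
    zfs.destroy(node + "-DEAD")
```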
  34. 25 Nov, 2014 2 commits
  35. 11 Nov, 2014 1 commit
      More TaintState management updates. · d24df9d2
      Kirk Webb authored
      * Do not "reset" taint states to match partitions after OS load.
      
      Encumber the node with any additional taint states found across the
      OSes loaded on its partitions (union of states). Change the name of
      the associated Node object method to better represent the
      functionality.
      
      * Clear all taint states when a node exits "reloading"
      
      When the reload_daemon is finished with a node and ready to release it,
      it will now clear any/all taint states set on the node.  This is the
      only automatic way to have a node's taint states cleared.  Users
      cannot clear node taint states by os_load'ing away all tainted
      partitions after this commit; nodes must travel through reloading
      to get cleared.
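
      The two rules can be sketched as follows; function names and the taint
      strings are illustrative, not the actual Node object methods:

```python
def encumber_taints(node_taints, partition_os_taints):
    # OS load no longer "resets" the node's taint state to match the new
    # partition; the node accumulates the union of the taint states of
    # every OS loaded across its partitions.
    for taints in partition_os_taints:
        node_taints |= taints
    return node_taints

def release_from_reloading(node_taints):
    # When the reload_daemon releases a node, it clears any/all taints.
    # This is the only automatic path to a clean taint state.
    node_taints.clear()
    return node_taints
```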