1. 29 Jul, 2015 1 commit
  2. 13 Jul, 2015 1 commit
  3. 04 May, 2015 1 commit
  4. 13 Mar, 2015 2 commits
    • Gary Wong · 7a76cc98
    • Checkpoint various changes. · 0d09773b
      Leigh Stoller authored
      * Various UI tweaks for profile versioning.
      
      * Roll out profile versioning for all users.
      
      * Disable/Hide publishing for now.
      
      * Move profile/version URLs into a modal that is invoked by a new Share
        button; the modal explains things a little better.
      
      * Unify profile permissions between APT/Cloudlab. Users now see just two
        choices: project or anyone, where anyone includes guest users in the APT
        interface, for now.
      
      * Get rid of the "List on the front page" checkbox; all public profiles will
        be listed, but red-dot can still set that bit.
      
      * Return the publicURL dynamically in the status blob, and set/show the
        sliver info button as soon as we get it.
      
      * Console password support; if the aggregate returns the console password,
        add an item to the context menu to show it.
      
      * Other stuff.
  5. 18 Jan, 2015 1 commit
    • Change tiplines urlstamp to be an expiration time for the urlhash. · a40fb744
      Mike Hibler authored
      Previously it was the creation stamp for the hash. By making it the
      expiration time, we can do different times for different nodes.
      
      Note that there is no serious compatibility issue with re-purposing
      the DB field. It is almost always zero (since hashes are only valid
      for 5 minutes), and if it isn't zero when the new code is installed,
      the hash will just immediately become invalid. So what? Big deal!
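      A minimal sketch of the semantic change, in Python rather than the actual
      tip-server code, with illustrative function names (not the real ones):
      under the old scheme the stamp was a creation time and every hash got the
      same hardwired lifetime, while under the new scheme the stamp is itself
      the expiration, so different nodes can be granted different lifetimes.

        import time

        FIXED_LIFETIME = 5 * 60   # seconds; the old scheme hardwired this for every hash

        def hash_valid_old(urlstamp: int) -> bool:
            # Old semantics: urlstamp recorded when the urlhash was created.
            return time.time() < urlstamp + FIXED_LIFETIME

        def hash_valid_new(urlstamp: int) -> bool:
            # New semantics: urlstamp records when the urlhash expires.
            return time.time() < urlstamp

        def issue_urlstamp(lifetime_secs: int) -> int:
            # Per-node lifetimes are possible because the lifetime is folded
            # into the stamp when the hash is issued.
            return int(time.time()) + lifetime_secs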
  6. 05 Dec, 2014 1 commit
    • Support dynamically created NFS-root filesystems for admin MFS. · f36bcfab
      Mike Hibler authored
      Significant hackery involved. Similar to exports_setup, there is a boss-side
      script and an ops-side script to handle creation and destruction of the ZFS
      clones that are used for the NFS filesystem. The rest was all about when to
      invoke said scripts.
      
      Creation is easy: we just do a clone whenever TBAdminMfsSelect is called
      to "turn on" node admin mode. Destruction is not so simple. If we destroyed
      the clone on the corresponding TBAdminMfsSelect "off" call, then we could
      yank the filesystem out from under the node if it was still running in the
      MFS (e.g., "node_admin -n off node"). While that would probably be okay in
      most uses, where at worst we would have to apod or power cycle the node, we
      try to do better. TBAdminMfsSelect "off" instead just renames the clone
      (to "<nodeid>-DEAD") so that it stays available if the node is running on
      it at the time, but ensures that it will not get accidentally used by any
      future boot. We check for, and destroy, any previous versions for a node
      every time we invoke the nfsmfs_setup code for that node. We also destroy
      live or dead clones whenever we call nfree. This ensures that all MFSes
      get cleaned up at experiment swapout time.
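      A minimal sketch of that clone lifecycle, written in Python rather than
      the actual boss-/ops-side scripts; the pool layout and dataset names
      below are assumptions for illustration only.

        import subprocess

        POOL = "z/nfsmfs"                # hypothetical prefix for per-node clones
        BASE_SNAP = POOL + "/base@mfs"   # hypothetical golden admin-MFS snapshot

        def zfs(*args: str) -> None:
            subprocess.run(["zfs", *args], check=True)

        def destroy_if_exists(dataset: str) -> None:
            # "zfs destroy" fails if the dataset is absent; ignore that case.
            subprocess.run(["zfs", "destroy", "-r", dataset], check=False)

        def admin_mfs_on(nodeid: str) -> None:
            # TBAdminMfsSelect "on": destroy any previous live or dead clone
            # for the node, then create a fresh clone for its NFS root.
            destroy_if_exists(POOL + "/" + nodeid + "-DEAD")
            destroy_if_exists(POOL + "/" + nodeid)
            zfs("clone", BASE_SNAP, POOL + "/" + nodeid)

        def admin_mfs_off(nodeid: str) -> None:
            # TBAdminMfsSelect "off": never yank the filesystem out from under
            # a node that may still be running on it; rename the clone so no
            # future boot picks it up accidentally.
            zfs("rename", POOL + "/" + nodeid, POOL + "/" + nodeid + "-DEAD")

        def nfree(nodeid: str) -> None:
            # Experiment swapout: unconditionally clean up live or dead clones.
            destroy_if_exists(POOL + "/" + nodeid)
            destroy_if_exists(POOL + "/" + nodeid + "-DEAD")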
  7. 25 Nov, 2014 2 commits
  8. 11 Nov, 2014 1 commit
    • More TaintState management updates. · d24df9d2
      Kirk Webb authored
      * Do not "reset" taint states to match partitions after OS load.
      
      Encumber the node with any additional taint states found across the
      OSes loaded on its partitions (union of states).  Change the
      name of the associated Node object method to better represent the
      functionality.
      
      * Clear all taint states when a node exits "reloading"
      
      When the reload_daemon is finished with a node and ready to release it,
      it will now clear any/all taint states set on the node.  This is the
      only automatic way to have a node's taint states cleared.  Users
      cannot clear node taint states by os_load'ing away all tainted
      partitions after this commit; nodes must travel through reloading
      to get cleared.
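      A rough sketch of the two behaviors, assuming a simple in-memory model
      of taint sets rather than the real Node object and database:

        def encumber_node_taints(node_taints: set, partition_taints: list) -> set:
            # After an OS load the node keeps its existing taints and picks up
            # any additional ones found on the OSes across its partitions
            # (union of states); nothing is "reset" to match the partitions.
            return node_taints.union(*partition_taints)

        def release_from_reloading(node_taints: set) -> set:
            # When the reload_daemon is done with a node and releases it from
            # "reloading", all taint states are cleared; this is now the only
            # automatic path to a taint-free node.
            return set()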
  9. 20 Oct, 2014 1 commit
  10. 04 Sep, 2014 1 commit
  11. 26 Aug, 2014 1 commit
  12. 11 Jul, 2014 1 commit
  13. 01 Jul, 2014 1 commit
  14. 06 Jun, 2014 1 commit
  15. 04 Jun, 2014 1 commit
  16. 13 May, 2014 1 commit
  17. 12 May, 2014 1 commit
    • Fix for loading an image on a remoteded pg node. This is a kludge, the · 15dce279
      Leigh Stoller authored
      notion of "dedicated" is currently a type-specific attribute, but we
      also have "shared" nodes running on "dedicated" nodes, which messes
      everything up. I am not inclined to fix the underlying problem since
      Utah is the only site that uses this stuff, and these nodes are slowly
      dying out anyway.
  18. 16 Apr, 2014 1 commit
  19. 15 Apr, 2014 2 commits
  20. 03 Apr, 2014 1 commit
  21. 20 Mar, 2014 1 commit
  22. 17 Mar, 2014 2 commits
    • Refactor taintstate code and move final taint updates to stated. · 662972cd
      Kirk Webb authored
      Can't do the untainting for all cases in libosload*.  The untainting
      is now hooked into stated, where we catch the nodes as they send
      along their "RELOADDONE" events to update their taint state according
      to the final state of their partitions.
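      A hedged sketch of the hook point this describes; the dispatcher below is
      invented for illustration, since the real logic lives in stated:

        def on_node_event(node_id: str, event: str, update_taint) -> None:
            # stated catches the "RELOADDONE" a node sends when its reload
            # finishes and only then recomputes the node's taint state from
            # the final contents of its partitions.
            if event == "RELOADDONE":
                update_taint(node_id)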
    • Add taint state tracking for OSes and Nodes. · 1de4e516
      Kirk Webb authored
      Emulab can now propagate OS taint traits onto nodes that load those OSes.
      The primary reason for doing this is to support loading images that
      require special treatment of the node.  For example, an OS that contains
      proprietary software and will be used as an appliance (blackbox)
      can be marked (tainted) as such.  Code that manages user accounts on such
      OSes, along with other side-channel providers (console, node admin, image
      creation), can key off of these taint states to prevent or alter access.
      
      Taint states are defined as SQL sets in the 'os_info' and 'nodes' tables,
      kept in the 'taint_states' column in both.  Currently these sets consist
      of the following entries:
      
      * usermode: OS/node should only allow user level access (not root)
      * blackbox: OS/node should allow no direct interaction via shell, console, etc.
      * dangerous: OS image may contain malicious software.
      
      Taint states are inherited by a node from OSes it loads during the OS load
      process.  Similarly, they are cleared from nodes as these OSes are removed.
      Any taint state applied to a node will currently enforce disk zeroing.
      
      No other tools/subsystems consider the taint states currently, but that will
      change soon.
      
      Setting taint states for an OS has to be done via SQL presently.
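      As a rough illustration of how a tool might key off these states (the
      real definitions are SQL SET columns; helper names here are hypothetical):

        TAINT_STATES = {"usermode", "blackbox", "dangerous"}

        def allows_root_access(taints: set) -> bool:
            # "usermode": only user-level access should be granted, never root.
            return not ({"usermode", "blackbox"} & taints)

        def allows_direct_interaction(taints: set) -> bool:
            # "blackbox": no direct interaction via shell, console, etc.
            return "blackbox" not in taints

        def must_zero_disk(taints: set) -> bool:
            # Any taint state applied to a node currently enforces disk zeroing.
            return bool(taints)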
  23. 08 Jan, 2014 1 commit
  24. 31 Dec, 2013 1 commit
  25. 16 Dec, 2013 1 commit
  26. 19 Sep, 2013 2 commits
  27. 09 Sep, 2013 1 commit
  28. 09 Aug, 2013 1 commit
    • I added two new actions to PerformOperationalAction, which appear to · cfd1974a
      Leigh Stoller authored
      work fine when the nodes are behaving themselves.
      
      1) geni_update_users: Takes a slice credential and a keys argument. Can
        only be invoked when the sliver is in the started/geni_ready state.
        Moves the sliver to the geni_updating_users state until all of the
        nodes have completed the update, at which time the sliver moves back
        to started/geni_ready.
      
      2) geni_updating_users_cancel: We can assume that some nodes will be whacky
        and will not perform the update when told to. This cancels the
        update and moves the sliver back to started/geni_ready.
      
      A couple of notes:
      
      * The current emulab node update time is about three minutes; the
        sliver is in this new state for that time and cannot be restarted or
        stopped. It can of course be deleted.
      
      * Should we allow restart while in the updating phase? We could, but
        then I need more bookkeeping.
      
      * Some nodes might not be running the watchdog, or might not even be
        running an emulab image, so the operation will never end, not until
        canceled. I could add a timeout, but that will require a monitor or
        adding DB state to store the start time.
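      A sketch of the sliver-state handling the two actions imply, using
      illustrative names rather than the actual server code:

        READY = "geni_ready"
        UPDATING = "geni_updating_users"

        def perform_operational_action(state: str, action: str) -> str:
            if action == "geni_update_users":
                # Takes a slice credential and a keys argument; only legal
                # while the sliver is started/geni_ready.
                if state != READY:
                    raise ValueError("geni_update_users requires geni_ready")
                return UPDATING
            if action == "geni_updating_users_cancel":
                # Some nodes may never perform the update; cancel puts the
                # sliver back into started/geni_ready.
                if state != UPDATING:
                    raise ValueError("no user update in progress")
                return READY
            raise ValueError("unsupported action: " + action)

        def on_all_nodes_updated(state: str) -> str:
            # Once every node reports completion, the sliver moves back to
            # started/geni_ready on its own.
            return READY if state == UPDATING else state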
  29. 21 May, 2013 2 commits
  30. 14 May, 2013 1 commit
  31. 30 Apr, 2013 1 commit
    • Add physical memory accounting for openvz/xen nodes. The total · 11752432
      Leigh Stoller authored
      amount a physical node has is stored in the node types table, and the
      per-vm memory requirement is stored in the nodes table. ptopgen
      adds up usage, and subtracts from the total for the ptop file.
      The vtop number comes from a virt_node_attribute table, and we
      pass this through to the client side. Note that this is less
      important for openvz, more so for XEN.
      
      In the NS file:
      
      	tb-set-node-memory-size $node 1024
      
      Number is in MBs. The mapper defaults this to 128 for openvz and 256
      for xen. Maximum is hardwired to 256 and 512 respectively. Need to
      think about a good way to configure this in.
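      A minimal sketch of the accounting, with plain values standing in for the
      node_types, nodes, and virt_node_attributes tables:

        # Defaults and maximums are the ones quoted above (MB).
        DEFAULT_MB = {"openvz": 128, "xen": 256}
        MAX_MB = {"openvz": 256, "xen": 512}

        def vm_memory(vmtype: str, requested_mb=None) -> int:
            # The requested size comes from tb-set-node-memory-size; a missing
            # value falls back to the per-type default, capped at the maximum.
            mb = requested_mb if requested_mb is not None else DEFAULT_MB[vmtype]
            return min(mb, MAX_MB[vmtype])

        def remaining_memory(total_mb: int, hosted_vm_mb: list) -> int:
            # ptopgen adds up what the hosted VMs already claim and subtracts
            # it from the physical node's total for the ptop file.
            return total_mb - sum(hosted_vm_mb)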
  32. 22 Apr, 2013 1 commit
  33. 10 Apr, 2013 1 commit
  34. 25 Mar, 2013 1 commit