- 16 Dec, 2015 1 commit
  Leigh B Stoller authored:
    unlocked checked, used in the Geni getticket() routine.

- 16 Nov, 2015 1 commit
  Leigh B Stoller authored:
    Need to change this entirely.

- 10 Nov, 2015 1 commit
  Leigh B Stoller authored

- 27 Aug, 2015 1 commit
  Kirk Webb authored:
    Stopgap solution, and somewhat risky. What we would prefer is a solution that requires users to authenticate and grab a new keyed URL each time they want to connect to a console.

- 29 Jul, 2015 1 commit
  Leigh B Stoller authored

- 13 Jul, 2015 1 commit
  Leigh B Stoller authored

- 04 May, 2015 1 commit
  Leigh B Stoller authored:
    change had some problems.

- 13 Mar, 2015 2 commits
  Gary Wong authored
  Leigh B Stoller authored:
    * Various UI tweaks for profile versioning.
    * Roll out profile versioning for all users.
    * Disable/Hide publishing for now.
    * Move profile/version URLs into a modal that is invoked by a new Share button, which explains things a little better.
    * Unify profile permissions between APT/Cloudlab. Users now see just two choices, project or anyone, where "anyone" includes guest users in the APT interface, for now.
    * Get rid of the "List on the front page" checkbox; all public profiles will be listed, but red-dot can still set that bit.
    * Return the publicURL dynamically in the status blob, and set/show the sliver info button as soon as we get it.
    * Console password support: if the aggregate returns the console password, add an item to the context menu to show it.
    * Other stuff.

- 18 Jan, 2015 1 commit
  Mike Hibler authored:
    Previously it was the creation stamp for the hash. By making it the expiration time, we can use different times for different nodes. Note that there is no serious compatibility issue with re-purposing the DB field: it is almost always zero (since hashes are only valid for five minutes), and if it isn't zero when the new code is installed, the hash will just immediately become invalid. So what? Big deal!

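The semantics of that change can be sketched as follows (hypothetical Python helpers; the actual Emulab code is Perl and these names are invented). Storing an expiration time gives each node its own validity window, whereas a creation stamp forces one global lifetime:

```python
import time

# Old scheme (hypothetical): the DB field held the creation stamp, so
# every hash implicitly expired a fixed 5 minutes after creation.
def hash_valid_old(created_at, now=None, lifetime=300):
    now = time.time() if now is None else now
    return now < created_at + lifetime

# New scheme (hypothetical): the field holds the expiration time
# directly, so different nodes can get different lifetimes.
def hash_valid_new(expires_at, now=None):
    now = time.time() if now is None else now
    return now < expires_at

# Note the upgrade property from the commit message: a leftover non-zero
# creation stamp, reinterpreted as an expiration time, lies in the past,
# so any such hash immediately becomes invalid.
```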
- 05 Dec, 2014 1 commit
  Mike Hibler authored:
    Significant hackery involved. Similar to exports_setup, there is a boss-side script and an ops-side script to handle creation and destruction of the ZFS clones that are used for the NFS filesystem. The rest was all about when to invoke said scripts.

    Creation is easy: we just do a clone whenever TBAdminMfsSelect is called to "turn on" node admin mode.

    Destruction is not so simple. If we destroyed the clone on the corresponding TBAdminMfsSelect "off" call, we could yank the filesystem out from under the node if it was still running in the MFS (e.g., "node_admin -n off node"). While that would probably be okay in most cases, where at worst we would have to apod or power cycle the node, we try to do better. TBAdminMfsSelect "off" instead just renames the clone (to "<nodeid>-DEAD") so that it stays available if the node is running on it at the time, but ensures that it will not get accidentally used by any future boot.

    We check for, and destroy, any previous versions for a node every time we invoke the nfsmfs_setup code for that node. We also destroy live or dead clones whenever we call nfree. This ensures that all MFSes get cleaned up at experiment swapout time.

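The rename-instead-of-destroy dance might be sketched like this (a simulation with hypothetical helper names, not the actual boss/ops scripts; only the "<nodeid>-DEAD" naming comes from the commit message):

```python
# Simulated clone bookkeeping for the nfsmfs scheme described above.
# The set stands in for the pool of ZFS clones.
clones = set()

def admin_on(nodeid):
    # Creation is easy: clone when node admin mode is turned on.
    clones.add(nodeid)

def admin_off(nodeid):
    # Rename rather than destroy, in case the node is still running on
    # the filesystem; the -DEAD clone won't be used by any future boot.
    if nodeid in clones:
        clones.remove(nodeid)
        clones.add(nodeid + "-DEAD")

def nfsmfs_setup(nodeid):
    # Destroy any previous live or dead clone before creating a new one.
    clones.discard(nodeid)
    clones.discard(nodeid + "-DEAD")
    clones.add(nodeid)

def nfree(nodeid):
    # Swapout cleanup: destroy live or dead clones unconditionally.
    clones.discard(nodeid)
    clones.discard(nodeid + "-DEAD")
```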
- 25 Nov, 2014 2 commits
  Mike Hibler authored
  Mike Hibler authored:
    Keeping them up to date throughout the node lifecycle is not a lot of fun...

- 11 Nov, 2014 1 commit
  Kirk Webb authored:
    * Do not "reset" taint states to match partitions after OS load. Instead, encumber the node with any additional taint states found across the OSes loaded on its partitions (the union of their states). Change the name of the associated Node object method to better represent the functionality.
    * Clear all taint states when a node exits "reloading". When the reload_daemon is finished with a node and ready to release it, it will now clear any and all taint states set on the node. This is the only automatic way to have a node's taint states cleared. Users cannot clear node taint states by os_load'ing away all tainted partitions after this commit; nodes must travel through reloading to get cleared.

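The union semantics above can be sketched as (hypothetical helpers, not the real Node object method):

```python
# A node accumulates the taint states of every OS loaded across its
# partitions, rather than being reset to match only the most recently
# loaded partition.
def node_taints(partition_oses):
    """partition_oses: iterable of per-OS taint-state sets."""
    taints = set()
    for os_taints in partition_oses:
        taints |= os_taints
    return taints

# Only the reload path clears taints, mirroring the reload_daemon
# behavior described above (hypothetical node representation).
def finish_reloading(node):
    node["taint_states"] = set()
```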
- 20 Oct, 2014 1 commit
  Kirk Webb authored:
    If a user tries to os_load a virtnode whose physnode is not tainted, skip (deny) it. Also add a second safety check in Node->OSSelect to enforce node tainting.

- 04 Sep, 2014 1 commit
  Leigh B Stoller authored

- 26 Aug, 2014 1 commit
  Leigh B Stoller authored

- 11 Jul, 2014 1 commit
  Leigh B Stoller authored:
    while creating a set of new vnodes. Also some minor cleanup.

- 01 Jul, 2014 1 commit
  Leigh B Stoller authored

- 06 Jun, 2014 1 commit
  Leigh B Stoller authored

- 04 Jun, 2014 1 commit
  Leigh B Stoller authored

- 13 May, 2014 1 commit
  Leigh B Stoller authored

- 12 May, 2014 1 commit
  Leigh B Stoller authored:
    notion of "dedicated" is currently a type-specific attribute, but we also have "shared" nodes running on "dedicated" nodes, which messes everything up. I am not inclined to fix the underlying problem, since Utah is the only site that uses this stuff, and these nodes are slowly dying out anyway.

- 16 Apr, 2014 1 commit
  Leigh B Stoller authored

- 15 Apr, 2014 2 commits
  Leigh B Stoller authored
  Leigh B Stoller authored

- 03 Apr, 2014 1 commit
  Leigh B Stoller authored:
    Doing this here allows jailconfig in tmcd to return it.

- 20 Mar, 2014 1 commit
  Kirk Webb authored:
    It's going to be used by both OSinfo and Node objects. New OSes will want to inherit taint states from the OS they are derived from.

- 17 Mar, 2014 2 commits
  Kirk Webb authored:
    Can't do the untainting for all cases in libosload*. The untainting is now hooked into stated, where we catch the nodes as they send along their "RELOADDONE" events and update their taint state according to the final state of their partitions.
  Kirk Webb authored:
    Emulab can now propagate OS taint traits onto nodes that load these OSes. The primary reason for doing this is for loading images which require special treatment of the node. For example, an OS that has proprietary software, and which will be used as an appliance (blackbox), can be marked (tainted) as such. Code that manages user accounts on such OSes, along with other side-channel providers (console, node admin, image creation), can key off of these taint states to prevent or alter access.

    Taint states are defined as SQL sets in the 'os_info' and 'nodes' tables, kept in the 'taint_states' column in both. Currently these sets are comprised of the following entries:

    * usermode: OS/node should only allow user-level access (not root).
    * blackbox: OS/node should allow no direct interaction via shell, console, etc.
    * dangerous: OS image may contain malicious software.

    Taint states are inherited by a node from the OSes it loads during the OS load process. Similarly, they are cleared from nodes as these OSes are removed. Any taint state applied to a node will currently enforce disk zeroing. No other tools/subsystems consider the taint states currently, but that will change soon. Setting taint states for an OS has to be done via SQL presently.

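Code that keys off these taint states might gate access roughly like this (a hypothetical policy check; the real enforcement lives in the account-management, console, and image-creation code):

```python
# Hypothetical access gate based on the taint states listed above.
def allowed(access, taints):
    """access: one of 'root', 'shell', 'console'; taints: set of states."""
    if "blackbox" in taints:
        # Blackbox: no direct interaction at all (no shell, no console).
        return False
    if "usermode" in taints and access == "root":
        # Usermode: user-level access only, so root is denied.
        return False
    return True
```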
- 08 Jan, 2014 1 commit
  Leigh B Stoller authored

- 31 Dec, 2013 1 commit
  Leigh B Stoller authored:
    that will allow the caller to access a console line (for a brief moment in time).

- 16 Dec, 2013 1 commit
  Leigh B Stoller authored

- 19 Sep, 2013 2 commits
  Leigh B Stoller authored
  Leigh B Stoller authored

- 09 Sep, 2013 1 commit
  Leigh B Stoller authored:
    (They do not work because of firewalling, and they time out, which takes too long.)

- 09 Aug, 2013 1 commit
  Leigh B Stoller authored:
    work fine when the nodes are behaving themselves.

    1) geni_update_users: Takes a slice credential and a keys argument. Can only be invoked when the sliver is in the started/geni_ready state. Moves the slice to the geni_updating_users state until all of the nodes have completed the update, at which time the sliver moves back to started/geni_ready.

    2) geni_updating_users_cancel: We can assume that some nodes will be whacky and will not perform the update when told to. This cancels the update and moves the sliver back to started/geni_ready.

    A couple of notes:

    * The current emulab node update time is about three minutes; the sliver is in this new state for that time and cannot be restarted or stopped. It can of course be deleted.
    * Should we allow restart while in the updating phase? We could, but then I need more bookkeeping.
    * Some nodes might not be running the watchdog, or might not even be running an emulab image, so the operation will never end until canceled. I could add a timeout, but that would require a monitor or adding DB state to store the start time.

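The state transitions described above can be sketched as (the state and operation names come from the commit message; the transition table itself is a hypothetical simplification):

```python
# Sketch of the sliver states for the two user-update operations above.
# "all_nodes_done" is an invented event standing in for all nodes
# having completed the update.
TRANSITIONS = {
    ("geni_ready", "geni_update_users"): "geni_updating_users",
    ("geni_updating_users", "all_nodes_done"): "geni_ready",
    ("geni_updating_users", "geni_updating_users_cancel"): "geni_ready",
}

def step(state, op):
    # Operations like restart/stop are simply absent from the table
    # while updating, matching the "cannot be restarted or stopped" note.
    if (state, op) not in TRANSITIONS:
        raise ValueError(f"operation {op} not allowed in state {state}")
    return TRANSITIONS[(state, op)]
```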
- 21 May, 2013 2 commits
  Leigh B Stoller authored
  Leigh B Stoller authored

- 14 May, 2013 1 commit
  Leigh B Stoller authored