1. 14 Sep, 2017 2 commits
  2. 07 Sep, 2017 1 commit
  3. 01 Sep, 2017 1 commit
  4. 18 Aug, 2017 1 commit
  5. 14 Aug, 2017 1 commit
  6. 08 Aug, 2017 8 commits
  7. 10 Jul, 2017 1 commit
  8. 07 Jul, 2017 2 commits
    • Deal with user privs (issue #309): · d1516912
      Leigh B Stoller authored
      * Make user privs work across remote clusters (including stitching). I
        took a severe shortcut on this; I do not expect the Cloudlab portal
        will ever talk to anything but an Emulab based aggregate, so I just
        added the priv indicator to the user keys array we send over. If I am
        ever proved wrong on this, I will come out of retirement and fix
        it (for a nominal fee of course).
      * Do not show the root password for the console to users with user
        privs.
      * Make sure users with user privs cannot start experiments.
      * Do show the user trust values on the user dashboard membership tab.
      * Update tmcd to use the new privs slot in the nonlocal_user_accounts
        table.
      This closes issue #309.
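The shortcut described above can be sketched as follows. This is a hypothetical illustration, not the actual Emulab/Cloudlab code: the function names (`build_user_keys`, `may_start_experiments`), the record fields, and the trust-level strings are all assumptions made up for the example.

```python
# Hypothetical sketch: piggyback a per-user privilege indicator on the
# user-keys array sent to a remote cluster, instead of adding a new
# protocol field. All names here are illustrative.

def build_user_keys(users):
    """Build the per-user key records sent to a remote aggregate,
    with the trust/priv level tucked into each record."""
    records = []
    for user in users:
        records.append({
            "urn": user["urn"],
            "keys": user["keys"],
            # The shortcut: ship the priv level alongside the keys.
            "privs": user.get("trust", "user"),
        })
    return records

def may_start_experiments(record):
    """Users with plain 'user' privs may not start experiments."""
    return record["privs"] != "user"

users = [
    {"urn": "urn:publicid:IDN+emulab.net+user+alice",
     "keys": ["ssh-rsa AAAA... alice"], "trust": "local_root"},
    {"urn": "urn:publicid:IDN+emulab.net+user+bob",
     "keys": ["ssh-rsa AAAA... bob"], "trust": "user"},
]

records = build_user_keys(users)
print([(r["urn"].split("+")[-1], may_start_experiments(r))
       for r in records])
# -> [('alice', True), ('bob', False)]
```

The receiving side only has to look at one extra key in records it already parses, which is what makes this workable as long as the peer is an Emulab-based aggregate.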
    • Stop leaving temp files behind. · b8a86a6a
      Leigh B Stoller authored
  9. 06 Jul, 2017 5 commits
  10. 28 Jun, 2017 1 commit
  11. 26 Jun, 2017 2 commits
  12. 20 Jun, 2017 1 commit
    • Work on firewall support: · 72fa735e
      Leigh B Stoller authored
      1. Get this working on the NS conversion path.
      2. Add support for additional firewall rules along the path; the CM
         never had support for firewall rules before.
      3. Set the security_level to zapdisk when firewalling is on.
  13. 15 Jun, 2017 1 commit
  14. 12 Jun, 2017 1 commit
  15. 09 Jun, 2017 4 commits
  16. 07 Jun, 2017 2 commits
  17. 06 Jun, 2017 1 commit
  18. 05 Jun, 2017 1 commit
  19. 02 Jun, 2017 2 commits
  20. 30 May, 2017 2 commits
    • Rework how we store the sliver/slice status from the clusters: · e5d36e0d
      Leigh B Stoller authored
      In the beginning, the number and size of experiments was small, so
      storing the entire slice/sliver status blob as JSON in the web task was
      fine, even though we had to lock tables to prevent races between the
      event updates and the local polling.
      But lately those JSON blobs have grown huge, and the lock is bogging
      things down; we cannot keep up with the number of events coming from
      all the clusters and get really far behind.
      So I have moved the status blobs out of the per-instance web task and
      into new tables, one per slice and one per node (sliver). This keeps
      the blobs very small and thus the lock time very short, so now we can
      keep up with the event stream.
      If we grow big enough that this problem becomes a bottleneck again, we
      can switch to InnoDB for the per-sliver table and do row locking
      instead of table locking, but I do not think that will happen.
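The refactor above can be sketched in miniature. This is an illustrative mock using SQLite, not the actual schema: the table name, columns, and URNs are assumptions invented for the example; the point is only that each event update now touches one small row instead of rewriting one giant blob under a lock.

```python
import json
import sqlite3

# Hypothetical sketch: one small status row per sliver instead of one
# giant per-instance JSON blob. Table/column names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sliver_status (
    slice_urn  TEXT,
    sliver_urn TEXT,
    status     TEXT,   -- small JSON blob for just this sliver
    PRIMARY KEY (slice_urn, sliver_urn))""")

def update_sliver(slice_urn, sliver_urn, blob):
    # Each event update rewrites one tiny row, so any lock (a row
    # lock, if the table were InnoDB) is held very briefly.
    db.execute("INSERT OR REPLACE INTO sliver_status VALUES (?, ?, ?)",
               (slice_urn, sliver_urn, json.dumps(blob)))

update_sliver("urn:slice1", "urn:node1", {"state": "ready"})
update_sliver("urn:slice1", "urn:node2", {"state": "booting"})

row = db.execute("SELECT status FROM sliver_status WHERE sliver_urn = ?",
                 ("urn:node1",)).fetchone()
print(json.loads(row[0]))
# -> {'state': 'ready'}
```

Keeping each row independent is also what makes the fallback plan in the commit message (row locking via InnoDB) a drop-in change rather than another restructuring.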