1. 14 Jul, 2019 1 commit
  2. 08 Jul, 2019 6 commits
  3. 25 Jun, 2019 1 commit
  4. 24 Jun, 2019 1 commit
  5. 19 Jun, 2019 1 commit
    • Further tweaks to jumbo frames code. · 571b4a14
      Mike Hibler authored
      Now use a sitevar, general/allowjumboframes, rather than MAINSITE
      to determine whether we should even attempt any jumbo frames magic.
      Use a per-link/lan setting rather than the hacky per-experiment
      setting to let the user decide if they want to use jumbos. In NS
      world, we already had a link/lan method (set-settings) to specify
      virt_lan_settings, which is where it winds up now.
      Client-side fixes to make jumbos work with vnodes.
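The gating described above can be sketched as follows (a minimal Python illustration; the sitevar name general/allowjumboframes comes from the commit, while the function name, the link-settings key, and the data shapes are hypothetical):

```python
# Sketch of the jumbo-frames gating described in the commit message.
# "general/allowjumboframes" is the sitevar named in the commit; the
# function and the "jumboframes" settings key are illustrative only.

def use_jumbo_frames(sitevars, link_settings):
    """Return True only if the site allows jumbo frames AND the user
    enabled them on this particular link/lan via set-settings."""
    if not sitevars.get("general/allowjumboframes", False):
        return False  # site-wide gate replaces the old MAINSITE check
    # per-link/lan setting replaces the hacky per-experiment setting
    return bool(link_settings.get("jumboframes", False))

# Usage:
sitevars = {"general/allowjumboframes": True}
print(use_jumbo_frames(sitevars, {"jumboframes": 1}))  # True
print(use_jumbo_frames(sitevars, {}))                  # False
```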
  6. 12 Jun, 2019 1 commit
    • Small set of changes for os_setup on sdr nodes. · 58f2b014
      Leigh Stoller authored
      SDR nodes (type=sdr, but this applies to other similar types) are in the
      "pc" class, but really they are not pcs; they are more like blackboxes
      that can be power cycled and are always ISUP.
      So, I added a "sdr" package to libossetup, that basically just does a
      power cycle to put them into a known state, and makes sure the
      eventstate is ISUP.
      I added "blackbox" to the sdr type definition. Aside: when something is
      a blackbox, we should bypass all image/osinfo handling, but that's a
      tale for another day.
      I added an isblackbox() check in power, to skip any eventstate
      handling. Aside: node_reboot should possibly skip right to a power cycle
      for blackbox nodes, instead of trying to ping it or ssh into it.
  7. 11 Jun, 2019 2 commits
  8. 05 Jun, 2019 2 commits
    • two additional updates for EXPIRE_PASSWORDS=0 mode · ee6cf209
      chuck cranor authored
      1. In User.pm Create(), only apply the default expire time of 1 year
      to pswd_expires if EXPIRE_PASSWORDS is true.
      2. In tbacct's passwd command: the current behavior is that we set
      the pswd_expires time to "now" if we are changing the password
      of someone else's account. This patch adds a new "-e" flag that,
      if specified, uses the default expiration policy instead of now.
      The rationale for this change is to allow scripts to import encrypted
      passwords from external account management systems and apply them
      to Emulab using "tbacct passwd" without forcing an immediate change.
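The two behaviors might look roughly like this (a Python sketch; EXPIRE_PASSWORDS, pswd_expires, the 1-year default, and the "-e" flag come from the commit, while the function names are hypothetical):

```python
# Sketch of the two changes above. EXPIRE_PASSWORDS, the 1-year default,
# and the -e flag are from the commit message; everything else is
# illustrative, not the actual Emulab code.
from datetime import datetime, timedelta

EXPIRE_PASSWORDS = False  # site configuration knob (=0 mode here)

def default_pswd_expires(now):
    """User.pm Create() behavior: apply the 1-year default only when
    the site actually expires passwords."""
    if EXPIRE_PASSWORDS:
        return now + timedelta(days=365)
    return None  # no expiration

def admin_set_password_expiry(now, use_default_policy):
    """tbacct passwd on someone else's account: expire immediately,
    unless -e asks for the default expiration policy instead."""
    if use_default_policy:   # the new -e flag
        return default_pswd_expires(now)
    return now               # old behavior: force an immediate change
```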
    • Allow Create() callers to specify the unix_uid of new accounts · 54cbaa77
      chuck cranor authored
      Modify the Create() call to allow unix_uids to be specified in the hash.
      If a unix_uid is provided in the hash, then we attempt to use that for
      the new account rather than using the "find unused numbers" sql query.
      If the given unix_uid is less than MIN_UNIX_UID or already in use then
      Create() will return undef.
      If no unix_uid is specified then there is no change in Create() behavior,
      so this will not impact any of the code currently in the tree.  The
      intent of this change is to allow Emulab admins the option of managing
      their accounts using data that is external to Emulab so you could have
      scripts that sync the list of active users to an external password file,
      LDAP server, etc.  (For this to work, it will also require a way to turn
      off Emulab's builtin account creation tool and Emulab's sql schema may
      need to be modified to handle larger unix_uids -- current limit is
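The uid selection logic might be sketched like so (illustrative Python; MIN_UNIX_UID and the return-undef-on-error behavior come from the commit, but the constant's value and the function name are hypothetical, and Perl's undef becomes None here):

```python
# Sketch of the Create() uid handling described above. The real code
# is Perl and uses an SQL query to find unused numbers.

MIN_UNIX_UID = 10000  # hypothetical value; the real minimum is site config

def pick_unix_uid(requested_uid, used_uids):
    """Return a uid for the new account, or None (undef) on error."""
    if requested_uid is not None:
        # Caller-supplied uid: validate it instead of searching.
        if requested_uid < MIN_UNIX_UID or requested_uid in used_uids:
            return None  # Create() returns undef in these cases
        return requested_uid
    # No uid given: behavior unchanged, find the first unused number.
    uid = MIN_UNIX_UID
    while uid in used_uids:
        uid += 1
    return uid
```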
  9. 03 Jun, 2019 2 commits
  10. 23 May, 2019 1 commit
    • Changes related to parameter sets and experiment bindings: · 03e4d8bc
      Leigh Stoller authored
      * Show the parameter bindings on the status page for an experiment, and
        on the memlane page. This is strictly informational so that users can
        quickly see the parameters that are/were chosen at the time the
        experiment was created.
      * Add a Save Parameters button on the memlane and status pages. This
        will generate a json structure and store it in the DB for that profile
        and user. Optionally, mark the parameter set as specific to a profile
        version or repo hash, so a user can quickly link to that version/hash
        and apply the parameter set.
      * On the instantiate page, the parameters step includes new buttons to
        1) reset the form to defaults, 2) apply the parameters used in the most
        recent experiment (current, then history), 3) choose from a dropdown
        of parameter sets the user has saved for that profile, and 4) take the
        user to their activation history for the profile, to pick one to run
        again or save its parameters.
      * Add a new tab to the user dashboard to show the user's saved parameter
        sets.
      * Lots of changes to the new version of the ppwizard for applying
        parameter sets and showing warnings about them. This code has NOT been
        applied to the old ppwizard.
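One plausible shape for such a saved parameter set (purely hypothetical field names; the commit only says a json structure is stored in the DB per profile and user, optionally pinned to a profile version or repo hash):

```python
import json

# Hypothetical record shape for a saved parameter set: a JSON blob
# stored per (user, profile), optionally pinned to a version/hash.
# None of these field names come from the actual schema.
param_set = {
    "owner": "someuser",
    "profile": "my-profile",
    "pinned": {"version": 12, "repo_hash": None},  # optional pinning
    "bindings": {"num_nodes": 4, "os_image": "UBUNTU18-64-STD"},
}

blob = json.dumps(param_set)      # what would be stored in the DB
restored = json.loads(blob)       # what the instantiate page reads back
```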
  11. 08 May, 2019 1 commit
  12. 30 Apr, 2019 1 commit
  13. 26 Apr, 2019 3 commits
  14. 13 Mar, 2019 1 commit
  15. 11 Mar, 2019 2 commits
  16. 06 Mar, 2019 1 commit
  17. 28 Feb, 2019 1 commit
  18. 12 Feb, 2019 1 commit
    • Recovery mode: · bde6c94d
      Leigh Stoller authored
      * Add a new Portal context menu option to nodes, to boot into "recovery"
        mode, which will be a Linux MFS (rather than the FreeBSD MFS, which
        99% of users will not know what to do with).
      * Plumb all through to the Geni RPC interface, which invokes node_admin
        with a new option, to use the recovery mfs nodetype attribute.
      * recoverymfs_osid is a distinct osid from adminmfs_osid; we use that in
        the CM to add an Emulab namespace attribute to the manifest, which
        tells the Portal that a node supports recovery mode (and thus gets a
        context menu option).
      * Add an inrecovery flag to the sliver status blob, which the Portal
        uses to determine that a node is currently in recovery mode, so that
        we can indicate that in the topology and list tabs.
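A Portal-side consumer of the sliver status blob might use the new flag like this (the inrecovery flag name comes from the commit; the blob layout and the function are hypothetical):

```python
# Illustrative reader of the sliver status blob: pick out the nodes
# currently in recovery mode so the topology/list tabs can mark them.
# Only the "inrecovery" flag name is from the commit.

def nodes_in_recovery(sliver_status):
    """Return the ids of nodes whose status says they are in recovery."""
    return [n["node_id"] for n in sliver_status.get("nodes", [])
            if n.get("inrecovery")]

# Example blob (shape is hypothetical):
status = {"nodes": [
    {"node_id": "node1", "inrecovery": True},
    {"node_id": "node2", "inrecovery": False},
]}
```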
  19. 08 Feb, 2019 2 commits
  20. 04 Feb, 2019 2 commits
  21. 02 Jan, 2019 1 commit
  22. 13 Dec, 2018 1 commit
  23. 06 Dec, 2018 1 commit
  24. 30 Nov, 2018 1 commit
  25. 28 Nov, 2018 1 commit
  26. 16 Nov, 2018 1 commit
  27. 07 Nov, 2018 1 commit
    • Quick fix for watchdog/backup interaction; use a script lock. · 72b4ba32
      Leigh Stoller authored
      From Slack:
      What I notice is that mysqldump is read-locking all of the tables for a
      long time. This time gets longer and longer, of course, as the DB gets
      bigger. Last night enough stuff backed up (trying to get various write
      locks) that we hit the 500 thread limit. I only know this because mysql
      prints "killing 501 threads" at 2:03am. Which makes me wonder if our
      thread limit is too small (but it seems like it would have to be much
      bigger) or if our backup strategy is inappropriate for how big the DB is
      and how busy the system is. But to be clear, I am not even sure if
      mysqld throws in the towel when it hits 500 threads; I am in the midst
      of reading obtuse mysql documentation. There are a bunch of other
      error messages that I do not understand yet.
      I can reproduce this in my elabinelab with a 10 line perl script. Two
      problems: 1) we do not use the permission system, so we cannot use
      dynamic permissions, which means that the single thread reserved for
      just this case can be used by anyone, and so the server is fully
      out of threads. And 2) the Emulab mysql watchdog then cannot perform its
      query, and so it thinks mysqld has gone catatonic and kills it, right in
      the middle of the backup. Yuck * 2.
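The "script lock" in the commit title can be sketched like this (a generic flock-based guard in Python; the lock file path and function name are hypothetical, and the actual fix lives in Emulab's Perl scripts):

```python
# Sketch of the script-lock fix: backup and watchdog serialize on one
# lock file, so the watchdog will not declare mysqld catatonic (and
# kill it) while mysqldump holds its long read locks.
import fcntl

LOCKFILE = "/var/tmp/mysqld-backup.lock"  # illustrative path

def with_script_lock(path, action):
    """Run action() while holding an exclusive flock on path, blocking
    until any other holder (backup or watchdog) releases it."""
    with open(path, "w") as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)  # blocks until acquired
        try:
            return action()
        finally:
            fcntl.flock(fp, fcntl.LOCK_UN)
```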
      And if anyone is curious about a more typical approach: "If you want to
      do this for MyISAM or mixed tables without any downtime from locking the
      tables, you can set up a slave database, and take your snapshots from
      there. Setting up the slave database, unfortunately, causes some
      downtime to export the live database, but once it's running, you should
      be able to lock its tables, and export using the methods others have
      described. When this is happening, it will lag behind the master, but
      won't stop the master from updating its tables, and will catch up as
      soon as the backup is complete."