1. 12 Apr, 2016 1 commit
  2. 11 Apr, 2016 1 commit
  3. 23 Mar, 2016 1 commit
  4. 21 Mar, 2016 1 commit
    • New (test) version of image_import that can import the entire image history · aef647a6
      Leigh B Stoller authored
      from the server, keeping it in sync with the server as new versions of the
      image are added. Also handles importing deltas if the metadata says there
      is a delta.
      
      Note that downloading the image files is still lazy; we will not import all
      15 versions of an image unless they actually are needed.
      
      Lots of work still to do. This is a bit of a nightmare because of client/server
      (backward) compatibility issues wrt provenance/noprovenance and
      deltas/nodeltas. I might change my mind and say the hell with
      compatibility!
      
      Along these same lines, there is an issue of what to do when a site that is
      running with provenance turned on gets this new code. Up to now, the
      client and server never tried to stay in sync, but now they have to (because
      of deltas), and so the client image descriptors have to be upgraded. That
      will be a hassle too.
  5. 08 Dec, 2015 1 commit
  6. 15 May, 2015 1 commit
    • Directory based image paths. · 3a21f39e
      Leigh B Stoller authored
      Soon, we will have images with both full images and deltas, for the same
      image version. To make this possible, the image path will now be a
      directory instead of a file, and all of the version (ndz, sig, sha1, delta)
      files will reside in the directory.
      
      A new config variable IMAGEDIRECTORIES turns this on; there is also a check
      for the ImageDirectories feature. This is applied only when a brand new
      image is created; a clone version of the image inherits the path it started
      with. Yes, you can have a mix of directory based and file based image
      descriptors.
      
      When it is time to convert all images over, there is a script called
      imagetodir that will go through all image descriptors, create the
      directory, move/rename all the files, and update the descriptors.
      Ultimately, we will not support file based image paths.
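      As a rough sketch of what such a conversion implies, here is the path mapping in Python (the function name, extension list, and layout are illustrative assumptions, not imagetodir's actual code):

```python
import os

def plan_imagetodir(ndz_path):
    """Given a file-based image path like /proj/p/images/foo.ndz, compute
    the directory-based layout and the renames needed to get there.
    Extensions here are illustrative; delta files would be handled similarly."""
    base, _ = os.path.splitext(ndz_path)   # /proj/p/images/foo
    imagedir = base                        # the file's basename becomes a directory
    name = os.path.basename(base)          # foo
    moves = []
    for ext in (".ndz", ".ndz.sig", ".ndz.sha1"):
        old = base + ext
        new = os.path.join(imagedir, name + ext)
        moves.append((old, new))
    return imagedir, moves
```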
      
      I also added versioning to the image metadata descriptors so that going
      forward, old clients can handle a descriptor from a new server.
  7. 05 Mar, 2015 1 commit
  8. 27 Jan, 2015 1 commit
    • Two co-mingled sets of changes: · 85cb063b
      Leigh B Stoller authored
      1) Implement the latest dataset read/write access settings from frontend to
         backend. Also updates for simultaneous read-only usage.
      
      2) New configure options: PROTOGENI_LOCALUSER and PROTOGENI_GENIWEBLOGIN.
      
         The first changes the way that projects and users are treated at the
         CM. When set, we create real accounts (marked as nonlocal) for users and
         also create real projects (also marked as nonlocal). Users are added to
         those projects according to their credentials. The underlying experiment
         is thus owned by the user and in the project, although all the work is
         still done by the geniuser pseudo user. The advantage of this approach
         is that we can use standard emulab access checks to control access to
         objects like datasets. Maybe images too at some point.
      
         NOTE: Users are not removed from projects once they are added; we are
         going to need to deal with this, perhaps by adding an expiration stamp
         to the groups_membership tables, and using the credential expiration to
         mark it.
      
         The second new configure option turns on the web login via the geni
         trusted signer. So, if I create a sliver on a backend cluster when both
         options are set, I can use the trusted signer to log into my newly
         created account on the cluster, and see it (via the emulab classic web
         interface).
      
         All this is in flux, might end up being a bogus approach in the end.
  9. 15 Dec, 2014 1 commit
  10. 25 Nov, 2014 1 commit
  11. 04 Nov, 2014 1 commit
    • Add runsonxen script to set the bits of DB state required. · 04c35b0b
      Leigh B Stoller authored
      	usage: runsonxen [-p <parent>] <imageid>
      	usage: runsonxen -a [-p <parent>]
      	usage: runsonxen -c <imageid>
      	Options:
      	 -n      - Impotent mode
      	 -c      - Clear XEN parent settings completely
      	 -a      - Operate on all current XEN capable images
      	 -p      - Set default parent; currently XEN43-64-STD
  12. 28 Oct, 2014 1 commit
  13. 10 Jul, 2014 1 commit
  14. 01 Jul, 2014 1 commit
  15. 13 Jun, 2014 1 commit
  16. 06 Jun, 2014 1 commit
    • New script, analogous to Mike's node_traffic script. Basically, it · b885ce89
      Leigh B Stoller authored
      was driving me nuts that we do not have an easy way to see what is
      going on *inside* the fabric.
      
      So this one reports on traffic across trunk links and interconnects
      out of the fabric.  Basic operation is pretty simple:
      
      	Usage: switch_traffic [-rs] [-i seconds] [switch[:switch] ...]
      	Reports traffic across trunk links and interconnects
      	-h          This message
      	-i seconds  Show stats over a <seconds>-period interval
      
      So with no arguments it will give portstats-style output of all trunk
      links and interconnects in the database. Trunk links are aggregate
      numbers of all of the trunk wires that connect two switches.
      
      The -i option gives traffic over an interval, which is much more
      useful than the raw packet numbers, since on most of our switches
      those numbers have probably rolled over a few times.
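      The usual way to make interval deltas robust against counter rollover is modular subtraction; a minimal Python sketch of that trick (not code from switch_traffic itself):

```python
def interval_delta(prev, cur, width=32):
    """Delta between two SNMP-style counter samples, allowing for the
    counter wrapping around its maximum (2**width - 1) once between
    samples. Works for both 32- and 64-bit counters."""
    mod = 1 << width
    return (cur - prev) % mod
```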
      
      You can optionally specify specific switches and interconnects on the
      command line. For example:
      
      boss> wap switch_traffic -i 10 cisco3 ion
      Trunk                    InOctets      InUpkts   InNUpkts   ...
      ----------------------------------------------------------- ...
      cisco3:cisco10                128            0          1   ...
      cisco3:cisco8                2681            7          4   ...
      cisco3:cisco1                4493           25          7   ...
      cisco3:cisco9                 192            0          1   ...
      cisco3:cisco4                 128            0          2   ...
      pg-atla:ion                     0            0          0   ...
      pg-hous:ion                     0            0          0   ...
      pg-losa:ion                     0            0          0   ...
      pg-salt:ion                  2952            0         42   ...
      pg-wash:ion                     0            0          0   ...
      
      NOTE that the above output is abbreviated so it does not wrap in the
      git log, but you get the idea.
      
      Or you can specify a specific trunk link:
      
      	boss> wap switch_traffic -i 10 cisco3:cisco8
      
      Okay this is all pretty basic and eventually it would be nice to take
      these numbers and feed them into mrtg or rrdtool so we can view pretty
      graphs, but this is as far as I can take it for now.
      
      Maybe in the short term it would be enough to record the numbers every
      5 minutes or so and put the results into a file.
  17. 09 May, 2014 1 commit
    • New imagevalidate tool for printing/checking/updating image metadata. · 0bb906f4
      Mike Hibler authored
      This should be run whenever an image is created or updated and possibly
      periodically over existing images. It makes sure that various image
      metadata fields are up to date:
      
       * hash: the SHA1 hash of the image. This field has been around for
         a while and was previously maintained by "imagehash".
      
       * size: the size of the image file.
      
       * range: the sector range covered by the uncompressed image data.
      
       * mtime: modification time of the image. This is the "updated"
         datetime field in the DB. Its intent was always to track the update
         time of the image, but it wasn't always exact (create-image would
         update this with the current time at the start of the image capture
         process).
      
      Documentation? Umm...the usage message is comprehensive!
      It sports a variety of useful options, but the basics are:
      
       * imagevalidate -p <image> ...
          Print current DB metadata for indicated images. <image> can either
          be a <pid>/<imagename> string or the numeric imageid.
      
       * imagevalidate <image> ...
          Check the mtime, size, hash, and image range of the image file and
           compare them to the values in the DB. Whine about any that are out
           of date.
      
       * imagevalidate -u <image> ...
          Compare and then update DB metadata fields that are out of date.
      
      Fixed a variety of scripts that either used imagehash or computed the
      SHA1 hash directly to now use imagevalidate.
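      A minimal Python sketch of the kind of comparison imagevalidate performs (the dict standing in for the DB metadata fields is an invented stand-in):

```python
import hashlib
import os

def check_image(path, db):
    """Compare a file's mtime/size/SHA1 against stored metadata and
    return the names of the fields that are out of date."""
    stale = []
    st = os.stat(path)
    if int(st.st_mtime) != db.get("mtime"):   # the DB "updated" datetime
        stale.append("mtime")
    if st.st_size != db.get("size"):
        stale.append("size")
    with open(path, "rb") as f:
        if hashlib.sha1(f.read()).hexdigest() != db.get("hash"):
            stale.append("hash")
    return stale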
  18. 17 Mar, 2014 1 commit
  19. 21 Jan, 2014 1 commit
  20. 06 Jan, 2014 1 commit
    • Add support for lease extension (renewal). · 9a6cdeae
      Mike Hibler authored
      Add CLI for extending a lease (called extenddataset on ops). The length
      of the extension and the number of times it can be extended are controlled
      by site variables.
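      A sketch of such a policy in Python, with invented names for the site variables:

```python
from datetime import datetime, timedelta

def extend_lease(expiration, times_extended, sitevars):
    """Extend a lease's expiration, capped by site variables: how long
    each extension is and how many extensions are allowed. The sitevar
    names here are invented for illustration."""
    if times_extended >= sitevars["max_extensions"]:
        raise ValueError("lease has already been extended the maximum number of times")
    new_expiration = expiration + timedelta(days=sitevars["extension_days"])
    return new_expiration, times_extended + 1
```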
  21. 03 Jan, 2014 1 commit
    • First attempt to clean up some hack jobs. · c5a1812c
      Mike Hibler authored
      Make a createdataset to handle dataset leases and move dataset specific
      code out of approvelease and into Lease.pm (which is now Lease.pm.in as
      it needs to be configured). Lease.pm still needs a bunch of OO-ification
      to properly make datasets a subclass of leases. But, another day...
  22. 11 Dec, 2013 2 commits
    • Add script to approve a lease and add some locking in other scripts. · 6fef3cce
      Mike Hibler authored
      approvelease is the place where storage actually gets allocated for
      a lease. It uses bscontrol to contact an appropriate freeNAS storage
      server and allocate a ZFS volume.
      
      deletelease is the place where storage is deallocated. Note that once
      a lease has been approved and storage allocated, it cannot be returned
      to the unapproved state. The only way to free storage is to delete the
      lease.
      
      Both approve and delete use an intermediate state, "initializing", to
      signal that the lease is in the middle of a potentially time-consuming
      allocation/deallocation procedure. I probably should have just used the
      lease locking mechanism instead.
      
      Approve, delete, and mod all DO use the locking mechanism when examining
      and manipulating the state of a lease. Nonetheless, I am sure there are
      still plenty of races.
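      A toy Python model of the approval flow described above, with the "initializing" intermediate state and a lock held while lease state is examined and changed (all names are invented; the real code lives in approvelease and Lease.pm):

```python
class Lease:
    """Toy state model: unapproved -> initializing -> valid on approval;
    a failed allocation drops the lease back to unapproved."""
    def __init__(self):
        self.state = "unapproved"
        self.locked = False

    def approve(self, allocate):
        if self.state != "unapproved":
            raise RuntimeError("only unapproved leases can be approved")
        self.locked = True             # lock while examining/changing state
        self.state = "initializing"    # long-running allocation in progress
        try:
            allocate()                 # e.g. bscontrol creating a ZFS volume
            self.state = "valid"
        except Exception:
            self.state = "unapproved"  # allocation failed; storage not held
            raise
        finally:
            self.locked = False
```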
  23. 23 Jul, 2013 1 commit
  24. 22 Jul, 2013 1 commit
  25. 14 May, 2013 1 commit
    • Add prototype EC2 image import plumbing. · 980aa180
      Leigh B Stoller authored
      To create a new descriptor that will be an import from EC2 (and thus
      run under XEN), add ?ec2=1 to newimage_ez.php3. Eventually will link
      it in someplace. The form will create a XEN-based VM, but instead of a
      node to snapshot from, you provide user@host for the EC2 instance.
      
      On the image snapshot page, instead of a node, use user@host for the EC2
      instance.
      
      The backend script (create_image) will call over to ops and invoke
      Srikanth's code. I have called that script ec2import-image.pl. See
      create_image for how arguments are passed to the script.
  26. 26 Mar, 2013 1 commit
  27. 14 Jan, 2013 1 commit
  28. 12 Dec, 2012 1 commit
    • Add a "mktestbedtest" script. · 08ca1a04
      Gary Wong authored
      It constructs an experiment including every (available) experimental PC,
      and every relevant link, so that during swap-in linktest will exercise
      as much of the testbed as possible.
  29. 04 Dec, 2012 1 commit
    • Add sitecheckin client and server, which will tell Utah (Mother Ship) · 6591e9fd
      Leigh B Stoller authored
      about Emulab sites. Nothing private, just the equivalent of calling
      testbed-version so that we know what sites exist and what software
      they are running.
      
      This is opt-out; sites that do not want to tell Utah about themselves
      can set NOSITECHECKIN in their defs file.
      
      In Utah, there is a new option in the Administration drop down menu to
      print out the list from the DB.
  30. 14 Nov, 2012 1 commit
  31. 30 Oct, 2012 1 commit
    • Remaining infrastructure for control network "ARP lockdown". · 4b5e17b0
      Mike Hibler authored
      It works like this. Certain nodes that are on the node control net
      (right now just subbosses, but ops coming soon) can set static ARP entries
      for the nodes they serve. This raises the bar for (but does not eliminate
      the possibility of) nodes spoofing servers. Currently this is only for
      FreeBSD.
      
      When such a server boots, it will early on run /etc/rc.d/arplock.sh
      which will in turn run /usr/local/etc/emulab/fixarpinfo. fixarpinfo
      asks boss via an SSL tmcc call for "arpinfo" (using SSL ensures that the
      info coming back is really from boss). Tmcd on boss returns such arpinfo
      as appropriate for the node (subboss, ops, fs, etc.) along with the type
      of lockdown being done. The script uses this info to update the ARP
      cache on the machine, adding, removing, or making permanent entries
      as appropriate.
      
      fixarpinfo is intended to be called not just at boot, but also whenever
      we might need to update the ARP info on a server. The only other use right
      now is in subboss_dhcpd_makeconf which is called whenever DHCP info may
      need to be changed on a subboss (we hook this because a call to this script
      might also indicate a change in the set of nodes served by the subboss).
      In the future, fixarpinfo might be called from the newnode path (for ops/fs,
      when a node is added to the testbed), the deletenode path, or maybe from
      the watchdog (if we started locking down ARP entries on experiment nodes).
      
      The type of the lockdown is controlled by a sitevar on boss,
      general/arplockdown, which can be set to 'none', 'static' or 'staticonly'.
      'none' means do nothing, 'static' means just create static arp entries
      for the given nodes but continue to dynamically arp for others, and
      'staticonly' means use only this set of static arp entries and disable
      dynamic arp on the control net interface. The last implies that the server
      will only be able to talk to the set of nodes for which it got ARP info.
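      A Python sketch of what the three modes imply for the ARP cache on a FreeBSD server (the commands and the interface placeholder are illustrative, not fixarpinfo's actual output):

```python
def arp_commands(mode, entries):
    """Given the general/arplockdown mode and a dict of {ip: mac} entries
    from tmcd, produce the (illustrative) commands a FreeBSD server would
    run: static ARP entries, plus disabling dynamic ARP for staticonly."""
    if mode == "none":
        return []
    cmds = ["arp -s %s %s" % (ip, mac) for ip, mac in sorted(entries.items())]
    if mode == "staticonly":
        # Disable dynamic ARP resolution on the control net interface;
        # "<ctrl-iface>" is a placeholder for the real interface name.
        cmds.append("ifconfig <ctrl-iface> -arp")
    return cmds
```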
      
      As mentioned, tmcd is responsible for returning the correct set of arp
      info for a given request. The logic currently is:
      
       * Only return ARP info to nodes which are on the CONTROL_NETWORK.
         If the requester is elsewhere (e.g., Utah's boss and ops are currently
         segregated on different IP subnets) then this whole infrastructure
         does not apply and nothing is returned.
      
       * If the requester is a subboss, return info for all other servers that
         are on the node control network as well as for the set of nodes
         which the subboss serves.
      
       * If the requester is an ops or fs node, again return info for all
         other servers and info for all testnodes or virtnodes whose control
         net IP is on the node control net.
      
       * Otherwise, return nothing.
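      The decision rules above can be sketched in Python (data shapes and names are invented stand-ins; tmcd implements this against the DB):

```python
import ipaddress

def arpinfo_for(requester_ip, role, control_net, servers, served_nodes):
    """Return the {name: (ip, mac)} entries a requester should get,
    following the tmcd rules: nothing off the control net, servers plus
    served nodes for a subboss, servers plus on-net nodes for ops/fs."""
    net = ipaddress.ip_network(control_net)
    if ipaddress.ip_address(requester_ip) not in net:
        return {}                      # requester not on the node control net
    on_net_servers = {n: v for n, v in servers.items()
                      if ipaddress.ip_address(v[0]) in net}
    if role == "subboss":
        return {**on_net_servers, **served_nodes}
    if role in ("ops", "fs"):
        on_net_nodes = {n: v for n, v in served_nodes.items()
                        if ipaddress.ip_address(v[0]) in net}
        return {**on_net_servers, **on_net_nodes}
    return {}
```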
      
      One final note is that the ARP info for servers such as boss/ops/fs or
      the gateway router is not readily available in most Emulab instances
      since those machines are not in the DB nodes or interfaces tables.
      Eventually we will fix that, but for now the info must come from new
      site variables. To help initially populate those variables, I added
      the utils/update_sitevars script which attempts to determine which
      servers are on the node control net and gathers the appropriate IP and
      MAC info from them.
  32. 16 Oct, 2012 1 commit
  33. 26 Sep, 2012 1 commit
  34. 24 Sep, 2012 1 commit
    • Replace license symbols with {{{ }}}-enclosed license blocks. · 6df609a9
      Eric Eide authored
      This commit is intended to make the license status of Emulab and
      ProtoGENI source files more clear.  It replaces license symbols like
      "EMULAB-COPYRIGHT" and "GENIPUBLIC-COPYRIGHT" with {{{ }}}-delimited
      blocks that contain actual license statements.
      
      This change was driven by the fact that today, most people acquire and
      track Emulab and ProtoGENI sources via git.
      
      Before the Emulab source code was kept in git, the Flux Research Group
      at the University of Utah would roll distributions by making tar
      files.  As part of that process, the Flux Group would replace the
      license symbols in the source files with actual license statements.
      
      When the Flux Group moved to git, people outside of the group started
      to see the source files with the "unexpanded" symbols.  This meant
      that people acquired source files without actual license statements in
      them.  All the relevant files had Utah *copyright* statements in them,
      but without the expanded *license* statements, the licensing status of
      the source files was unclear.
      
      This commit is intended to clear up that confusion.
      
      Most Utah-copyrighted files in the Emulab source tree are distributed
      under the terms of the Affero GNU General Public License, version 3
      (AGPLv3).
      
      Most Utah-copyrighted files related to ProtoGENI are distributed under
      the terms of the GENI Public License, which is a BSD-like open-source
      license.
      
      Some Utah-copyrighted files in the Emulab source tree are distributed
      under the terms of the GNU Lesser General Public License, version 2.1
      (LGPL).
  35. 14 Sep, 2012 1 commit
    • "improvements" to prereserve: · f7219346
      Leigh B Stoller authored
      New option -s datetime to specify a starting time for the pre-reserve.
      New option -e datetime to specify an ending time for the pre-reserve.
      
      The idea is that you can schedule a pre-reserve to begin sometime later,
      and you can optionally specify a time for a prereserve to terminate.
      There is a new script that runs from cron that checks for pre-reserves
      that need to be started or terminated.
      
      For example:
      
      boss> wap prereserve -s '2012-09-14 09:08:15' -e '2012-09-15' emulab-ops 50
      
      You can use any datetime string that is valid for str2time. At some point
      it would be nice to allow natural language dates ("tomorrow") but that
      requires another bunch of perl packages and I didn't want to bother.
      
      NOTE: when using -e, -r is implied; in other words, when the
      pre-reserve is terminated, the table entry is cleared *and* the
      reserved_pid of all of the nodes is cleared. Any experiments using
      those nodes are left alone, although if the user does a swapmod, they
      could easily lose the nodes if another pre-reserve is set up that
      promises those nodes to another project.
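      A Python sketch of the decision the new cron script has to make on each run (the field names are invented):

```python
from datetime import datetime

def prereserve_actions(prereserves, now):
    """Decide which pre-reserves to start and which to terminate at
    'now'. Each entry has optional start/end datetimes and an 'active'
    flag; returns two lists of project ids."""
    start, terminate = [], []
    for p in prereserves:
        if not p["active"] and p.get("start") and p["start"] <= now:
            start.append(p["pid"])     # scheduled start time has arrived
        if p["active"] and p.get("end") and p["end"] <= now:
            terminate.append(p["pid"]) # end time passed: clear the table
    return start, terminate
```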
  36. 04 Sep, 2012 1 commit
    • Add image import utilities. · e468f885
      Leigh B Stoller authored
      image_setup is run from tbprerun to verify and create image
      descriptors, and then later from tbswap to actually download
      and verify the image (ndz) file.
      
      image_import does the actual work for a specific image (url).
  37. 30 Aug, 2012 2 commits
    • More bits and pieces for exporting images from one Emulab to another. · 4c444cd5
      Leigh B Stoller authored
      image_metadata.php will return an Emulab style image descriptor in XML
      format. A remote emulab, given an image URL, will grab this XML
      description and use it to create a local descriptor. Inside the
      descriptor is an additional URL that is used to download the ndz file.
      
      The dumpdescriptor script is now web accessible, and takes a new -e
      (export) option that adds the extra URL and other bits that are needed
      to import the descriptor and the image.
      
      On the Show Image page, show the metadata URL, which is suitable for
      using in an NS file or an rspec (when that code is committed).
    • Add a "ctrladdr" utility to show (un)allocated addresses on the control net. · 9047e21a
      Gary Wong authored
      Right now, the only addresses it knows are allocated are anything assigned
      in the interfaces table with a "ctrl" role, and anything in the dynamic
      pool in the virt_node_public_addr table.  (And the reserved network and
      broadcast addresses.)
      
      This needs to be extended to anything else we know about!
      
      By default, the output is supposed to be easy to parse and simply
      displays the first available address.  More than one available address
      can be requested with the "-n" option (e.g. "-n 10" will show the
      first ten unallocated addresses).  "-n 0" will show every free
      address on the subnet.
      
      The "-a" option (meant more for human consumption) also describes
      allocated addresses.  For instance, "ctrladdr -a -n 0" will show
      every address on the control net, and what it's used for (if
      anything).  "-r" will compress ranges of consecutive free addresses
      onto a single line.
      
      To test whether a particular address is in use, invoke it as (e.g.)
      "ctrladdr -t 155.98.36.1".  This will give an exit code of 0 if the
      address is available, and 1 if used.  Any other options are ignored
      if "-t" is specified.
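      The core free-address walk and the "-r" range compression can be sketched in Python (this is an illustration of the behavior described above, not ctrladdr's code):

```python
import ipaddress

def free_addresses(subnet, allocated, limit=None):
    """Walk the subnet's host addresses (network/broadcast excluded) and
    return the ones not in the allocated set; limit=None means all."""
    free = [str(a) for a in ipaddress.ip_network(subnet).hosts()
            if str(a) not in allocated]
    return free if limit is None else free[:limit]

def compress_ranges(addrs):
    """-r style output: collapse runs of consecutive addresses into
    'first - last' lines."""
    out, run = [], []
    for a in addrs:
        ip = ipaddress.ip_address(a)
        if run and ip == run[-1] + 1:
            run.append(ip)
        else:
            if run:
                out.append(_fmt(run))
            run = [ip]
    if run:
        out.append(_fmt(run))
    return out

def _fmt(run):
    return str(run[0]) if len(run) == 1 else "%s - %s" % (run[0], run[-1])
```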
  38. 29 Aug, 2012 1 commit