1. 17 Oct, 2008 1 commit
  2. 18 Oct, 2007 1 commit
      Preliminary support for Ubuntu Linux. · 38bc8fa1
      Mike Hibler authored
       * added new tmcd directory with Ubuntu (really, Debian) specifics
       * fixed up GNUmakefiles to not do "-g wheel" when creating directories
       * other, relatively minor tweaks
  3. 31 Aug, 2007 1 commit
  4. 05 Oct, 2006 1 commit
      More work on "recording" template events. · e9607a77
      Leigh Stoller authored
      * New version of template_record just for ops, since so much is
        different about ops, not bothering to maintain a single version.
      
      * Various fixes to how the recorded events are stored and reconstituted.
        The big fix is to wrap them in a sequence so that they get fired
        properly (waiting for completion of previous event in recording).
      
      * New buttons to Pause and Continue event time, which are used when
        adding recorded events. This allows users to pause time while they
        "think" so when an event is recorded, the thinking time is not actually
        in the timeline (a sketch of this accounting follows the list).
        Eventually hope to figure this out automatically, but that will take
        some real, uh, thinking.
      
      * Add a new event editor (linked off the template page) that allows
        you to delete and change the recordings. Note that you can only edit
        the events at the template level; you cannot edit the events of an
        instance (swapped in experiment), and you can only edit the recorded
        events, not any other events. Not sure it's useful to be able to do
        either of these yet, but probably not too hard to add at some point.
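
      A minimal sketch of the pause-aware accounting described in the
      Pause/Continue item above; the RecordingClock class and its method
      names are hypothetical, not the actual template code:

      	import time

      	class RecordingClock:
      	    """Elapsed experiment time, excluding paused intervals."""
      	    def __init__(self):
      	        self.start = time.time()
      	        self.paused_at = None      # wall-clock time Pause was pressed
      	        self.paused_total = 0.0    # accumulated paused seconds

      	    def pause(self):
      	        if self.paused_at is None:
      	            self.paused_at = time.time()

      	    def resume(self):
      	        if self.paused_at is not None:
      	            self.paused_total += time.time() - self.paused_at
      	            self.paused_at = None

      	    def now(self):
      	        # Timestamp to store with a recorded event: wall time minus
      	        # everything spent "thinking" while paused.
      	        end = self.paused_at if self.paused_at is not None else time.time()
      	        return (end - self.start) - self.paused_total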
  5. 04 Oct, 2006 1 commit
  6. 14 Aug, 2006 1 commit
      Checkpoint my dynamic event stuff, crude as it is. The idea for this first · 9d021a07
      Leigh Stoller authored
      draft is that the user will, at the end of an experiment run, log into one
      of his nodes and perform some analysis which is intended to be repeated at
      the end of the next run, and in future instantiations of the template.
      
      A new table called experiment_template_events holds the dynamic events for
      the template. Right now I am supporting just program events, but it will be
      easy to support arbitrary events later. As an absurd example:
      
      	node6> /usr/local/bin/template_analyze ~/data_analyze arg arg ...
      
      The user is currently responsible for making sure the output goes into a
      file in the archive. I plan to make the template_analyze wrapper handle
      that automatically later, but for now what you really want is to invoke a
      script that encapsulates that, redirecting output to $ARCHIVE (this
      variable is installed in the environment by template_analyze).
      
      The wrapper script will save the current time, and then run the program.
      If the program terminates with a zero exit status, it will ssh over to ops
      and invoke an xmlrpc routine to tell boss to add a program event to both
      the eventlist for the current instance, and to the template_eventlist for
      future instances. The time of the event is the relative start time that was
      saved above (remember, each experiment run replays the event stream from
      time zero).
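
      A rough sketch of that wrapper flow, assuming a RUN_START environment
      variable and an xmlrpc_client helper on ops (both hypothetical); only
      the save-time / run / report-on-success sequence comes from the text:

      	#!/usr/bin/env python
      	# Hypothetical template_analyze wrapper: record the relative start
      	# time, run the analysis, and on success tell boss (via ops) to add
      	# a matching program event to the instance and template eventlists.
      	import os, subprocess, sys, time

      	relative_start = time.time() - float(os.environ["RUN_START"])

      	status = subprocess.call(sys.argv[1:])   # e.g. ~/data_analyze arg arg ...
      	if status == 0:
      	    subprocess.check_call([
      	        "ssh", "ops", "xmlrpc_client", "template.add_program_event",
      	        "time=%.1f" % relative_start,
      	        "command=%s" % " ".join(sys.argv[1:]),
      	    ])
      	sys.exit(status)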
      
      For the future, we want to allow this to be done on ops as well, but
      that will take more infrastructure, to run "program agents" on ops.
      
      It would be nice to install the ssl xmlrpc client side on our images so
      that we do not have to ssh to ops to invoke the client.
  7. 13 Feb, 2006 1 commit
  8. 27 Dec, 2005 1 commit
  9. 06 Dec, 2005 1 commit
      Phase II in disk state saving for swapout. · ed0d25b4
      Mike Hibler authored
      Exec summary: after this checkin, the infrastructure exists (once enabled)
      to create swapout-time "delta" images for all machines in experiments.
      There is only a single, cumulative swap image per node (i.e., all diffs
      are from the base image, not from the previous swap).
      
      What doesn't yet exist, is the mechanism for reloading the delta at
      swapin time.  That is Phase III.
      
      The nitty-gritty:
      
      1. Keep disk image signature files for all nodes in an experiment.
      
         New fields in the DB to track, for each disk partition, what image the
         partition was loaded from.  This enables us at swapin or os_load time to
         create signature files in /proj/<pid>/exp/<eid>/swapinfo for the current
         contents of a node disk/partition.  All nodes with the same image loaded
         will share (via symlink) the same signature file.  TODO: no longer
         referenced signature files should be removed.
      
         Signature info is only collected in the swapinfo directory if the
         experiment is set to have disk state saving enabled (see #5 below).
         Info consists of the <vname>.sig file, which is the file created
         by imagehash, and <vname>.part which says what the root disk is
         for the node and whether to look at the whole disk or just a single
         partition when crafting the delta image.
      
      2. Swapout-time hook for creating swapout image.
      
         If the experiment is marked as allowing disk state saving, tbswap
         will arrange to run and then monitor the create-swapimage command
         on each node.  This script will run the modified version of imagezip
         which uses the signature file to create a delta image.
      
         The command to run and maximum timeout are specified via sitevars
         (previously checked in).  Note that the tbswap script currently has
         special knowledge of /usr/local/bin/create-swapimage as a swapout
         time script.  If the swap/swapout_command sitevar is set to that,
         Magic Stuff shall occur (i.e. it will monitor the command and make
         periodic reports of progress).  The sitevars are a total hack and
         will disappear at some point.
      
      3. Client-side script for creating swapout image.
      
         os/create-swapimage, very similar to create-image.  Uses the info
         stashed in /proj/..blahblah../swapinfo to create a delta image.
      
         XXX fer now hack: the script first looks in /proj/<pid>/bin for an
         imagezip binary to use.  Failing that, it uses the one in the MFS.
         This allows for easier development of the imagezip changes (i.e.,
         don't have to update the MFS every time).  A sketch of this selection
         follows the list.
      
      4. Auto creation of signature files for new images.
      
         The create_image script (the one that runs on boss when creating images
         for users) has been modified to automatically create a signature via
         imagehash.  The .sig file winds up in /usr/testbed/images/sigs or
         in /proj/<pid>/images/sigs.  From there it will be copied at swapin/os_load
         time to the per-expt swapinfo directory for any node that uses the images.
      
         The process for creating standard system images (aka, "Mike") has not
         yet been modified.  When the image creation/installation procedure
         is formalized into a script, this will be done.
      
      5. Web changes to set/clear saving of disk state at swapout time.
      
         Add a checkbox to the experiment create page to allow setting "save
         swap state".  Also added to the experiment modify page, but currently
         "if (0)"ed out as it will need some additional support.  The showstuff
         page will show it.
      
         Taking a page from Leigh's hack book, if EXPOSESTATESAVE in defs.php3
         is set to zero (as it is now), then the checkbox doesn't appear in the
         create experiment page except for STUDLY users.
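
      A sketch of the create-swapimage selection logic from item 3; the
      .part file format, the output name, and the imagezip delta option
      shown here are assumptions, only the /proj/<pid>/bin-then-MFS fallback
      and the whole-disk-vs-partition choice come from the description:

      	# Hypothetical outline of the client-side create-swapimage logic.
      	import os, subprocess

      	def create_swap_image(pid, eid, vname):
      	    swapinfo = "/proj/%s/exp/%s/swapinfo" % (pid, eid)
      	    # Prefer a development imagezip from the project; fall back to
      	    # the copy in the MFS.
      	    imagezip = "/proj/%s/bin/imagezip" % pid
      	    if not os.access(imagezip, os.X_OK):
      	        imagezip = "/usr/local/bin/imagezip"
      	    # <vname>.part says what the root disk is and whether to image
      	    # the whole disk or a single partition (format assumed here).
      	    with open(os.path.join(swapinfo, vname + ".part")) as f:
      	        disk, part = f.read().split()
      	    sig = os.path.join(swapinfo, vname + ".sig")
      	    out = os.path.join(swapinfo, vname + ".swap.ndz")
      	    cmd = [imagezip, "-H", sig]          # placeholder delta/signature option
      	    if part != "0":
      	        cmd += ["-s", part]              # imagezip slice option
      	    subprocess.check_call(cmd + [disk, out])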
  10. 17 Nov, 2005 1 commit
      1. Beef up "admin mode" support. · 4ec701e7
      Mike Hibler authored
      * Add libadminmfs.pm with routines for entering, exiting, and executing
        commands in the admin MFS.  Node admin and firewall swapout (see
        below) now use this; the image creation process does not yet.
      
      * Add swapout time hooks for running an admin mode process, likely to
        be used to collect swapout time state.  Currently controlled globally
        by two new sitevars.
      
      * Modified node_admin to use the library and added a "-c <command>"
        option to have nodes go into admin mode and run a command.  I don't
        really expect this to be useful, it was just a testing vehicle for
        the library.
      
      2. Improved the swapout process for firewalled experiments.  Largely
         just generalized what we already did for panic'ed experiments.
         At swapout, firewalled nodes are:
      
         - powered off
         - set to boot into admin mode and run a disk zapper
         - powered on
      
        The swapout process then waits for all nodes to successfully complete
        disk zapage, at which point the nodes are nfree'ed as usual.  Any
        failure of the above process, marks the experiment as panic'ed (to
        ensure that we are involved in cleanup) and sends mail to testbed-ops
        describing the state of the nodes.
      
      3. Added the aforementioned disk zapper, a little C program in the MFS
         which zeroes out the MBR and partition boot blocks (but not the MBR
         partition table or FS superblocks).  This is added insurance that if
         a node somehow gets diverted after being nfree'd but before getting
         the disk reloaded (e.g., goes to hwdown), we cannot accidentally
         boot from the disk.  This program gets installed in the admin MFS.
         A sketch of the zeroing logic follows this list.
      
      4. Related to firewalls, modified swapin to use the new documented
         "snmpit -N" to get the firewall VLAN number rather than parsing the
         output that was a side-effect of VLAN creation.
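
      The real zapper is a small C program in the MFS; this Python sketch of
      item 3 just makes the offsets concrete (the device name is illustrative):

      	# Zero the MBR boot code and each partition's boot block, but leave
      	# the MBR partition table/signature (bytes 446-511) and the FS
      	# superblocks (which live deeper in each partition) alone.
      	import struct

      	def zap(disk="/dev/ad0"):                  # illustrative device name
      	    with open(disk, "r+b") as d:
      	        mbr = bytearray(d.read(512))
      	        mbr[0:446] = b"\0" * 446           # wipe boot code only
      	        d.seek(0)
      	        d.write(mbr)
      	        # Zero the first sector of each primary partition.
      	        for i in range(4):
      	            entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
      	            start_lba = struct.unpack("<I", entry[8:12])[0]
      	            if start_lba:
      	                d.seek(start_lba * 512)
      	                d.write(b"\0" * 512)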
  11. 07 Mar, 2005 1 commit
      · 898cf9a2
      Timothy Stack authored
      Checkin some changes related to experiment automation and vnode feedback:
      
      	* configure, configure.in: Add sensors/canaryd/feedbacklogs
      	template.
      
      	* db/libdb.pm.in, db/xmlconvert.in: Add "virt_user_environment"
      	table that holds environment variable names and values.
      
      	* event/lib/event.c: Allocate memory of the right size for
      	event_notifications.
      
      	* event/program-agent/GNUmakefile.in: Add version.c file and
      	add install targets for the man page.
      
      	* event/program-agent/program-agent.8: Man page describing the
      	program-agent daemon.
      
      	* event/program-agent/program-agent.c: Add a bunch of convenience
      	features: let the user specify the working directory for commands;
      	save output to separate files on every invocation of an agent; let
      	the user specify a timeout for a command; make the set of
      	environment variables sane and add vars given in the NS file in
      	the opt array; a "status" file containing process information is
      	written out when children are collected.  Internal changes: child
      	processes are collected immediately, instead of waiting for the
      	next START event, so we can send back COMPLETE events; the daemon
      	now runs with a real-time priority, to increase the chances of
      	receiving events.
      
      	* event/proxy/evproxy.c: Made it bidirectional so the
      	program-agent's COMPLETE events make it back to the scheduler.
      
      	* event/sched/error-record.c: Change the default log directory.
      
      	* event/sched/event-sched.h, event/sched/event-sched.c: Setup an
      	environment similar to a program-agent to run the user's log
      	digester.
      
      	* event/sched/node-agent.cc: Add a handler for the SNAPSHOT event
      	that runs create_image for the node.
      
      	* event/sched/simulator-agent.h, event/sched/simulator-agent.cc:
      	Let the user specify a "DIGESTER" script that digests the log
      	files into a summary of the results.  Add event handler for
      	remapping a vnode experiment.
      
      	* event/sched/timeline-agent.c: Accept the RUN event as well as
      	the START event.
      
      	* os/GNUmakefile.in: Install the install-tarfile.1 man page.
      
      	* os/install-tarfile: Automatically chown/chgrp any files that do
      	not have valid user or group IDs; the new owner will be the user
      	that swapped in the experiment (see the sketch at the end of this
      	entry).  Include the install directory in the DB file.  Add a
      	"list" mode that just dumps what files have been installed and
      	where.  Add a "force" option so the user can forcefully install
      	the file, even though the DB says it's already there.
      
      	* os/install-tarfile.1: Man page describing the install-tarfile
      	tool.
      
      	* os/syncd/GNUmakefile.in: Install man pages on ops.
      
      	* sensors/canaryd/GNUmakefile.in: Link canaryd statically and
      	install "feedbacklogs" tool.
      
      	* sensors/canaryd/canaryd.c: Dump dummynet pipe data.
      
      	* sensors/canaryd/canarydEvents.c: Log errors.
      
      	* sensors/canaryd/feedbacklogs.in: Tool used to generate feedback
      	data from canaryd log files.
      
      	* sensors/slothd/GNUmakefile.in: Install digest-slothd on ops.
      
      	* sensors/slothd/digest-slothd: Fix some bugs and write out an
      	"alert" file with all the nodes/links that were overloaded.
      
      	* tbsetup/os_load.in, tbsetup/libosload.pm.in: Add "waitmode"
      	argument that lets you specify that you want to wait for the disk
      	to finish loading and/or wait for the node to come back up in the
      	new OS.
      
      	* tbsetup/power.in: Remove debugging printf.
      
      	* tbsetup/ns2ir/node.tcl, tbsetup/ns2ir/program.tcl,
      	tbsetup/ns2ir/sequence.tcl, tbsetup/ns2ir/sim.tcl.in: Fix some
      	quoting problems with event-sequences.  Add -expected-exit-code
      	and -tag options to the "$program run" event.  Add -digester to
      	the "$ns report" event that lets the user specify a program to run
      	to digest the log files.
      
      	* tbsetup/ns2ir/tb_compat.tcl.in: Change the initial scaling
      	factor for feedback nodes to 1%, instead of 100%.
      
      	* tmcd/tmcd.c, tmcd/common/libtmcc.pm: Add "userenv" command that
      	returns the values in "virt_user_environment".  Return new program
      	agent fields: dir, timeout, and expected_exit_code.
      
      	* tmcd/common/GNUmakefile.in: Install rc.canaryd.
      
      	* tmcd/common/bootvnodes: Add hack to boost the program-agents to
      	a real-time priority, since they can't do it from inside the jail.
      
      	* tmcd/common/rc.canaryd: Rc script for canaryd.
      
      	* tmcd/common/watchdog: Don't fail outright if there is a bad line
      	in the battery.log.
      
      	* tmcd/common/rc.progagent: Append "userenv" data to the
      	program-agent config file.
      
      	* utils/GNUmakefile.in: Install loghole and its man page on ops.
      
      	* utils/loghole.1: Document "clean" command and the change in
      	loghole directories.
      
      	* utils/loghole.in: Add "clean" command and parallelization.
      
      	* xmlrpc/emulabserver.py.in: Add "virt_user_environment" table.
      	Order the eventlist by "idx" and time, needed for sequences.  And
      	removed unnecessary nologin checks.
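
      	A sketch of the ownership fixup mentioned in the os/install-tarfile
      	entry above; the helper name and the way the swapper's uid/gid are
      	passed in are made up:

      		# Give any extracted file whose uid/gid does not exist on the
      		# node to the user who swapped in the experiment.
      		import os, pwd, grp

      		def fix_ownership(root, swapper_uid, swapper_gid):
      		    for dirpath, dirnames, filenames in os.walk(root):
      		        for name in dirnames + filenames:
      		            path = os.path.join(dirpath, name)
      		            st = os.lstat(path)
      		            uid, gid = st.st_uid, st.st_gid
      		            try:
      		                pwd.getpwuid(uid)
      		            except KeyError:
      		                uid = swapper_uid
      		            try:
      		                grp.getgrgid(gid)
      		            except KeyError:
      		                gid = swapper_gid
      		            if (uid, gid) != (st.st_uid, st.st_gid):
      		                os.lchown(path, uid, gid)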
  12. 12 Nov, 2004 1 commit
  13. 03 Nov, 2004 1 commit
  14. 05 Oct, 2004 1 commit
  15. 29 Jul, 2004 1 commit
  16. 14 Jul, 2004 1 commit
  17. 24 Jun, 2004 1 commit
      Improve the client-side install. With these changes, it should now be · 976133e4
      Mike Hibler authored
      possible to:
      
      	gmake client
      	sudo gmake client-install
      
      on a FBSD4, FBSD5, RHL7.3, and RHL9.0 client node.
      
      There are still some dependencies that are not explicit and which would
      prevent a build/install from working on a "clean" OS.  Two that I know of are:
      you must install our version of the elvin libraries and you must install boost.
  18. 01 Jun, 2004 1 commit
  19. 25 May, 2004 1 commit
  20. 10 May, 2004 1 commit
  21. 26 Apr, 2004 1 commit
      Cleanup Makefiles: · 297019fb
      Mike Hibler authored
      1. "make clean" will just remove stuff built in the process of a regular build
      2. "make distclean" will also clean out configure generated files.
      
      This is how it was always supposed to be, there was just some bitrot.
  22. 09 Oct, 2003 1 commit
      Reorg of two aspects of node update. · 2641af4d
      Leigh Stoller authored
      * install-rpm, install-tarfile, spewrpmtar.php3, spewrpmtar.in: Pumped
        up even more! The db file we store in /var/db now records both the
        timestamp (of the file, or if remote the install time) and the MD5
        of the file that was installed. Locally, we can get this info when
        accessing the file via NFS (copymode on or off). Remotely, we use wget
        to get the file, and so pass the timestamp along in the URL request,
        and let spewrpmtar.in determine if the file has changed. If the
        timestamp it gets is >= the timestamp of the file, an error code
        of 304 (Not Modified) is returned. Otherwise the file is returned.
      
        If the timestamps are different (remote, server sends back an actual
        file), the MD5 of the file is compared against the value stored. If
        they are equal, update the timestamp in the db file to avoid
        repeated MD5s (or server downloads) in the future. If the MD5 is
        different, then reinstall the tarball or rpm, and update the db file
        with the new timestamp and MD5. Presto, we have auto update capability
        (see the sketch at the end of this entry)!
      
        Caveat: I pass along the old MD5 in the URL, but it is currently
        ignored. I do not know if doing the MD5 on the server is a good
        idea, but obviously it is easy to add later. At the moment it
        happens on the node, which means wasted bandwidth when the timestamp
        has changed, but the file has not (probably not something that will
        happen in typical usage).
      
        Caveat: The timestamp used on remote nodes is the time the tarfile
        is installed (GM time of course). We could arrange to return the
        timestamp of the local file back to the node, but that would mean
        complicating the protocol (or using an http header) and I was not in
        the mood for that. In typical usage, I do not think that people will
        be changing tarfiles and rpms so rapidly that this will make a
        difference, but if it does, we can change it.
      
      * node_update.in, client side watchdog, and various web pages:
        Deflated node_update, removing all of the older ssh code. We now
        assume that all nodes will auto update on a periodic basis, via the
        watchdog that runs on all client nodes, including plab nodes.
      
        Changed the permission check to look for new UPDATE permission (used
        to be UPDATEACCOUNT). As before, it requires local_root or better.
        The reason for this is that node_update now implies more than just
        updating the accounts/mounts. The web pages have been changed to
        explain that in addition to mounts/accounts, rpms and tarfiles will
        also be updated. At the moment, this is still tied to a single
        variable (update_accounts) in the nodes table, but as Kirk requested
        at the meeting, it will probably be nice to split these out in the
        future.
      
        Added the ability to node_update a single node in an experiment (in
        addition to all nodes option on the showexp page). This has been
        added to the shownode webpage menu options.
      
        Changed locking code to use the newer wrapper states, and to move
        the experiment to RUNNING_LOCKED until the update completes. This is
        to prevent mayhem in the rest of the system (which could be dealt
        with, but is not worth the trouble; people have to wait until their
        initiated update is complete, before they can swap out the
        experiment).
      
        Added "short" mode to shownode routine, equiv to the recently added
        short mode for showexp. I use this on the confirmation page for
        updating a single node, giving the user a couple of pertinent (feel
        good) facts before they confirm.
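
      A sketch of the remote auto-update check described in the first item;
      the db-file format and URL parameter are illustrative, only the
      timestamp / 304 / MD5 logic comes from the description:

      	# Ask the server for the file, passing the stored timestamp; on 304
      	# do nothing, otherwise compare MD5s and only reinstall (and update
      	# the db file) if the contents really changed.
      	import hashlib, time
      	import urllib.request, urllib.error

      	def maybe_update(url, dbfile, install):
      	    stored_ts, stored_md5 = open(dbfile).read().split()   # assumed format
      	    try:
      	        resp = urllib.request.urlopen("%s&timestamp=%s" % (url, stored_ts))
      	    except urllib.error.HTTPError as e:
      	        if e.code == 304:            # Not Modified: nothing to do
      	            return
      	        raise
      	    data = resp.read()
      	    md5 = hashlib.md5(data).hexdigest()
      	    if md5 != stored_md5:
      	        install(data)                # reinstall the rpm/tarball
      	    # Either way, refresh the db file to avoid repeated MD5s/downloads.
      	    open(dbfile, "w").write("%d %s" % (int(time.time()), md5))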
  23. 03 Oct, 2003 1 commit
  24. 17 Sep, 2003 1 commit
  25. 05 Aug, 2003 1 commit
      The rest of the sync server additions: · 212cc781
      Leigh Stoller authored
      * Parser: Added new tb command to set the name of the sync server:
      
      	tb-set-sync-server <node>
      
        This initializes the sync_server slot of the experiment entry to the
        *vname* of the node that should run the sync server for that
        experiment. In other words, the sync server is per-experiment, runs
        on a node in the experiment, and the user gets to choose which node
        it runs on.
      
      * tmcd and client side setup. Added new syncserver command which
        returns the name of the syncserver and whether the requesting node
        is the lucky one to run the daemon:
      
          SYNCSERVER SERVER='nodeG.syncserver.testbed.emulab.net' ISSERVER=1
      
        The name of the syncserver is written to /var/emulab/boot/syncserver
        on the nodes so that clients can easily figure out where the server
        is.
      
        Aside: The ready bits are now ignored (no DB accesses are made) for
        virtual nodes; they are forced to use the new sync server.
      
      * New os/syncd directory containing the daemon and the client. The
        daemon is pretty simple. It waits for TCP (and UDP, although that
        path is not complete yet) connections, and reads in a little
        structure that gives the name of the "barrier" to wait for, and an
        optional count of clients in the group (this would be used by the
        "master" who initializes barriers for clients). The socket is saved
        (no reply is made, so the client is blocked) until the count reaches
        zero. Then all clients are released by writing back to the
        sockets, and the sockets are closed. Obviously, the number of
        clients is limited by the number of FDs (open sockets), hence the
        need for a UDP variant, but that will take more work.
      
        The client has a simple command line interface:
      
          usage: emulab-sync [options]
          -n <name>         Optional barrier name; must be less than 64 bytes long
          -d                Turn on debugging
          -s server         Specify a sync server to connect to
          -p portnum        Specify a port number to connect to
          -i count          Initialize named barrier to count waiters
          -u                Use UDP instead of TCP
      
          The client figures out the server by looking for the file created
          above by libsetup (/var/emulab/boot/syncserver). If you do not
          specify a barrier "name", it uses an internal default. Yes, the
          server can handle multiple barriers (differently named of course)
          at once (non-overlapping clients obviously).
      
          Clients can wait before a barrier is "initialized."  The count on
          the barrier just goes negative until someone initializes the
          barrier using the -i option, which increments the count by the
          given count (sketched below). Therefore, the master does not have
          to arrange to get there "first."  As an example, consider a master
          and one client:
      
      	nodeA> /usr/local/etc/emulab/emulab-sync -n mybarrier
      	nodeB> /usr/local/etc/emulab/emulab-sync -n mybarrier -i 1
      
          Node A waits until Node B initializes the barrier (gives it a
          count).  The count is the number of *waiters*, not including the
          master. The master is also blocked until all of the waiters have
          checked in.
      
          I have not made any provision for timeouts or crashed clients. Let's
          see how it goes.
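
      A minimal model of the barrier bookkeeping described above (the real
      syncd is a C daemon and its wire protocol differs); it just illustrates
      the negative-count-until-initialized behavior:

      	# Waiters decrement the count, an initializer adds its count, and
      	# everyone blocked on the barrier (master included) is released once
      	# the barrier has been initialized and the count is back to zero.
      	barriers = {}   # name -> {"count", "initialized", "socks"}

      	def arrive(name, sock, init_count=None):
      	    b = barriers.setdefault(
      	        name, {"count": 0, "initialized": False, "socks": []})
      	    if init_count is not None:       # the -i "master"
      	        b["count"] += init_count
      	        b["initialized"] = True
      	    else:                            # an ordinary waiter
      	        b["count"] -= 1
      	    b["socks"].append(sock)          # no reply yet, so the caller blocks
      	    if b["initialized"] and b["count"] == 0:
      	        for s in b["socks"]:
      	            s.sendall(b"go\n")       # release everyone; then close
      	            s.close()
      	        del barriers[name]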
  26. 18 Dec, 2002 1 commit
  27. 27 Nov, 2002 1 commit
  28. 23 Nov, 2002 1 commit
  29. 07 Jul, 2002 1 commit
  30. 21 Apr, 2002 1 commit
  31. 14 Jan, 2002 1 commit
      Make Frisbee.Redux live: · d08b5e41
      Leigh Stoller authored
      * Add appropriate goo to os/GNUMakefile so that Frisbee daemon is
        built and installed.
      
      * Rework the frisbee launcher slightly. Aside from little changes
        (send email to tbops when frisbeed dies, new cmdline syntax to
        frisbeed), allow for frisbeed to exit gracefully after a period of
        inactivity (no client requests for 30 minutes, at present). In order
        to prevent a race condition with a new client being added (and
        rebooted) and frisbeed terminating before the client gets started,
        add a load_busy indicator to the images table (next to load_address
        slot) and set that to one each time frisbeelauncher is invoked.
        When frisbeed exits, test and clear that bit atomically (lock
        tables) and go around another time (restart frisbeed for another 30
        minute period).  A sketch of this handshake follows the list.
      
      * Rework waitmode in os_load. Wait for all of the nodes to finish at
        once, and track which nodes never finish. Retry those nodes again by
        rebooting. The number of retries is configurable in the script, and
        is currently set to one. This should take care of some PXE boot
        related problems, although obviously not all.
      
      * Got rid of -w option to os_load and made waitmode the default. The
        -s option can be used to start a reload, but not to wait for it to
        complete.
      
      * Minor changes to sched_reload and reload_daemon; pass in -s option
        to os_load.
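
      A sketch of the load_busy handshake described in the second item; the
      DB helpers and the frisbeed control functions are placeholders:

      	# Each frisbeelauncher invocation marks the image busy.  The instance
      	# that actually runs frisbeed clears the bit before each run; when
      	# frisbeed exits it atomically tests-and-clears the bit (lock tables)
      	# and goes around again if a new client showed up in the meantime.
      	def frisbeelauncher(db, imageid):
      	    db.query("UPDATE images SET load_busy=1 WHERE imageid=%s", imageid)
      	    if frisbeed_running(imageid):
      	        return
      	    while True:
      	        db.query("UPDATE images SET load_busy=0 WHERE imageid=%s", imageid)
      	        run_frisbeed(imageid)        # returns after ~30 idle minutes
      	        db.query("LOCK TABLES images WRITE")
      	        busy = db.query_one(
      	            "SELECT load_busy FROM images WHERE imageid=%s", imageid)
      	        db.query("UPDATE images SET load_busy=0 WHERE imageid=%s", imageid)
      	        db.query("UNLOCK TABLES")
      	        if not busy:
      	            break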
  32. 01 Aug, 2001 1 commit
      An attempt at making image creation an easy/automatic operation. HA! · 27f26d99
      Leigh Stoller authored
      This uses the pxe booted freebsd kernel and MFS. In addition, I use
      the standard testbed mechanism of specifying a startup command to
      run, which will do the imagezip to NFS mounted /proj/<pid>/.... The
      controlling script, on paper, sets up the database, reboots the node,
      and then waits for the startstatus to change. Then it resets the DB
      and reboots the node so that it returns back to its normal OS. The
      format of operation is:
      
      	create_image <node> <imageid> <filename>
      
      Node must be under the user's control of course. The filename must
      reside in the node's project (/proj/<pid>/whatever) since that's the
      directory that is mounted by the testbed config software when the
      machine boots. The imageid already exists in the DB, and is used to
      determine what part of the disk to zip up (say, using the slice option
      to the zipper). Since this operation is rather time consuming, it does
      the usual trick of going to background and sending email status later.
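
      A sketch of the controlling script's flow as described above; the
      DB/reboot helper names are placeholders for whatever the real script
      uses:

      	# Hypothetical outline of create_image <node> <imageid> <filename>:
      	# point the node at the MFS and an imagezip startup command, reboot,
      	# wait for the startstatus to change, then restore the DB and reboot
      	# the node back into its normal OS.
      	import time

      	def create_image(node, imageid, filename):
      	    set_boot_mfs(node)                          # placeholder helpers
      	    set_startup_command(node, "imagezip ... %s" % filename)
      	    reboot(node)
      	    while get_startstatus(node) == "running":   # imagezip still going
      	        time.sleep(30)
      	    clear_startup_command(node)
      	    set_boot_normal(node)
      	    reboot(node)
      	    send_status_mail(node, imageid, filename)   # email status later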
  33. 16 May, 2001 1 commit
  34. 09 Apr, 2001 1 commit
  35. 01 Mar, 2001 1 commit
  36. 04 Jan, 2001 1 commit
  37. 03 Jan, 2001 1 commit
  38. 02 Jan, 2001 1 commit
  39. 13 Dec, 2000 1 commit
  40. 01 Dec, 2000 1 commit