1. 20 Oct, 2006 2 commits
     • Add compression option to sync option of loghole. When turned on, any file · 4d4a27e1
      Leigh Stoller authored
       greater than 512K is automatically compressed with gzip. Might need to
      make this number bigger; we shall see.
      
      If you run emacs, put this in your .emacs file.
      
      	(load "jka-compr")
      	(jka-compr-install)
      
      and any time you visit a file that ends in one of the standard compression
      extensions, emacs will automatically do the uncompress for you on the data
      in the buffer (not the actual disk file of course). Very convenient.
      
      You can also get your browser to do the same, but I leave that as an
      exercise for the reader.
     • Wow, this should make me look important! · afa5e919
      Mike Hibler authored
      Two-day boondoggle to support "/scratch", an optional large, shared filesystem
      for users.  To do this, I needed to find all the instances where /proj is used
      and behave accordingly.  The boondoggle part was the decision to gather up all
      the hardwired instances of shared directory names ("/proj", "/users", etc.)
      so that they are set in a common place (via unexposed configure variables).
      This is a boondoggle because:
      
       1. I didn't change the client-side scripts.  They need a different mechanism
          (e.g., tmcd) to get the info; configure is the wrong way.
      
      2. Even if I had done #1 it is likely--no, certain--that something would
         fail if you tried to rename "/proj" to be "/mike".  These names are just
         too ingrained.
      
      3. We may not even use "/scratch" as it turns out.
      
      Note, I also didn't fix any of the .html documentation.  Anyway, it is done.
      To maintain my illusion in the future you should:
      
       1. Have perl scripts include "use libtestbed" and use the defined PROJROOT(),
          et al. functions where possible (see the sketch after this list).  If not
          possible, make sure they run through configure and use @PROJROOT_DIR@, etc.
      
      2. Use the configure method for python, C, php and other languages.
      
       3. There are perl (TBValidUserDir) and php (VALIDUSERPATH) functions which
          you should call to determine if an NS file, template parameter, tarball, or
          other file is in "an acceptable location."  Use these functions where
          possible.  They know about the optional "scratch" filesystem.  Note that
          the perl function is over-engineered to handle cases that don't occur
          in nature.
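
       A minimal sketch of what items 1 and 3 suggest, assuming libtestbed
       exports PROJROOT() and TBValidUserDir() roughly as described above; the
       exact argument lists may differ from what is shown here:

       	# Hedged sketch: PROJROOT() and TBValidUserDir() are named above, but
       	# their real signatures may differ.
       	use libtestbed;

       	my $projroot = PROJROOT();      # "/proj" on a stock install
       	my $tarball  = "$projroot/myproj/tarfiles/input.tar.gz";

       	# TBValidUserDir() knows about /proj, /users, /groups, and the
       	# optional /scratch filesystem; reject anything outside those trees.
       	if (!TBValidUserDir($tarball)) {
       	    die "$tarball is not in an acceptable location\n";
       	}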
  2. 18 Oct, 2006 1 commit
  3. 05 Oct, 2006 1 commit
     • More work on "recording" template events. · e9607a77
      Leigh Stoller authored
       * New version of template_record just for ops; since so much is
         different about ops, I am not bothering to maintain a single version.
      
       * Various fixes to how the recorded events are stored and reconstituted.
         The big fix is to wrap them in a sequence so that they get fired
         properly (each waits for completion of the previous event in the recording).
      
      * New buttons to Pause and Continue event time, which is used when
        adding recorded events. This allows users to pause time while they
        "think" so when an event is recorded, the thinking time is not actually
        in the timeline. Eventually hope to figure this out automatically, but
        that will take some real, uh, thinking.
      
      * Add a new event editor (linked off the template page) that allows
        you to delete and change the recordings. Note that you can only edit
        the events at the template level; you cannot edit the events of an
        instance (swapped in experiment), and you can only edit the recorded
         events, not any other events. Not sure it's useful to be able to do
         either of these yet, but probably not too hard to add at some point.
  4. 03 Oct, 2006 1 commit
  5. 02 Oct, 2006 1 commit
  6. 29 Sep, 2006 1 commit
     • Minor changes to per-experiment DB stuff. · dc8e62a3
      Leigh Stoller authored
      * Create a per-experiment DB user for the per-experiment DB; the user
        name is equal to the DB name.
      
      * Add a dpdbpassword field to the experiments table; this is the
        randomly generated password for the DB user mentioned above.
      
      * For Templates, use the above user/password in the environment,
        instead of the swapper uid/password.
      
      * Add experiment dbname/dbpassword to the Show Experiment page.
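
       A rough sketch of the grant the scheme above implies, assuming MySQL on
       ops; the DB name, password generation, and connection details below are
       illustrative stand-ins, not the actual code path:

       	# Illustrative only: user name == DB name, password is the randomly
       	# generated dpdbpassword stored in the experiments table.
       	use DBI;

       	my $dbname = "pid_eid_55";                      # per-experiment DB (name format assumed)
       	my $dbpass = substr(rand() . time(), 2, 12);    # stand-in for the generated password

       	my $dbh = DBI->connect("DBI:mysql:database=mysql;host=ops", "root", "rootpw")
       	    or die $DBI::errstr;
       	$dbh->do("GRANT ALL PRIVILEGES ON `$dbname`.* TO '$dbname'\@'%' " .
       	         "IDENTIFIED BY '$dbpass'");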
  7. 19 Sep, 2006 3 commits
  8. 14 Sep, 2006 1 commit
  9. 13 Sep, 2006 2 commits
  10. 12 Sep, 2006 1 commit
  11. 05 Sep, 2006 2 commits
     • A bunch of template changes resulting from meetings last week. · 087dbfff
      Leigh Stoller authored
       * Add XMLRPC interface for template swapin, stoprun, startrun, and swapout,
         and add the appropriate wrappers to the script_wrapper on ops.
      
       * Allow parameter descriptions in NS files. This is probably not in its
         final form, since it is a bit confusing as to what has priority: something
         in the NS file or a metadata item. Anyway, you can do this in your NS
         file:
      
      	$ns define-template-parameter GUID "0/0" "The GUID to be analyzed"
      
        The rules are currently that the NS file description has priority and
        is copied to child templates, unless the user has modified a description
        via the web interface, in which case the NS file description is ignored.
        I know, sounds awful, but for the most part people are going to use the
        NS file anyway.
      
      * Add "clear" option when starting a new experiment run; the per
        experiment DB at the logholes are cleared. Note that this is *not* the
        default behaviour; you have to either check the checkbox on the web form
        or use the -c option to the script wrapper, or clear=yes if talking
        directly to the XMLRPC server.
      
      * Fix up how email is generated for template_swapin and template_create,
        so that Kevin can debug tblog/tbreport stuff, but also so that we maintain
        mail logs as before. I have made some improvements to libaudit so as to
        centralize the mail goo, and avoid duplicating all that stuff.
      
      * Minor fixes to the program agent so that the new environment strings are
        sent before the program agent exits and reloads them!
      
      * Other minor little things.
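
       For the "clear" option, a hypothetical illustration of talking to the
       XMLRPC server directly; the URL, method name, and argument layout here
       are placeholders (the real server uses SSL and its own interface), and
       only the clear=yes part comes from the notes above:

       	# Hypothetical sketch: not the actual Emulab XMLRPC interface.
       	use RPC::XML::Client;

       	my $client = RPC::XML::Client->new("https://boss.example.net:3069/usr/testbed");
       	my $resp   = $client->send_request("template.startrun",
       	                                   { guid => "10023/1", clear => "yes" });
       	die "startrun failed: $resp\n" unless ref($resp);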
     • Add --delete option to sync directive, to schedule a clean for the · ef320fa8
      Leigh Stoller authored
      next sync (similar to how it works with the archive directive).
  12. 14 Aug, 2006 1 commit
     • Checkpoint my dynamic event stuff, crude as it is. The idea for this first · 9d021a07
      Leigh Stoller authored
       draft is that the user will, at the end of an experiment run, log into one
       of his nodes and perform some analysis which is intended to be repeated at
       the end of the next run, and in future instantiations of the template.
      
      A new table called experiment_template_events holds the dynamic events for
      the template. Right now I am supporting just program events, but it will be
      easy to support arbitrary events later. As an absurd example:
      
      	node6> /usr/local/bin/template_analyze ~/data_analyze arg arg ...
      
       The user is currently responsible for making sure the output goes into a
       file in the archive. I plan to make the template_analyze wrapper handle
       that automatically later, but for now what you really want is to invoke a
       script that encapsulates that, redirecting output to $ARCHIVE (this
       variable is installed in the environment by template_analyze).
      
      The wrapper script will save the current time, and then run the program.
      If the program terminates with a zero exit status, it will ssh over to ops
      and invoke an xmlrpc routine to tell boss to add a program event to both
      the eventlist for the current instance, and to the template_eventlist for
      future instances. The time of the event is the relative start time that was
      saved above (remember, each experiment run replays the event stream from
      time zero).
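
       A hedged sketch of the wrapper logic just described; the environment
       variable and the client invoked over ssh are placeholders, not the real
       names:

       	# Sketch only: record a relative start time, run the analysis program,
       	# and on success register it as a recorded event for future runs.
       	# EXPERIMENT_START and template_register_event are hypothetical names.
       	my $start  = time() - ($ENV{EXPERIMENT_START} || time());
       	my $status = system(@ARGV);        # e.g. ("$ENV{HOME}/data_analyze", "arg", ...)

       	if ($status == 0) {
       	    # Ask boss (via ops) to add a program event at $start to both the
       	    # instance eventlist and the template_eventlist.
       	    system("ssh", "ops", "template_register_event", "-t", $start, "--", @ARGV);
       	}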
      
      For the future, we want to allow this to be done on ops as well, but
      that will take more infrastructure, to run "program agents" on ops.
      
      It would be nice to install the ssl xmlrpc client side on our images so
      that we do not have to ssh to ops to invoke the client.
  13. 10 Aug, 2006 2 commits
  14. 09 Aug, 2006 1 commit
  15. 08 Aug, 2006 3 commits
  16. 03 Aug, 2006 1 commit
     • Support for capturing the trace data that is stored in the pcap files · 4ce9c421
      Leigh Stoller authored
       into per-experiment databases on ops. Additional support for reconstituting
       those databases back into temporary databases on ops, for post-processing.
      
       * This revision relies on the "snort" port (/usr/ports/security/snort)
         to read the pcap files and load them into a database. The schema is
         probably not ideal, but it's better than nothing. See the file
         ops:/usr/local/share/examples/snort/create_mysql for the schema.
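
         Roughly what that loading step amounts to, assuming snort's classic
         MySQL output plugin; the file names, config contents, and exact
         invocation are assumptions, not what loghole actually runs:

       	# Sketch only: feed a captured pcap file through snort into MySQL.
       	# Assumes a snort.conf with a line roughly like:
       	#   output database: log, mysql, user=USER password=PASS dbname=DB host=ops
       	# (schema from ops:/usr/local/share/examples/snort/create_mysql).
       	my $pcap = "/local/logs/link0.trace";     # placeholder trace file
       	my $conf = "/tmp/snort-db.conf";          # placeholder config
       	system("snort", "-r", $pcap, "-c", $conf) == 0
       	    or die "snort failed on $pcap\n";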
      
      * For simplicity, I have hooked into loghole, which already had all
        the code for downloading the trace data. I added some new methods to
         the XMLRPC server for loghole to use, to get the user's DB password
        and the name of the per-experiment database. There is a new slot in
        the traces table that indicates that the trace should be snorted to
        its DB. In case you forgot, at the end of a run or when the instance
        is swapped out, loghole is run to download the trace data.
      
       * For reconstituting, there are lots of additions to opsdb_control and
         opsdb_control.proxy to create "temporary" databases and load them
         from a dump file that is stored in the archive. I've added a button
         to the Template Record page, inappropriately called "Analyze" since
         right now all it does is reconstitute the trace data into a DB on
         ops.
      
         Currently, the only indication of what has been done (the names of
         the DBs created on ops) is the log email that the user gets. A
         future project is to tell the user this info in the web interface.
      
      * To turn on database capturing of trace data, do this in your NS
        file:
      
      	set link0 ...
      	$link0 trace
      	$link0 trace_snaplen 128
      	$link0 trace_db 1
      
         the increase in snaplen is optional, but a good idea if you want
         snort to understand more than just IP headers.
      
       * Also some changes to the parser to allow plain experiments to take
         advantage of all this stuff. To simply get yourself a per-experiment
         DB, put this in your NS file:
      
      	tb-set-dpdb 1
      
        however, anytime you turn trace_db on for a link or lan, you
        automatically get a per-experiment DB.
      
      * To capture the trace data to the DB, you can run loghole by hand:
      
      	loghole sync -s
      
        the -s option turns on the "post-process" phase of loghole.
  17. 18 Jul, 2006 3 commits
     • testbed-ops@emulab.net -> @flux · aab98e1d
      Jay Lepreau authored
      Should probably be done with the defs file and makefile.
     • Nits to make Jay happy · 37ebf66d
      Robert Ricci authored
     • Changes necessary for moving most of the stuff in the node_types · 624a0364
      Leigh Stoller authored
      table, into a new table called node_type_attributes, which is intended
      to be a more extensible way of describing nodes.
      
       The only things left in the node_types table will be type, class, and the
       various isXXX boolean flags, since we use those in numerous joins all over
       the system (i.e., when discriminating amongst nodes).
      
       For the most part, all of that other stuff is rarely used, or used in
       contexts where the information is needed but not for type discrimination.
       Still, it made for a lot of queries to change!
      
      Along the way I added a NodeType library module that represents the type
      info as a perl object. I also beefed up the existing Node module, and
      started using it in more places. I also added an Interfaces module, but I
      have not done much with that yet.
      
      I have not yet removed all the slots from the node_types table; I plan to
      run the new code for a few days and then remove the slots.
      
      Example using the new NodeType object:
      
      	use NodeType;
      
      	my $typeinfo = NodeType->Lookup($type);
      
       	if ($typeinfo->control_interface(\$control_iface) ||
       	    !$control_iface) {
       	    warn "No control interface for $type is defined in the DB!\n";
       	}
      
      or using the Node:
      
      	use Node;
      
       	my $nodeobject = Node->Lookup($node_id);
       	my $imageable  = $nodeobject->NodeTypeInfo()->imageable();
       or
       	my $rebootable = $nodeobject->isrebootable();
       or
       	$nodeobject->NodeTypeAttribute("control_interface", \$control_iface);
      
       Lots of ways to accomplish the same thing, but the main point is that the
       Node is able to override the NodeType (if it wants to), which I think is
       necessary for flexibly describing one- or two-of-a-kind things like switches, etc.
  18. 14 Jul, 2006 1 commit
  19. 10 Jul, 2006 1 commit
     • Updated plinabox template to use new PLC image (PLAB-PLC). · ed47be9c
       Kevin Atkinson authored
  20. 05 Jul, 2006 2 commits
  21. 22 Jun, 2006 3 commits
  22. 21 Jun, 2006 1 commit
     • Munge the schemacheck code to deal with all the oddities of the way · 53c29cfa
      Leigh Stoller authored
      mysql 5.0 dumps the schema. What a pain in the ass.
      
      Note that "timestamp" is basically impossible since its radically
      different between 3.X and 5.X, which would break schemacheck on 3.X
      based Emulabs. Since there are only three of them in the schema, I
      changed schemadiff to not look too hard at them.
  23. 20 Jun, 2006 1 commit
  24. 06 Jun, 2006 1 commit
  25. 01 Jun, 2006 1 commit
     • Add support for building per project, group, experiment DBs on ops. At · adbcfd47
      Leigh Stoller authored
      present the per-experiment stuff is not hooked in, but will be for
      templates later. Anyway, each user gets a mysql account on ops, with
      password set to the same as their mailman password (which is also
      their jabber password, etc). Each project gets a DB named by the
      project, and each group gets a DB named by pid,gid. Users are placed
      on the access lists for the DBs as you would expect.
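
       A rough sketch of the kind of grants this implies, assuming MySQL on ops;
       the project name, member list, and connection details are illustrative:

       	# Illustrative only: each member already has a mysql account (password
       	# equal to their mailman password); put them on the project DB's
       	# access list.
       	use DBI;

       	my $pid     = "myproj";
       	my @members = ("alice", "bob");

       	my $dbh = DBI->connect("DBI:mysql:database=mysql;host=ops", "root", "rootpw")
       	    or die $DBI::errstr;
       	foreach my $uid (@members) {
       	    $dbh->do("GRANT ALL PRIVILEGES ON `$pid`.* TO '$uid'\@'%'");
       	}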
      
      There is a little bit of complexity to make sure that we can create
      DBs on ops outside the Emulab path and grant access to them, without
      Emulab getting confused or mucking things up.
      
      I'll get a news item done ...
  26. 24 May, 2006 2 commits