1. 14 Aug, 2006 1 commit
    • Leigh Stoller authored · 9d021a07
      Checkpoint my dynamic event stuff, crude as it is. The idea for this first
      draft is that the user will, at the end of an experiment run, log into one
      of his nodes and perform some analysis that is intended to be repeated at
      the end of the next run and in future instantiations of the template.
      
      A new table called experiment_template_events holds the dynamic events for
      the template. Right now I am supporting just program events, but it will be
      easy to support arbitrary events later. As an absurd example:
      
      	node6> /usr/local/bin/template_analyze ~/data_analyze arg arg ...
      
      The user is currently responsible for making sure the output goes into a
      file in the archive. I plan to make the template_analyze wrapper handle
      that automatically later, but for now what you really want is to invoke a
      script that encapsulates that, redirecting output to $ARCHIVE (this
      variable is installed in the environment by template_analyze).
      
      The wrapper script will save the current time, and then run the program.
      If the program terminates with a zero exit status, it will ssh over to ops
      and invoke an xmlrpc routine to tell boss to add a program event to both
      the eventlist for the current instance and the template_eventlist for
      future instances. The time of the event is the relative start time that was
      saved above (remember, each experiment run replays the event stream from
      time zero).
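
      For illustration only, a minimal sketch of that wrapper logic is below;
      the script body, paths, and the rpc client invocation are placeholders
      and guesses, not the actual template_analyze code:

      	#!/usr/bin/perl -w
      	# Hypothetical sketch of the wrapper logic described above.
      	use strict;

      	# Record when the analysis starts; the real wrapper uses the saved
      	# time as the (relative) start time of the new program event.
      	my $start = time();

      	# Run the user's program; only record it on a zero exit status.
      	system(@ARGV);
      	exit(1)
      	    if ($? != 0);

      	# Ask boss (by way of ops) to add the program event to the eventlist
      	# for this instance and to the template_eventlist. The client name
      	# and arguments here are made up.
      	system("ssh ops /usr/testbed/bin/template_addevent ".
      	       "--time=$start --command='@ARGV'") == 0
      	    or die "Could not add the program event!\n";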
      
      For the future, we want to allow this to be done on ops as well, but
      that will take more infrastructure to run "program agents" on ops.
      
      It would be nice to install the ssl xmlrpc client side on our images so
      that we do not have to ssh to ops to invoke the client.
  2. 10 Aug, 2006 2 commits
  3. 09 Aug, 2006 1 commit
  4. 08 Aug, 2006 3 commits
  5. 03 Aug, 2006 1 commit
    • Leigh Stoller authored · 4ce9c421
      Support for capturing the trace data that is stored in the pcap files
      into per-experiment databases on ops. Additional support for reconstituting
      those databases back into temporary databases on ops, for post-processing.
      
      * This revision relies on the "snort" port (/usr/ports/security/snort)
        to read the pcap files and load them into a database. The schema is
        probably not ideal, but it's better than nothing. See the file
        ops:/usr/local/share/examples/snort/create_mysql for the schema.
      
      * For simplicity, I have hooked into loghole, which already had all
        the code for downloading the trace data. I added some new methods to
        the XMLRPC server for loghole to use, to get the user's DB password
        and the name of the per-experiment database. There is a new slot in
        the traces table that indicates that the trace should be snorted to
        its DB. In case you forgot, at the end of a run or when the instance
        is swapped out, loghole is run to download the trace data.
      
      * For reconstituting, there are lots of additions to opsdb_control and
        opsdb_control.proxy to create "temporary" databases and load them
        from a dump file that is stored in the archive (see the sketch after
        this list). I've added a button to the Template Record page,
        inappropriately called "Analyze" since right now all it does is
        reconstitute the trace data into a DB on ops.
      
        Currently, the only indication of what has been done (the name of
        the DBs created on ops) is the log email that the user gets. A
        future project is to tell the user this info in the web interface.
      
      * To turn on database capturing of trace data, do this in your NS
        file:
      
      	set link0 ...
      	$link0 trace
      	$link0 trace_snaplen 128
      	$link0 trace_db 1
      
         the increase in snaplen is optional, but a good idea if you want
         snort to understand more than just IP headers.
      
      * Also some changes to the parser to allow plain experiments to take
        advantage of all this stuff. To simply get yourself a per-experiment
        DB, put this in your NS file:
      
      	tb-set-dpdb 1
      
        However, anytime you turn trace_db on for a link or LAN, you
        automatically get a per-experiment DB.
      
      * To capture the trace data to the DB, you can run loghole by hand:
      
      	loghole sync -s
      
        the -s option turns on the "post-process" phase of loghole.
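
      As mentioned in the "For reconstituting" item above, here is a rough
      sketch of what the reconstitute step boils down to on ops; the database
      name, dump path, and the direct mysql commands are placeholders, not
      the actual opsdb_control/opsdb_control.proxy interface:

      	#!/usr/bin/perl -w
      	# Hypothetical sketch: reconstitute archived trace data into a
      	# temporary database on ops. Names and paths are made up.
      	use strict;

      	my $tmpdb = "tmp_mytemplate_traces";                    # made-up DB name
      	my $dump  = "/proj/myproj/exp/myexp/archive/trace.sql"; # made-up path

      	# Create the scratch database and load the dump into it.
      	system("mysqladmin create $tmpdb") == 0
      	    or die "Could not create database $tmpdb!\n";
      	system("mysql $tmpdb < $dump") == 0
      	    or die "Could not load $dump into $tmpdb!\n";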
  6. 18 Jul, 2006 3 commits
    • Jay Lepreau authored · aab98e1d
      testbed-ops@emulab.net -> @flux
      Should probably be done with the defs file and makefile.
    • Robert Ricci authored · 37ebf66d
      Nits to make Jay happy
    • Leigh Stoller authored · 624a0364
      Changes necessary for moving most of the stuff in the node_types
      table into a new table called node_type_attributes, which is intended
      to be a more extensible way of describing nodes.
      
      The only things left in the node_types table will be type, class, and the
      various isXXX boolean flags, since we use those in numerous joins all over
      the system (i.e., when discriminating amongst nodes).
      
      For the most part, all of that other stuff is rarely used, or used in
      contexts where the information is needed, but not for type discrimination.
      Still, it made for a lot of queries to change!
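
      Purely as an illustration of the key/value idea behind
      node_type_attributes, a direct lookup might look like the sketch below;
      the column names and the query are assumptions about the layout, not a
      statement of the actual schema:

      	# Hypothetical sketch: fetch one attribute for a node type, assuming
      	# a simple (type, attrkey, attrvalue) layout for the new table.
      	use libdb;

      	sub TypeAttribute($$)
      	{
      	    my ($type, $key) = @_;

      	    my $query_result =
      		DBQueryFatal("select attrvalue from node_type_attributes ".
      			     "where type='$type' and attrkey='$key'");
      	    return undef
      		if (!$query_result->numrows);
      	    my ($value) = $query_result->fetchrow_array();
      	    return $value;
      	}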
      
      Along the way I added a NodeType library module that represents the type
      info as a perl object. I also beefed up the existing Node module, and
      started using it in more places. I also added an Interfaces module, but I
      have not done much with that yet.
      
      I have not yet removed all the slots from the node_types table; I plan to
      run the new code for a few days and then remove the slots.
      
      Example using the new NodeType object:
      
      	use NodeType;

      	my $typeinfo = NodeType->Lookup($type);

      	if ($typeinfo->control_interface(\$control_iface) || !$control_iface) {
      	    warn "No control interface for $type is defined in the DB!\n";
      	}

      or using the Node:

      	use Node;

      	my $nodeobject = Node->Lookup($node_id);
      	my $imageable  = $nodeobject->NodeTypeInfo()->imageable();
      or
      	my $rebootable = $nodeobject->isrebootable();
      or
      	$nodeobject->NodeTypeAttribute("control_interface", \$control_iface);
      
      Lots of ways to accomplish the same thing, but the main point is that the
      Node is able to override the NodeType (if it wants to), which I think is
      necessary for flexibly describing one- or two-of-a-kind things like switches, etc.
  7. 14 Jul, 2006 1 commit
  8. 10 Jul, 2006 1 commit
    • Kevin Atkinson authored · ed47be9c
      Updated plinabox template to use new PLC image (PLAB-PLC).
  9. 05 Jul, 2006 2 commits
  10. 22 Jun, 2006 3 commits
  11. 21 Jun, 2006 1 commit
    • Leigh Stoller authored · 53c29cfa
      Munge the schemacheck code to deal with all the oddities of the way
      mysql 5.0 dumps the schema. What a pain in the ass.
      
      Note that "timestamp" is basically impossible since it's radically
      different between 3.X and 5.X, which would break schemacheck on
      3.X-based Emulabs. Since there are only three of them in the schema, I
      changed schemadiff to not look too hard at them.
  12. 20 Jun, 2006 1 commit
  13. 06 Jun, 2006 1 commit
  14. 01 Jun, 2006 1 commit
    • Leigh Stoller authored · adbcfd47
      Add support for building per project, group, experiment DBs on ops. At
      present the per-experiment stuff is not hooked in, but will be for
      templates later. Anyway, each user gets a mysql account on ops, with
      password set to the same as their mailman password (which is also
      their jabber password, etc). Each project gets a DB named by the
      project, and each group gets a DB named by pid,gid. Users are placed
      on the access lists for the DBs as you would expect.
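
      For example, connecting to one of these project databases from a script
      run on ops might look something like the sketch below; the project and
      user names are placeholders, and only the password scheme is as
      described above:

      	#!/usr/bin/perl -w
      	# Hypothetical sketch: connect to a per-project DB on ops via DBI.
      	use strict;
      	use DBI;

      	my $pid      = "myproj";                 # made-up project name
      	my $user     = "myuser";                 # Emulab user name
      	my $password = "my-mailman-password";    # same as the mailman password

      	my $dbh = DBI->connect("DBI:mysql:database=$pid;host=localhost",
      			       $user, $password)
      	    or die "Could not connect to the $pid DB: $DBI::errstr\n";

      	# ... run queries against the project database here ...
      	$dbh->disconnect();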
      
      There is a little bit of complexity to make sure that we can create
      DBs on ops outside the Emulab path and grant access to them, without
      Emulab getting confused or mucking things up.
      
      I'll get a news item done ...
  15. 24 May, 2006 2 commits
  16. 27 Apr, 2006 2 commits
  17. 20 Apr, 2006 1 commit
  18. 07 Apr, 2006 1 commit
  19. 06 Apr, 2006 1 commit
  20. 21 Mar, 2006 1 commit
  21. 14 Mar, 2006 1 commit
  22. 13 Mar, 2006 2 commits
    • Leigh Stoller authored · 026d4c32
      A couple minor tweaks to allow taking snapshots of global images,
      which are stored in /proj/$pid/images since they cannot go directly
      to boss. They need to be copied back of course, but only admins
      can create global images anyway.
    • Leigh Stoller authored · d8f8f9b4
      A set of changes to run "prepare" on a node just prior to an image
      being taken.
      
      The basic strategy is to have node_reboot (when -p option supplied)
      invoke a special command on the node that will cause the shutdown
      procedure to run prepare as it goes single user, but before the
      network is turned off and the machine rebooted. The output of the
      prepare run is captured and sent back via the tmcd BOOTLOG command and
      stored in the DB, so that create_image can dump that to the logfile
      (so that the person taking the image can know for certain that the
      prepare ran and finished okay).
      
      On Linux this is pretty easy to arrange since reboot is actually
      shutdown and shutdown runs the K scripts in /etc/rc.d/rc6.d, and at
      the end the node is basically in single-user mode. I just added a new
      script to run prepare and send back the output.
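
      A rough sketch of what such a shutdown-time script could look like is
      below; the prepare path and especially the way the log gets handed back
      to tmcd are guesses, not the actual script:

      	#!/usr/bin/perl -w
      	# Hypothetical sketch of a shutdown-time hook: run prepare, capture
      	# its output, and report it back (BOOTLOG). The tmcc invocation and
      	# paths here are guesses.
      	use strict;

      	my $output = `/usr/local/etc/emulab/prepare 2>&1`;
      	my $status = $? >> 8;

      	# Hand the captured output back to tmcd.
      	if (open(TMCC, "| tmcc bootlog")) {
      	    print TMCC $output;
      	    close(TMCC);
      	}
      	exit($status);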
      
      On FreeBSD this is a lot harder since there are no decent hooks.
      Instead, I had to hack up init (see tmcd/freebsd/init/{4,5,6}) with
      some simple code that looks for a command to run instead of going to a
      single user shell. The command (script) runs prepare, sends the output
      back to tmcd, and then does a real reboot.
      
      Okay, so how to get -p passed to node_reboot? I hacked up the
      libadminmfs code slightly to do that, with a new 'prepare' argument
      option. This may not be the best approach; might have to do this as a
      real state transition if problems develop. I will wait and see.
      
      Also, I changed www/loadimage.php3 to spew the output of
      create_image to the browser.
  23. 08 Mar, 2006 1 commit
  24. 07 Mar, 2006 1 commit
  25. 02 Mar, 2006 2 commits
  26. 24 Feb, 2006 3 commits