1. 24 Sep, 2012 1 commit
      Replace license symbols with {{{ }}}-enclosed license blocks. · 6df609a9
      Eric Eide authored
      This commit is intended to make the license status of Emulab and
      ProtoGENI source files clearer.  It replaces license symbols like
      "EMULAB-COPYRIGHT" and "GENIPUBLIC-COPYRIGHT" with {{{ }}}-delimited
      blocks that contain actual license statements.
      
      This change was driven by the fact that today, most people acquire and
      track Emulab and ProtoGENI sources via git.
      
      Before the Emulab source code was kept in git, the Flux Research Group
      at the University of Utah would roll distributions by making tar
      files.  As part of that process, the Flux Group would replace the
      license symbols in the source files with actual license statements.
      
      When the Flux Group moved to git, people outside of the group started
      to see the source files with the "unexpanded" symbols.  This meant
      that people acquired source files without actual license statements in
      them.  All the relevant files had Utah *copyright* statements in them,
      but without the expanded *license* statements, the licensing status of
      the source files was unclear.
      
      This commit is intended to clear up that confusion.
      
      Most Utah-copyrighted files in the Emulab source tree are distributed
      under the terms of the Affero GNU General Public License, version 3
      (AGPLv3).
      
      Most Utah-copyrighted files related to ProtoGENI are distributed under
      the terms of the GENI Public License, which is a BSD-like open-source
      license.
      
      Some Utah-copyrighted files in the Emulab source tree are distributed
      under the terms of the GNU Lesser General Public License, version 2.1
      (LGPL).
  2. 11 May, 2007 1 commit
  3. 20 Feb, 2007 1 commit
  4. 05 Sep, 2006 1 commit
      A bunch of template changes resulting from meetings last week. · 087dbfff
      Leigh Stoller authored
      * Add an XMLRPC interface for template swapin, stoprun, startrun, and
        swapout, and add the appropriate wrappers to the script_wrapper on ops
        (a rough sketch of such a call appears after this list).
      
      * Allow parameter descriptions in NS files. This is probably not in its
        final form, since it's a bit confusing as to what has priority: something
        in the NS file or a metadata item. Anyway, you can do this in your NS
        file:
      
      	$ns define-template-parameter GUID "0/0" "The GUID to be analyzed"
      
        The rules are currently that the NS file description has priority and
        is copied to child templates, unless the user has modified a description
        via the web interface, in which case the NS file description is ignored.
        I know, sounds awful, but for the most part people are going to use the
        NS file anyway.
      
      * Add "clear" option when starting a new experiment run; the per
        experiment DB at the logholes are cleared. Note that this is *not* the
        default behaviour; you have to either check the checkbox on the web form
        or use the -c option to the script wrapper, or clear=yes if talking
        directly to the XMLRPC server.
      
      * Fix up how email is generated for template_swapin and template_create,
        so that Kevin can debug tblog/tbreport stuff, but also so that we maintain
        mail logs as before. I have made some improvements to libaudit so as to
        centralize the mail goo, and avoid duplicating all that stuff.
      
      * Minor fixes to the program agent so that the new environment strings are
        sent before the program agent exits and reloads them!
      
      * Other minor little things.
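
      For illustration only, a rough Python sketch of what a call through the
      new XMLRPC interface might look like is below; the endpoint URL, method
      name, and argument names are made up for this example, and the real
      details are hidden behind the script_wrapper on ops anyway.

      	import xmlrpclib

      	# Hypothetical SSL XMLRPC endpoint on boss; URL and port are placeholders.
      	server = xmlrpclib.ServerProxy("https://boss.example.net:3069/usr/testbed")

      	# Start a new run on an instantiated template, asking for the
      	# per-experiment DB at the logholes to be cleared first ("clear").
      	args = {"guid": "10023/1",   # made-up template GUID
      	        "clear": True}
      	print server.template.startrun(0.1, args)    # assumed method and version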
  5. 14 Aug, 2006 1 commit
      Checkpoint my dynamic event stuff, crude as it is. The idea for this first · 9d021a07
      Leigh Stoller authored
      draft is that the user will, at the end of an experiment run, log into one
      of his nodes and perform some analysis that is intended to be repeated at
      the end of the next run, and in future instantiations of the template.
      
      A new table called experiment_template_events holds the dynamic events for
      the template. Right now I am supporting just program events, but it will be
      easy to support arbitrary events later. As an absurd example:
      
      	node6> /usr/local/bin/template_analyze ~/data_analyze arg arg ...
      
      The user is currently responsible for making sure the output goes into a
      file in the archive. I plan to make the template_analyze wrapper handle
      that automatically later, but for now what you really want is to invoke a
      script that encapsulates that, redirecting output to $ARCHIVE (this
      variable is installed in the environment by template_analyze).
      
      The wrapper script will save the current time, and then run the program.
      If the program terminates with a zero exit status, it will ssh over to ops
      and invoke an xmlrpc routine to tell boss to add a program event to both
      the eventlist for the current instance, and to the template_eventlist for
      future instances. The time of the event is the relative start time that was
      saved above (remember, each experiment run replays the event stream from
      time zero).
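
      For illustration, the wrapper's control flow might look something like
      the Python sketch below; the RUN_START variable and the template_addevent
      command invoked over ssh are made-up names, not the actual
      template_analyze implementation.

      	import os, subprocess, sys, time

      	# Time of this analysis relative to time zero of the current run
      	# (RUN_START is an assumed environment variable holding that epoch).
      	relative_start = time.time() - float(os.environ["RUN_START"])

      	# Run the user's program, sending its output into the archive.
      	out = open(os.path.join(os.environ["ARCHIVE"], "analyze.out"), "w")
      	status = subprocess.call(sys.argv[1:], stdout=out)

      	if status == 0:
      	    # On success, ssh over to ops and (via XMLRPC there) tell boss to
      	    # add a program event at the saved relative start time, to both
      	    # the current instance's eventlist and the template_eventlist.
      	    subprocess.call(["ssh", "ops", "template_addevent",
      	                     "-t", str(relative_start)] + sys.argv[1:])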
      
      For the future, we want to allow this to be done on ops as well, but
      that will take more infrastructure, to run "program agents" on ops.
      
      It would be nice to install the ssl xmlrpc client side on our images so
      that we do not have to ssh to ops to invoke the client.
  6. 10 Aug, 2006 2 commits
  7. 08 Aug, 2006 1 commit
  8. 03 Aug, 2006 1 commit
      Support for capturing the trace data that is stored in the pcap files · 4ce9c421
      Leigh Stoller authored
      into per-experiment databases on ops. Additional support for
      reconstituting those databases back into temporary databases on ops, for
      post-processing.
      
      * This revision relies on the "snort" port (/usr/ports/security/snort)
        to read the pcap files and load them into a database. The schema is
        probably not ideal, but it's better than nothing. See the file
        ops:/usr/local/share/examples/snort/create_mysql for the schema.
      
      * For simplicity, I have hooked into loghole, which already had all
        the code for downloading the trace data. I added some new methods to
        the XMLRPC server for loghole to use, to get the user's DB password
        and the name of the per-experiment database. There is a new slot in
        the traces table that indicates that the trace should be snorted to
        its DB. In case you forgot, at the end of a run or when the instance
        is swapped out, loghole is run to download the trace data.
      
      * For reconstituting, there are lots of additions to opsdb_control and
        opsdb_control.proxy to create "temporary" databases and load them
        from a dump file that is stored in the archive. I've added a button
        to the Template Record page, inappropriately called "Analyze" since
        right now all it does is reconstitute the trace data into a DB on
        ops (a rough sketch of querying such a DB appears after this list).
      
        Currently, the only indication of what has been done (the names of
        the DBs created on ops) is the log email that the user gets. A
        future project is to tell the user this info in the web interface.
      
      * To turn on database capturing of trace data, do this in your NS
        file:
      
      	set link0 ...
      	$link0 trace
      	$link0 trace_snaplen 128
      	$link0 trace_db 1
      
        the increase in snaplen is optional, but a good idea if you want
        snort to understand more than just IP headers.
      
      * Also some changes to the parser to allow plain experiments to take
        advantage of all this stuff. To simply get yourself a per-experiment
        DB, put this in your NS file:
      
      	tb-set-dpdb 1
      
        however, anytime you turn trace_db on for a link or lan, you
        automatically get a per-experiment DB.
      
      * To capture the trace data to the DB, you can run loghole by hand:
      
      	loghole sync -s
      
        the -s option turns on the "post-process" phase of loghole.
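
      For illustration, once the trace data has been reconstituted into a DB on
      ops, you could poke at it with something like the Python sketch below; the
      user, password, and database names are placeholders, and the table and
      column names assume snort's create_mysql schema mentioned above.

      	import MySQLdb

      	# Placeholder connection values; the real DB name shows up in the
      	# log email, and the password is your Emulab DB password.
      	db = MySQLdb.connect(user="myuser", passwd="mypassword", db="mytempdb")
      	cur = db.cursor()

      	# Count captured packets per IP protocol (6 = TCP, 17 = UDP),
      	# assuming the iphdr table from the snort schema.
      	cur.execute("SELECT ip_proto, COUNT(*) FROM iphdr GROUP BY ip_proto")
      	for proto, count in cur.fetchall():
      	    print proto, count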
  9. 06 Jun, 2006 1 commit
  10. 01 Jun, 2006 1 commit
      Add support for building per project, group, experiment DBs on ops. At · adbcfd47
      Leigh Stoller authored
      present the per-experiment stuff is not hooked in, but will be for
      templates later. Anyway, each user gets a mysql account on ops, with
      password set to the same as their mailman password (which is also
      their jabber password, etc). Each project gets a DB named by the
      project, and each group gets a DB named by pid,gid. Users are placed
      on the access lists for the DBs as you would expect.
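
      For illustration, connecting to these DBs from a shell on ops might look
      like the Python sketch below; the user, password, project, and group names
      are placeholders, following the pid and pid,gid naming described above.

      	import MySQLdb

      	# Per-project DB, named by the project (pid).
      	projdb = MySQLdb.connect(user="myuser", passwd="my-mailman-password",
      	                         db="myproj")

      	# Per-group DB, named by pid,gid.
      	groupdb = MySQLdb.connect(user="myuser", passwd="my-mailman-password",
      	                          db="myproj,mygroup")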
      
      There is a little bit of complexity to make sure that we can create
      DBs on ops outside the Emulab path and grant access to them, without
      Emulab getting confused or mucking things up.
      
      I'll get a news item done ...