  1. 09 Jul, 2007 1 commit
    • Leigh B. Stoller authored · 8371fc79
      Checkpoint my cvs interface to the workbench. This first cut uses the
      "rtag" directive to initiate template modify operations. So, to get started
      you do a checkout:
      
        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo checkout XXXXX
      
      where XXXXX is the part of the guid (10000/1) before the slash. Might try
      to roll all templates into a single project-wide repo at some point, to
      avoid the extraneous path stuff, but didn't want to worry about that just yet.
      
      Okay, so now you have a checkout. You can work along the trunk, doing commits. To
      create a new template (a modify of the existing template), you tag the tree
      using rtag:
      
        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo rtag mytag XXXXX
      
      The rtag kicks off a template modify operation, and you should probably
      wait for email before continuing. Eventually I will need to add locking
      of some kind, but I have to do the modify in the background, or else I
      get deadlock, because cvs keeps the repo locked and the modify also
      needs to access it.
      
      Each time you tag along the trunk, you get a modified template, which in
      the history diagram looks like:
      
        10000/1 --> 10000/2 --> 10000/3 ...
      
      If you want to branch, say at 10000/2, you can create a branch tag using rtag:
      
        cvs -d [cut] rtag -r T10000/2 -b mytag2 XXXXX
      
      You can also use your own tags for the -r option, but I also create a TXXXXX/YY
      tag at each template modify, which is easy to remember.
      
      Then update your sandbox to the new branch, commit changes along that
      branch, and then later use rtag again to initiate a template modify
      operation:
      
        cvs update -r mytag2
        cvs commit ...
        cvs -d [cut] rtag -r mytag2 mytag3 XXXXX
      
      And now the history diagram looks like:
      
        10000/1 --> 10000/2 --> 10000/3 ...
                      |
                      |
                      -> 10000/4 ...
      
      You should be able to mix interaction via the web with interaction via the
      cvs interface. I've tested it, although not extensively.
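
      Putting the pieces together, a complete session might look like this
      (tag names arbitrary; the rtag lines are the ones that kick off
      template modifies):

        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo checkout XXXXX
        cd XXXXX
        ... edit, then commit along the trunk ...
        cvs commit -m "tweak the nsfile"
        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo rtag mytag XXXXX
        ... wait for email; a new template (say 10000/2) now exists ...
        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo rtag -r T10000/2 -b mytag2 XXXXX
        cvs update -r mytag2
        cvs commit -m "work on the branch"
        cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo rtag -r mytag2 mytag3 XXXXX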
  2. 30 May, 2007 1 commit
  3. 23 May, 2007 1 commit
    • Leigh B. Stoller authored · b674bc7b
      First cut at template checkout and commit from a checkout. The interface
      described is the one exported to ops via the XMLRPC interface. This is
      just playing around; no doubt this stuff is going to change.
      
      * template_checkout guid/vers
      
        Checkout a copy of the template to the current working directory.
      
      * template_commit
      
        Modify the previous template checkout, using the nsfile contained in
        the tbdata directory (subdir of the current directory). In other words,
        the current template is modified, creating a new template in the
        current working directory (the current directory refers to the new
        template).
      
        The datastore subdir is imported into the new template, but that is
        the only directory that is imported at present. Might change that.
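
      So, on ops, a session might look like this (the command names are
      hypothetical stand-ins for however the XMLRPC methods get wrapped on
      the client side):

        cd /proj/$pid/mytemplates        # must be under /proj, /users, or /groups
        template_checkout 10000/1        # copy of the template lands in cwd
        vi tbdata/nsfile.ns              # the nsfile used by template_commit
        cp ~/inputs/* datastore/         # datastore is imported into the new template
        template_commit                  # cwd now refers to the new template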
      
      So this sounds much cooler than it really is. Why?
      
      * This only works from ops.
      
      * The "current directory" must be one of the standard approved directories
        (/proj, /users, /groups).
      
      * Because boss reads and writes that directory via NFS, as told to it
        by the xmlrpc client.
      
      At some point in the future it would be nice to support something
      fancier, using a custom transport, but let's see how this goes.
  4. 17 May, 2007 1 commit
  5. 15 May, 2007 1 commit
    • Leigh B. Stoller authored · c4f53202
      Checkpoint changes that have been discussed in the last few weeks:
      * Records are now "help open" when a run is stopped. When the next run
        is started, a check is made to see if the files
        (/project/$pid/exp/$eid) have changed, and if so a new version of the
        archive is committed before the next run is started.
      
      * Change the way swapmod is handled within an instance. There is a new
        option on the ShowExp page called Modify Resources. The intent is to allow
        an instance to be modified without having to start and stop runs,
        which tends to clutter things up, according to our user base. So, if
        you are within a run, that run is reset (reused) after the swapmod is
        finished. You can do this as many times as you like. If you are
        between runs (last operation was a stoprun), do the swapmod and then
        "speculatively" start a new run. Subsequent modifies reuse the that
        run again, as above.
      
        I think this is what Kevin was after ... there are some UI issues
        that may need to be resolved, will wait to hear what people have to
        say.
      
      * Revising a record is now supported. Export, change in place, and
        then use the Revise link on the ShowRun page. Currently this has to
        happen from the export directory on ops, but eventually we will allow
        an upload (to correspond to downloaded exports).
      
      * Check to see if export already exists, and give warning. Added a
        checkbox that allows user to overwrite the export.
      
      * A bunch of minor UI changes to the various template pages.
  6. 02 Mar, 2007 1 commit
    • David Johnson authored · d31ab2bd
      Adds rmcp support (for new wifi pcs) to the power command. For now, you
      have to re-run the swig-wrappers target in tools/rmanage/GNUmakefile to
      generate the wrapper and perl module; this must of course be done when
      changes are made to the rmcp libs.
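
      That regeneration step is just (assuming GNU make is spelled gmake on
      your build host):

        cd tools/rmanage
        gmake swig-wrappers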
      
        * GNUmakefile.in, configure, configure.in: add tools/rmanage
        * tbsetup/GNUmakefile.in, tbsetup/power*.in: add rmcp to power command
        * tools/GNUmakefile.in: add rmanage
        * tools/rmanage/*.c,*.h: bugfixes, swig helper methods, etc.
        * tools/rmanage/rmcp.i: swig import control file
        * tools/rmanage/rmcp.pm,rmcp_wrap.c: rmcp wrapper/module generated by swig
  7. 19 Jan, 2007 1 commit
  8. 18 Jan, 2007 4 commits
  9. 25 Oct, 2006 1 commit
    • Leigh B. Stoller authored · 7590f9c5
      Makefile Whacking! Try to deal with the problem caused by the delay
      between when something is installed and when post-install runs. Short
      of a global lock (which we probably need anyway someday), my solution
      is this. In your makefiles, add these variables before the line that
      has the include of $(TESTBED_SRCDIR)/GNUmakerules:
      
      	SETUID_BIN_SCRIPTS   =
      	SETUID_SBIN_SCRIPTS  =
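
      For instance, a makefile that installs a couple of setuid sbin scripts
      would look something like this (script names purely illustrative):

      	SETUID_SBIN_SCRIPTS  = power node_reboot
      	SETUID_BIN_SCRIPTS   =

      	include $(TESTBED_SRCDIR)/GNUmakerules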
      
      I have added three new rules to GNUmakerules that look like this:
      
      	$(addprefix $(SBINDIR)/, $(SETUID_SBIN_SCRIPTS)): $(SBINDIR)/%: %
      		echo "Installing (setuid) $<"
      		-mkdir -p $(INSTALL_SBINDIR)
      		$(SUDO) $(INSTALL) -o root -m 4755 $< $@
      
      Yep, your eyes ain't lying to you; use sudo to run the target so that
      install does the right thing (which is that the old file is not
      replaced until the new one has the proper attributes on it).
      
      Note that post-install is still needed for the initial install, but
      should no longer be needed for day-to-day installs, since all the other
      stuff post-install does is mkdir/chmod on directories.
  10. 24 Oct, 2006 1 commit
  11. 18 Oct, 2006 1 commit
  12. 16 Oct, 2006 1 commit
  13. 08 Aug, 2006 1 commit
  14. 03 Aug, 2006 1 commit
    • Leigh B. Stoller authored · 4ce9c421
      Support for capturing the trace data that is stored in the pcap files
      into per-experiment databases on ops. Additional support for reconstituting
      those databases back into temporary databases on ops, for post-processing.
      
      * This revision relies on the "snort" port (/usr/ports/security/snort)
        to read the pcap files and load them into a database. The schema is
        probably not ideal, but it's better than nothing. See the file
        ops:/usr/local/share/examples/snort/create_mysql for the schema.
      
      * For simplicity, I have hooked into loghole, which already had all
        the code for downloading the trace data. I added some new methods to
        the XMLRPC server for loghole to use, to get the user's DB password
        and the name of the per-experiment database. There is a new slot in
        the traces table that indicates that the trace should be snorted to
        its DB. In case you forgot, at the end of a run or when the instance
        is swapped out, loghole is run to download the trace data.
      
      * For reconstituting, there are lots of additions to opsdb_control and
        opsdb_control.proxy to create "temporary" databases and load them
        from a dump file that is stored in the archive. I've added a button
        to the Template Record page, inappropriately called "Analyze" since
        right now all it does is reconstitute the trace data into a DB on
        ops.
      
        Currently, the only indication of what has been done (the name of
        the DBs created on ops) is the log email that the user gets. A
        future project is to tell the user this info in the web interface.
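
        As an example of what you can then do with it (assuming the stock
        snort schema, where IP headers land in the iphdr table), a query
        like this summarizes the trace by source address:

      	SELECT INET_NTOA(ip_src) AS src, COUNT(*) AS pkts
      	FROM iphdr GROUP BY ip_src ORDER BY pkts DESC;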
      
      * To turn on database capturing of trace data, do this in your NS
        file:
      
      	set link0 ...
      	$link0 trace
      	$link0 trace_snaplen 128
      	$link0 trace_db 1
      
         the increase in snaplen is optional, but a good idea if you want
         snort to understand more than just IP headers.
      
      * Also some changes to the parser to allow plain experiments to take
        advantage of all this stuff. To simply get yourself a per-experiment
        DB, put this in your NS file:
      
      	tb-set-dpdb 1
      
        however, anytime you turn trace_db on for a link or lan, you
        automatically get a per-experiment DB.
      
      * To capture the trace data to the DB, you can run loghole by hand:
      
      	loghole sync -s
      
        the -s option turns on the "post-process" phase of loghole.
  15. 28 Jul, 2006 1 commit
    • Leigh B. Stoller authored · a651da71
      Add a "Create Template from Instance" ability. Basically, you can
      create a new template (well, really a modify) from the current
      swapped-in experiment. This allows you to create a template, swap in
      an instance, modify the datastore in the instance (which is a copy
      of the datastore in the template), and then create a new template
      using the datastore and nsfile from the instance. This is a new menu
      item on the showexp page for the instance.
      
      Also in this commit are fixes and improvements to the new navigation
      bar that I recently added.
  16. 26 Jul, 2006 1 commit
  17. 11 Jul, 2006 1 commit
  18. 21 Jun, 2006 1 commit
  19. 30 May, 2006 1 commit
    • Leigh B. Stoller authored · 2cfe4630
      Add an export option to the record listing. A new button on the Template
      Record page lets you export the contents of the archive that corresponds
      to that record, along with an XML file that describes the various DB bits
      for the template and instance.
      
      This is just a first cut so that Mike can start playing around. Subject to
      change, I'm sure.
      
      The archive is dumped to /proj/$pid/exports/$guid/$vers/$exptidx, which
      is basically the last commit of the instance when it was terminated.
      
      The xml file is called export.xml and is placed in the top level directory
      of the above directory. The file is created with XML::Simple, and a typical
      XML file might look like:
      
      <instance>
        <bindings>
          <name>NodeCount</name>
          <description>Number of nodes!</description>
          <value>1</value>
        </bindings>
        <bindings>
          <name>OS</name>
          <description></description>
          <value>RHL90-STD</value>
        </bindings>
        <bindings>
          <name>ScriptArgs</name>
          <description></description>
          <value>-b</value>
        </bindings>
        <eid>NewOne-V2</eid>
        <guid>10149/2</guid>
        <metadata>
          <name>M1</name>
          <guid>10162/1</guid>
          <value>Some metadata</value>
        </metadata>
        <pid>testbed</pid>
        <runs>
          <name>1</name>
          <archive_tag>T20060526-082533-172_endexp</archive_tag>
          <description></description>
          <exptidx>110</exptidx>
          <idx>1</idx>
          <runid>NewOne-V2</runid>
          <start_time>2006-05-26 08:23:02</start_time>
          <stop_time>2006-05-26 08:25:16</stop_time>
        </runs>
        <uid>stoller</uid>
      </instance>
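
      Since the file is plain XML::Simple output, pulling the bits back out
      from perl is easy. A sketch (the options keep the repeated <bindings>
      elements as a list):

      	use XML::Simple;

      	my $export = XMLin("export.xml",
      			   KeyAttr => [], ForceArray => ["bindings"]);

      	print "Instance: $export->{pid}/$export->{eid} ($export->{guid})\n";
      	foreach my $binding (@{$export->{bindings}}) {
      	    print "  $binding->{name} = $binding->{value}\n";
      	}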
  20. 15 May, 2006 1 commit
    • Mike Hibler authored · 9512772e
      Initial "Inner Plab" support. In your NS file, you declare one node:
      tb-set-node-plab-role $plc plc
      
      to make it the PLC node.  Then any number of other nodes are declared as:
      
      tb-set-node-plab-role $plab1 node
      
      to make them inner plab nodes.  Unlike elabinelab, there is no magic
      "tb-plab-in-elab" command which implies the topology; you put all the
      plab nodes in a LAN or whatever yourself.  This may or may not be a good idea.
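
      For example, a minimal NS file along those lines (names arbitrary):

      	source tb_compat.tcl
      	set ns [new Simulator]

      	set plc [$ns node]
      	tb-set-node-plab-role $plc plc

      	set plab1 [$ns node]
      	set plab2 [$ns node]
      	tb-set-node-plab-role $plab1 node
      	tb-set-node-plab-role $plab2 node

      	# No implied topology, so wire the nodes together yourself.
      	set lan [$ns make-lan "$plc $plab1 $plab2" 100Mb 0ms]

      	$ns run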
      
      Anyway, these NS commands set DB state in virt_nodes and reserved much like
      elabinelab.  During swapin, the dhcpd.conf file is rewritten so that
      inner plab nodes have their "filename" set to "pxelinux.0" and their
      "next-server" set to the designated PLC node.  The PLC node will then be
      loaded/booted before anything is done to the inner-plab nodes.  After
      it comes up, the inner plab nodes are rebooted and declared as up.
      There is a new tmcd command "eplabconfig" (suggestions for a new name
      welcome!), which returns info like:
      
          NAME=plc ROLE=plc IP=155.98.36.3 MAC=00d0b713f57d
          NAME=plab1 ROLE=node IP=155.98.36.10 MAC=0002b3877a4f
          NAME=plab2 ROLE=node IP=155.98.36.34 MAC=00d0b7141057
      
      This info is returned just to the PLC node (nothing is returned to any
      other node).
      
      The implications of this setup are:
      
       * The PLC node must act as a TFTP server as we have discussed in the past.
         The TMCC info above is hopefully enough to configure pxelinux; if not,
         we can change it.
      
       * The PLC node is responsible for loading the disks of inner plab nodes.
         This is implied by the setup, where we change the dhcpd.conf file before
         doing anything to the inner nodes.  Thus, once the inner nodes are
         rebooted, they will be talking pxelinux with PLC, and not to boss.
         This step is dubious, as we could no doubt load the disks faster than
         whatever plab uses can.  But it simplified the setup (and is more
         realistic!).  The alternative, which is something that might be useful
         anyway, is to introduce a "state" after which nodes have been reloaded
         but before they are rebooted.  With that, we can reload the plab nodes
         and then change the dhcpd.conf file so when they reboot they start
         talking to the PLC.
  21. 12 May, 2006 1 commit
    • Leigh B. Stoller authored · 78503406
      Redo the entire template library. I've been meaning to use perl
      "object" and this was a good opportunity to see if they are useful and
      easy enough to use. Yep they are; the code is much cleaner with many
      fewer utility functions to get at stuff. I recommend this approach
      from now on.
      
      The problem is the php side, which ends up duplicating some stuff, but
      in the old style. This is not so bad for the template code since I
      have made it a point not to do anything but display functions in php;
      all modifications are handled in the backend.
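
      For the curious, the flavor of the idiom as a toy sketch (not the
      actual library code):

      	package Template;
      	use strict;

      	# Lookup is a class method; the real one would fetch DB state.
      	sub Lookup($$) {
      	    my ($class, $guid) = @_;
      	    my $self = { GUID => $guid };
      	    bless($self, $class);
      	    return $self;
      	}
      	sub guid($) { my ($self) = @_; return $self->{GUID}; }

      	package main;
      	my $template = Template->Lookup("10000/1");
      	print $template->guid() . "\n";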
  22. 05 May, 2006 1 commit
  23. 30 Mar, 2006 1 commit
  24. 28 Mar, 2006 1 commit
  25. 22 Feb, 2006 1 commit
  26. 07 Feb, 2006 1 commit
  27. 26 Jan, 2006 1 commit
    • Kevin Atkinson authored · 05015359
      Merged in changes from tblog-2-branch:
      
                Move parts of libtblog into libtblog_simple.  libtblog_simple
                provides the basic logging functions but doesn't touch
                anything.  Moreover, including libtblog_simple doesn't
                automatically start the logging subsystem.  It also doesn't
                have testbed dependencies, which means 1) it can be used in
                the core testbed libraries (such as libdb, libtestbed)
                without introducing a circular dependency and 2) it can be
                used independently.

                Reworked DBFatal and DBWarn to use tblog.  It will still
                email testbed-ops, however.

                Make use of the "cause" field to determine the cause of the
                bug.  In particular, tblog_find_error will look at the value
                of this field and report the "cause".  In the future,
                different actions can be taken based on the ultimate "cause"
                of the bug, such as if testbed-ops should be notified.

                Changed the format of error messages reported by libtblog,
                as per the email "Format of Error Messages" to testbed-dev.

                Have libtblog use its own database handle to avoid problems
                with locked tables.

                Also set DBCONN_MAXTRIES to 3 for the most important
                queries.  For queries that are not important, don't send
                mail on error.
  28. 23 Jan, 2006 1 commit
    • Timothy Stack authored · add602df
      Parse the NS file with the real NS parser so we can make sure linktest is
      doing the "right" thing.
      
      	* configure, configure.in: Add tbsetup/nsverify files.
      
      	* tbsetup/GNUmakefile.in: Add nsverify subdir.
      
      	* tbsetup/tbprerun.in: Run verify-ns on the experiment's NS file.
      
      	* tbsetup/ns2ir/nstb_compat.tcl: Bring up-to-date with the current
      	world.
      
      	* tbsetup/nsverify/GNUmakefile.in: Makefile.
      
      	* tbsetup/nsverify/ns-2.27.patch: Patch file for NS version 2.27.
      
      	* tbsetup/nsverify/nstbparse.in: Wrapper for the NS parser.
      
      	* tbsetup/nsverify/tb_compat.tcl: Different version of
      	tb_compat.tcl that is used to verify linktest parameters.
      
      	* tbsetup/nsverify/verify-ns.in: Script that runs on boss and
      	verifies that the testbed parser worked correctly.
      
      	* tbsetup/ns2ir/parse-ns.in, tbsetup/ns2ir/parse.proxy.in: Tweaked
      	a bit so parse.proxy can be used to run the regular NS parser in
      	addition to the testbed one.
  29. 05 Jan, 2006 1 commit
  30. 02 Jan, 2006 1 commit
    • Timothy Stack authored · bd20dd17
      First cut at a daemon that does regular checkups of the testbed
      hardware/software.
      
      	* configure, configure.in: Add tbsetup/checkup directory.
      
      	* db/audit.in: Add a listing of stuck checkups.
      
      	* install/boss-install.in: Add 'elabckup' user.
      
      	* rc.d/3.testbed.sh.in: Startup the checkup_daemon.
      
      	* sql/database-create.sql, sql/database-migrate.txt: Add the
      	checkups tables.
      
      	* tbsetup/GNUmakefile.in: Descend into the checkup directory.
      
      	* tbsetup/checkup: The checkup daemon, man page, and
      	  associated scripts.
      
      	* tbsetup/ptopgen.in: Add a feature with a value of 0.9 to
      	  prereserved nodes to keep them from being allocated unless
      	  they're really wanted.
      
      	* utils/firstuser.in: Add some other options so the script can be
      	  used to create other pseudo users.
  31. 19 Dec, 2005 1 commit
    • Kevin Atkinson authored · 45f997fd
      Updates to the Error Logging API code.
      
      You should start seeing much better error messages coming from my
      system.  Errors coming from parse.proxy and assign (the two most
      frequent sources of errors) should now be concise and to the point.
      Errors coming from libosload/libreboot (the next most frequent source
      of errors) should now also be much better, but not perfect.  Getting
      perfect errors will likely require a rework of how errors are handled in
      libosload/libreboot; just adding tberror/tbwarn/tbnotice calls is not
      enough.  I can do this at a later date if necessary.
      
      A few minor database changes.
      
      Some changes to the API.  A few bug fixes. Lots of tberror/tbwarn/tbnotice
      added to scripts.
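
      For reference, this is how those calls read in a script (the import and
      exact semantics here are assumed, per the libtblog split described above):

      	use libtblog;

      	tbnotice "os setup proceeding";
      	tbwarn "node pc10 slow to report in";
      	tberror "node pc10 wedged";  # feeds the cause that tblog_find_error reports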
      
      Since assign is a C program, and at this time my API is perl only, I wrote a
      second wrapper around assign, assign_wrapper2.  When assign fails, errors are
      now parsed in assign_wrapper2, sent to stderr, and logged.  This means that
      RunAssign() just returns when assign fails rather than echoing some of the
      assign.log output and then quitting.  The output to the activity log remains
      unchanged.
      
      Since "parse.proxy" is run from ops I couldn't use my API in it, even though
      it is a perl program.  Instead I parse the errors coming from it in
      parse-ns.
  32. 15 Dec, 2005 1 commit
  33. 28 Nov, 2005 1 commit
  34. 17 Nov, 2005 1 commit
    • Mike Hibler authored · 4ec701e7
      1. Beef up "admin mode" support.
      * Add libadminmfs.pm with routines for entering, exiting, and executing
        commands in the admin MFS.  Node admin and firewall swapout (see
        below) now use this; the image creation process does not yet.
      
      * Add swapout time hooks for running an admin mode process, likely to
        be used to collect swapout time state.  Currently controlled globally
        by two new sitevars.
      
      * Modified node_admin to use the library and added a "-c <command>"
        option to have nodes go into admin mode and run a command.  I don't
        really expect this to be useful; it was just a testing vehicle for
        the library.
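
        Hypothetical usage, assuming the existing on/off node_admin syntax:

      	  node_admin -c 'df -k' on pc41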
      
      2. Improved the swapout process for firewalled experiments.  Largely
         just generalized what we already did for panic'ed experiments.
         At swapout, firewalled nodes are:
      
         - powered off
         - set to boot into admin mode and run a disk zapper
         - powered on
      
        The swapout process then waits for all nodes to successfully complete
        disk zapage, at which point the nodes are nfree'ed as usual.  Any
        failure of the above process marks the experiment as panic'ed (to
        ensure that we are involved in cleanup) and sends mail to testbed-ops
        describing the state of the nodes.
      
      3. Added the aforementioned disk zapper, a little C program in the MFS
         which zeroes out the MBR and partition boot blocks (but not the MBR
         partition table or FS superblocks).  This is added insurance that if
         a node somehow gets diverted after being nfree'd but before getting
         the disk reloaded (e.g., goes to hwdown), that we cannot accidentally
         boot from the disk.  This program gets installed in the admin MFS.
      
      4. Related to firewalls, modified swapin to use the new documented
         "snmpit -N" to get the firewall VLAN number rather than parsing the
         output that was a side-effect of VLAN creation.
  35. 04 Nov, 2005 1 commit
  36. 20 Oct, 2005 1 commit
    • Kirk Webb authored · 5326988f
      New node_attributes facility and table.
      
      Auxiliary node attributes, such as service tag #, BIOS version, etc.,
      should now be placed into the node_attributes table.  This can be accomplished
      by either using the node_attributes command line tool, or by using the
      modnodeattributes_form.php3 form (not linked in anywhere yet, but will be
      in a moment).  Attribute names and values are checked for sanity using
      table_regex entries.  Also note that I started with the nodecontrol stuff
      as a template.
      
      The command line tool and web form (which simply calls the command line tool
      to actually do the modifications) can add, modify, and/or remove attributes.
      
      Finally, note that the bios_version column has been moved from the nodes
      table to the node_attributes table.  The Node Information page will show
      the list of current attributes at the bottom of the info table.
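
      A hypothetical invocation (flag syntax assumed, by analogy with other
      testbed command line tools):

      	node_attributes -a service_tag=7XQ4B21 -a bios_version=A07 pc41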
  37. 19 Sep, 2005 1 commit
    • Leigh B. Stoller authored · cfba1ac7
      Move all modification of the group_membership table to the backend,
      into a single new script called modgroups. Usage:
      
      	modgroups [-a pid:gid:trust[,pid:gid:trust]...]
                        [-m pid:gid:trust[,pid:gid:trust]...]
                        [-r pid:gid[,pid:gid]...] user
      
      So, -a to add groups, -r to remove groups, and -m to modify the trust
      value for a member of a group.
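
      For example (trust names assumed to be the usual group_membership
      values, e.g. user, local_root, group_root):

      	modgroups -a testbed:mygroup:user stoller
      	modgroups -m testbed:mygroup:local_root stoller
      	modgroups -r testbed:mygroup stoller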
      
      The reason for doing this is that previously, we had no idea in the
      backend what group changes actually happened; we just knew what the
      current groups were. This made it hard to add and remove users from
      mailing lists, chat server buddy lists, etc. This is cleaner ...