- 09 Jul, 2007 1 commit
-
-
Leigh B. Stoller authored
"rtag" directive to initiate template modify operations.

To get started, do a checkout:

    cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo checkout XXXXX

where XXXXX is the part of the guid (10000/1) before the slash. Might try to roll all templates into a single project-wide repo at some point, to avoid the extraneous path stuff, but didn't want to worry about that just yet.

Okay, so you have a checkout. You can work along the trunk, doing commits. To create a new template (a modify of the existing template), tag the tree using rtag:

    cvs -d ops.emulab.net:/proj/$pid/templates/XXXXX/cvsrepo rtag mytag XXXXX

A template modify is started at the end, and you should probably wait for email before continuing. Eventually I will need to add locking of some kind, but I have to do the modify in the background, or else I get deadlock because cvs keeps the repo locked, and the modify also needs to access it.

Each time you tag along the trunk, you get a modified template, which in the history diagram looks like:

    10000/1 --> 10000/2 --> 10000/3 ...

If you want to branch, say at 10000/2, you can create a branch tag using rtag:

    cvs -d [cut] rtag -r T10000/2 -b mytag2 XXXXX

You can also use your own tags for the -r option, but I also create a TXXXXX/YY tag at each template modify, which is easy to remember. Then update your sandbox to the new branch, commit changes along that branch, and later use rtag again to initiate a template modify operation:

    cvs update -r mytag2
    cvs commit ...
    cvs -d [cut] rtag -r mytag2 mytag3 XXXXX

And now the history diagram looks like:

    10000/1 --> 10000/2 --> 10000/3 ...
                  |
                  -> 10000/4 ...

You should be able to mix interaction via the web with interaction via the cvs interface. I've tested it, although not extensively.
-
- 24 May, 2007 1 commit
-
-
Leigh B. Stoller authored
-
- 23 May, 2007 1 commit
-
-
Leigh B. Stoller authored
described is the one exported to ops via the XMLRPC interface. This is just playing around; no doubt this stuff is going to change.

* template_checkout guid/vers

  Checkout a copy of the template to the current working directory.

* template_commit

  Modify the previous template checkout, using the nsfile contained in the tbdata directory (subdir of the current directory). In other words, the current template is modified, creating a new template in the current working directory (the current directory refers to the new template). The datastore subdir is imported into the new template, but that is the only directory that is imported at present. Might change that.

So this sounds much cooler than it really is. Why?

* This only works from ops.
* The "current directory" must be one of the standard approved directories (/proj, /users, /groups).
* Because boss reads and writes that directory via NFS, as told to it by the xmlrpc client.

At some point in the future it would be nice to support something fancier, using a custom transport, but let's see how this goes.
-
- 21 May, 2007 1 commit
-
-
Mike Hibler authored
e.g., for IXP-laden machines.
-
- 15 May, 2007 2 commits
-
-
Leigh B. Stoller authored
-
Leigh B. Stoller authored
* Records are now "held open" when a run is stopped. When the next run is started, a check is made to see if the files (/project/$pid/exp/$eid) have changed, and if so a new version of the archive is committed before the next run is started.

* Change the way swapmod is handled within an instance. There is a new option on the ShowExp page called Modify Resources. The intent is to allow an instance to be modified without having to start and stop runs, which tends to clutter things up, according to our user base. So, if you are within a run, that run is reset (reused) after the swapmod is finished. You can do this as many times as you like. If you are between runs (the last operation was a stoprun), do the swapmod and then "speculatively" start a new run. Subsequent modifies reuse that run again, as above. I think this is what Kevin was after ... there are some UI issues that may need to be resolved; will wait to hear what people have to say.

* Revising a record is now supported. Export, change in place, and then use the Revise link on the ShowRun page. Currently this has to happen from the export directory on ops, but eventually allow an upload (to correspond to downloaded exports).

* Check to see if an export already exists, and give a warning. Added a checkbox that allows the user to overwrite the export.

* A bunch of minor UI changes to the various template pages.
-
- 26 Apr, 2007 1 commit
-
-
Leigh B. Stoller authored
-
- 25 Apr, 2007 1 commit
-
-
Leigh B. Stoller authored
-
- 30 Mar, 2007 1 commit
-
-
Robert Ricci authored
"I implemented and tested extensions to snmpit & friends so that an elabinelab could additionally request that an experimental interface be placed in trunked mode, discover the vlan tags associated with vlans, and request modifications to existing vlans belonging to an elabinelab without tearing it down and reconstructing it."
-
- 23 Mar, 2007 1 commit
-
-
Leigh B. Stoller authored
-
- 06 Mar, 2007 1 commit
-
-
Leigh B. Stoller authored
indexed by exptidx. I also got the last of the pid and pid,gid tables.
-
- 15 Feb, 2007 1 commit
-
-
Leigh B. Stoller authored
Also remove guid/version argument requirement for template_export.
-
- 25 Jan, 2007 1 commit
-
-
Leigh B. Stoller authored
swapin a couple of months ago; a template can now be instantiated with the "preload" (-p) option so that you can look at it, and then later you can template_swapin that instance (as is).
-
- 20 Oct, 2006 1 commit
-
-
Mike Hibler authored
Two-day boondoggle to support "/scratch", an optional large, shared filesystem for users. To do this, I needed to find all the instances where /proj is used and behave accordingly. The boondoggle part was the decision to gather up all the hardwired instances of shared directory names ("/proj", "/users", etc.) so that they are set in a common place (via unexposed configure variables). This is a boondoggle because:

1. I didn't change the client-side scripts. They need a different mechanism (e.g., tmcd) to get the info; configure is the wrong way.

2. Even if I had done #1, it is likely--no, certain--that something would fail if you tried to rename "/proj" to be "/mike". These names are just too ingrained.

3. We may not even use "/scratch" as it turns out.

Note, I also didn't fix any of the .html documentation. Anyway, it is done. To maintain my illusion in the future you should:

1. Have perl scripts include "use libtestbed" and use the defined PROJROOT(), et al. functions where possible. If not possible, make sure they run through configure and use @PROJROOT_DIR@, etc.

2. Use the configure method for python, C, php and other languages.

3. There are perl (TBValidUserDir) and php (VALIDUSERPATH) functions which you should call to determine if an NS file, template parameter, tarball or other file is in "an acceptable location." Use these functions where possible; they know about the optional "scratch" filesystem. Note that the perl function is over-engineered to handle cases that don't occur in nature.
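The "acceptable location" check in item 3 boils down to asking whether a normalized path lives under one of the approved shared filesystems. A minimal Python sketch of the idea (the real functions are the perl TBValidUserDir and php VALIDUSERPATH; the names and exact directory list here are illustrative):

```python
import os.path

# Shared filesystems where user-supplied files may legitimately live.
# "/scratch" is optional, mirroring the configure-time setting.
APPROVED_DIRS = ["/proj", "/users", "/groups", "/scratch"]

def valid_user_path(path):
    """Return True if path lies under one of the approved shared filesystems."""
    # Normalize first so "/proj/../etc/passwd" cannot sneak through.
    norm = os.path.normpath(path)
    return any(norm == d or norm.startswith(d + "/") for d in APPROVED_DIRS)
```

The real perl version handles more edge cases (symlinks, trailing slashes, and so on), which is presumably the over-engineering referred to above.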
-
- 12 Oct, 2006 1 commit
-
-
Leigh B. Stoller authored
(initial) parameters for a new run. Three choices right now: from the template itself, from the instance, or from the previous run. On the web interface this is presented as three buttons. On ops, it is the -y option, which takes one of template, instance, or lastrun as its argument (you can of course combine the -y option with an XML file to override specific params). At present, there is no default. Let's give it a chance to sink in before I pick something that will annoy 50% of the people 75% of the time.
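The selection logic amounts to picking a baseline parameter set and layering any XML-supplied overrides on top. A small Python sketch of that idea (function and variable names here are illustrative, not the actual implementation):

```python
def initial_run_params(source, template, instance, lastrun, overrides=None):
    """Pick the baseline parameter set for a new run, then apply overrides.

    source is one of "template", "instance", or "lastrun", mirroring the
    -y option; overrides models specific params from an XML file.
    """
    baselines = {"template": template, "instance": instance, "lastrun": lastrun}
    if source not in baselines:
        # There is no default; the caller must choose explicitly.
        raise ValueError("no default source; one must be chosen")
    params = dict(baselines[source])   # copy so the baseline is not mutated
    params.update(overrides or {})     # XML file wins for specific params
    return params
```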
-
- 29 Sep, 2006 2 commits
-
-
Kirk Webb authored
Add the ability to specify the set of nodes you are interested in to the node.getlist() function. Do this by adding a "nodes=<node_id>[,<node_id>...]" argument.
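On the server side this amounts to filtering the full node list by the comma-separated node_ids. An illustrative Python sketch of the filtering (not the actual RPC code; the data shapes here are assumptions):

```python
def getlist(all_nodes, nodes=None):
    """Return info for all nodes, or only those named in the nodes argument.

    nodes models the "nodes=<node_id>[,<node_id>...]" argument; unknown
    node_ids are silently skipped in this sketch.
    """
    if nodes is None:
        return dict(all_nodes)
    wanted = nodes.split(",")
    return {n: all_nodes[n] for n in wanted if n in all_nodes}
```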
-
Mike Hibler authored
-
- 28 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
the pid,eid and passing that to the XMLRPC server to be mapped to the template.
-
- 26 Sep, 2006 2 commits
-
-
Mike Hibler authored
-
Leigh B. Stoller authored
the stop, which are caused by the remote agents not doing what they are supposed to do.
-
- 20 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
doing a Start Run. On the web page, there is a new checkbox, and on ops, template_startrun takes a new -m option.

Caveat: You cannot specify a new NS file yet. The original file is reparsed, and the idea is that a change in the template parameters will result in a change to the topology. I will add the ability to specify a new NS file in the next revision of this change. If you really, really want to change the NS file, go to /proj/$pid/exp/$eid/archive/nsdata and edit nsfile.ns ...

In addition, DATASTORE is now defined while parsing the NS file. This turned out to be quite the headache!
-
- 15 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
-
- 13 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
to make stoprun waiting work correctly. When tevc is invoked with the -w (wait for completion) option, tevc generates a token to put into the notification. The event scheduler will not generate a new token if there is already one in the notification, but instead pass it on. For the specific case of stoprun, the simulator agent has to pass that token along to boss and template_exprun, which generates the completion event (for reasons discussed in the prior commit message).
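The token pass-through rule above can be modeled in a few lines of Python (a sketch of the scheduler's behavior, not the actual event-system code; the notification shape is an assumption):

```python
import uuid

def schedule_token(notification):
    """Reuse an existing completion token if present, else mint one.

    Models the rule: tevc -w puts a token into the notification, and the
    scheduler must pass it through unchanged so the waiter can match the
    eventual completion event against it.
    """
    if notification.get("token") is None:
        notification["token"] = uuid.uuid4().hex  # scheduler-generated
    return notification["token"]
```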
-
- 12 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
it got more complicated as it progressed. The bulk of the change was changing template_exprun so that it can take a pid/eid as an alternative to eid/guid. This is a big convenience since it's easy to find the template from a running experiment, and it makes it possible to invoke from the event scheduler, which has never heard of a template before (and it's not something I wanted to teach it about). It's also easier on users.

Anyway, back to the stoprun event. You can now do this:

    $ns at 100 "$ns stoprun"

or

    tevc -e pid/eid now ns stoprun

You can add the -w option to wait for the completion event that is sent, but this brings me to the glaring problems with this whole thing.

* First, the scheduler has to fire off the stoprun in the background, since if it waits, we get deadlock. Why? Because the implementation of stoprun uses the event system (SNAPSHOT event, other things), and if the scheduler is sitting and waiting, nothing happens. Okay, the solution to this was to generate a COMPLETION event from template_exprun once the stop operation is complete. This brings me to the second problem ...

* Worse is that the "ns" events that are sent to implement stoprun (like snapshot) send their own completion events, and that confuses anyone waiting on the original stoprun event (it returns early).

So what to do about this? There is a "token" field in the completion event structure, which I presume is to allow you to match things up. But there is no way to set this token using tevc (and then wait for it), and besides, the event scheduler makes them up anyway and sticks them into the event. So, the seeds of a fix are already germinating in my mind, but I wanted to get this commit in so that Mike would have fun reading this commit log.
-
- 10 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
so that users can schedule program events to run there. For example:

    set myprog [new Program $ns]
    $myprog set node "ops"
    $myprog set command "/usr/bin/env >& /tmp/foo"
    $ns at 10 "$myprog start"

or

    tevc -e pid/eid now myprog start

Since the program agent cannot talk to tmcd from ops, there are new routines to create the config files that the program agent uses, in the experiment tbdata directory.

I also rewrote the eventsys.proxy script that starts the event scheduler on ops; I rolled the startup of the program agent into this script, via a new -a option which is passed over from boss when an ops program agent is detected in the virt topology. This keeps the number of new processes on ops to a small number.

Also part of the above rewrite is that we now catch when the event scheduler (or the program agent) exits abnormally, sending email to tbops and the swapper of the experiment. We have been seeing abnormal exits of the scheduler, and it would be good to detect them and see if we can figure out what is going wrong.

Other small bug fixes in experiment run.
-
- 05 Sep, 2006 1 commit
-
-
Leigh B. Stoller authored
* Add XMLRPC interface for template swapin, stoprun, startrun, and swapout, and add the appropriate wrappers to the script_wrapper on ops.

* Allow parameter descriptions in NS files. This is probably not in its final form, since it's a bit confusing as to what has priority: something in the NS file or a metadata item. Anyway, you can do this in your NS file:

    $ns define-template-parameter GUID "0/0" "The GUID to be analyzed"

  The rules are currently that the NS file description has priority and is copied to child templates, unless the user has modified a description via the web interface, in which case the NS file description is ignored. I know, sounds awful, but for the most part people are going to use the NS file anyway.

* Add a "clear" option when starting a new experiment run; the per-experiment DB at the logholes is cleared. Note that this is *not* the default behaviour; you have to either check the checkbox on the web form, use the -c option to the script wrapper, or pass clear=yes if talking directly to the XMLRPC server.

* Fix up how email is generated for template_swapin and template_create, so that Kevin can debug tblog/tbreport stuff, but also so that we maintain mail logs as before. I have made some improvements to libaudit so as to centralize the mail goo and avoid duplicating all that stuff.

* Minor fixes to the program agent so that the new environment strings are sent before the program agent exits and reloads them!

* Other minor little things.
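The description-priority rule described above can be modeled in a few lines (an illustrative Python sketch; the actual logic lives in the template code and may differ in detail):

```python
def effective_description(ns_desc, web_desc, user_modified):
    """NS-file description wins, unless the user edited one via the web.

    ns_desc comes from define-template-parameter; web_desc is the
    metadata item; user_modified records a web-interface edit.
    """
    if user_modified and web_desc is not None:
        return web_desc            # web edit overrides the NS file
    return ns_desc if ns_desc is not None else web_desc
```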
-
- 31 Aug, 2006 1 commit
-
-
Leigh B. Stoller authored
* Export the above via the XMLRPC interface and add a wrapper function to the script_wrapper. This allows you to do this on ops:

    cd /proj/testbed/templates/10023/1
    Edit some files
    template_commit

  which creates a new template, using the current directory to infer the template. Otherwise, provide the template GUID on the command line. Hmm, maybe this should be called template_modify? Either way, the name does not quite match.

* Export template_export via the XMLRPC wrapper. This allows you to export a template (instance) record from the command line on ops:

    cd /proj/testbed/templates/10023/1
    template_export -i 12
    Exported to /proj/testbed/export/10000/3/12

  which exports the template record for instance number 12. Again, the GUID is inferred, but you can specify one on the command line. The export directory is printed so you know where it went. Note that export does *not* populate a DB on ops with the old DB data.
-
- 14 Aug, 2006 1 commit
-
-
Leigh B. Stoller authored
draft is that the user will, at the end of an experiment run, log into one of his nodes and perform some analysis which is intended to be repeated at the end of the next run, and in future instantiations of the template.

A new table called experiment_template_events holds the dynamic events for the template. Right now I am supporting just program events, but it will be easy to support arbitrary events later. As an absurd example:

    node6> /usr/local/bin/template_analyze ~/data_analyze arg arg ...

The user is currently responsible for making sure the output goes into a file in the archive. I plan to make the template_analyze wrapper handle that automatically later, but for now what you really want is to invoke a script that encapsulates that, redirecting output to $ARCHIVE (this variable is installed in the environment by template_analyze).

The wrapper script will save the current time and then run the program. If the program terminates with a zero exit status, it will ssh over to ops and invoke an xmlrpc routine to tell boss to add a program event to both the eventlist for the current instance, and to the template_eventlist for future instances. The time of the event is the relative start time that was saved above (remember, each experiment run replays the event stream from time zero).

For the future, we want to allow this to be done on ops as well, but that will take more infrastructure, to run "program agents" on ops. It would be nice to install the ssl xmlrpc client side on our images so that we do not have to ssh to ops to invoke the client.
-
- 09 Aug, 2006 1 commit
-
-
Leigh B. Stoller authored
-
- 03 Aug, 2006 1 commit
-
-
Leigh B. Stoller authored
into per-experiment databases on ops. Additional support for reconstituting those databases back into temporary databases on ops, for post processing.

* This revision relies on the "snort" port (/usr/ports/security/snort) to read the pcap files and load them into a database. The schema is probably not ideal, but it's better than nothing. See the file ops:/usr/local/share/examples/snort/create_mysql for the schema.

* For simplicity, I have hooked into loghole, which already had all the code for downloading the trace data. I added some new methods to the XMLRPC server for loghole to use, to get the user's DB password and the name of the per-experiment database. There is a new slot in the traces table that indicates that the trace should be snorted to its DB. In case you forgot, at the end of a run or when the instance is swapped out, loghole is run to download the trace data.

* For reconstituting, there are lots of additions to opsdb_control and opsdb_control.proxy to create "temporary" databases and load them from a dump file that is stored in the archive. I've added a button to the Template Record page, inappropriately called "Analyze", since right now all it does is reconstitute the trace data into a DB on ops. Currently, the only indication of what has been done (the name of the DBs created on ops) is the log email that the user gets. A future project is to tell the user this info in the web interface.

* To turn on database capturing of trace data, do this in your NS file:

    set link0 ...
    $link0 trace
    $link0 trace_snaplen 128
    $link0 trace_db 1

  The increase in snaplen is optional, but a good idea if you want snort to understand more than just ip headers.

* Also some changes to the parser to allow plain experiments to take advantage of all this stuff. To simply get yourself a per-experiment DB, put this in your NS file:

    tb-set-dpdb 1

  However, anytime you turn trace_db on for a link or lan, you automatically get a per-experiment DB.

* To capture the trace data to the DB, you can run loghole by hand:

    loghole sync -s

  The -s option turns on the "post-process" phase of loghole.
-
- 03 Jul, 2006 1 commit
-
-
Mike Hibler authored
Actually, most of the changes here were just to generalize the "virtual interface" state in the DB. Other than the client-side scripts, there is very little here specific to tagged VLANs. In fact, you cannot specify "vlan" as a type yet, as we haven't done the snmpit support for setting up the switches. For more info see bas:~mike/flux/doc/testbed-virtinterfaces.txt, which I will integrate into the knowledge base and the Emulab doc at some point.
-
- 27 Jun, 2006 1 commit
-
-
Leigh B. Stoller authored
wrong).
-
- 07 Jun, 2006 1 commit
-
-
Leigh B. Stoller authored
collab_password). I'm using this from the event scheduler so it can access the per-experiment DB to store the event trace. I suppose it would have been easier to stick them in a file?
-
- 04 May, 2006 1 commit
-
-
Timothy Stack authored
-
- 02 May, 2006 2 commits
-
-
Leigh B. Stoller authored
-
Leigh B. Stoller authored
does not handle. Just kill it from the response in osid.info; the caller is not interested.
-
- 15 Mar, 2006 2 commits
-
-
Timothy Stack authored
timeline to __ns_teardown so the evproxies unsubscribe from the experiment.
-
Timothy Stack authored
-
- 09 Feb, 2006 1 commit
-
-
Timothy Stack authored
-
- 26 Jan, 2006 1 commit
-
-
Timothy Stack authored
Some pelab/plab event system hacks.

* event/lib/event.h, event/lib/event.c: Add event_subscribe_auth, which lets you specify whether any authentication should be done for events received through this subscription.

* event/sched/event-sched.c: Handle EVPROXY objects. Add a separate subscription for EVPROXY UPDATE events for each plab pnode (which might be too many...). Also, need to update the EXPT field for events received through a noauth subscription so the proxies can figure out which experiments are active.

* lib/libtb/tbdefs.h, lib/libtb/tbdefs.c: Add UPDATE event defs.

* xmlrpc/emulabserver.py.in: Inject __plab_setup and __plab_teardown timelines into the eventlist when an experiment has plab nodes. The __plab_setup timeline sends EVPROXY UPDATE events to each physical node, while __plab_teardown sends EVPROXY CLEAR events. The __plab_setup timeline is run when the scheduler starts up; __plab_teardown isn't run automatically yet.
-