Here is a slightly more complex example demonstrating the use of RED queues, traffic generation, and the event system. Where possible, we adhere to the syntax and operational model of ns-2, as described in the NS manual.
What follows is a sample NS file that
demonstrates the above features, with annotations where appropriate.
First, we define the two nodes in the topology:
set nodeA [$ns node]
set nodeB [$ns node]
Next, define a duplex link between nodes nodeA and nodeB. Instead of a
standard DropTail link, it is declared to be a Random Early Detection
(RED) link. While this simple two-node topology is obviously contrived,
it allows us to ignore routing issues within this example.
set link0 [$ns duplex-link $nodeA $nodeB 100Mb 0ms RED]
Each link has an NS "Queue" object associated with it, which you
can modify to suit your needs. (Currently, there are two queue
objects per duplex link, one for each direction. You must set the
parameters for both directions, which means you can set them
asymmetrically if you want.) The following parameters can be
changed, and are defined in the NS manual (see Section 7.3).
The maximum limit_ for a queue is 1 megabyte if it is
specified in bytes, or 100 slots if it is specified in slots (each
slot is 1500 bytes).
set queue0 [[$ns link $nodeA $nodeB] queue]
$queue0 set gentle_ 0
$queue0 set red_ 0
$queue0 set queue-in-bytes_ 0
$queue0 set limit_ 50
$queue0 set maxthresh_ 15
$queue0 set thresh_ 5
$queue0 set linterm_ 10
$queue0 set q_weight_ 0.002
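Since each direction of a duplex link has its own queue object, the two directions can be configured asymmetrically. Here is a sketch of configuring the reverse (nodeB to nodeA) direction, assuming that setting queue-in-bytes_ to 1 makes limit_ count bytes rather than slots; the values are arbitrary, for illustration only:
set queue1 [[$ns link $nodeB $nodeA] queue]
$queue1 set queue-in-bytes_ 1
$queue1 set limit_ 500000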
In the case of a LAN, there is a single queue object for every node
that is a member of the LAN, and it refers to the node-to-LAN direction.
The only special case is a 100Mb 0ms LAN that does not use
end node shaping; no queue object is available in that case.
Here is an example that illustrates how to get handles on the queue
objects of a LAN so as to change their parameters:
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set lan0 [$ns make-lan "$n0 $n1 $n2" 100Mb 0ms]
set q0 [[$ns lanlink $lan0 $n0] queue]
set q1 [[$ns lanlink $lan0 $n1] queue]
set q2 [[$ns lanlink $lan0 $n2] queue]
$q0 set limit_ 20
...
A UDP agent is created and attached to nodeA, then a CBR traffic
generator application is created, and attached to the UDP agent:
set udp0 [new Agent/UDP]
$ns attach-agent $nodeA $udp0
set cbr0 [new Application/Traffic/CBR]
$cbr0 set packetSize_ 500
$cbr0 set interval_ 0.005
$cbr0 attach-agent $udp0
A TCP agent is created and also attached to nodeA, then a second CBR
traffic generator application is created, and attached to the TCP
agent:
set tcp0 [new Agent/TCP]
$ns attach-agent $nodeA $tcp0
set cbr1 [new Application/Traffic/CBR]
$cbr1 set packetSize_ 500
$cbr1 set interval_ 0.005
$cbr1 attach-agent $tcp0
You must define traffic sinks for each of the traffic generators
created above. The sinks are attached to nodeB:
set null0 [new Agent/Null]
$ns attach-agent $nodeB $null0
set null1 [new Agent/TCPSink]
$ns attach-agent $nodeB $null1
Then you must connect the traffic generators on nodeA to the traffic sinks
on nodeB:
$ns connect $udp0 $null0
$ns connect $tcp0 $null1
Lastly, a set of events to control your applications and link
characteristics:
$ns at 60.0 "$cbr0 start"
$ns at 70.0 "$link0 bandwidth 10Mb duplex"
$ns at 80.0 "$link0 delay 10ms"
$ns at 90.0 "$link0 plr 0.05"
$ns at 100.0 "$link0 down"
$ns at 110.0 "$link0 up"
$ns at 115.0 "$cbr0 stop"
$ns at 120.0 "$cbr1 start"
$ns at 130.0 "$cbr1 set packetSize_ 512"
$ns at 130.0 "$cbr1 set interval_ 0.01"
$ns at 140.0 "$link0 down"
$ns at 150.0 "$link0 up"
$ns at 160.0 "$cbr1 stop"
When you receive email containing the experiment setup information (as
described in Beginning an
Experiment), you will notice an additional section that gives a
summary of the events that will be delivered during your experiment:
Event Summary:
--------------
Event count: 18
First event: 60.000 seconds
Last event: 160.000 seconds
You can get a full listing of the events for your experiment by
checking the 'Details' pane on the 'Show Experiment' page for your
experiment. This report will include a section like this:
Event List:
Time Node Agent Type Event Arguments
------------ ------------ ---------- ---------- ---------- ------------
60.000 nodeA cbr0 TRAFGEN START PACKETSIZE=500
RATE=100000
INTERVAL=0.005
70.000 tbsdelay0 link0 LINK MODIFY BANDWIDTH=10000
80.000 tbsdelay0 link0 LINK MODIFY DELAY=10ms
90.000 tbsdelay0 link0 LINK MODIFY PLR=0.05
100.000 tbsdelay0 link0 LINK DOWN
110.000 tbsdelay0 link0 LINK UP
115.000 nodeA cbr0 TRAFGEN STOP
120.000 nodeA cbr1 TRAFGEN START PACKETSIZE=500
RATE=100000
INTERVAL=0.005
130.000 nodeA cbr1 TRAFGEN MODIFY PACKETSIZE=512
130.000 nodeA cbr1 TRAFGEN MODIFY INTERVAL=0.01
140.000 tbsdelay0 link0 LINK DOWN
150.000 tbsdelay0 link0 LINK UP
160.000 nodeA cbr1 TRAFGEN STOP
The above list represents the set of events for your experiment; they are stored in the Emulab database. When your experiment is swapped in, an event scheduler is started that processes the list, sending each event at the specified time offset. To make sure that all of the nodes are actually rebooted and ready, the clock does not start ticking until every node has reported to the event system that it is ready. At present, events are restricted to system-level agents (Emulab traffic generators and delay nodes), but in the future we expect to provide an API that will allow experimenters to write their own event agents.
NS scripts give you the ability to schedule events dynamically; an NS
script is just a TCL program and the argument to the "at" command is
any valid TCL expression. This gives you great flexibility in a
simulated world, but alas, this cannot be supported in a practical
manner in the real world. Instead, we provide a way for you to inject
events into the system dynamically, but leave it up to you to script
those events in whatever manner you are most comfortable with, be it a
PERL script, or a shell script, or even another TCL script! Dynamic
event injection is accomplished via the Testbed Event Client
(tevc), which is installed on your experimental nodes and on
users.emulab.net. The command line syntax for tevc
is:
tevc -e proj/expt time objname event [args ...]
where the time parameter is one of:
- now
- +seconds (floating point or integer)
- [[[[yy]mm]dd]HH]MMss
For example, you could issue this sequence of events:
tevc -e testbed/myexp now cbr0 set interval_=0.2
tevc -e testbed/myexp +10 cbr0 start
tevc -e testbed/myexp +15 link0 down
tevc -e testbed/myexp +17 link0 up
tevc -e testbed/myexp +20 cbr0 stop
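The third form lets you give an absolute time. A sketch, assuming the timestamp is read as wall-clock time in HHMMss form (the time value here is purely illustrative):
tevc -e testbed/myexp 143000 link0 down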
Some points worth mentioning:
NS File: $ns at 3.0 "$link0 down"
tevc: tevc -e proj/expt +3.0 link0 down
In "ns" script: $link0 bandwidth 10Mb duplex $link0 delay 10ms $link0 plr 0.05 With "tevc": tevc ... link0 modify bandwidth=20000 # In kbits/second; 20000 = 20Mbps tevc ... link0 modify delay=10ms # In msecs (the "ms" is ignoredd) tevc ... link0 modify plr=0.1 Both: $link0 up $link0 down
Queue parameters can likewise be changed with events:
$queue0 set queue-in-bytes_ 0
$queue0 set limit_ 75
$queue0 set maxthresh_ 20
$queue0 set thresh_ 7
$queue0 set linterm_ 11
$queue0 set q_weight_ 0.004
As can traffic generator parameters:
$cbr0 start
$cbr0 set packetSize_ 512
$cbr0 set interval_ 0.01
$cbr0 set rate_ 10Mb
$cbr0 set iptos_ 16
$cbr0 stop
$ns at 100.0 "$link0 down"
$ns at 100.0 "$link1 down"
$ns at 100.0 "$link2 down"
$ns at 100.0 "$link3 down"
which works, but is somewhat verbose. It also presents a problem when
sending dynamic events with tevc from the shell:
tevc -e proj/expt now link0 down
tevc -e proj/expt now link1 down
tevc -e proj/expt now link2 down
tevc -e proj/expt now link3 down
These four events will be separated by many milliseconds as each call
to tevc requires forking a command from the shell, contacting boss,
sending the event to the event scheduler, etc.
Instead, you can create an event group, which allows a single event to operate on a set of objects:
set mylinks [new EventGroup $ns]
$mylinks add $link0 $link1 $link2 $link3
$ns at 60.0 "$mylinks down"
From the command line:
tevc -e proj/expt now mylinks down
Note that an object may be a member of more than one event group:
set mylinks1 [new EventGroup $ns]
set mylinks2 [new EventGroup $ns]
$mylinks1 add $link0 $link1 $link2 $link3
$mylinks2 add $link0 $link1 $link2 $link3
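A group is addressed just like any other event object, so you can target whichever grouping suits the moment; a brief sketch using the groups above, in both static and dynamic form:
$ns at 60.0 "$mylinks1 down"
tevc -e proj/expt now mylinks2 up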
We have added some extensions that allow you to use NS's at
syntax to invoke arbitrary commands on your experimental nodes. Once
you define a program object and initialize its command line and the
node on which the command should be run, you can schedule the command
to be started and stopped with NS at statements. To define a
program object:
set prog0 [$nodeA program-agent -command "/bin/ls -lt"]
set prog1 [$nodeB program-agent -command "/bin/sleep 60"]
Then, in your NS file, a set of static events to run these commands:
$ns at 10 "$prog0 start"
$ns at 20 "$prog1 start"
$ns at 30 "$prog1 stop"
If you want to schedule starts and stops using dynamic events:
tevc -e testbed/myexp now prog0 start
tevc -e testbed/myexp now prog1 start
tevc -e testbed/myexp +20 prog1 stop
If you want to change the command that is run (overriding the command
you specified in your NS file):
tevc -e testbed/myexp now prog0 start COMMAND='ls'
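Presumably COMMAND can carry a full command line, including arguments, just as the -command option does in the NS file; a sketch with an arbitrary command string:
tevc -e testbed/myexp now prog0 start COMMAND='ls -lt /local/logs'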
Emulab also supports tracing and monitoring of links and LANs. To trace a link, add a trace call after the link definition in your NS file:
set link0 [$ns duplex-link $nodeB $nodeA 30Mb 50ms DropTail]
$link0 trace
The default mode for tracing a link (or a lan) is to capture just the
packet headers (first 64 bytes of the packet) and store them to a
tcpdump output file. You may also view a realtime summary of packets
through the link, via the web interface, by going to the 'Show
Experiment' page for the experiment and clicking on the 'Link
Tracing/Monitoring' menu option. You may also control the link
tracing programs on the fly, pausing them, restarting them, and
killing them.
If you want to capture entire packets rather than just the headers:
$link0 trace packet
Or you may capture no packets at all, but simply gather the realtime
summary so that you can view it via the web interface:
$link0 trace monitor
$link0 trace monitor "icmp or tcp"
By default, the first 64 bytes of each packet are captured. To change the snaplen for the link:
$link0 trace_snaplen 128
The tracing parameters may also be set per direction via the link's queue object. For packets leaving nodeA:
[[$ns link $nodeA $nodeB] queue] set trace_type header
[[$ns link $nodeA $nodeB] queue] set trace_snaplen 128
[[$ns link $nodeA $nodeB] queue] set trace_expr "ip proto tcp"
To set the parameters for packets leaving nodeB, simply
reverse the arguments to the link statement:
[[$ns link $nodeB $nodeA] queue] set trace_snaplen 128
For a lan, the syntax is slightly different. Consider a lan called
lan0 with a node called nodeL on it:
[[$ns lanlink $lan0 $nodeL] queue] set trace_type header
[[$ns lanlink $lan0 $nodeL] queue] set trace_snaplen 128
[[$ns lanlink $lan0 $nodeL] queue] set trace_expr "ip proto tcp"
You may control the trace agents on the fly by sending them dynamic events with tevc:
tevc -e myproj/myexp now link0-tracemon snapshot
tevc -e myproj/myexp now link0-tracemon stop
tevc -e myproj/myexp now link0-tracemon start
And of course, you may use the NS "at" syntax to schedule static
events from your NS file:
$ns at 10 "$link0 trace stop"
$ns at 20 "$link0 trace start"
$ns at 30 "$link0 trace snapshot"
The trace files are stored in /local/logs on the delay node, named by the node and link. For link0 above, there will be four files:
- trace_nodeA-link0.xmit
- trace_nodeA-link0.recv
- trace_nodeB-link0.xmit
- trace_nodeB-link0.recv
The .recv files hold the packets that were sent by the node and received by the delay node. The .xmit files hold those packets that were transmitted by the delay node and received by the other side of the link. So, for packets sent from nodeA to nodeB, the packet arrives at the delay node and is recorded in trace_nodeA-link0.recv. Once the packet traverses the delay node (subject to Dummynet traffic shaping) and is about to be transmitted, it is recorded in trace_nodeA-link0.xmit. By comparing these two files, you can see how the Dummynet traffic shaping has affected your packets, in each direction. Note that even if you have not specified traffic shaping, you still get the same set of files; in that case, the .recv and .xmit files will be nearly identical, reflecting only the negligible propagation delay through the software bridge.
When you send a trace agent the snapshot event, the current set of files is closed and renamed with a .0 suffix:
- trace_nodeA-link0.xmit.0
- trace_nodeA-link0.recv.0
- trace_nodeB-link0.xmit.0
- trace_nodeB-link0.recv.0
and a new set of files is opened. Note that the files are not rolled; the next time you issue the snapshot command, the current set of files ending with .0 are lost.
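Since the trace files are stored in tcpdump format, you can read them back with tcpdump to compare the two directions of capture; for example, for packets sent by nodeA (the paths follow the naming convention above):
tcpdump -n -r /local/logs/trace_nodeA-link0.recv
tcpdump -n -r /local/logs/trace_nodeA-link0.xmit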
By default, tracing is done on the delay node for the link. To request that tracing be done on the endnodes instead:
$link0 trace_endnode 1
(Note that if a delay node does exist, it will be used for traffic capture
even if endnode tracing is specified.)
When tracing/monitoring is done on an endnode, the output files are
again stored in /local/logs, and are named by the link and
node name. The difference is that there is just a single
output file, for those packets leaving the node. Packets are
captured after traffic shaping has been applied.
Endnode tracing can also be used on PlanetLab nodes set up through
Emulab's PlanetLab portal. In this case, all packets sent or received
are recorded in a single .xmit file:
- trace_nodeA-link0.xmit