Copyright (c) 2000-2006 University of Utah and the Flux Group.
All rights reserved.
<h1>Emulab Tutorial - A More Advanced Example</h1>

Here is a slightly more complex example demonstrating the use of RED
queues, traffic generation, and the event system. Where possible, we
adhere to the syntax and operational model of
<a href="">ns-2</a>, as described in the
<a href="">NS manual</a>.

<li> <b>RED/GRED Queues</b>: In addition to normal DropTail links, Emulab
     supports the specification of RED and GRED (Gentle RED) links
     in your NS file. RED/GRED queuing is handled via the
     insertion of a traffic shaping delay node, in much the same way
     that bandwidth, delay, and packet loss are handled. For a better
     understanding of how we support traffic shaping, see the
     <tt>ipfw</tt> and <tt>dummynet</tt> man pages on
     <tt></tt>. It is important to note that Emulab
     supports a smaller set of tunable parameters than NS does; please
     read the aforementioned manual pages!

<li> <b>Traffic Generation</b>: Emulab supports Constant Bit Rate (CBR)
     traffic generation, in conjunction with either Agent/UDP or
     Agent/TCP agents. We currently use the
     <a href="">TG Tool Set</a> to generate traffic.

<li> <b>Traffic Generation using
     <a href="">NS Emulation (NSE)</a></b>: Emulab supports TCP traffic generation
     using NS's Agent/TCP/FullTcp, which is a BSD Reno derivative, and
     its subclasses, namely Newreno, Tahoe, and Sack. Currently two
     application classes are supported: Application/FTP and
     Application/Telnet. The former drives the FullTcp agent to send
     bulk data according to connection dynamics. The latter uses
     NS's <a href="">tcplib</a> telnet distribution for telnet-like data. For
     configuration parameters and commands allowed on the objects,
     refer to the NS documentation.

<li> <b>Event System</b>: Emulab supports limited use of the NS <em>at</em>
     syntax, allowing you to define a static set of events in your NS
     file, to be delivered to agents running on your nodes. There is
     also "dynamic events" support that can be used to inject events
     into the system on the fly, say from a script running on one of
     your nodes.

<li> <b>Program Objects</b>: Emulab has added extensions that allow you to
     run arbitrary programs on your nodes, starting and stopping them
     at any point during your experiment run.

<li> <b>Link Tracing and Monitoring</b>: Emulab supports simplified
      <a href="#Tracing">tracing and monitoring</a> of links and lans.


What follows is a <a href="advanced.ns">sample NS file</a> that
demonstrates the above features, with annotations where appropriate.
First we define the 2 nodes in the topology:
<pre><code>	set nodeA [$ns node]
	set nodeB [$ns node]</code></pre>

Next define a duplex link between nodes nodeA and nodeB. Instead of a
standard DropTail link, it is declared to be a Random Early Detection
(RED) link. While this is obviously contrived, it allows us to ignore
<a href="tutorial.php3#Routing">routing</a> issues within this
example:
<pre><code>	set link0 [$ns duplex-link $nodeA $nodeB 100Mb 0ms RED]</code></pre>

Each link has an NS "Queue" object associated with it, which you
can modify to suit your needs (<em>currently, there are two queue
objects per duplex link; one for each direction. You need to set the
parameters for both directions, which means you can set the parameters
asymmetrically if you want</em>). The following parameters can be
changed, and are defined in the NS manual (see Section 7.3).
<pre><code>	set queue0 [[$ns link $nodeA $nodeB] queue]
	$queue0 set gentle_ 0
	$queue0 set red_ 0
	$queue0 set queue-in-bytes_ 0
	$queue0 set limit_ 50
	$queue0 set maxthresh_ 15
	$queue0 set thresh_ 5
	$queue0 set linterm_ 10
	$queue0 set q_weight_ 0.002</code></pre>

<em>The maximum <tt>limit_</tt> for a queue is <b>1 megabyte</b> if it is
specified in bytes, or <b>100 slots</b> if it is specified in slots (each
slot is 1500 bytes).</em>
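For example, a sketch of switching <tt>queue0</tt> (from the example
above) into byte mode at the maximum allowed size; see the NS manual
for the exact semantics of these variables:
<pre><code>	$queue0 set queue-in-bytes_ 1
	$queue0 set limit_ 1000000</code></pre>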

In the case of a LAN, there is a single queue object for every node
that is a member of the LAN and it refers to the node-to-lan direction.
The only special case is a 100Mb 0ms LAN that does <b>not</b> use 
<a href="/doc/docwrapper.php3?docname=linkdelays.html#LINKDELAYS">end node shaping</a>.
No queue object is available in that case. Here is an example
that illustrates how to get handles on the queue objects of a LAN
so as to change the parameters:
<pre><code>	set n0 [$ns node]
	set n1 [$ns node]
	set n2 [$ns node]

	set lan0 [$ns make-lan "$n0 $n1 $n2" 100Mb 0ms]

	set q0 [[$ns lanlink $lan0 $n0] queue]
	set q1 [[$ns lanlink $lan0 $n1] queue]
	set q2 [[$ns lanlink $lan0 $n2] queue]

	$q0 set limit_ 20</code></pre>

A UDP agent is created and attached to nodeA, then a CBR traffic
generator application is created, and attached to the UDP agent:
<pre><code>	set udp0 [new Agent/UDP]
	$ns attach-agent $nodeA $udp0

	set cbr0 [new Application/Traffic/CBR]
	$cbr0 set packetSize_ 500
	$cbr0 set interval_ 0.005
	$cbr0 attach-agent $udp0</code></pre>

A TCP agent is created and also attached to nodeA, then a second CBR
traffic generator application is created, and attached to the TCP
agent:
<pre><code>	set tcp0 [new Agent/TCP]
	$ns attach-agent $nodeA $tcp0

	set cbr1 [new Application/Traffic/CBR]
	$cbr1 set packetSize_ 500
	$cbr1 set interval_ 0.005
	$cbr1 attach-agent $tcp0</code></pre>

You must define traffic sinks for each of the traffic generators
created above. The sinks are attached to nodeB:
<pre><code>	set null0 [new Agent/Null]
	$ns attach-agent $nodeB $null0

	set null1 [new Agent/TCPSink]
	$ns attach-agent $nodeB $null1</code></pre>

Then you must connect the traffic generators on nodeA to the traffic sinks
on nodeB:
<pre><code>	$ns connect $udp0 $null0
	$ns connect $tcp0 $null1</code></pre>

Here is a good example for NSE FullTcp traffic generation. The
following code snippet attaches an FTP agent that drives a Reno
FullTcp on NodeA:
<pre><code>	set tcpfull0 [new Agent/TCP/FullTcp]
	$ns attach-agent $nodeA $tcpfull0

	set ftp0 [new Application/FTP]
	$ftp0 attach-agent $tcpfull0</code></pre>

You must then define the sink FullTcp endpoint and call the method
"listen" making this agent wait for an incoming connection:
<pre><code>	set tcpfull1 [new Agent/TCP/FullTcp/Sack]
	$tcpfull1 listen
	$ns attach-agent $nodeB $tcpfull1</code></pre>


Like all other source-sink traffic generators, you need to connect
the two endpoints:
<pre><code>	$ns connect $tcpfull0 $tcpfull1</code></pre>


Lastly, a set of events to control your applications and link
characteristics:
<pre><code>	$ns at 60.0  "$cbr0  start"
	$ns at 70.0  "$link0 bandwidth 10Mb duplex"
	$ns at 80.0  "$link0 delay 10ms"
	$ns at 90.0  "$link0 plr 0.05"
	$ns at 100.0 "$link0 down"
	$ns at 110.0 "$link0 up"
	$ns at 115.0 "$cbr0  stop"

	$ns at 120.0 "$ftp0 start"
	$ns at 140.0 "$tcpfull0 set segsize_ 256; $tcpfull0 set segsperack_ 2"
	$ns at 145.0 "$tcpfull1 set nodelay_ true"
	$ns at 150.0 "$ftp0 stop"

	$ns at 120.0 "$cbr1  start"
	$ns at 130.0 "$cbr1  set packetSize_ 512"
	$ns at 130.0 "$cbr1  set interval_ 0.01"
	$ns at 140.0 "$link0 down"
	$ns at 150.0 "$cbr1  stop"</code></pre>


When you receive email containing the experiment setup information (as
described in <a href="tutorial.php3#Beginning">Beginning an
Experiment</a>), you will notice an additional section that gives a
summary of the events that will be delivered during your experiment:
<pre><code>Event Summary:
Event count: 18
First event: 60.000 seconds
Last event: 160.000 seconds</code></pre>

You can get a full listing of the events for your experiment by
checking the 'Details' pane on the 'Show Experiment' page for your
experiment. This report will include a section like this:
<pre><code>Event List:
Time         Node         Agent      Type       Event      Arguments
------------ ------------ ---------- ---------- ---------- ------------
60.000       nodeA        cbr0       TRAFGEN    START      PACKETSIZE=500
70.000       tbsdelay0    link0      LINK       MODIFY     BANDWIDTH=10000
80.000       tbsdelay0    link0      LINK       MODIFY     DELAY=10ms
90.000       tbsdelay0    link0      LINK       MODIFY     PLR=0.05
100.000      tbsdelay0    link0      LINK       DOWN
110.000      tbsdelay0    link0      LINK       UP
115.000      nodeA        cbr0       TRAFGEN    STOP
120.000      nodeA        cbr1       TRAFGEN    START      PACKETSIZE=500
120.000      nodeA        ftp0       TRAFGEN    MODIFY     $ftp0 start
130.000      nodeA        cbr1       TRAFGEN    MODIFY     PACKETSIZE=512
130.000      nodeA        cbr1       TRAFGEN    MODIFY     INTERVAL=0.01
140.000      tbsdelay0    link0      LINK       DOWN
140.000      nodeA        tcpfull0   TRAFGEN    MODIFY     $tcpfull0 set segsize_ 256
140.000      nodeA        tcpfull0   TRAFGEN    MODIFY     $tcpfull0 set segsperack_ 2
145.000      nodeB        tcpfull1   TRAFGEN    MODIFY     $tcpfull1 set nodelay_ true
150.000      tbsdelay0    link0      LINK       UP
150.000      nodeA        ftp0       TRAFGEN    MODIFY     $ftp0 stop
160.000      nodeA        cbr1       TRAFGEN    STOP</code></pre>

The above list represents the set of events for your experiment, which
are stored in the Emulab Database. When your experiment is swapped in,
an <em>event scheduler</em> is started that will process the list, and
send the events at the time offset specified. In order to make sure that
all of the nodes are actually rebooted and ready, time does not start
ticking until all of the nodes have reported to the event system that
they are ready. At present, events are restricted to system-level
agents (Emulab traffic generators and delay nodes), but in the future
we expect to provide an API that will allow experimenters to write
their own event agents.

<a NAME="DynamicEvents"></a>
<h2>Dynamic Scheduling of Events</h2>

NS scripts give you the ability to schedule events dynamically; an NS
script is just a TCL program and the argument to the "at" command is
any valid TCL expression. This gives you great flexibility in a
simulated world, but alas, this cannot be supported in a practical
manner in the real world. Instead, we provide a way for you to inject
events into the system dynamically, but leave it up to you to script
those events in whatever manner you are most comfortable with, be it a
PERL script, or a shell script, or even another TCL script!  Dynamic
event injection is accomplished via the <em>Testbed Event Client</em>
(tevc), which is installed on your experimental nodes and on
<tt></tt>. The command line syntax for <tt>tevc</tt> is:
<pre><code>	tevc -e proj/expt time objname event [args ...]</code></pre>

where the <tt>time</tt> parameter is one of:

<li> now
<li> +seconds (floating point or integer)
<li> [[[[yy]mm]dd]HH]MMss

For example, you could issue this sequence of events:
<pre><code>	tevc -e testbed/myexp now cbr0 set interval_=0.2
	tevc -e testbed/myexp +10 cbr0 start
	tevc -e testbed/myexp +15 link0 down
	tevc -e testbed/myexp +17 link0 up
	tevc -e testbed/myexp +20 cbr0 stop</code></pre>

Some points worth mentioning:

<li> There is no "global" clock; Emulab nodes are kept in sync with
     NTP, which does a very good job of keeping all of the clocks
     within 1ms of each other.

<li> The times "now" and "+seconds" are relative to the time at which
     each event is submitted, not to each other or the start of the
     experiment.

<li> The set of events you can send is currently limited to control of
     traffic generators and delay nodes. We expect to add more agents
     in the future.

<li> Sending dynamic events that intermix with statically scheduled events
     can result in unpredictable behavior if you are not careful.

<li> Currently, the event list is replayed each time the experiment is
     swapped in. This is almost certainly not the behavior people
     expect; we plan to change that very soon.

<li> <tt>tevc</tt> does not provide any feedback; if you specify an
     object (say, cbr78 or link45) that is not a valid object in your
     experiment, the event is silently thrown away. Further, if you
     specify an operation or parameter that is not appropriate (say,
     "link0 start" instead of "link0 up"), the event is silently
     dropped. We expect to add error feedback in the future.

<h2>Supported Events</h2>

This is a (mostly) comprehensive list of events that you can specify,
either in your NS file or as a dynamic event on the command line. In
the listings below, the use of "link0", "cbr0", etc. are included to
clarify the syntax; the actual object names will depend on your NS
file. Also note that when sending events from the command line with
<tt>tevc</tt>, you should not include the dollar ($) sign. For
example:

<table border=0>
 <tr>
  <td> NS File:</td>
  <td><code>$ns at 3.0 "$link0 down"</code></td>
 </tr>
 <tr>
  <td> tevc:</td>
  <td><code>tevc -e proj/expt +3.0 link0 down</code></td>
 </tr>
</table>
<li> Links: <pre>
   In "ns" script:
     $link0 bandwidth 10Mb duplex
     $link0 delay 10ms
     $link0 plr 0.05

   With "tevc":
     tevc ... link0 modify bandwidth=20000	# In kbits/second; 20000 = 20Mbps
     tevc ... link0 modify delay=10ms		# In msecs (the "ms" is ignored)
     tevc ... link0 modify plr=0.1

     $link0 up
     $link0 down</pre>

<li> Queues: Queues are special. In your NS file you modify the actual
     queue, while on the command line you use the link to which the
     queue belongs.<pre>
      $queue0 set queue-in-bytes_ 0
      $queue0 set limit_ 75
      $queue0 set maxthresh_ 20
      $queue0 set thresh_ 7
      $queue0 set linterm_ 11
      $queue0 set q_weight_ 0.004</pre>
<li> CBR: interval_ and rate_ are two ways of specifying the same thing.
     iptos_ allows you to set the IP_TOS socket option for a traffic
     generator.<pre>
      $cbr0 start
      $cbr0 set packetSize_ 512
      $cbr0 set interval_ 0.01
      $cbr0 set rate_ 10Mb
      $cbr0 set iptos_ 16
      $cbr0 stop</pre>

<li> FullTcp, FTP and Telnet: Refer to the NS documentation.

<a NAME="EventGroups"></a>
<h2>Event Groups</h2>

Event Groups allow you to conveniently send events to groups of like
objects. For example, if you want to bring down a set of links at the
same time, you could do it one event at a time:

<pre><code>	$ns at 100.0 "$link0 down"
	$ns at 100.0 "$link1 down"
	$ns at 100.0 "$link2 down"
	$ns at 100.0 "$link3 down"</code></pre>

which works, but is somewhat verbose. It also presents a problem when
sending dynamic events with <tt>tevc</tt> from the shell:

<pre><code>	tevc -e proj/expt now link0 down
	tevc -e proj/expt now link1 down
	tevc -e proj/expt now link2 down
	tevc -e proj/expt now link3 down</code></pre>

These four events will be separated by many milliseconds as each call
to tevc requires forking a command from the shell, contacting boss,
sending the event to the event scheduler, etc.

A better alternative is to create an <em>event group</em>, which will
schedule events for all of the members of the group, sending them at
the same time from the event scheduler. The example above can be more
simply implemented as:

<pre><code>	set mylinks [new EventGroup $ns]
	$mylinks add $link0 $link1 $link2 $link3

	$ns at 60.0 "$mylinks down"</code></pre>

From the command line:

<pre><code>	tevc -e proj/expt now mylinks down</code></pre>

<li> All of the members of an event group must be of the same type;
     you cannot, say, put a link and a program object into the same
     event group since they respond to entirely different commands.
     The parser will reject such groups.

<li> An object (such as a link or lan) can be in multiple event
     groups.
<li> Event groups are not hierarchical; you cannot put one event group
     into another event group. If you need this functionality, then
     you need to put the objects themselves (such as a link or lan)
     into each event group directly:

<pre><code>	set mylinks1 [new EventGroup $ns]
	set mylinks2 [new EventGroup $ns]
	$mylinks1 add $link0 $link1 $link2 $link3
	$mylinks2 add $link0 $link1 $link2 $link3</code></pre>
<a NAME="ProgramObjects"></a>
<h2>Program Objects</h2>

We have added some extensions that allow you to use NS's <tt>at</tt>
syntax to invoke arbitrary commands on your experimental nodes. Once
you define a program object and initialize its command line and the
node on which the command should be run, you can schedule the command
to be started and stopped with NS <tt>at</tt> statements. To define a
program object:

<pre><code>	set prog0 [$nodeA program-agent -command "/bin/ls -lt"]

	set prog1 [$nodeB program-agent -command "/bin/sleep 60"]</code></pre>

Then in your NS file, define a set of static events to run these
commands:

<pre><code>	$ns at 10 "$prog0 start"
	$ns at 20 "$prog1 start"
	$ns at 30 "$prog1 stop"</code></pre>

If you want to schedule starts and stops using dynamic events:

<pre><code>	tevc -e testbed/myexp now prog0 start
	tevc -e testbed/myexp now prog1 start
	tevc -e testbed/myexp +20 prog1 stop</code></pre>

If you want to change the command that is run (override the command
you specified in your NS file), then:

<pre><code>	tevc -e testbed/myexp now prog0 start COMMAND='ls'</code></pre>

Some points worth mentioning:

<li> A program must be "stopped" before it is started; if the program
     is currently running on the node, the start event will be
     silently ignored.

<li> The command line is passed to /bin/csh; any valid csh expression
     is allowed, although no syntax checking is done prior to invoking
     it. If the syntax is bad, the command will fail. It is a good
     idea to redirect output to a log file so you can track failures. 

<li> The "stop" command is implemented by sending a SIGTERM to the
     process group leader (the csh process). If the SIGTERM fails, a
     SIGKILL is sent.
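For example, following the advice above about logging, you might
define a program object whose command redirects its output to a log
file (the script path here is hypothetical). Since the command line is
passed to /bin/csh, the <tt>>&</tt> syntax sends both output and
error streams to the file:
<pre><code>	set prog2 [$nodeA program-agent -command "/proj/myproj/bin/mytest.sh >& /local/logs/mytest.log"]</code></pre>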

<a NAME="Tracing"></a>
<h2>Link Tracing and Monitoring</h2>

Emulab has simple support for tracing and monitoring links and lans.
For example, to trace a link:
<pre><code>	set link0 [$ns duplex-link $nodeB $nodeA 30Mb 50ms DropTail]
	$link0 trace</code></pre>

The default mode for tracing a link (or a lan) is to capture just the
packet headers (first 64 bytes of the packet) and store them to a
tcpdump output file. You may also view a realtime summary of packets
through the link, via the web interface, by going to the 'Show
Experiment' page for the experiment, and clicking on the 'Link
Tracing/Monitoring' menu option.  You may also control the link
tracing programs on the fly, pausing them, restarting them, and
killing them.

In addition to capturing just the packet headers, you may also capture
the entire packet:
<pre><code>	$link0 trace packet</code></pre>

Or you may not capture any packets, but simply gather the realtime
summary so that you can view it via the web interface:
<pre><code>	$link0 trace monitor</code></pre>

By default, all packets traversing the link are captured by the
tracing agent. If you want to narrow the scope of the packets that are
captured, you may supply any valid tcpdump (pcap) style expression:
<pre><code>	$link0 trace monitor "icmp or tcp"</code></pre>

You may also set the <b>snaplen</b> for a link or lan, which sets the
number of bytes that will be captured by each of the trace agents (as
mentioned above, the default is 64 bytes, which is adequate for
determining the type of most packets):
<pre><code>	$link0 trace_snaplen 128</code></pre>

Tracing parameters may also be specified on a <b>per-node basis</b>,
for each node in a link or lan. For example, consider the duplex link
<tt>link0</tt> above between <tt>nodeA</tt> and <tt>nodeB</tt>. If you
want to set a different snaplen and trace expression for packets
<em>leaving</em> <tt>nodeA</tt>, then:
<pre><code>	[[$ns link $nodeA $nodeB] queue] set trace_type header
	[[$ns link $nodeA $nodeB] queue] set trace_snaplen 128
	[[$ns link $nodeA $nodeB] queue] set trace_expr "ip proto tcp"</code></pre>

To set the parameters for packets leaving <tt>nodeB</tt>, simply
reverse the arguments to the <tt>link</tt> statement:
<pre><code>	[[$ns link $nodeB $nodeA] queue] set trace_snaplen 128</code></pre>

For a lan, the syntax is slightly different. Consider a lan called
<tt>lan0</tt> with a node called <tt>nodeL</tt> on it:
<pre><code>	[[$ns lanlink $lan0 $nodeL] queue] set trace_type header
	[[$ns lanlink $lan0 $nodeL] queue] set trace_snaplen 128
	[[$ns lanlink $lan0 $nodeL] queue] set trace_expr "ip proto tcp"</code></pre>

When capturing packets (rather than just "monitoring"), the packet
data is written to tcpdump (pcap) output files in <tt>/local/logs</tt>
on the delay node. Note that while delay nodes are allocated for each
traced link/lan, packets are shaped only if the NS file requested
traffic shaping. Otherwise, the delay node acts simply as a bridge for
the packets, which provides a place to capture and monitor the
packets, without affecting the experiment directly. Whether the link
or lan is shaped or not, packets leaving each node are captured twice;
once when they arrive at the delay node, and again when they leave the
delay node. This allows you to precisely monitor how the delay node
affects your packets, whether the link is actually shaped or not.
You can use the <a href="docwrapper.php3?docname=loghole.html">loghole</a>
utility to copy the capture files back to the experiment's log directory.

When a link or lan is traced, you may monitor a realtime summary of
the packets being captured, via the web interface. From the "Show
Experiment" page for your experiment, click on the "Link Tracing"
link. You will see a list of each link and lan that is traced, and
each node in the link or lan. For each node, you can click on the
"Monitor" button to bring up a window that displays the realtime
summary for packets <em>leaving</em> that node. Currently, the
realtime summary is somewhat primitive, displaying the number of
packets (and total bytes) sent each second, as well as a breakdown of
the packet types for some common IP packet types.

Other buttons on the Link Tracing page allow you to temporarily pause
packet capture, restart it, or even kill the packet capture processes
completely (since they continue to consume CPU even when paused). The
"snapshot" button causes the packet capture process to close its
output files, rename them, and then open up new output files. The
files can then be saved off with the loghole utility, as mentioned
above.

If you want to script the control of the packet tracing processes, you
can use the <a href=#DynamicEvents>event system</a> to send dynamic
events. For example, to tell the packet capture processes monitoring
<tt>link0</tt> to snapshot, pause, and restart:
<pre><code>	tevc -e myproj/myexp now link0-tracemon snapshot
	tevc -e myproj/myexp now link0-tracemon stop
	tevc -e myproj/myexp now link0-tracemon start</code></pre>

And of course, you may use the NS "at" syntax to schedule static
events from your NS file:
<pre><code>	$ns at 10 "$link0 trace stop"
	$ns at 20 "$link0 trace start"
	$ns at 30 "$link0 trace snapshot"</code></pre>

The output files that the capture processes create are stored in
<tt>/local/logs</tt>, and are named by the link and node name.
In the above link example, four capture files are created:

<li> trace_nodeA-link0.xmit
<li> trace_nodeA-link0.recv
<li> trace_nodeB-link0.xmit
<li> trace_nodeB-link0.recv

where the <tt>.recv</tt> files hold the packets that were sent by the
node and <em>received</em> by the delay node. The <tt>.xmit</tt> files
hold those packets that were <em>transmitted</em> by the delay node
and received by the other side of the link. So, for packets sent from
<tt>nodeA</tt> to <tt>nodeB</tt>, the packet would arrive at the delay
node and be recorded in <tt>trace_nodeA-link0.recv</tt>. Once the packet
traverses the delay node (subject to Dummynet traffic shaping) and it is
about to be transmitted, it is recorded in <tt>trace_nodeA-link0.xmit</tt>.
By comparing these two files, you can see how the Dummynet traffic
shaping has affected your packets, in each direction. Note that even
if you have not specified traffic shaping, you still get the same set
of files.  In this case, the <tt>.recv</tt> and <tt>.xmit</tt> files
will be nearly identical, reflecting only the negligible propagation delay
through the software bridge.
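By way of illustration, one way to compare the two sides is to read
each capture file on the delay node with <tt>tcpdump</tt> (assuming it
is installed there; the file names are as described above):
<pre><code>	tcpdump -n -r /local/logs/trace_nodeA-link0.recv
	tcpdump -n -r /local/logs/trace_nodeA-link0.xmit</code></pre>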

When you issue a "snapshot" command, the above files are closed, and
renamed to:

<li> trace_nodeA-link0.xmit.0
<li> trace_nodeA-link0.recv.0
<li> trace_nodeB-link0.xmit.0
<li> trace_nodeB-link0.recv.0

and a new set of files is opened. Note that the files are not rolled;
the next time you issue the snapshot command, the current set of files
ending with <tt>.0</tt> are lost.

<h2>EndNode Tracing/Monitoring</h2>

<em>Endnode tracing/monitoring</em> refers to putting the trace hooks
on the end nodes of a link, instead of on delay nodes. This happens
when there are no delay nodes in use, such as when using
<a href=docwrapper.php3?docname=vnodes.html>multiplexed virtual
nodes</a>, or if you have explicitly requested
<a href="/doc/docwrapper.php3?docname=linkdelays.html#LINKDELAYS">end
node shaping</a> to reduce the number of nodes you need for an
experiment. You may also explicitly request tracing to be done on the
end nodes of a link (or a lan) with the following NS command:
<pre><code>	$link0 trace_endnode 1</code></pre>

(Note that if a delay node does exist, it will be used for traffic capture
even if endnode tracing is specified.)
When tracing/monitoring is done on an endnode, the output files are
again stored in <tt>/local/logs</tt>, and are named by the link and
node name. The difference is that there is just a <em>single</em>
output file, for those packets <em>leaving</em> the node. Packets are
captured after traffic shaping has been applied. 

Endnode tracing can also be used on PlanetLab nodes setup through
Emulab's Planetlab portal.  In this case, all packets sent or received
are recorded in the <tt>.xmit</tt> file.

<li> trace_nodeA-link0.xmit