<!--
   EMULAB-COPYRIGHT
   Copyright (c) 2000-2003 University of Utah and the Flux Group.
   All rights reserved.
  -->
<center>
<h1>Emulab Tutorial - A More Advanced Example</h1>
</center>

<p>
Here is a slightly more complex example demonstrating the use of RED
queues, traffic generation, and the event system. Where possible, we
adhere to the syntax and operational model of
<a href="http://www.isi.edu/nsnam/ns/">ns-2</a>, as described in the
<a href="http://www.isi.edu/nsnam/ns/doc/index.html">NS manual</a>.

<ul>
<li> <b>RED/GRED Queues</b>: In addition to normal DropTail links, Emulab
     supports the specification of RED and GRED (Gentle RED) links
     in your NS file. RED/GRED queuing is handled via the
     insertion of a traffic-shaping delay node, in much the same way
     that bandwidth, delay, and packet loss are handled. For a better
     understanding of how we support traffic shaping, see the
     <tt>ipfw</tt> and <tt>dummynet</tt> man pages on
     <tt>users.emulab.net</tt>. It is important to note that Emulab
     supports a smaller set of tunable parameters than NS does; please
     read the aforementioned manual pages!

<li> <b>Traffic Generation</b>: Emulab supports Constant Bit Rate (CBR)
     traffic generation, in conjunction with either Agent/UDP or
     Agent/TCP agents. We currently use the
     <a href="http://www.postel.org/tg">TG Tool Set</a> to generate
     traffic.

<li> <b>Traffic Generation using
     <a href="http://www.isi.edu/nsnam/ns/doc/node487.html">
     NS Emulation (NSE)</a></b>: Emulab supports TCP traffic generation
     using NS's Agent/TCP/FullTcp, a BSD Reno derivative, and its
     subclasses, namely Newreno, Tahoe, and Sack. Currently two
     application classes are supported: Application/FTP and
     Application/Telnet. The former drives the FullTcp agent to send
     bulk data according to connection dynamics. The latter uses
     NS's <a href="http://citeseer.nj.nec.com/danzig91tcplib.html">
     tcplib</a> telnet distribution to generate telnet-like data. For
     configuration parameters and commands allowed on the objects,
     refer to the NS documentation
     <a href="http://www.isi.edu/nsnam/ns/doc/node351.html">here</a>.

<li> <b>Event System</b>: Emulab supports limited use of the NS <em>at</em>
     syntax, allowing you to define a static set of events in your NS
     file, to be delivered to agents running on your nodes. There is
     also support for <em>dynamic events</em>, which can be used to
     inject events into the system on the fly, say from a script
     running on <tt>users.emulab.net</tt>.

<li> <b>Program Objects</b>: Emulab has added extensions that allow you to
     run arbitrary programs on your nodes, starting and stopping them
     at any point during your experiment run.

</ul>

<p>
What follows is a <a href="advanced.ns">sample NS file</a> that
demonstrates the above features, with annotations where appropriate.
First we define the two nodes in the topology:
	<code><pre>
	set nodeA [$ns node]
	set nodeB [$ns node]</pre></code>
</p>

<p>
Next define a duplex link between nodes nodeA and nodeB. Instead of a
standard DropTail link, it is declared to be a Random Early Detection
(RED) link. While this is obviously contrived, it allows us to ignore
<a href="tutorial.php3#Routing">routing</a> issues within this
example.
	<code><pre>
	set link0 [$ns duplex-link $nodeA $nodeB 100Mb 0ms RED]</pre></code>
</p>

<p>
Each link has an NS "Queue" object associated with it, which you
can modify to suit your needs (<em>currently, there are two queue
objects per duplex link; one for each direction. You need to set the
parameters for both directions, which means you can set the parameters
asymmetrically if you want</em>). The following parameters can be
changed, and are defined in the NS manual (see Section 7.3). <b>Note:
only duplex links have queue objects; LANs do not</b>.
	<code><pre>
	set queue0 [[$ns link $nodeA $nodeB] queue]
	$queue0 set gentle_ 0
	$queue0 set red_ 0
	$queue0 set queue-in-bytes_ 0
	$queue0 set limit_ 50
	$queue0 set maxthresh_ 15
	$queue0 set thresh_ 5
	$queue0 set linterm_ 10
	$queue0 set q_weight_ 0.002</pre></code>
</p>
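<p>
To make the interplay of these parameters more concrete, here is a
small sketch of the standard RED algorithm that these knobs feed into:
<tt>thresh_</tt> and <tt>maxthresh_</tt> bound the average queue
length, <tt>q_weight_</tt> is the weight of the moving average, and
<tt>linterm_</tt> is the inverse of the maximum drop probability. This
is only an illustration of the algorithm, not the actual
<tt>dummynet</tt> implementation.
</p>

```python
# Illustrative sketch (not Emulab/dummynet source): how the RED
# parameters above interact, per the standard RED algorithm.

def red_drop_probability(avg, thresh=5, maxthresh=15, linterm=10):
    """Early-drop probability for a given average queue length."""
    if avg < thresh:
        return 0.0            # below the min threshold: never drop early
    if avg >= maxthresh:
        return 1.0            # above the max threshold: always drop
    max_p = 1.0 / linterm     # linterm_ is 1/max_p
    return max_p * (avg - thresh) / (maxthresh - thresh)

def update_avg(avg, qlen, q_weight=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - q_weight) * avg + q_weight * qlen

avg = 0.0
for qlen in [0, 10, 20, 30, 40]:   # sampled instantaneous queue lengths
    avg = update_avg(avg, qlen)
print(round(red_drop_probability(10.0), 3))   # midway between thresholds
```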

<p>
A UDP agent is created and attached to nodeA, then a CBR traffic
generator application is created, and attached to the UDP agent:
	<code><pre>
	set udp0 [new Agent/UDP]
	$ns attach-agent $nodeA $udp0

	set cbr0 [new Application/Traffic/CBR]
	$cbr0 set packetSize_ 500
	$cbr0 set interval_ 0.005
	$cbr0 attach-agent $udp0</pre></code>
</p>

<p>
A TCP agent is created and also attached to nodeA, then a second CBR
traffic generator application is created, and attached to the TCP
agent:
	<code><pre>
	set tcp0 [new Agent/TCP]
	$ns attach-agent $nodeA $tcp0

	set cbr1 [new Application/Traffic/CBR]
	$cbr1 set packetSize_ 500
	$cbr1 set interval_ 0.005
	$cbr1 attach-agent $tcp0</pre></code>
</p>

<p>
You must define traffic sinks for each of the traffic generators
created above. The sinks are attached to nodeB:
	<code><pre>
	set null0 [new Agent/Null]
	$ns attach-agent $nodeB $null0

	set null1 [new Agent/TCPSink]
	$ns attach-agent $nodeB $null1</pre></code>
</p>

<p>
Then you must connect the traffic generators on nodeA to the traffic sinks
on nodeB:
	<code><pre>
	$ns connect $udp0 $null0  
	$ns connect $tcp0 $null1</pre></code>
</p>

<p>
Here is an example of NSE FullTcp traffic generation. The
following code snippet attaches an FTP agent that drives a Reno
FullTcp agent on nodeA:
	<code><pre>
	set tcpfull0 [new Agent/TCP/FullTcp]
	$ns attach-agent $nodeA $tcpfull0

	set ftp0 [new Application/FTP]
	$ftp0 attach-agent $tcpfull0</pre></code>
      
</p>

<p>
You must then define the sink FullTcp endpoint and call its
"listen" method, making this agent wait for an incoming connection:
	<code><pre>
	set tcpfull1 [new Agent/TCP/FullTcp/Sack]
	$tcpfull1 listen
	$ns attach-agent $nodeB $tcpfull1</pre></code>

</p>

<p>
Like all other source-sink traffic generators, you need to connect
them:
        <code><pre>
	$ns connect $tcpfull0 $tcpfull1</pre></code>

</p>

<p>
Lastly, a set of events to control your applications and link
characteristics: 
	<code><pre>
	$ns at 60.0  "$cbr0  start"
	$ns at 70.0  "$link0 bandwidth 10Mb duplex"
	$ns at 80.0  "$link0 delay 10ms"
	$ns at 90.0  "$link0 plr 0.05"
	$ns at 100.0 "$link0 down"
	$ns at 110.0 "$link0 up"
	$ns at 115.0 "$cbr0  stop"

	$ns at 120.0 "$ftp0 start"
	$ns at 140.0 "$tcpfull0 set segsize_ 256; $tcpfull0 set segsperack_ 2"
	$ns at 145.0 "$tcpfull1 set nodelay_ true"
	$ns at 150.0 "$ftp0 stop"
	
	$ns at 120.0 "$cbr1  start"
	$ns at 130.0 "$cbr1  set packetSize_ 512"
	$ns at 130.0 "$cbr1  set interval_ 0.01"
	$ns at 140.0 "$link0 down"
	$ns at 150.0 "$cbr1  stop"

</pre></code>
</p>

<p>
When you receive email containing the experiment setup information (as
described in <a href="tutorial.php3#Beginning">Beginning an
Experiment</a>), you will notice an additional section that gives a
summary of the events that will be delivered during your experiment:
<code><pre>
Event Summary:
--------------
Event count: 18
First event: 60.000 seconds
Last event: 160.000 seconds</pre></code>

<p>
You can get a full listing of the events for your experiment by running
<code>tbreport -v pid eid</code> on <tt>users.emulab.net</tt>. This
report will include a section like this:

<code><pre>
Event List:
Time         Node         Agent      Type       Event      Arguments
------------ ------------ ---------- ---------- ---------- ------------ 
60.000       nodeA        cbr0       TRAFGEN    START      PACKETSIZE=500
                                                           RATE=100000
                                                           INTERVAL=0.005
70.000       tbsdelay0    link0      LINK       MODIFY     BANDWIDTH=10000
80.000       tbsdelay0    link0      LINK       MODIFY     DELAY=10ms
90.000       tbsdelay0    link0      LINK       MODIFY     PLR=0.05
100.000      tbsdelay0    link0      LINK       DOWN       
110.000      tbsdelay0    link0      LINK       UP         
115.000      nodeA        cbr0       TRAFGEN    STOP       
120.000      nodeA        cbr1       TRAFGEN    START      PACKETSIZE=500
                                                           RATE=100000
                                                           INTERVAL=0.005
120.000      nodeA        ftp0       TRAFGEN    MODIFY     $ftp0 start
130.000      nodeA        cbr1       TRAFGEN    MODIFY     PACKETSIZE=512
130.000      nodeA        cbr1       TRAFGEN    MODIFY     INTERVAL=0.01
140.000      tbsdelay0    link0      LINK       DOWN       
140.000      nodeA        tcpfull0   TRAFGEN    MODIFY     $tcpfull0 set segsize_ 256
140.000      nodeA        tcpfull0   TRAFGEN    MODIFY     $tcpfull0 set segsperack_ 2
145.000      nodeB        tcpfull1   TRAFGEN    MODIFY     $tcpfull1 set nodelay_ true
150.000      tbsdelay0    link0      LINK       UP         
150.000      nodeA        ftp0       TRAFGEN    MODIFY     $ftp0 stop
160.000      nodeA        cbr1       TRAFGEN    STOP</pre></code>

<p>
The above list represents the set of events for your experiment,
which is stored in the Emulab database. When your experiment is
swapped in, an <em>event scheduler</em> is started that processes the
list, sending each event at the time offset specified. To make sure
that all of the nodes are actually rebooted and ready, the clock does
not start ticking until all of the nodes have reported to the event
system that they are ready. At present, events are restricted to
system-level agents (Emulab traffic generators and delay nodes), but
in the future we expect to provide an API that will allow
experimenters to write their own event agents.
</p>

<p>
<h3>
Dynamic Scheduling of Events
</h3>

<p>
NS scripts give you the ability to schedule events dynamically; an NS
script is just a Tcl program, and the argument to the "at" command is
any valid Tcl expression. This gives you great flexibility in a
simulated world, but alas, it cannot be supported in a practical
manner in the real world. Instead, we provide a way for you to inject
events into the system dynamically, but leave it up to you to script
those events in whatever manner you are most comfortable with, be it a
Perl script, a shell script, or even another Tcl script!  Dynamic
event injection is accomplished via the <em>Testbed Event Client</em>
(<tt>tevc</tt>), which is installed on your experimental nodes and on
<tt>users.emulab.net</tt>. The command-line syntax for <tt>tevc</tt>
is:
	<code><pre>
	tevc -e pid/eid time objname event [args ...]</pre></code>

where the <tt>time</tt> parameter is one of:

<blockquote>
<ul>
<li> now
<li> +seconds (floating point or integer)
<li> [[[[yy]mm]dd]HH]MMss
</ul>
</blockquote>

For example, you could issue this sequence of events:
	<code><pre>
	tevc -e testbed/myexp now cbr0 set interval_=0.2
	tevc -e testbed/myexp +10 cbr0 start
	tevc -e testbed/myexp +15 link0 down
	tevc -e testbed/myexp +17 link0 up
	tevc -e testbed/myexp +20 cbr0 stop</pre></code>
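<p>
To make the relative and absolute time forms concrete, here is a small
sketch of how each specification can be resolved against the
submission time. This is an illustration of the syntax only (the
helper name <tt>parse_event_time</tt> is ours), not <tt>tevc</tt>'s
actual implementation.
</p>

```python
# A sketch of how tevc's three time forms can be interpreted relative
# to a submission time; illustrative only, not tevc's implementation.
from datetime import datetime, timedelta

def parse_event_time(spec, now):
    """Map 'now', '+seconds', or [[[[yy]mm]dd]HH]MMss to a datetime."""
    if spec == "now":
        return now
    if spec.startswith("+"):
        return now + timedelta(seconds=float(spec[1:]))
    # Absolute form: two-digit fields are consumed from the right;
    # missing leading fields default to the submission date/time.
    digits = spec
    ss = int(digits[-2:]); digits = digits[:-2]
    mm = int(digits[-2:]) if digits else now.minute; digits = digits[:-2]
    hh = int(digits[-2:]) if digits else now.hour;   digits = digits[:-2]
    dd = int(digits[-2:]) if digits else now.day;    digits = digits[:-2]
    mo = int(digits[-2:]) if digits else now.month;  digits = digits[:-2]
    yy = 2000 + int(digits) if digits else now.year
    return datetime(yy, mo, dd, hh, mm, ss)

now = datetime(2003, 6, 1, 12, 0, 0)
print(parse_event_time("+10", now))    # 10 seconds after "now"
print(parse_event_time("1230", now))   # minutes and seconds replaced
```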

Some points worth mentioning:

<ul>
<li> There is no "global" clock; Emulab nodes are kept in sync with
     NTP, which does a very good job of keeping all of the clocks
     within 1ms of each other.

<li> The times "now" and "+seconds" are relative to the time at which
     each event is submitted, not to each other or the start of the
     experiment. 

<li> The set of events you can send is currently limited to control of
     traffic generators and delay nodes. We expect to add more agents
     in the future.

<li> Sending dynamic events that intermix with statically scheduled events
     can result in unpredictable behavior if you are not careful.

<li> Currently, the event list is replayed each time the experiment is
     swapped in. This is almost certainly not the behavior people
     expect; we plan to change that very soon.

<li> <tt>tevc</tt> does not provide any feedback; if you specify an
     object (say, cbr78 or link45) that is not a valid object in your
     experiment, the event is silently thrown away. Further, if you
     specify an operation or parameter that is not appropriate (say,
     "link0 start" instead of "link0 up"), the event is silently
     dropped. We expect to add error feedback in the future.
</ul>

<p>
<h3>
Supported Events
</h3>

This is a (mostly) comprehensive list of events that you can specify,
either in your NS file or as a dynamic event on the command line. In
the listings below, the use of "link0", "cbr0", etc. are included to
clarify the syntax; the actual object names will depend on your NS
file. Also note that when sending events from the command line with
<tt>tevc</tt>, you should not include the dollar ($) sign. For
example:

<blockquote>
<table border=0>
<tr>
 <td> NS File:</td>
 <td><code><pre>$ns at 3.0 "$link0 down"</pre></code></td>
</tr>
<tr>
 <td> tevc:</td>
 <td><code><pre>tevc -e pid/eid +3.0 link0 down</pre></code>
 </td>
</tr>
</table>
</blockquote>
 
<ul>
<li> Links: <pre>
   In "ns" script:
     $link0 bandwidth 10Mb duplex
     $link0 delay 10ms
     $link0 plr 0.05

   With "tevc":
     tevc ... link0 modify bandwidth=20000	# In kbits/second; 20000 = 20Mbps
     tevc ... link0 modify delay=10ms		# In msecs (the "ms" is ignored)
     tevc ... link0 modify plr=0.1

   Both:
     $link0 up
     $link0 down
     </pre>

<li> Queues: Queues are special. In your NS file you modify the actual
     queue, while on the command line you use the link to which the queue belongs.<pre>
      $queue0 set queue-in-bytes_ 0
      $queue0 set limit_ 75
      $queue0 set maxthresh_ 20
      $queue0 set thresh_ 7
      $queue0 set linterm_ 11
      $queue0 set q_weight_ 0.004
      </pre>

<li> CBR: interval_ and rate_ are two ways of specifying the same thing.
     iptos_ allows you to set the IP_TOS socket option for a traffic
     stream.<pre>
      $cbr0 start
      $cbr0 set packetSize_ 512
      $cbr0 set interval_ 0.01
      $cbr0 set rate_ 10Mb
      $cbr0 set iptos_ 16
      $cbr0 stop
      </pre>

<li> FullTcp, FTP and Telnet: Refer to the NS documentation <a
     href="http://www.isi.edu/nsnam/ns/doc/ns_doc.pdf">here</a>.

</ul>
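<p>
Since interval_ and rate_ describe the same thing, it can help to see
the arithmetic connecting them through the packet size. The sketch
below (illustrative Python, not part of NS or Emulab) converts between
the two parameterizations; note that the RATE=100000 shown for cbr0 in
the earlier event listing is consistent with 500-byte packets every
0.005 seconds if RATE is read as bytes per second.
</p>

```python
# A CBR source sending packetSize_-byte packets every interval_
# seconds has an average rate of packetSize_/interval_ bytes/sec.
# Illustrative helpers only; these are not NS or Emulab APIs.

def cbr_rate(packet_size_bytes, interval_s):
    """Rate implied by packetSize_ and interval_, as (bytes/s, bits/s)."""
    bytes_per_sec = packet_size_bytes / interval_s
    return bytes_per_sec, bytes_per_sec * 8

def cbr_interval_s(packet_size_bytes, rate_bps):
    """interval_ needed to achieve a target rate in bits/second."""
    return packet_size_bytes * 8 / rate_bps

# The tutorial's cbr0 (500-byte packets every 5 ms):
bytes_per_sec, bits_per_sec = cbr_rate(500, 0.005)
print(bytes_per_sec, bits_per_sec)   # 100000.0 800000.0
```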

<p>
<h3>
<a NAME="ProgramObjects"></a>
Program Objects
</h3>

<p>
We have added some extensions that allow you to use NS's <tt>at</tt>
syntax to invoke arbitrary commands on your experimental nodes. Once
you define a program object and initialize its command line and the
node on which the command should be run, you can schedule the command
to be started and stopped with NS <tt>at</tt> statements. To define a
program object:

	<code><pre>
	set prog0 [new Program $ns]
	$prog0 set node $nodeA
	$prog0 set command "/bin/ls -lt >& /users/joe/logs/prog0"

	set prog1 [new Program $ns]
	$prog1 set node $nodeB
	$prog1 set command "/bin/sleep 60 >& /tmp/sleep.debug"</pre></code>

Then, in your NS file, define a set of static events to run these commands:

	<code><pre>
	$ns at 10 "$prog0 start"
	$ns at 20 "$prog1 start"
	$ns at 30 "$prog1 stop"</pre></code>

If you want to schedule starts and stops using dynamic events:

	<code><pre>
	tevc -e testbed/myexp now prog0 start
	tevc -e testbed/myexp now prog1 start
	tevc -e testbed/myexp +20 prog1 stop</pre></code>

If you want to change the command that is run (override the command
you specified in your NS file), then:

	<code><pre>
	tevc -e testbed/myexp now prog0 start COMMAND='ls >/tmp/foo'</pre></code>
	
Some points worth mentioning:

<ul>
<li> A program must be "stopped" before it is started; if the program
     is currently running on the node, the start event will be
     silently ignored.

<li> The command line is passed to /bin/csh; any valid csh expression
     is allowed, although no syntax checking is done prior to invoking
     it. If the syntax is bad, the command will fail. It is a good
     idea to redirect output to a log file so you can track failures. 

<li> The "stop" command is implemented by sending a SIGTERM to the
     process group leader (the csh process). If the SIGTERM fails, a
     SIGKILL is sent.

</ul>
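<p>
The start/stop semantics above can be sketched as follows. This is a
Python illustration of the mechanism only, not Emulab's program agent,
and <tt>/bin/sh</tt> stands in here for the <tt>/bin/csh</tt> that
Emulab actually uses:
</p>

```python
# Sketch of the "stop" semantics described above: SIGTERM the
# process-group leader, then SIGKILL the group if it does not exit.
# Illustration only; /bin/sh is used for portability, while the real
# system hands the command line to /bin/csh.
import os
import signal
import subprocess
import time

def start(command):
    """Run a command under a shell as its own process-group leader."""
    return subprocess.Popen(["/bin/sh", "-c", command],
                            start_new_session=True)

def stop(proc, grace=2.0):
    """TERM the whole group; escalate to KILL if it will not die."""
    os.killpg(proc.pid, signal.SIGTERM)
    deadline = time.time() + grace
    while time.time() < deadline:
        if proc.poll() is not None:
            return
        time.sleep(0.05)
    os.killpg(proc.pid, signal.SIGKILL)
    proc.wait()

p = start("sleep 30")
stop(p)
print(p.returncode)   # negative signal number (-15 if SIGTERM sufficed)
```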