<!--
   EMULAB-COPYRIGHT
   Copyright (c) 2000-2002 University of Utah and the Flux Group.
   All rights reserved.
  -->
<center>
<h1>
    News
</h1>
August 15, 2002
</center>

<h3>uky.emulab.net now up</h3>

<p>
The first external site using our software to run its own Emulab is now
up. Other sites are in the process of bringing up, or have plans to bring
up, their own Emulabs. If you're interested in running your own, contact us!
</p>

<p>
<a href="http://www.uky.emulab.net">uky.emulab.net</a>, at the University of Kentucky,
will be used primarily for classes, starting this fall.
</p>

<center>
<hr noshade size=3>
April 19, 2002
</center>

<h3>New Routing Support</h3>

Emulab now has much-improved routing support, which should make setting
up routes in your experiments easier than ever. Please see the updated
<a href="tutorial/tutorial.php3#Routing">"Setting up IP routing
between nodes"</a> section of the
<a href="tutorial/tutorial.php3">Emulab Tutorial</a>.

<hr noshade size=3>
<center>
November 29, 2001
</center>

<h2>Contents</h2>
<ul>
<li> <a href="#BIGISSUES">Big Issues</a>
     <ul>
     <li> <a href="#BIGISSUES-FED">
          Planned new emulabs; federated emulabs; wireless; ITR proposal
          </a>
     </ul>

<li> <a href="#OPER">Operational, hardware issues, and questions</a>
     <ul>
     <li> <a href="#OPER-POWEROUT">
	  Upcoming power outage
          </a>
     <li> <a href="#OPER-NEWPCS">
	  New PCs available; node hardware status; need the Sharks?
          </a>
     <li> <a href="#OPER-GBIT">
	  Gigabit short-term plans; your needs?
          </a>
     </ul>

<li> <a href="#OTHER">Other News</a>
     <ul>
     <li> <a href="#OTHER-TEST">
          "Test" emulab
          </a>
     <li> <a href="#OTHER-IXP">
	  IXP1200 "network processor" nodes
          </a>
     <li> <a href="#OTHER-VIS">
	  "ns" file visualization
          </a>
     <li> <a href="#OTHER-THANKS">
	  Thanks!
          </a>
     </ul>
</ul>
<hr>


<a NAME="BIGISSUES"></a>
<h3>Big Issues</h3>
<ul>
<li><a NAME="BIGISSUES-FED"></a>
    <h3>New emulabs; federated emulabs; wireless; ITR proposal</h3>

    <p>
    There are several sites that are on their way to building their own
    emulabs using our software.  If you might be interested too, let us
    know; we will help.  We are preparing a Web page describing the
    hardware requirements (minimum, recommended, and optimal), but don't wait for it.

    <p>
    <ul>
    <li>Kentucky - instructional, 40 nodes, node degree 4 + 1,
	Cisco switches.  Hardware in progress.
    <li>Duke - research, eventually 100+ existing nodes, some GigE,
	node degree 1 + 1 (probably), a Cisco switch, I think.
	They have our software.
    <li>CMU - instructional, 20 existing PCs with IXP1200 boards,
	degree 4 + 1.  Intel switches.  Could start any time.
    <li>Cornell - research, 150+ high-end PCs, GigE.  Awaiting hardware
        funding.
    </ul>
    
    <p>
    There are many more sites that have said they will adopt our
    software, but whose plans are probably further off.  These include:

    <p>
    <ul>
    <li>HPLabs (2 1000+ node clusters)
    <li>MIT (Grid wireless testbed, RON testbed)
    <li>Princeton (IXP1200 nodes)
    <li>Stuttgart (existing experimental cluster)
        ... and more that I forget or that are further out, including
        Intel and Berkeley.
    </ul>

    <p>
    We have plans to federate these emulabs.  The ideas are to 1) be
    flexible enough to accommodate vast heterogeneity of site
    hardware/software (e.g., power control-p, serial console-p, replace
    OS-p, singleuser-p), administrative policies, and goals, and 2) give
    the local site primacy in everything.  Easier said than done, of
    course, but if successful, hundreds of sites will eventually join
    because it'll be so easy and useful.

    <p>
    We are going to develop support for wireless nodes by selecting, from
    a dense set of nodes, those that satisfy an experimenter's criteria.  We
    will initially use low-end PCs, Sharks, and/or Berkeley motes,
    communicating over the mote radios.  MIT will probably use their
    802.11b Grid nodes.

    <p>
    We wrote an ITR proposal to do this right, with Brown (resource
    allocation and mapping), Duke (multiplexing many virtual nodes onto one
    physical node when performance allows), MIT, and Mt. Holyoke.  ITRs
    are a crapshoot, but we'll be taking steps along the way in any case.
    Contact me if you're interested in joining the effort in any way.  We
    could certainly use help, and we'll get funding one way or another--
    perhaps you could join that aspect too.
</ul>

<hr>

<a NAME="OPER"></a>
<h3>Operational, hardware issues, and questions</h3>
<ul>
<li><a NAME="OPER-POWEROUT"></a>
    <h3>Upcoming power outage</h3>

    <p>
    There will be a one-day power outage sometime in December, probably
    after the 14th.  This is for testing the gaseous fire suppression
    system in the remodeled machine room.  It might be on a weekday.  If
    there are days you cannot afford to have emulab down, let us know.

<p>
<li><a NAME="OPER-NEWPCS"></a>
    <h3>New PCs available; node hardware status; need the Sharks?</h3>

    <p>
    Just before Thanksgiving, Rob got the last 13 of the 128 new
    PC850's integrated into emulab and available for use.  The holdups were
    a new rack for the 3rd "backplane" switch, which we had to have
    because of excess wire density in the "middle" of the emulab racks,
    and software mods to support three-switch VLANs and trunking.  We'd
    already done the "burn in" on those machines, finding hardware problems.

    <p>
    Current hardware status:
    <br>
    PCs:
    <blockquote>
        168 theoretically available
	<ul>
	  <li>128 PC850s
	  <li>40 PC600s
	</ul>

	<p>
	162 actually available today
	<ul>
	  <li>5 PC850s with hardware or cable problems; most sent back for
	      replacement
	  <li>1 PC600 with a cable/interface problem
	</ul>

	<p>
	The PC850s are proving less reliable than the PC600s, and their BIOS
        takes much longer to boot.  We might be able to get info from Intel
        to improve the latter.

        <p>
        (The extra 5 PCs, totaling 173, that you see when you do "Node
        Reservation Status," include 2 we are testing for Kentucky, who are
	building their own emulab, and 3 laptops that are not now available
	for real experiments.)
    </blockquote>

    Sharks:
    <blockquote>
        Can you use 160 Sharks?

	<p>
	The machine room expansion is still not complete, nor has
	emulab moved to its final location (yes, it is possible--
	barely-- to move 10 connected racks weighing over 3 tons).
	That won't be for 2-3 months.  We are hoping to connect the
	Sharks temporarily before that, but other things have taken
	priority.  If someone can really put those Sharks to use, do
	let us know and we'll adjust priorities.  There will
	definitely be bit-rot to fix, and my crew hates the Sharks for
	their idiosyncrasies and unreliability.  But there are a lot
	of them.
    </blockquote>

<p>
<li><a NAME="OPER-GBIT"></a>
    <h3>Gigabit short-term plans; your needs?</h3>

    <p>
    We have a huge new 6513 switch, courtesy of Cisco, with some Gigabit
    blades due to arrive soon.  The thought-- not yet a "plan"-- is to put
    single Gigabit interfaces into, say, 32 of the PC600's, leaving them
    each with three 10/100Mb interfaces and one 10/100/1000Mb interface.

    <p>
    The PC850s are slot-poor 1U boxes, and we'd lose two 100Mb interfaces
    if we put GigE into them.

    <p>
    Note that all of our PCs have only 32-bit/33MHz PCI slots, so you're
    not going to get full GigE bandwidth.

    <p>
    Eventually, when we get some more hardware dollars or a donation of PCs
    with fast PCI busses, the plan is to move the GigE to them, perhaps with
    2 GigE interfaces on each.

    <p>
    Please send comments to "testbed@fast.cs.utah.edu" about your needs
    and interests in the GigE department.  Is GigE important to you?  Useful
    even on PC600's?  How many GigE interfaces per node would you want in
    future PCs?
            
</ul>
<hr>

<a name="OTHER"></a>
<h3>Other News</h3>
<ul>
<li><a NAME="OTHER-TEST"></a>
    <h3>
    "Test" emulab
    </h3>

    <p>
    Out of spare and purchased parts, we're near finishing a small "test
    emulab" (8 nodes, 2 servers, Intel switches) that will largely mimic
    our production hardware.  This will improve testing and speed up
    emulab feature releases.  We already have an automatic regression
    testing system, as well as a test harness that steals actual free
    nodes from the live database, then works with a clone of the
    database.  However, without an entirely separate hardware artifact we
    can't faithfully test everything.

<p>
<li><a NAME="OTHER-IXP"></a>
    <h3>
    IXP1200 "network processor" nodes
    </h3>

    <p>
    Abhijeet Joglekar, one of our students, has done major work on Intel
    IXP1200 network processor nodes.  We have a dual goal: use them as
    high-capacity "delay nodes," and provide them to experimenters to
    customize as routers.  He's been working from the Princeton code base
    [SOSP'01].  These nodes should be available around February; in the
    pipeline from Intel are 20 4-port IXP1200 cards.

<p>
<li><a NAME="OTHER-VIS"></a>
    <h3>
    "ns" file visualization
    </h3>

    <p>
    We're about to install an afternoon's simple hack: Web-based
    visualization of ns files, using graphviz.  This is a really simple
    thing, but should be quite handy for users.

<p>
<li><a NAME="OTHER-THANKS"></a>
    <h3>
    Thanks!
    </h3>

    <p>
    I want to thank my great staff and students who are the real reason
    that emulab works so well.  They are so good that Leigh Stoller was on
    vacation for 2 weeks spanning Thanksgiving, and I am not sure that any
    users noticed.  Besides Leigh, people of special note include Rob
    Ricci, Mike Hibler, and Mac Newbold.  I also want to thank you users
    who wrote support letters for our ITR proposal, and for your patience
    when things go wrong.
</ul>

<i>Jay Lepreau, University of Utah</i>