<html>
<head>
	<title>Emulab.Net - Late Breaking News</title>
	<link rel='stylesheet' href='tbstyle-plain.css' type='text/css'>
</head>
<body>

<center>
<h1>
    News
</h1>
November 29, 2001
</center>

<h2>Contents</h2>
<ul>
<li> <a href="#BIGISSUES">Big Issues</a>
     <ul>
     <li> <a href="#BIGISSUES-FED">
          Planned new emulabs; federated emulabs; wireless; ITR proposal
          </a>
     </ul>

<li> <a href="#OPER">Operational, hardware issues, and questions</a>
     <ul>
     <li> <a href="#OPER-POWEROUT">
	  Upcoming power outage
          </a>
     <li> <a href="#OPER-NEWPCS">
	  New PCs available; node hardware status; need the Sharks?
          </a>
     <li> <a href="#OPER-GBIT">
	  Gigabit short-term plans; your needs?
          </a>
     </ul>

<li> <a href="#OTHER">Other News</a>
     <ul>
     <li> <a href="#OTHER-TEST">
          "Test" emulab
          </a>
     <li> <a href="#OTHER-IXP">
	  IXP1200 "network processor" nodes
          </a>
     <li> <a href="#OTHER-VIS">
	  "ns" file visualization
          </a>
     <li> <a href="#OTHER-THANKS">
	  Thanks!
          </a>
     </ul>
</ul>
<hr>


<a NAME="BIGISSUES"></a>
<h3>Big Issues</h3>
<ul>
<li><a NAME="BIGISSUES-FED"></a>
    <h3>New emulabs; federated emulabs; wireless; ITR proposal</h3>

    <p>
    There are several sites that are on their way to building their own
    emulabs using our software.  If you might be interested too, let us
    know; we will help.  We are preparing a Web page describing the
    hardware requirements (minimum, recommended, and optimal), but don't
    wait for it.

    <p>
    <ul>
    <li>Kentucky - instructional, 40 nodes, node degree 4 + 1,
	Cisco switches.  Hardware in progress.
    <li>Duke - research, eventually 100+ existing nodes, some GigE,
	node degree 1 + 1 (probably), Cisco switch I think.
	They have our software.
    <li>CMU - instructional, 20 existing PCs with IXP1200 boards,
	degree 4 + 1.  Intel switches.  Can start any time.
    <li>Cornell - research, 150+ high-end PC nodes, GigE.  Awaiting HW
        funding.
    </ul>
    
    <p>
    Many more sites have said they will adopt our software, but their
    deployments are probably further in the future.  These include:

    <p>
    <ul>
    <li>HPLabs (two 1000+ node clusters)
    <li>MIT (Grid wireless testbed, RON testbed)
    <li>Princeton (IXP1200 nodes)
    <li>Stuttgart (existing experimental cluster)
    <li>and more that I forget or that are further out, including Intel
        and Berkeley
    </ul>

    <p>
    We have plans to federate these emulabs.  The ideas are to 1) be
    flexible enough to accommodate vast heterogeneity of site
    hardware/software (e.g., power control-p, serial console-p, replace
    OS-p, singleuser-p), administrative policies, and goals, and 2) give
    the local site primacy in everything.  Easier said than done, of
    course, but if successful, hundreds of sites will eventually join
    because it'll be so easy and useful.
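
    <p>
    To make that heterogeneity concrete, here is a minimal sketch--
    purely illustrative, with made-up names, not the actual federation
    design-- of the kind of per-site capability record those predicates
    suggest:

    <blockquote><pre>
# Illustrative sketch only (not Emulab's actual design): a per-site
# record of the capability predicates a federation layer must respect.
from dataclasses import dataclass

@dataclass
class SiteCapabilities:
    name: str
    power_control: bool   # power control-p: remote power cycling?
    serial_console: bool  # serial console-p: console access to nodes?
    replace_os: bool      # replace OS-p: may experimenters load an OS?
    single_user: bool     # singleuser-p: nodes dedicated to one user?

# A federation scheduler would match an experiment's requirements
# against each site's capabilities, with local policy always winning.
sites = [SiteCapabilities("utah", True, True, True, True),
         SiteCapabilities("example-site", False, False, True, True)]
power_capable = [s.name for s in sites if s.power_control]
    </pre></blockquote>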

    <p>
    We are going to develop support for wireless nodes by selecting, from
    a dense set of nodes, those that satisfy an experimenter's criteria.
    We will initially use low-end PCs, Sharks, and/or Berkeley motes,
    with the mote radios.  MIT will probably use their 802.11b Grid
    nodes.

    <p>
    We wrote an ITR proposal to do this right, with Brown (resource
    allocation and mapping), Duke (multiplex many virtual nodes onto one
    physical node when performance allows), MIT, and Mt. Holyoke.  ITRs
    are a crapshoot, but we'll be making steps along the way in any case.
    Contact me if you're interested in joining the effort in any way.  We
    could certainly use help, and we'll get funding one way or another--
    perhaps you could join that aspect too.
</ul>

<hr>

<a NAME="OPER"></a>
<h3>Operational, hardware issues, and questions</h3>
<ul>
<li><a NAME="OPER-POWEROUT"></a>
    <h3>Upcoming power outage</h3>

    <p>
    There will be a one-day power outage sometime in December, probably
    after the 14th.  This is for testing the gaseous fire suppression
    system in the remodeled machine room.  It might be on a weekday.  If
    there are days when you cannot afford to have emulab down, let us know.

<p>
<li><a NAME="OPER-NEWPCS"></a>
    <h3>New PCs available; node hardware status; need the Sharks?</h3>

    <p>
    Just before Thanksgiving, Rob got the last 13 of the 128 new
    PC850s integrated into emulab and available for use.  The holdup was
    a new rack for the 3rd "backplane" switch, which we had to have
    because of excess wire density in the "middle" of the emulab racks,
    plus software mods to support three-switch VLANs and trunking.  We'd
    already done the "burn-in" on those machines, which turned up some
    hardware problems.

    <p>
    Current hardware status:
    <br>
    PCs:
    <blockquote>
        168 theoretically available
	<ul>
	  <li>128 PC850s
	  <li>40 PC600s
	</ul>

	<p>
	162 actually available today
	<ul>
	  <li>5 PC850s with hardware or cable problems; most sent back
	      for replacement
	  <li>1 PC600 with a cable/interface problem
	</ul>

	<p>
	The PC850s are proving less reliable than the PC600s, and their
        BIOS takes much longer to boot.  We might be able to get info
        from Intel to improve the latter.

        <p>
        (The extra 5 PCs, for a total of 173, that you see when you do
        "Node Reservation Status" are 2 we are testing for Kentucky, who
	are building their own emulab, and 3 laptops that are not
	currently available for real experiments.)
    </blockquote>

    Sharks:
    <blockquote>
        Can you use 160 Sharks?

	<p>
	The machine room expansion is still not complete, nor has
	emulab moved to its final location (yes, it is possible--
	barely-- to move 10 connected racks weighing over 3 tons).
	That won't be for 2-3 months.  We are hoping to connect the
	Sharks temporarily before that, but other things have taken
	priority.  If someone can really put those Sharks to use, do
	let us know and we'll adjust priorities.  There will
	definitely be bit-rot to fix, and my crew hates the Sharks for
	their idiosyncrasies and unreliability.  But there are a lot
	of them.
    </blockquote>

<p>
<li><a NAME="OPER-GBIT"></a>
    <h3>Gigabit short-term plans; your needs?</h3>

    <p>
    We have a huge new 6513 switch, courtesy of Cisco, with some Gigabit
    blades due to show up soon.  The thought-- not yet a "plan"-- is to
    put single gigabit interfaces into, say, 32 of the PC600s, leaving
    them each with 3 10/100Mb and 1 10/100/1000Mb interfaces.

    <p>
    The PC850s are slot-poor 1U boxes, and we'd lose two 100Mb interfaces
    if we put GigE into them.

    <p>
    Note that all of our PCs have only 32-bit/33MHz PCI slots, so you're
    not going to get full GigE bandwidth.
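
    <p>
    (Back-of-the-envelope: a 32-bit, 33MHz PCI bus tops out at
    32 bits x 33MHz = ~132MB/s, or about 1056Mb/s, shared by every
    device on the bus and considerably less in practice, so a single
    GigE interface cannot sustain line rate, let alone full duplex.)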

    <p>
    The eventual plan, when we get some more hardware dollars or a
    donation of PCs with fast PCI busses, is to move the GigE interfaces
    to those machines, perhaps with 2 GigE interfaces on each.

    <p>
    Please send comments to "testbed@fast.cs.utah.edu" about your needs
    and interests in the GigE department.  Important to you?  Useful even
    on PC600s?  How many GigE interfaces per node on future PCs?
            
</ul>
<hr>

<a name="OTHER"></a>
<h3>Other News</h3>
<ul>
<li><a NAME="OTHER-TEST"></a>
    <h3>
    "Test" emulab
    </h3>

    <p>
    Out of spare and purchased parts, we're near finishing a small "test
    emulab"-- 8 nodes, 2 servers, Intel switches-- that will largely
    mimic our production hardware.  This will improve testing and speed
    up emulab feature releases.  We already have an automatic regression
    testing system, as well as a test harness that steals actual free
    nodes from the live database, then works with a clone of the
    database.  However, without an entirely separate hardware artifact we
    can't faithfully test everything.

<p>
<li><a NAME="OTHER-IXP"></a>
    <h3>
    IXP1200 "network processor" nodes
    </h3>

    <p>
    Abhijeet Joglekar, one of our students, has done major work on Intel
    IXP1200 network processor nodes.  We have a dual goal: use them as
    high-capacity "delay nodes," and provide them to experimenters to
    customize as routers.  He's been working from the Princeton code base
    [SOSP'01].  These nodes should be available around February; in the
    pipeline from Intel are 20 four-port IXP1200 cards.

<p>
<li><a NAME="OTHER-VIS"></a>
    <h3>
    "ns" file visualization
    </h3>

    <p>
    We're about to install an afternoon's simple hack: Web-based
    visualization of ns files, using graphviz.  This is a really simple
    thing, but should be quite handy for users.
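
    <p>
    The whole idea fits in a few lines.  Here is a minimal sketch-- my
    own illustration, not the code we are installing-- that pulls
    duplex-link declarations out of an ns file and emits a graphviz
    "dot" graph:

    <blockquote><pre>
# Illustrative sketch, not the installed tool: extract "duplex-link"
# declarations from an ns file and emit a graphviz "dot" graph.
import re
import sys

# Matches e.g.:  $ns duplex-link $nodeA $nodeB 100Mb 10ms DropTail
LINK_RE = re.compile(r'duplex-link\s+\$(\w+)\s+\$(\w+)\s+(\S+)\s+(\S+)')

def ns_to_dot(ns_text):
    lines = ['graph topology {']
    for src, dst, bw, delay in LINK_RE.findall(ns_text):
        lines.append('  %s -- %s [label="%s, %s"];' % (src, dst, bw, delay))
    lines.append('}')
    return '\n'.join(lines)

if __name__ == '__main__':
    # Render with graphviz:  cat exp.ns | python ns2dot.py | dot -Tgif
    print(ns_to_dot(sys.stdin.read()))
    </pre></blockquote>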

<p>
<li><a NAME="OTHER-THANKS"></a>
    <h3>
    Thanks!
    </h3>

    <p>
    I want to thank my great staff and students who are the real reason
    that emulab works so well.  They are so good that Leigh Stoller was on
    vacation for 2 weeks spanning Thanksgiving, and I am not sure that any
    users noticed.  Besides Leigh, people of special note include Rob
    Ricci, Mike Hibler, and Mac Newbold.  I also want to thank you users
    who wrote support letters for our ITR proposal, and for your patience
    when things go wrong.
</ul>

<i>Jay Lepreau, University of Utah</i>

</body>
</html>