Commit 036f44fe authored by Leigh B. Stoller

New News page from Jay.

parent 5dd465b0
@@ -59,7 +59,7 @@ function WRITESIDEBAR() {
echo "<table cellspacing=2 cellpadding=2 border=0 width=150>\n";
WRITESIDEBARBUTTON("Home", $TBDOCBASE, "index.php3");
WRITESIDEBARBUTTON("News (none)", $TBDOCBASE,
WRITESIDEBARBUTTON("News (new Nov 29)", $TBDOCBASE,
"docwrapper.php3?docname=news.html");
WRITESIDEBARBUTTON("Publications", $TBDOCBASE, "pubs.php3");
WRITESIDEBARBUTTON("Documentation", $TBDOCBASE, "doc.php3");
@@ -12,5 +12,272 @@
</h1>
</center>
No Notable New News at this time ...
<h2>Contents</h2>
<ul>
<li> <a href="#BIGISSUES">Big Issues</a>
<ul>
<li> <a href="#BIGISSUES-FED">
Planned new emulabs; federated emulabs; wireless; ITR proposal
</a>
</ul>
<li> <a href="#OPER">Operational, hardware issues, and questions</a>
<ul>
<li> <a href="#OPER-POWEROUT">
Upcoming power outage
</a>
<li> <a href="#OPER-NEWPCS">
New PCs available; node hardware status; need the Sharks?
</a>
<li> <a href="#OPER-GBIT">
Gigabit short-term plans; your needs?
</a>
</ul>
<li> <a href="#OTHER">Other News</a>
<ul>
<li> <a href="#OTHER-TEST">
"Test" emulab
</a>
<li> <a href="#OTHER-IXP">
IXP1200 "network processor" nodes
</a>
<li> <a href="#OTHER-VIS">
"ns" file visualization
</a>
<li> <a href="#OTHER-THANKS">
Thanks!
</a>
</ul>
</ul>
<hr>
<a NAME="BIGISSUES"></a>
<h3>Big Issues</h3>
<ul>
<li><a NAME="BIGISSUES-FED"></a>
<h3>Planned new emulabs; federated emulabs; wireless; ITR proposal</h3>
<p>
There are several sites that are on their way to building their own
emulabs using our software. If you might be interested too, let us
know; we will help. We are preparing a Web page describing the
hardware requirements: minimum, recommended, and optimal, but don't wait.
<p>
<ul>
<li>Kentucky - instructional, 40 node, nodes degree 4 + 1,
Cisco switches. Hardware in progress.
<li>Duke - research, eventually 100+ existing nodes, some GigE,
nodes degree 1 + 1 (probably), Cisco switch I think.
Have our software.
<li>CMU - instructional, 20 existing PCs with IXP1200 boards,
degree 4 + 1. Intel switches. Start any time.
<li>Cornell - research, 150+ node hi-end PCs, GigE. Awaiting HW
funding.
</ul>
<p>
There are many more sites that have said they will adopt our
software, but those are probably further in the future. These include:
<p>
<ul>
<li>HPLabs (2 1000+ node clusters)
<li>MIT (Grid wireless testbed, RON testbed)
<li>Princeton (IXP1200 nodes)
<li>Stuttgart (existing experimental cluster)
<li>and more that I forget or that are further out, including
    Intel and Berkeley.
</ul>
<p>
We have plans to federate these emulabs. The ideas are to 1) be
flexible enough to accommodate vast heterogeneity of site
hardware/software (e.g., power control-p, serial console-p, replace
OS-p, singleuser-p), administrative policies, and goals, and 2) give
the local site primacy in everything. Easier said than done, of
course, but if successful, hundreds of sites will eventually join
because it'll be so easy and useful.
<p>
We are going to develop support for wireless nodes by selecting, from
a dense set of nodes, those that satisfy experimenters' criteria. We
will initially use low-end PCs, Sharks, and/or Berkeley motes, with
the mote radios. MIT will probably use their 802.11b Grid
nodes.
<p>
We wrote an ITR proposal to do this right, with Brown (resource
allocation and mapping), Duke (multiplex many virtual nodes onto one
physical node when performance allows), MIT, and Mt. Holyoke. ITRs
are a crapshoot, but we'll be making steps along the way in any case.
Contact me if you're interested in joining the effort in any way. We
could certainly use help, and we'll get funding one way or another--
perhaps you could join that aspect too.
</ul>
<hr>
<a NAME="OPER"></a>
<h3>Operational, hardware issues, and questions</h3>
<ul>
<li><a NAME="OPER-POWEROUT"></a>
<h3>Upcoming power outage</h3>
<p>
There will be a one-day power outage sometime in December, probably
after the 14th. This is for testing the gaseous fire suppression
system in the remodeled machine room. It might be on a weekday. If
there are days you cannot afford having emulab down, let us know.
<p>
<li><a NAME="OPER-NEWPCS"></a>
<h3>New PCs available; node hardware status; need the Sharks?</h3>
<p>
Just before Thanksgiving, Rob got the last 13 of the 128 new
PC850's integrated into emulab and available for use. The holdup was
a new rack for the 3rd "backplane" switch, which we had to have
because of excess wire density in the "middle" of the emulab racks,
and software mods to support three-switch VLANs and trunking. We'd
already done the "burn in" on those machines, finding hardware problems.
<p>
Current hardware status:
<br>
PCs:
<blockquote>
168 theoretically available
<ul>
<li>128 PC850s
<li>40 PC600s
</ul>
<p>
162 actually available today
<ul>
<li>5 PC850s hardware or cable problems; most sent back for
replacement
<li>1 PC600 cable/interface problem
</ul>
<p>
The PC850s are proving less reliable than the PC600s, and their BIOS
takes much longer to boot. We might be able to get info from Intel
to improve the latter.
<p>
(The 5 extra PCs, bringing the total to 173, that you see when you do
"Node Reservation Status" include 2 we are testing for Kentucky, who
are building their own emulab, and 3 laptops that are not currently
available for real experiments.)
</blockquote>
Sharks:
<blockquote>
Can you use 160 Sharks?
<p>
The machine room expansion is still not complete, nor has
emulab moved to its final location (yes, it is possible--
barely-- to move 10 connected racks weighing over 3 tons).
That won't be for 2-3 months. We are hoping to connect the
Sharks temporarily before that, but other things have taken
priority. If someone can really put those Sharks to use, do
let us know and we'll adjust priorities. There will
definitely be bit-rot to fix, and my crew hate the Sharks for
their idiosyncrasies and unreliability. But, there are a lot
of them.
</blockquote>
<p>
<li><a NAME="OPER-POWEROUT"></a>
<h3>Gigabit short-term plans; your needs?</h3>
<p>
We have a huge new 6513 switch, courtesy of Cisco, with some Gigabit
blades due to show up soon. The thought-- not yet "plan"-- is to put
single gigabit interfaces into, say, 32 of the PC600s, leaving them
each with 3 10/100Mb and 1 10/100/1000Mb interfaces.
<p>
The PC850s are slot-poor 1U boxes and we'd lose two 100Mb interfaces
if we put GigE into them.
<p>
Note that all of our PCs have only 32-bit/33MHz PCI slots, so you're
not going to get full GigE bandwidth.
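<p>
(Back-of-the-envelope arithmetic: a 32-bit, 33MHz PCI bus peaks at
32 bits x 33MHz = 1056Mb/s of theoretical bandwidth, shared by every
device on the bus and never fully achievable in practice, while a
full-duplex GigE interface can demand up to 2000Mb/s.)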
<p>
Eventually, when we get some more hardware dollars or a donation of
PCs with fast PCI busses, the plan is to move the GigE interfaces to
them, perhaps with 2 GigE interfaces on each.
<p>
Please send comments to "testbed@fast.cs.utah.edu" as to your needs
and interests in the GigE department. Important to you? Useful even
on PC600s? Number of GigE interfaces per node in the future PCs?
</ul>
<hr>
<a name="#OTHER"></a>
<h3>Other News</h3>
<ul>
<li><a NAME="OTHER-TEST"></a>
<h3>
"Test" emulab
</h3>
<p>
Out of spare and purchased parts we're near finishing a small "test
emulab"-- 8 nodes, 2 servers, Intel switches-- that will largely mimic
our production hardware. This will improve testing and speed up
emulab feature release. We already have an automatic regression
testing system, as well as a test harness that steals actual free
nodes from the live database, then works with a clone of the
database. However, without an entirely separate hardware artifact we
can't faithfully test everything.
<p>
<li><a NAME="OTHER-IXP"></a>
<h3>
IXP1200 "network processor" nodes
</h3>
<p>
Abhijeet Joglekar, one of our students, has done major work on Intel
IXP1200 network processor nodes. We have a dual goal: use them as
high-capacity "delay nodes," and provide them to experimenters to
customize as routers. He's been working from the Princeton code base
[SOSP'01]. These nodes should be available around February; in the
pipeline from Intel are 20 4-port IXP1200 cards.
<p>
<li><a NAME="OTHER-VIS"></a>
<h3>
"ns" file visualization
</h3>
<p>
We're about to install an afternoon's simple hack: Web-based
visualization of ns files, using graphviz. This is a really simple
thing, but should be quite handy for users.
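<p>
For the curious, here is the flavor of input involved-- a minimal
(hypothetical) ns file for a two-node topology; the visualizer would
presumably draw the two nodes with an edge for the link:
<blockquote>
<pre>
# Minimal Emulab-style ns file: two nodes joined by one 100Mb link.
set ns [new Simulator]
source tb_compat.tcl

set nodeA [$ns node]
set nodeB [$ns node]

# duplex-link args: endpoints, bandwidth, delay, queue type.
set link0 [$ns duplex-link $nodeA $nodeB 100Mb 10ms DropTail]

$ns run
</pre>
</blockquote>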
<p>
<li><a NAME="OTHER-TEST"></a>
<h3>
Thanks!
</h3>
<p>
I want to thank my great staff and students who are the real reason
that emulab works so well. They are so good that Leigh Stoller was on
vacation for 2 weeks spanning Thanksgiving, and I am not sure that any
users noticed. Besides Leigh, people of special note include Rob
Ricci, Mike Hibler, and Mac Newbold. I also want to thank you users
who wrote support letters for our ITR proposal, and for your patience
when things go wrong.
</ul>