There are several sites that are on their way to building their own emulabs using our software. If you might be interested too, let us know; we will help. We are preparing a Web page describing the hardware requirements: minimum, recommended, and optimal, but don't wait for it before contacting us.
There are many more sites that have said they will adopt our software, though their plans are probably further in the future. These include:
We have plans to federate these emulabs. The ideas are to 1) be flexible enough to accommodate vast heterogeneity in site hardware/software (e.g., power control-p, serial console-p, replace OS-p, singleuser-p), administrative policies, and goals, and 2) give the local site primacy in everything. Easier said than done, of course, but if we succeed, hundreds of sites will eventually join because it'll be so easy and useful.
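As a concrete illustration of those capability predicates, a federated site might advertise something like the following. This is a Python sketch with made-up names, not a design commitment or actual emulab code:

    # Sketch only: per-site capability record for federation.
    # Field names mirror the predicates above; nothing here is real code.
    from dataclasses import dataclass

    @dataclass
    class SiteCapabilities:
        power_control: bool    # can nodes be remotely power-cycled?
        serial_console: bool   # do nodes have serial console access?
        replace_os: bool       # may experimenters load their own OS?
        single_user: bool      # one user at a time only?

    def reload_strategy(site):
        """Pick an OS-reload approach the site can actually support."""
        if site.replace_os and site.power_control:
            return "full disk reload; recover wedged nodes by power cycle"
        if site.replace_os:
            return "reload from the running OS; no hard recovery"
        return "preinstalled OS only"

    print(reload_strategy(SiteCapabilities(True, True, True, False)))

The point is that the core software would consult such a record and degrade gracefully, rather than assume every site looks like Utah's.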
We are going to develop support for wireless nodes by selecting, from a dense set of nodes, those that satisfy an experimenter's criteria. We will initially be using low-end PCs, Sharks, and/or Berkeley motes, using the mote radios. MIT will probably use their 802.11b Grid nodes.
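To give a flavor of what "selecting from a dense set" might mean, here is a toy sketch; the attribute names are invented, and the real criteria are still an open question:

    # Toy sketch: choose wireless nodes meeting an experimenter's
    # criteria from a dense deployment.  Attribute names are invented.
    def select_nodes(nodes, radio, min_neighbors, count):
        """nodes: list of dicts with 'radio' and 'neighbors' keys."""
        eligible = [n for n in nodes
                    if n["radio"] == radio
                    and len(n["neighbors"]) >= min_neighbors]
        if len(eligible) < count:
            raise ValueError("not enough nodes satisfy the criteria")
        return eligible[:count]

    picked = select_nodes(
        [{"radio": "mote", "neighbors": ["n2", "n3"]},
         {"radio": "mote", "neighbors": ["n1"]},
         {"radio": "802.11b", "neighbors": ["n1", "n2"]}],
        radio="mote", min_neighbors=1, count=2)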
We wrote an ITR proposal to do this right, with Brown (resource allocation and mapping), Duke (multiplex many virtual nodes onto one physical node when performance allows), MIT, and Mt. Holyoke. ITRs are a crapshoot, but we'll be making steps along the way in any case. Contact me if you're interested in joining the effort in any way. We could certainly use help, and we'll get funding one way or another-- perhaps you could join that aspect too.
There will be a one-day power outage sometime in December, probably after the 14th. This is for testing the gaseous fire suppression system in the remodeled machine room. It might be on a weekday. If there are days when you cannot afford to have emulab down, let us know.
Just before Thanksgiving, Rob got the last 13 of the 128 new PC850s integrated into emulab and available for use. The holdups were a new rack for the 3rd "backplane" switch, which we needed because of excess wire density in the "middle" of the emulab racks, and software mods to support three-switch VLANs and trunking. We'd already done the "burn in" on those machines, turning up a few hardware problems.
Current hardware status:
PCs:
168 theoretically available
- 128 PC850s
- 40 PC600s
162 actually available today
- 5 PC850s with hardware or cable problems; most sent back for replacement
- 1 PC600 with a cable/interface problem
The PC850s are proving less reliable than the PC600s, and their BIOS takes much longer to boot. We might be able to get info from Intel to improve the latter.
(The extra 5 PCs that you see when you do "Node Reservation Status," bringing the total to 173, include 2 we are testing for Kentucky, who are building their own emulab, and 3 laptops that are not currently available for real experiments.)
Can you use 160 Sharks?

The machine room expansion is still not complete, nor has emulab moved to its final location (yes, it is possible-- barely-- to move 10 connected racks weighing over 3 tons). That won't happen for 2-3 months. We are hoping to connect the Sharks temporarily before then, but other things have taken priority. If someone can really put those Sharks to use, do let us know and we'll adjust priorities. There will definitely be bit-rot to fix, and my crew hates the Sharks for their idiosyncrasies and unreliability. But there are a lot of them.
We have a huge new 6513 switch, courtesy of Cisco, with some Gigabit blades due to show up soon. The thought-- not yet a "plan"-- is to put single Gigabit interfaces into, say, 32 of the PC600s, leaving each of them with 3 10/100Mb interfaces and 1 10/100/1000Mb interface.
The PC850s are slot-poor 1U boxes, and we'd lose two 100Mb interfaces if we put GigE into them.
Note that all of our PCs have only 32-bit/33MHz PCI slots, so you're not going to get full GigE bandwidth: such a bus tops out at roughly 132MB/s (about 1Gb/s) of theoretical bandwidth, shared by every device on it, so a GigE interface can't sustain line rate, let alone full duplex.
The eventual plan, when we get more hardware dollars or a donation of PCs with fast PCI busses, is to move the GigE to them, perhaps with 2 GigE interfaces on each.
Please send comments to "testbed@fast.cs.utah.edu" about your needs and interests in the GigE department. Is GigE important to you? Would it be useful even on the PC600s? How many GigE interfaces per node would you want in the future PCs?
Out of spare and purchased parts we're close to finishing a small "test emulab": 8 nodes, 2 servers, and Intel switches, which will largely mimic our production hardware. This will improve testing and speed up emulab feature releases. We already have an automatic regression testing system, as well as a test harness that steals actual free nodes from the live database, then works with a clone of the database. However, without an entirely separate hardware artifact we can't faithfully test everything.
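To make the harness idea concrete, here is a minimal sketch of the flow, assuming hypothetical helper names, a MySQL database, and an assumed schema; this is illustrative only, not our actual harness code:

    # Illustrative sketch only; database and table names are
    # assumptions, not actual emulab code.
    import subprocess

    def snapshot_db(live="tbdb", clone="tbdb_test"):
        """Copy the live database into a scratch clone via mysqldump."""
        dump = subprocess.run(["mysqldump", live], check=True,
                              capture_output=True).stdout
        subprocess.run(["mysql", clone], input=dump, check=True)

    def free_nodes(db="tbdb"):
        """List nodes not reserved by any experiment (assumed schema)."""
        q = ("SELECT node_id FROM nodes WHERE node_id NOT IN "
             "(SELECT node_id FROM reserved)")
        out = subprocess.run(["mysql", "-N", "-e", q, db], check=True,
                             capture_output=True, text=True).stdout
        return out.split()

    # Borrow currently free nodes, but point all test activity at the
    # clone so the live database is never modified.
    snapshot_db()
    print(free_nodes())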
Abhijeet Joglekar, one of our students, has done major work on Intel IXP1200 network processor nodes. We have a dual goal: use them as high-capacity "delay nodes," and provide them to experimenters to customize as routers. He's been working from the Princeton code base [SOSP'01]. These nodes should be available around February; in the pipeline from Intel are 20 4-port IXP1200 cards.
We're about to install an afternoon's simple hack: Web-based visualization of ns files, using graphviz. This is a really simple thing, but should be quite handy for users.
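The whole hack is little more than extracting the topology and handing it to graphviz. Here's a rough Python sketch of the idea (not our actual code) that pulls the duplex-link lines out of an ns file and emits dot input:

    # Rough sketch (not our actual code): turn an ns file's duplex-link
    # lines into graphviz dot input.
    import re, sys

    # Matches lines like: $ns duplex-link $n0 $n1 100Mb 10ms DropTail
    LINK = re.compile(r'duplex-link\s+\$(\w+)\s+\$(\w+)\s+(\S+)\s+(\S+)')

    print("graph topology {")
    for line in open(sys.argv[1]):
        m = LINK.search(line)
        if m:
            a, b, bw, delay = m.groups()
            print('  %s -- %s [label="%s/%s"];' % (a, b, bw, delay))
    print("}")

Pipe the output through "dot -Tpng" and you have your picture.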
I want to thank my great staff and students who are the real reason that emulab works so well. They are so good that Leigh Stoller was on vacation for 2 weeks spanning Thanksgiving, and I am not sure that any users noticed. Besides Leigh, people of special note include Rob Ricci, Mike Hibler, and Mac Newbold. I also want to thank you users who wrote support letters for our ITR proposal, and for your patience when things go wrong.