Hardware Overview, "Emulab Classic"
Test Nodes
- 128 pc850 PC nodes (pc41-pc168), consisting of:
  - 850MHz Intel Pentium III processors.
  - Based on the Intel ISP1100 1U server platform (old reliable BX chipset).
  - 512MB PC133 ECC SDRAM.
  - 5 Intel EtherExpress Pro 10/100Mbps Ethernet ports:
    - 2 built into the motherboard (eth2/eth3 in Linux, fxp0/fxp1 in FreeBSD)
    - 2 on an Intel EtherExpress Pro 100+ Dual-Port Server Adapter (eth0/eth1 in Linux, fxp2/fxp3 in FreeBSD)
    - 1 on a single-port Intel EtherExpress Pro/100B Adapter (eth4 in Linux, fxp4 in FreeBSD)
  - 40GB IBM 60GXP 7200RPM ATA/100 IDE hard drive.
  - Floppy drive.
- 40 pc600 PC nodes (pc1-pc40), consisting of:
  - 600MHz Intel Pentium III "Coppermine" processors.
  - Asus P3B-F (6 PCI/1 ISA slot) motherboard (old reliable BX chipset).
  - 256MB PC100 ECC SDRAM.
  - 5 Intel EtherExpress Pro/100B 10/100Mbps Ethernet cards.
  - 13GB IBM 34GXP DPTA-371360 7200RPM IDE hard drive.
  - Floppy drive.
  - Cheap video card (Jaton Riva 128ZX AGP w/4MB video RAM).
  - All in a nice but overweight rackmount case on rails: Antec IPC3480B, with 300W PS and extra fan.
- (Currently unavailable) 160 diskless Compaq DNARD "Shark" edge nodes (sh[1-20]-[1-8]), consisting of:
  - 233MHz StrongARM processors.
  - 32MB RAM.
  - 1 10Mbps Ethernet interface.
Servers
- a users, file, and serial line server (users.emulab.net), consisting of:
  - Dual 500MHz Intel Pentium III processors
  - Intel L440GX+ motherboard (the GX+ chipset)
  - 512MB PC100 ECC SDRAM
  - 90GB of disk space: 5 Quantum Atlas IV 18GB 7200RPM Wide LVD SCSI hard drives
  - 3 64-port Cyclades-Ze PCI Multiport Serial Boards (model number SEZ0050).
- a DB, web, DNS, and operations server, consisting of:
  - Dell PowerEdge 2550 rack-mount server
  - Single 1000MHz Intel Pentium III processor
  - 512MB PC133 ECC SDRAM
  - Dual-channel on-board RAID (5) controller with 128MB cache (2 internal channels)
  - Five 18GB Ultra3 (Ultra160) SCSI 10K RPM hot-plug hard drives
  - Dual redundant 330W power supplies
  - Integrated Broadcom Gigabit BaseT and Intel Pro/100+ NICs
- a second serial line server, for critical Emulab machines, consisting of:
  - An ISP1100 box, like the "pc850" nodes above.
  - A 4-port serial card.
Switches and Routers
- 4 Cisco 6509 high-end switches. Three function as the testbed backplane
  ("programmable patch panel"); each is filled with a Network Analysis Module
  and seven 48-port 10/100 Ethernet modules, giving it 336 100Mbps Ethernet
  ports. The three backplane switches are linked with 2Gbit interfaces.
  The final 6509 contains an MSFC router card and functions as the core
  router for the testbed, providing "control" interfaces for the test nodes
  as well as regulating access to the testbed servers and the outside world.
  This switch is configured with full router software, Gigabit Ethernet,
  OC-12 ATM (~600Mbps), and more 10/100 Ethernet ports.
Power Controllers
- 10 APC MasterSwitch AP9210 8-port power controllers. (The AP9210 is
  discontinued; replaced in the product line by the AP9211.)
- 7 BayTech RPC27 20-port remote power controllers.
Racks
- 13 Wrightline "Tech I" racks: 44U, 34" deep; 2 are 24" wide with cable
  management, the rest are 19" wide. (Aug 2001: these appear to have been
  discontinued or renamed.)
Layout
Four ethernet ports on each PC node are connected to the
testbed backplane.
All 672 ports can be connected in arbitrary ways by setting up VLANs
on the switches via remote configuration tools.
Cisco 6500 switch backplane bandwidth is supposed to be near 50Gb/s,
though between the testbed backplane switches,
bandwidth is currently limited to 2Gb/s.
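As a rough sanity check on those numbers, here is a purely illustrative
Python sketch; the node and port counts are simply the ones listed above,
and the variable names are made up for this sketch.

    # Back-of-the-envelope check of the figures quoted above;
    # this does not talk to any real hardware.
    NODE_COUNTS = {"pc850": 128, "pc600": 40}
    EXP_PORTS_PER_NODE = 4    # four of each node's five ports go to the backplane
    PORT_MBPS = 100           # 10/100 ports, run at 100Mbps

    total_exp_ports = sum(NODE_COUNTS.values()) * EXP_PORTS_PER_NODE
    print(total_exp_ports)                              # 672 experimental ports

    # Worst-case offered load if every experimental port sends at line rate:
    print(total_exp_ports * PORT_MBPS / 1000, "Gb/s")   # 67.2 Gb/s

The point of the comparison is that traffic staying within a single
backplane switch sees the ~50Gb/s switch backplane, while traffic that must
cross between backplane switches shares only the 2Gb/s inter-switch links.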
The fifth ethernet port on each PC is connected to the
core router.
Thus each PC has a full duplex 100Mbps connection to the servers.
These connections are for dumping
data off of the nodes and such, without interfering with
the experimental interfaces. The only impact on the node is
processor and disk use, and bandwidth on the PCI bus.
The DNARD Sharks are arranged in "shelves." A shelf holds 8 sharks,
each of which is connected by a 10Mbps link to an
8+2 10/100 ethernet switch from Asante. The Asante switches
are connected via a 100Mbps link to the testbed backplane.
Thus each shelf of 8 sharks
is capable of generating up to 80Mbps to the backplane.
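The same sort of illustrative Python sketch for a Shark shelf, again using
only the figures above:

    SHARKS_PER_SHELF = 8
    SHARK_LINK_MBPS = 10
    SHELF_UPLINK_MBPS = 100

    shelf_peak_mbps = SHARKS_PER_SHELF * SHARK_LINK_MBPS   # 80Mbps per shelf
    # The shelf's single 100Mbps uplink is therefore never the bottleneck:
    assert shelf_peak_mbps <= SHELF_UPLINK_MBPS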