Commit 69c387bc authored by Mike Hibler

Jay/Mike updates. General guide:

if its "spin", it was Jay,
if its boring, tedious detail, it was Mike
parent 6f81d5d9
@@ -198,8 +198,8 @@
<p>
Yes. Each of the PCs has its own serial console line that you can
interact with using the unix <tt>tip</tt> utility. To "tip" to
"pc01" in your experiment, ssh into <b>users.emulab.net</b>, and
then type <tt>tip pc01</tt> at the unix prompt. You may then
"pc1" in your experiment, ssh into <b>users.emulab.net</b>, and
then type <tt>tip pc1</tt> at the unix prompt. You may then
interact with the serial console.
</p>
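<p>
For example (<tt>myuser</tt> below is just a placeholder for your own
Emulab login name, and <tt>pc1</tt> for a node in your experiment):
<pre>
    ssh myuser@users.emulab.net   # log into the serial line server
    tip pc1                       # attach to pc1's serial console
    # ... interact with the console; type "~." at the start of a line to disconnect
</pre>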
<p>
@@ -14,7 +14,8 @@
<h3>Test Nodes</h3>
<ul>
<li>40 PC nodes (<b>pc1-40</b>), consisting of:
<a name="tbpc600"></a>
<li>40 "pc600" PC nodes (<b>pc1-40</b>), consisting of:
<ul>
<li> 600MHz Intel Pentium III "Coppermine" processors.
@@ -23,7 +24,10 @@
href="http://www.asus.com/Products/Motherboard/Pentiumpro/P3b-f/index.html">
Asus P3B-F (6 PCI/1 ISA slot)</a> motherboard (old reliable BX chipset).
<li> 256MB PC100 ECC SDRAM.
<li> 5 Intel EtherExpress Pro/100B 10/100Mbps Ethernet cards.
<li> 5
<a
href="http://support.intel.com/support/network/adapter/pro100/index.htm">
Intel EtherExpress Pro/100B</a> 10/100Mbps Ethernet cards.
<li> <a href="http://www.storage.ibm.com/hardsoft/diskdrdl/desk/ds34gxp.htm">
13GB IBM 34GXP DPTA-371360 7200RPM IDE</a> hard drive.
<li> Floppy drive
@@ -34,7 +38,8 @@ Antec IPC3480B</a>, with 300W PS and extra fan.
</ul>
<p>
<li>128 PC nodes (<b>pc41-pc168</b>), consisting of:
<a name="tbpc850"></a>
<li>128 "pc850" PC nodes (<b>pc41-pc168</b>), consisting of:
<ul>
<li> 850MHz Intel Pentium III processors.
@@ -43,7 +48,10 @@ Antec IPC3480B</a>, with 300W PS and extra fan.
href="http://www.intel.com/network/products/isp1100.htm">
Intel ISP1100 </a> 1U server platform (old reliable BX chipset).
<li> 512MB PC133 ECC SDRAM.
<li> 5 Intel EtherExpress Pro 10/100Mbps Ethernet ports:
<li> 5
<a
href="http://support.intel.com/support/network/adapter/pro100/index.htm">
Intel EtherExpress Pro</a> 10/100Mbps Ethernet ports:
<ul>
<li>2 builtin on the motherboard
(<code>eth2/eth3</code> in Linux, <code>fxp0/fxp1</code> in FreeBSD)
@@ -61,6 +69,7 @@ Intel ISP1100 </a> 1U server platform (old reliable BX chipset).
<p>
<a name="tbshark"></a>
<li>160 diskless <a href = "http://www.research.digital.com/SRC/iag/">
Compaq DNARD "Sharks"</a> edge nodes (<b>sh[1-20]-[1-8]</b>),
consisting of:
@@ -79,7 +88,7 @@ Intel ISP1100 </a> 1U server platform (old reliable BX chipset).
<li> a users, file, and serial line server (<b>users.emulab.net</b>), consisting of:
<ul>
<li> Dual 500MHz Intel Pentium III cpus
<li> Dual 500MHz Intel Pentium III processors
<li> <a href="http://developer.intel.com/design/servers/l440gx/index.htm">
Intel L440GX+</a> motherboard (the GX+ chipset)
<li> 512MB PC100 ECC SDRAM
@@ -92,11 +101,13 @@ Cyclades-Ze PCI Multiport Serial Boards</a> (model number SEZ0050).
</ul>
<p>
<li> a DB, web, and operations server, consisting of:
<li> a DB, web, DNS and operations server, consisting of:
<ul>
<li> Dell PowerEdge 2550 Rack Mount Server
<li> Single 1000MHz Intel Pentium III cpu
<li> <a
href="http://www.dell.com/us/en/hied/products/model_pedge_pedge_2550.htm">
Dell PowerEdge 2550</a> Rack Mount Server
<li> Single 1000MHz Intel Pentium III processor
<li> 512MB PC133 ECC SDRAM
<li> Dual-Channel On-board RAID (5) Controller 128MB Cache (2-Int Channels)
<li> Five 18GB Ultra3 (Ultra160) SCSI 10K RPM Hot Plug Hard Drives
@@ -125,18 +136,24 @@ Cyclades-Ze PCI Multiport Serial Boards</a> (model number SEZ0050).
<h3>Switches and Routers</h3>
<ul>
<li> 3 <a href = "http://www.cisco.com/warp/public/cc/pd/si/casi/ca6000/prodlit/c6000_ds.htm">
<li> 4 <a href = "http://www.cisco.com/warp/public/cc/pd/si/casi/ca6000/prodlit/c6000_ds.htm">
Cisco 6509 high-end switches</a>.
Two function as the "testbed backplane" ("programmable patch panel"),
Two (soon to be three) function as the
<a name="tbbackplane"></a>
<em>testbed backplane</em> ("programmable patch panel"),
each filled with a
<a href = "http://www.cisco.com/warp/public/cc/pd/si/casi/ca6000/prodlit/6nam_ds.htm">
Network Analysis Module</a>
and seven 48-port 10/100 ethernet modules,
giving 336 100Mbps ethernet ports on each. They are linked with 2 Gbit interfaces.
The third 6509 contains an MSFC router card and functions as the core router for the testbed.
It is configured with full router software,
The final 6509 contains an MSFC router card and functions as the
<a name="tbcorerouter"></a>
<em>core router</em> for the testbed,
providing "control" interfaces for the test nodes as well as
regulating access to the testbed servers and the outside world.
This switch is configured with full router software,
Gigabit ethernet, OC-12 ATM (~600Mbps), and more
10/100 Ethernet ports. A fourth 6509 provides expansion room.
10/100 Ethernet ports.
<!-- Another, but no pix: http://www.cisco.com/univercd/cc/td/doc/pcat/ca6000.htm -->
<p>
@@ -167,24 +184,30 @@ Cisco 6509 high-end switches</a>.
<p><h2>Layout</h2><p>
The first 4 ethernet ports of each PC node are connected to one of the
big Cisco switches.
All 160 ports can be connected in arbitrary ways by setting up VLANs
on the switches via remote configuration tools.
Cisco 6500 Switch backplane bandwidth is supposed to be near 50Gb/s.
<!-- but we've heard rumors that it's worse. -->
Four ethernet ports on each PC node are connected to the
<a href="#tbbackplane">testbed backplane</a>.
All 672 ports can be connected in arbitrary ways by setting up VLANs
on the switches via remote configuration tools.
Cisco 6500 Switch backplane bandwidth is supposed to be near 50Gb/s,
<!-- but we've heard rumors that it's worse. -->
though between the testbed backplane switches,
bandwidth is currently limited to 2Gb/s.
<p>
The fifth ethernet port is connected to another Cisco 6509. They
each have full duplex 100Mbps connections. These are for dumping
The fifth ethernet port on each PC is connected to the
<a href="#tbcorerouter">core router</a>.
Thus each PC has a full duplex 100Mbps connection to the servers.
These connections are for dumping
data off of the nodes and such, without interfering with
the experimental interfaces. The only impact on the node is
processor and disk use, and bandwidth on the PCI bus.
<p>
The DNARD Sharks are also attached to the big Cisco switch by way of a
8+2 10/100 ethernet switch from Asante. Each shelf of 8 sharks
is capable of generating up to 80Mbps, and shares one
100Mbps link to the Cisco.
The DNARD Sharks are arranged in "shelves." A shelf holds 8 sharks,
each of which is connected by a 10Mbps link to an
8+2 10/100 ethernet switch from Asante. The Asante switches
are connected via a 100Mbps link to the testbed backplane.
Thus each shelf of 8 sharks
is capable of generating up to 80Mbps to the backplane.
@@ -19,23 +19,26 @@ PAGEHEADER("Home");
Late Breaking News</a></b>
<p>
Welcome to Emulab. Emulab (sometimes called the Utah Network Testbed)
Welcome to Emulab. Emulab (sometimes called the Utah Network Testbed)
is a new and unique type of experimental environment: a
universally-available "Internet in a room" which will provide a new,
much anticipated balance between control and realism. Several hundred
PCs, combined with secure, user-friendly web-based tools, allow you
universally-available "Internet Emulator" which provides a
new balance between control and realism. Several hundred
machines, combined with secure, user-friendly web-based tools, and driven
by <i>ns</i>-compatible scripts, allow you
to remotely reserve, configure and control machines and links down to
the hardware level: error models, latency, bandwidth, packet ordering,
the hardware level:
packet loss, latency, bandwidth, packet ordering,
buffer space all can be user-defined. Even the operating system disk
contents can be securely and fully replaced with custom images.
<p>
The Testbed currently features high-speed Cisco switches connecting,
with over 2 miles of cabling, 160 end nodes
with over 5 miles of cabling, 160 edge nodes
<a href = "http://www.research.digital.com/SRC/iag/">
(Compaq DNARD Sharks)</a> and 40 core nodes (PCs). The core nodes can be
used as end nodes, simulated routers or traffic-shaping nodes, or
traffic generators. During an experiment's time slots, the experiment
(Compaq DNARD Sharks)</a> and 128 core nodes (PCs); 40 more will
be available shortly. The core nodes can be
used as edge nodes, simulated routers, traffic-shaping nodes, or
traffic generators. During an experiment's time slots, the experiment
(and associated researchers) get exclusive use of the assigned
machines, including root access if desired. Until we finish designing
and building smarter scheduling and state-saving software, and obtain
@@ -43,8 +46,8 @@ the disk space, scheduling is manual and done at coarse granularity
(days).
<p>
We provide some default software (e.g. Linux and FreeBSD on the PCs,
NetBSD on the Sharks) that many users may want. The basic software
We provide some default software (e.g., RedHat Linux and FreeBSD on the PCs,
NetBSD on the Sharks) that many users want. The basic software
configuration on your nodes includes accounts for project members,
root access, DNS service, compilers and linkers. But fundamentally,
the software you run on it, including all bits on the disks, is
@@ -13,25 +13,45 @@
</center>
<ul>
<li><b>users.emulab.net</b>: Control node, NFS server, serial line server
<li><b>boss.emulab.net</b>: Master node, database, web server, name server, trusted disk-image server
<p>
Runs FreeBSD 4.3-RELEASE. This is the main server machine for
the testbed and is where home directories and all project files
live. While most of the Testbed configuration process is done via
Also known as <b>www.emulab.net</b>.
Runs FreeBSD 4.3-RELEASE. This is the master machine for the testbed
software. Runs all the critical software components and thus is not
directly accessible by testbed users. Moderates (via the database)
access to node power cycling and disk-image loading, and provides
DNS and web services.
<p>
<li><b>users.emulab.net</b>: Control node, NFS server, test node serial line server and console access point
<p>
Also known as <b>ops.emulab.net</b>.
Runs FreeBSD 4.3-RELEASE. This is the main server machine for users
of the testbed and is where home directories and all project files
live. While most of the testbed configuration process is done via
the Web interface, a few things must be done while logged into
users.emulab.net. These Testbed specific commands and programs are
users.emulab.net. These testbed-specific commands and programs are
contained in <code>/usr/testbed/bin</code>. Your skeleton login
files will already have this directory in your path.
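<p>
For instance, once logged into users.emulab.net you can check both of
these directly (shown only as an illustration; the exact listing will vary):
<pre>
    ls /usr/testbed/bin           # list the testbed-specific commands
    echo $PATH                    # /usr/testbed/bin should appear in the output
</pre>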
<p>
This is also our serial line server, which allows experimentors to
access the console port of each node in their experiment.
This is also our "serial-line console" server. Experimenters can access
the console of any testbed node (using <code>tip</code>) from here.
Console output of all nodes is also logged here.
<p>
<li><b>tipserv1.emulab.net</b>: additional test node serial line server
<p>
Runs FreeBSD 4.3-RELEASE.
Provides physical serial line ports for additional testbed nodes.
It is not directly accessible by testbed users; its hosted serial lines
are accessed via a proxy agent on users.emulab.net.
<p>
<li><b>pc[1-40].emulab.net</b>: Testbed PC nodes
<li><b>pc[1-40].emulab.net</b>: <a href="hardware.html#tbpc600">pc600</a> testbed PC nodes
<p>
The testbed nodes dual boot FreeBSD 4.3 and RedHat Linux 7.1. You
may also boot your own OSKit kernels on them. Alternatively, you
The testbed nodes can dual boot FreeBSD 4.3 and RedHat Linux 7.1.
You may also boot your own OSKit kernels on them. Alternatively, you
can run whatever OS you like by loading your own OS image onto the
4th DOS slice using the Testbed configuration software.
@@ -40,19 +60,36 @@
interfaces are connected to the "experimental network," and are
used to "wire up" your specific network topology. The last
interface is connected to the "control network," and is used
for configuration and for login access from users.emulab.net. In
FreeBSD this card is named `fxp4', and in Linux it is `eth0'.
for configuration and for login access from users.emulab.net.
In FreeBSD this card is named <code>fxp4</code>,
and in Linux and OSKit kernels it is <code>eth4</code>.
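<p>
As a quick sanity check on one of these nodes (purely illustrative,
using the interface names given above):
<pre>
    ifconfig fxp4                 # FreeBSD: shows the control network interface
    /sbin/ifconfig eth4           # RedHat Linux: shows the control network interface
</pre>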
<p>
All of the Testbed PCs have their COM1 serial interface (console
All of these nodes have their COM1 serial interface (console
port) connected to users.emulab.net. The port is configured to run
at 115K baud, and are accessible from users.emulab.net (via the
tip command) using the appropriate "pc" names; e.g., "pc6."
at 115K baud, and is accessible from users.emulab.net via
<code>tip</code> using the appropriate "pc" names; e.g., "pc6."
<p>
<li><b>pc[41-168].emulab.net</b>: <a href="hardware.html#tbpc850">pc850</a> testbed PC nodes
<p>
Same as "pc600" nodes from a software perspective:
dual booting FreeBSD 4.3 and RedHat Linux 7.1, or capable of running
custom OSKit kernels.
However, due to differences in the hardware configuration,
the "control" interface is <code>fxp0</code> under FreeBSD,
<code>eth2</code> under Linux, and <code>eth0</code> under OSKit kernels.
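<p>
The equivalent illustrative check on a "pc850" node, using the names above:
<pre>
    ifconfig fxp0                 # FreeBSD: shows the control network interface
    /sbin/ifconfig eth2           # RedHat Linux: shows the control network interface
</pre>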
<p>
Since these testbed nodes support true console redirection, both BIOS
interaction and OS kernel interaction are possible via
the console serial lines. However, the BIOS is password-protected
and only read-only access is allowed without the password.
<p>
<li><b>sh[1-20]-[1-8].emulab.net</b>: Testbed Shark nodes
<li><b>sh[1-20]-[1-8].emulab.net</b>: testbed <a href="hardware.html#tbshark">Shark</a> nodes
<p>
The Sharks NetBSD by default, with the filesystems provided via
The Sharks run NetBSD by default, with the filesystems provided via
NFS. You may also boot your own OSKit kernels. At this time, no support
is provided for running your own operating system on the Sharks.