Commit 62e9aee6 authored by Robert Ricci

Added notes about which SNMP MIBs Cisco switches need to support, and
a link to the page that tells you which switches support which MIBs.

As per Jay's request, added a note about finding switches for cheap on
EBay.

Also, reformatted long lines to look correct when using a tab width of
8 (instead of 4)
parent bcc198c6
@@ -24,66 +24,72 @@ recommendations, and outlines the consequences of each choice.
<dl>
<dt><b>NICs</b></dt>
<dd>At least 2 - one for control net, one for the experimental
network. Our attempts to use nodes with only 1 NIC have not
met with much success. 3 interfaces lets you have delay nodes,
but only linear topologies (no branching). 4 lets you have
(really simple) routers. We opted to go with 5. The control net
interface should have PXE capability. All experimental
interfaces should be the same, unless you are purposely going
for a heterogeneous environment. The control net interface can be
different. Intel EtherExpress (fxp) and 3Com 3c??? (xl) cards are
known to work. Note that not all of these cards have PXE, and
different versions of PXE have different sets of bugs, so it's
best to get a sample first and try it out before committing to
a card. Depending on usage, it may be OK to get a large number
of nodes with 2 interfaces to be edge nodes, and a smaller
number with more interfaces to be routers. <b>Note:</b> Our
software currently does not support nodes that are connected to
multiple experimental-net switches. For the time being, all
NICs on a node must be attached to the same switch.</dd>
<dt><b>Case/Chassis</b></dt>
<dd>This will depend on your space requirements. The cheapest
option is to buy standard desktop machines, but these are not
very space-efficient. 3U or 4U rackmount cases generally have
plenty of space for PCI cards and CPU cooling fans, but still
may consume too much space for a large-scale testbed. Smaller
cases (1U or 2U) have fewer PCI slots (usually 2 for 2U cases,
and 1 or 2 in 1U cases), and often require custom motherboards.
Heat is an issue in smaller cases, as they do not have room for
CPU fans. This limits the processor speed that can be used.
For our first round of machines, we bought standard
motherboards and 4U cases, and assembled the machines ourselves.
For our second round of PCs, we opted for the <a
href="http://www.intel.com">Intel</a> ISP1100 server platform,
which includes a 1U case, a custom motherboard with 2 onboard
NICs and serial console redirection, and a power supply. This
product has been discontinued, but others like it are available
from other vendors.</dd>
<dt><b>CPU</b></dt>
<dd>Take your pick. Note that a small case size (e.g., 1U) may
limit your options, due to heat issues. Many experiments will
not be CPU bound, but some uses (e.g., cluster computing
experiments) will appreciate fast CPUs.</dd>
<dt><b>Memory</b></dt>
<dd>Any amount is probably OK. Most of our experimenters don't
seem to need much, but at least one has wished we had more than
512MB. We chose to go with ECC, since it is not much more
expensive than non-ECC, and with our large scale, ECC will help
protect against failure.</dd>
<dt><b>Motherboard</b></dt>
<dd>Serial console redirection is nice, and the BIOS should
have the ability to PXE boot from a card. The ability to set
IRQs per slot is desirable (to avoid setups where multiple
cards share one IRQ). Health monitoring hardware (temperature,
fans, etc.) is good too, but not required. All of our boards so
far have been based on Intel's aging but proven BX chipset.
Onboard network interfaces can allow you to get more NICs,
something especially valuable for small cases with a limited
number of PCI slots.</dd>
<dt><b>Hard Drive</b></dt>
<dd>Pretty much any one is OK. With a large number of nodes,
you are likely to run into failures, so they should be
reasonably reliable.</dd>
<dt><b>Floppy</b></dt>
<dd>Handy for BIOS updates, and may be used for 'neutering' PXE
@@ -105,133 +111,148 @@ recommendations, and outlines the consequences of each choice.
<dd><dl>
<dt><b>Number of ports</b></dt>
<dd>You'll need one port for each testbed node, plus
ports for control nodes, power controllers, etc.</dd>
<dt><b>VLANs</b></dt>
<dd>Protects control hardware (switches, power controllers,
etc.) from the nodes, the outside world, and private machines.
Without VLANs, control hardware can be attached to an
additional NIC on the boss node (requires an extra cheap
switch).</dd>
<dt><b>Router</b></dt>
<dd>Required if VLANs are used. Must have DHCP/bootp
forwarding (Cisco calls this the 'IP Helper'). An MSFC
card in a Cisco Catalyst supervisor module works well
for us, but a PC would probably suffice.</dd>
<dt><b>Firewall</b></dt>
<dd>Can be the router. Without it, VLAN security is
pretty much useless, meaning that it may be possible
for unauthorized people to reboot nodes and configure
experimental network switches.</dd>
<dt><b>Port MAC security</b></dt>
<dd>Helps prevent nodes from impersonating each other
and control nodes. This is an unlikely attack, since
it must be attempted by an experimenter or someone who
has compromised an experimental node already.</dd>
<dt><b>Multicast</b></dt>
<dd>Multicast support on the switch (IGMP snooping) and
the router (if present) is used to allow multicast
loading of disk images. Otherwise, disk loads must be
done unicast and serially, which is an impediment to
re-loading disks after experiments (to return nodes to
a clean state).</dd>
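<dd>A quick sanity check for multicast on a candidate switch is
to join a group on one node and send to it from another. Below
is a minimal receiver sketch using Python's standard socket
module; the group address and port are arbitrary placeholders,
not anything our software uses.
<pre>
# multicast_recv.py -- minimal receiver for testing multicast/IGMP snooping.
# The group and port are arbitrary placeholders; pick any unused
# administratively-scoped (239.0.0.0/8) group for your own test.
import socket

GROUP = "239.192.1.1"   # placeholder test group
PORT = 5007             # placeholder test port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface (struct ip_mreq: group + 0.0.0.0).
mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print("Waiting for multicast traffic on %s:%d ..." % (GROUP, PORT))
data, sender = sock.recvfrom(1500)
print("Received %d bytes from %s" % (len(data), sender[0]))
</pre>
A matching sender is just a UDP socket that sends datagrams to
the same group and port; if the receiver never sees them, check
IGMP snooping on the switch and multicast on the router.</dd>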
</dl>
<dt><b>Experimental net</b></dt>
<dd><dl>
<dt><b>Vendor</b></dt>
<dd>Our software currently supports
Cisco Catalyst 6000 series and Intel 510T switches. Other
switches by the same vendors will likely be easy to support.
Cisco switches must support the 'CISCO-STACK-MIB' and
'CISCO-VTP-MIB' MIBs. (You can see a list of which MIBs are
supported by which devices at <a
href="http://www.cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml">Cisco's
MIB page</a>.) If you will be using multiple Ciscos for the
experimental net, and trunking between them, your switch
should also support the 'CISCO-PAGP-MIB', though this is not
required. Switches from other vendors can theoretically be
used if they support SNMP for management, but will likely
require significant work. We have found that you can often
find good deals on new or used equipment on eBay.</dd>
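<dd>When evaluating a candidate switch, it can save time to
confirm up front that it answers SNMP queries for the MIBs
mentioned above. Below is a rough sketch that shells out to
net-snmp's snmpwalk; it assumes the net-snmp tools are installed
with the Cisco MIB files on the MIB search path, and the switch
name, community string, and the two example objects are only
illustrative.
<pre>
# check_mibs.py -- rough sketch: probe a switch for support of the MIBs above.
# Assumes the net-snmp command-line tools are installed and the Cisco MIB
# files can be found by snmpwalk; 'my-switch' and 'public' are placeholders
# for your switch's hostname and read community.
import subprocess

SWITCH = "my-switch"
COMMUNITY = "public"
OBJECTS = [
    "CISCO-VTP-MIB::vtpVlanName",     # example object from CISCO-VTP-MIB
    "CISCO-STACK-MIB::portIfIndex",   # example object from CISCO-STACK-MIB
]

for obj in OBJECTS:
    result = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, SWITCH, obj],
        capture_output=True, text=True)
    # Rough heuristic: a clean exit with real varbinds counts as supported.
    ok = (result.returncode == 0 and result.stdout
          and "No Such" not in result.stdout)
    print("%-30s %s" % (obj, "ok" if ok else "no answer"))
</pre>
</dd>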
<dt><b>Number of ports</b></dt>
<dd>You'll need as many ports as you have experimental
interfaces on your nodes. In addition, you need 1 port
to connect the experimental net switch to the control
net (for SNMP configuration). If you're using multiple
switches, you need sufficient ports to 'stack' them
together; if your switches are 100Mbit, gigabit ports
are useful for this.</dd>
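<dd>Since the port count drives switch selection, it may help to
do the arithmetic up front. The sketch below is purely
illustrative; all of the numbers are placeholders to substitute
with your own.
<pre>
# port_budget.py -- back-of-the-envelope switch port counts (placeholder numbers).
nodes = 40                 # testbed nodes
expt_nics_per_node = 4     # experimental interfaces per node
control_extras = 6         # boss, ops, power controllers, tip servers, ...

control_ports = nodes + control_extras
expt_ports = nodes * expt_nics_per_node + 1   # +1 uplink to control net for SNMP

print("Control net ports needed:      %d" % control_ports)
print("Experimental net ports needed: %d" % expt_ports)
print("(plus trunk ports if you split across multiple switches)")
</pre>
</dd>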
<dt><b>VLAN support</b></dt>
<dd>Optimally, configurable via SNMP (we have no tools
to configure it otherwise). If VLANs are not available,
all experimental isolation is lost, delay nodes can't be
used well, and a method for coming up with globally
unique IP addresses will be required. So, VLANs are
basically necessary.</dd>
<dt><b>Trunking</b></dt>
<dd>If you have multiple switches, you should have a way
to trunk them (or experiments are limited to one switch).
Ideally, all trunked switches should support VLAN
management (like Cisco's VTP) and a VLAN trunking
protocol like 802.1Q. It's good if the trunk links are
at least an order of magnitude faster than node links,
and link aggregation (e.g., EtherChannel) is desirable.</dd>
</dl>
</dl>
<hr>
<a NAME="SERVERS"></a><h2>Servers</h2>
Two is preferable, though one could be made to work. The NFS server
needs enough space for /users and /proj, as well as a few extra gigs
for build trees, logs, and images. If you have more than 128 nodes, and
plan to use Cyclades for serial ports, you need 1 tip server per 128
serial lines. A >100Mbit link is suggested for the disk image
distribution machine (usually boss). The database machine should have a
reasonably fast CPU and plenty of RAM.
<hr>
<a NAME="OTHER"></a><h2>Other Hardware</h2>
<dl>
<dt><b>Network cables</b></dt>
<dd>We use Cat5E, chosen because it is not much more
expensive than Cat5, and can be used in the future for gigabit
ethernet. It has been our experience that 'boots' on cables do
more harm than good. The main problems are that they make it
difficult to disconnect the cables once connected, and that they
get in the way on densely-connected switches. Cables with
'molded strain relief' are better than cables with boots, but
are often much more expensive. We buy cables in two-foot
increments, which keeps slack low without making the order too
complicated. Our standard so far has been to make control net
cables red, experimental net cables yellow, serial cables
white, and cables for control hardware (such as power
controllers) green. We've bought all of our cables from <a
href="http://www.dataaccessories.com">dataaccessories.com</a>,
and have had excellent luck with them.</dd>
<dt><b>Serial cables</b></dt>
<dd>We use Cat5E, but with a special pin pattern on the ends to
avoid interference between the transmit/receive pairs. We use
RJ-45 connectors on both ends, and a custom serial hood to
connect to the DB-9 serial ports on the nodes. Contact us to
get our custom cable specs.</dd>
<dt><b>Power controllers</b></dt>
<dd>Without them, nodes cannot be reliably rebooted. We
started out with 8-port SNMP-controlled power controllers from
<a href="http://www.apc.com">APC</a>. Our newer nodes use the
RPC-27 from <a href="http://www.baytechdcd.com/">BayTech</a>,
a 20-outlet, vertically-mounted, serial-controlled power
controller. The serial controllers are generally cheaper, and
the more ports on each controller, the cheaper.</dd>
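<dd>For the serial-controlled units, power cycling from boss is
a matter of writing the controller's command syntax down the
right serial line. The sketch below uses the third-party
pyserial package; the device path and the command string are
placeholders, since the actual syntax differs between controller
models (check your unit's manual).
<pre>
# power_cycle.py -- rough sketch of driving a serial-controlled power controller.
# Requires pyserial. The device path, baud rate, and command string are
# placeholders; real controllers have their own command syntax.
import serial

PORT = "/dev/ttyS0"          # placeholder: serial line wired to the controller
OUTLET = 3                   # placeholder outlet number

conn = serial.Serial(PORT, 9600, timeout=2)
conn.write(("reboot %d\r\n" % OUTLET).encode("ascii"))   # hypothetical command
response = conn.read(256)    # read whatever the controller echoes back
print(response.decode("ascii", "replace"))
conn.close()
</pre>
</dd>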
<dt><b>Serial (console) ports</b></dt>
<dd>Custom kernels/OSes (specifically, the OSKit) may not
support ssh, etc. Also useful if an experimenter somehow scrogs
the network. We use the <a href="http://cyclades.com">Cyclades</a>
Cyclom Ze serial adapters, which allow up to 128 serial ports
in a single PC.</dd>
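<dd>Even without a full console setup, a simple logger on each
serial line is handy for catching boot problems. This is a
minimal sketch using the third-party pyserial package; the
device path, baud rate, and log location are placeholder
assumptions, and a real console server does considerably more
than this.
<pre>
# console_log.py -- minimal serial console logger (sketch).
# Requires pyserial. The device path, baud rate, and log file are placeholders.
import serial
import time

PORT = "/dev/ttyS1"                   # placeholder: a node's console line
LOGFILE = "/var/log/console-pc1.log"  # placeholder log location

conn = serial.Serial(PORT, 115200, timeout=1)
with open(LOGFILE, "ab") as log:
    while True:
        data = conn.read(4096)        # returns empty bytes on timeout
        if data:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S ").encode("ascii")
            log.write(stamp + data)   # prefix each chunk with a timestamp
            log.flush()
</pre>
</dd>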
<dt><b>Serial port reset controllers</b></dt>
<dd>It may be possible to build (or buy) serial-port
passthroughs that are wired to the reset pins on the
motherboard. Some motherboard chipsets (e.g., Intel LX+) have
this feature built in. NOT TESTED by Utah, and may not be 100%
reliable (the motherboard may be able to get into a state where
the reset pins are not functional). Theoretically nicer to the
hardware than power controllers.</dd>
</dl>