Commit 93cd84ee authored by Robert Ricci

Start hardware-mail.mbox, an mbox file containing mail we've exchanged

with others about hardware recommendations.

Right now, it's all mail with the DETER folks. Most of it is our
replies to them.
parent fb3b39c2
@@ -39,6 +39,8 @@ Good places to start would include:
- Diagrams and brief explanations of the state machines used in the
system (www/doc/states.html and www/doc/*.gif)
- The instructions for building a running system from scratch (doc/setup*.txt)
- Mail we have exchanged with others about hardware recommendations
  (doc/hardware-mail.mbox)

QUICK TOUR
- The Database maintains most testbed state.
From ricci@cs.utah.edu Mon Oct 27 17:25:49 2003
Date: Mon, 27 Oct 2003 17:25:49 -0700
From: Robert P Ricci <ricci@cs.utah.edu>
To: Bob Braden <braden@ISI.EDU>
Cc: testbed-ops@emulab.net, deter-isi@ISI.EDU, lepreau@cs.utah.edu
Subject: Re: Hardware configuration for Emulab clone
Message-ID: <20031027172549.X95279@cs.utah.edu>
Mail-Followup-To: Bob Braden <braden@ISI.EDU>, testbed-ops@emulab.net,
deter-isi@ISI.EDU, lepreau@cs.utah.edu
References: <200310272219.OAA28834@gra.isi.edu>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5.1i
In-Reply-To: <200310272219.OAA28834@gra.isi.edu>; from braden@ISI.EDU on Mon, Oct 27, 2003 at 02:19:38PM -0800
Status: RO
Content-Length: 5681
Lines: 113

You may already know many of these things.

Thus spake Bob Braden on Mon, Oct 27, 2003 at 02:19:38PM -0800:
> 2) DETER will purchase 4 additional 1000bT Ethernet interfaces for each
> node. Ideally, the 2 64bit/33MHz PCI slots of the hosts will be
> populated with dual 1000bT interface cards.
Unless these two PCI slots are on independent busses, you can probably
expect to drive no more than two gigabit interfaces full speed, and only
at half-duplex. The theoretical PCI bandwidth on 64/33 PCI is, I
believe, not much more than 2Gbps. But, of course, you'll probably have
trouble generating more than 2Gbps of traffic on a PC anyway.
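
For concreteness, a quick back-of-the-envelope sketch (Python, nominal
spec figures rather than anything measured):

# Nominal 64-bit/33MHz PCI figures; rough spec values, not measurements.
PCI_BUS_WIDTH_BITS = 64
PCI_CLOCK_MHZ = 33.0

# Peak theoretical bus bandwidth, shared by every card on that bus.
pci_peak_gbps = PCI_BUS_WIDTH_BITS * PCI_CLOCK_MHZ / 1000.0
print(f"64/33 PCI peak: ~{pci_peak_gbps:.1f} Gbps, shared and half-duplex")

# Four GigE ports (both slots on one bus) at full duplex could ask for up
# to 8 Gbps, so the bus, not the NICs, is the bottleneck.
wanted_gbps = 4 * 2 * 1.0
print(f"4 x GigE at full duplex could want up to {wanted_gbps:.0f} Gbps")
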
> 3) DETER will purchase a Cisco 6513 switch with a supervisor blade and
> 6 blades of 48 1000bT ports. That will support 4 x 64 Ethernet ports
> to the nodes.
This is a very complicated issue. My views on this come from our
perspective on Emulab, in which we want to guarantee (or at least be
pretty darn sure) that there won't be any artifacts due to switch
limitations. I don't know if your goals are as stringent, and of course
there's always budget limitations... (Reading Steve Schwab's comments
farther down, it looks like you guys also want guaranteed bandwidth.)
Are the 48-port gigabit modules you're looking at WS-X6548-GE-TX? This is
the only 48-port GigE module I'm aware of from Cisco. If it's something
else, from the WS-X7 series, for example, a whole different set of
issues apply.

From my understanding of things (which came from reading some Cisco
documents, and from talking to an ex-Cisco engineer), this module is
very oversubscribed. It has a single 8Gbps (full-duplex) connection to
the switching fabric. I was told by the Cisco engineer that these
modules are 8x oversubscribed, though the math doesn't quite add up on
that (48 ports into an 8Gbps line would seem to be 6x oversubscribed.)
So, there may be some other bottleneck in it.

In the documentation I have about the architecture of the 65xx series,
it claims that 'SFM Single-Attached Fabric-Enabled Cards' (which I think
all WS-X65 modules are) have a 16Gbps bus internally. Meaning that
you're not going to get more than 8 full-duplex, full-speed gigabit
flows out of them. If you were told that they have 80Gbps backplanes, I
can't say for sure that's wrong, but I would certainly double-check that
number. The white paper I'm referring to is online at:
http://www.cisco.com/en/US/customer/products/hw/switches/ps708/products_white_paper09186a0080092389.shtml
... in particular, I believe Figure 6 and the text below it are relevant
to the 48-port GigE modules.
So, it seems that you're not going to be able to use all this equipment
at full speed. If you want to save some cash on bandwidth you won't be
able to use, you might consider switching some of your GigE equipment to
100Mbps Ethernet.
Our conclusion was that the WS-X6516-GE-TX modules were the most
economical choice to get close to guaranteed bandwidth, though not _too_
close - they have 16 GigE ports, so they're 2x oversubscribed.
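
To make the oversubscription arithmetic for those two modules explicit
(the fabric figures are the ones discussed here, so double-check them
with Cisco):

def oversubscription(ports, port_gbps, fabric_gbps):
    """Aggregate front-panel bandwidth divided by the module's fabric connection."""
    return ports * port_gbps / fabric_gbps

# WS-X6548-GE-TX: 48 GigE ports sharing a single 8 Gbps fabric connection.
print("WS-X6548-GE-TX:", oversubscription(48, 1.0, 8.0), "x")   # 6.0x

# WS-X6516-GE-TX: 16 GigE ports into the same size connection.
print("WS-X6516-GE-TX:", oversubscription(16, 1.0, 8.0), "x")   # 2.0x
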
Another possibility would be to build in some links that don't go
through a switch at all - just connect up some of the nodes directly.
There's an obvious loss of flexibility, though it's clearly more
economical. Our software theoretically supports this, though we haven't
tried anything like it recently.
> 4) The control plane on the Emulab cluster will be offloaded to cheaper
> unmanaged switch ports. The PXE boot-capable 10/100 interface of each
> node will be connected with the boot server machine using multiple 48
> port 1U switches on a separate LAN. Examples of the switch would be a
> 3Com 2800 series unmanaged switch. For the first 64 machines of the
> cluster, DETER would purchase 2 such switches.

It's pretty important that these switches support multicast. Many
unmanaged switches simply treat multicast like broadcast. This could be
pretty disastrous when loading disk images, which consumes a whole lot
of bandwidth. Check to see if these switches support IGMP snooping to
create multicast groups.
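
As a rough illustration of what flooding costs during image loading (the
stream rate and node count below are made-up numbers, not Emulab figures):

ports_per_switch = 48
image_stream_mbps = 70      # assumed rate of one multicast disk-image stream
nodes_loading = 10          # nodes that actually joined the multicast group

# With IGMP snooping, the switch forwards the stream only to member ports.
with_snooping_mbps = nodes_loading * image_stream_mbps

# Without it, unknown multicast is flooded like broadcast to every port,
# so idle nodes, servers, and uplinks all see the full stream.
flooded_mbps = ports_per_switch * image_stream_mbps

print(f"forwarded with IGMP snooping: {with_snooping_mbps} Mbps")
print(f"forwarded when flooded:       {flooded_mbps} Mbps")
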
> 5) DETER will purchase remote power strips and console terminal muxes.
> The DETER project would appreciate suggestions from the ISD staff for
> which equipment models to buy.
We use Cyclades serial expander boxes in one of our servers - by putting
them in a PC, we get very good control over who is allowed to access
which ones, when. We use Cyclom Ze boxes:
http://www.cyclades.com/products/8/z_series
... which let you get 128 serial lines into one PC.
We use two types of power controllers - 8-port APC Ethernet-connected
controllers, and 20-port serial controllers from BayTech. Since you'll
have serial lines, we recommend the BayTechs, because they are cheaper
per-port. The ones we have are RPC-27s:
http://www.baytech.net/cgi-private/prodlist?show=RPC27
> Our first idea was to use the same 6513 chassis and add 5
> blades of 48 1000bT port to it. This would provide complete
> symmetry among all the 128 nodes. However, there is some
> doubt about the difficulty of wiring 4 x 128 ports to one
> 6513. It may therefore be better to purchase a second 6513
> chassis for Phase 1b.
We've managed to fill up a couple 6509s with 48-port modules. Not easy,
but we managed it.
> ??Is there any limitation on Emulab support of the planned 6513 switch
> configuration??
Nope, our software should support it just fine.
--
/-----------------------------------------------------------
| Robert P Ricci <ricci@cs.utah.edu> | <ricci@flux.utah.edu>
| Research Associate, University of Utah Flux Group
| www.flux.utah.edu | www.emulab.net
\-----------------------------------------------------------
From ricci@cs.utah.edu Tue Oct 28 10:46:59 2003
Date: Tue, 28 Oct 2003 10:46:59 -0700
From: Robert P Ricci <ricci@cs.utah.edu>
To: Bob Lindell <bob@jensar.us>
Cc: Bob Braden <braden@ISI.EDU>, testbed-ops@emulab.net, deter-isi@ISI.EDU,
lepreau@cs.utah.edu
Subject: Re: [Deter-isi] Re: Hardware configuration for Emulab clone
Message-ID: <20031028104659.C95279@cs.utah.edu>
Mail-Followup-To: Bob Lindell <bob@jensar.us>, Bob Braden <braden@ISI.EDU>,
testbed-ops@emulab.net, deter-isi@ISI.EDU, lepreau@cs.utah.edu
References: <20031027172549.X95279@cs.utah.edu> <A29CA1D8-0910-11D8-BB39-000393DC7572@jensar.us>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5.1i
In-Reply-To: <A29CA1D8-0910-11D8-BB39-000393DC7572@jensar.us>; from bob@jensar.us on Mon, Oct 27, 2003 at 10:33:25PM -0800
Status: RO
Content-Length: 2195
Lines: 37

Thus spake Bob Lindell on Mon, Oct 27, 2003 at 10:33:25PM -0800:
> WS-X6748-GE-TX Cat6500 48-port 10/100/1000 GE Mod: fabric enabled, RJ-45
Hmm, it looks to me like this module was not available at the time I was
investigating GigE on Ciscos. It's got a different architecture than the
one I was assuming you were talking about, so many of the things I said
yesterday don't apply. I'm a bit confused though, because the data
sheets do list it as having two 20Gbps connections to the switch fabric.
But, the architecture white papers about the 6500 series clearly label
the switch fabric connectors as being 8Gbps each (with some slots having
dual connectors.) So, hopefully this means that they are able to drive
those busses at a higher rate than originally spec'ed, and that the
whitepaper is just out of date. But, it could also mean that the 20Gbps
numbers are just marketing - it could mean, for example, that the
internal buses have 40Gbps of total bandwidth, but that the module only
gets 15Gbps (full duplex) to the fabric module. If you can get your
salesperson to put you in touch with an engineer, that would probably be
the best way to find out the truth of the matter. If you
find anything out, we'd definitely be interested to hear it, because we
might consider these newer modules for our own gigabit expansion.
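
Until that's resolved, here is the ratio under each interpretation
mentioned above (all figures come from this thread, not confirmed specs):

# WS-X6748-GE-TX: 48 GigE ports. The fabric figures below are the competing
# numbers from this thread, not confirmed specs.
front_panel_gbps = 48 * 1.0   # per direction

for label, fabric_gbps in [("2 x 20 Gbps (data sheet)", 40.0),
                           ("2 x 8 Gbps (older white paper)", 16.0),
                           ("~15 Gbps (pessimistic reading)", 15.0)]:
    print(f"{label}: {front_panel_gbps / fabric_gbps:.1f}x oversubscribed")
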
From the fact that your specs now list a 6509 instead of a 6513, I'm
guessing you already know this, but the 6513 can only handle 5 modules
with dual switch fabric interfaces. Essentially, the maximum number of
fabric connections is 18 - so the 6509s have two to every slot, while the
6513s have 5 slots with dual connectors, and 8 with a single connector.
So, if you plan to fill a switch with these dual-ported modules, you can
get better density in a 6509. If you were going to put in some 10/100
modules with a single fabric connection, you could still do this in a
6513.
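
A quick check of that slot arithmetic, using the per-slot fabric channels
as quoted above:

chassis = {
    # name: (slots with dual fabric channels, slots with a single channel)
    "6509": (9, 0),
    "6513": (5, 8),
}

for name, (dual, single) in chassis.items():
    total_channels = 2 * dual + single          # both work out to 18
    print(f"{name}: {total_channels} fabric channels, "
          f"{dual} slots usable for dual-fabric modules")
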
--
/-----------------------------------------------------------
| Robert P Ricci <ricci@cs.utah.edu> | <ricci@flux.utah.edu>
| Research Associate, University of Utah Flux Group
| www.flux.utah.edu | www.emulab.net
\-----------------------------------------------------------
From ricci@cs.utah.edu Wed Oct 29 13:31:57 2003
Date: Wed, 29 Oct 2003 13:31:57 -0700
From: Robert P Ricci <ricci@cs.utah.edu>
To: Stephen_Schwab@NAI.com, John Mehringer <mehringe@isi.edu>
Cc: braden@ISI.EDU, testbed-ops@emulab.net, deter-isi@ISI.EDU,
lepreau@cs.utah.edu
Subject: Re: [Deter-isi] Re: Hardware configuration for Emulab clone
Message-ID: <20031029133157.R51103@cs.utah.edu>
Mail-Followup-To: Stephen_Schwab@NAI.com, John Mehringer <mehringe@isi.edu>,
braden@ISI.EDU, testbed-ops@emulab.net, deter-isi@ISI.EDU,
lepreau@cs.utah.edu
References: <613FA566484CA74288931B35D971C77E13429A@losexmb1.corp.nai.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5.1i
In-Reply-To: <613FA566484CA74288931B35D971C77E13429A@losexmb1.corp.nai.org>; from Stephen_Schwab@NAI.com on Wed, Oct 29, 2003 at 11:27:45AM -0800
Status: RO
Content-Length: 1644
Lines: 30

Thus spake Stephen_Schwab@NAI.com on Wed, Oct 29, 2003 at 11:27:45AM -0800:
> Could we just add two blades (possibly cheap ones) to our 6509 and use
> those, with VLAN support, for our split-out control nets. That way we
> would also get the multicast support we need at boot time?
I think this is probably not a good idea. Routing needs to be done
between these segments. So, this would mean enabling some layer 3 and
above features on the experimental network switches. This has the
potential to interfere with experimental net traffic in unexpected ways
- as an example, we found out that our switches were checking TCP
checksums and discarding packets with bad ones. This, despite the fact
that we had no layer 4 services enabled at all on the switch. Turning on
layer 3 services on the experimental net is probably just asking for
trouble. I would think that it's also a security risk - bugs in the IOS
that runs on the MSFC card when doing routing in a Cat6k are now exposed
to the experimental net, so it could be possible to exploit one and find
a way out.
As for the idea of buying multiple unmanaged switches, the problem with
unmanaged switches is that you're going to want to be able to cut off
access to the outside world for nodes on which you're going to be trying
out worms, etc. An unmanaged switch isn't going to give you the ability
to do this.
--
/-----------------------------------------------------------
| Robert P Ricci <ricci@cs.utah.edu> | <ricci@flux.utah.edu>
| Research Associate, University of Utah Flux Group
| www.flux.utah.edu | www.emulab.net
\-----------------------------------------------------------
From mailnull@bas.flux.utah.edu Wed Oct 29 13:50:27 2003
Received: from bas.flux.utah.edu (localhost [127.0.0.1])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9TKoRLj096989
for <testbed-ops-hidden@bas.flux.utah.edu>; Wed, 29 Oct 2003 13:50:27 -0700 (MST)
(envelope-from mailnull@bas.flux.utah.edu)
Received: (from mailnull@localhost)
by bas.flux.utah.edu (8.12.9/8.12.5/Submit) id h9TKoRbB096988
for testbed-ops-hidden; Wed, 29 Oct 2003 13:50:27 -0700 (MST)
Received: from slow.flux.utah.edu (slow.flux.utah.edu [155.98.63.200])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9TKoRLj096984
for <testbed-ops@[155.98.60.2]>; Wed, 29 Oct 2003 13:50:27 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by slow.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9TKoQPe006870
for <testbed-ops@flux.utah.edu>; Wed, 29 Oct 2003 13:50:26 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from ops.emulab.net (ops.emulab.net [155.101.129.74])
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id NAA05684
for <testbed-ops@fast.flux.utah.edu>; Wed, 29 Oct 2003 13:50:21 -0700 (MST)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by ops.emulab.net (8.12.9/8.12.6) with ESMTP id h9TKnpbD056579
for <testbed-ops@emulab.net>; Wed, 29 Oct 2003 13:49:51 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (lepreau@localhost)
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id NAA05680;
Wed, 29 Oct 2003 13:49:24 -0700 (MST)
Message-Id: <200310292049.NAA05680@fast.cs.utah.edu>
From: Jay Lepreau <lepreau@cs.utah.edu>
To: Stephen_Schwab@NAI.com, John Mehringer <mehringe@isi.edu>, braden@isi.edu,
testbed-ops@emulab.net, deter-isi@isi.edu
Subject: Re: [Deter-isi] Re: Hardware configuration for Emulab clone
In-Reply-To: <20031029133157.R51103@cs.utah.edu>; from Robert P Ricci on Wed, 29 Oct 2003 13:31:57 MST
Date: Wed, 29 Oct 2003 13:49:24 MST
X-Spam-Status: No, hits=-8 required=5 tests=ACADEMICS,GLOB_WHITELIST version=FluxMilter1.2
X-Scanned-By: MIMEDefang 2.26 (www . roaringpenguin . com / mimedefang)
Status: RO
X-Status: A
Content-Length: 902
Lines: 18

Here's another datapoint from Kentucky's experience:
After we and they struggled for days or weeks to use some cheap 29xx
(?) router for this purpose, always running into unexplained glitches,
we suggested they toss it and use a PC running FreeBSD as a router, at
least to get going. Worked great.
I don't think it's going to be fast or secure enough for you long
term, but it will get you off the ground. However, I would want to
hear Rob's comments. I'm sure there are small Cisco or other vendors'
routers that would work... but which ones?
Aside: I noticed in your equip list you had "MSFC memory". Not sure
that is correct, as an MSFC is the daughter card that is required to
turn a 65xx switch into a router.
We keep MSFC's out of our switches, partly to save money, but partly
to make triple sure that some higher layer stuff doesn't get turned
on by accident. These Ciscos are complex.
From ricci@cs.utah.edu Wed Oct 29 14:14:21 2003
Date: Wed, 29 Oct 2003 14:14:21 -0700
From: Robert P Ricci <ricci@cs.utah.edu>
To: Jay Lepreau <lepreau@cs.utah.edu>
Cc: Stephen_Schwab@NAI.com, John Mehringer <mehringe@isi.edu>,
braden@isi.edu, testbed-ops@emulab.net, deter-isi@isi.edu
Subject: Re: [Deter-isi] Re: Hardware configuration for Emulab clone
Message-ID: <20031029141421.U51103@cs.utah.edu>
Mail-Followup-To: Jay Lepreau <lepreau@cs.utah.edu>, Stephen_Schwab@NAI.com,
John Mehringer <mehringe@isi.edu>, braden@isi.edu,
testbed-ops@emulab.net, deter-isi@isi.edu
References: <20031029133157.R51103@cs.utah.edu> <200310292049.NAA05680@fast.cs.utah.edu>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5.1i
In-Reply-To: <200310292049.NAA05680@fast.cs.utah.edu>; from lepreau@cs.utah.edu on Wed, Oct 29, 2003 at 01:49:24PM -0700
Status: RO
Content-Length: 1661
Lines: 31

Thus spake Jay Lepreau on Wed, Oct 29, 2003 at 01:49:24PM -0700:
> I don't think it's going to be fast or secure enough for you long
> term, but it will get you off the ground. However, I would want to
> hear Rob's comments. I'm sure there are small Cisco or other vendors'
> routers that would work... but which ones?
I think you want a router with at least 4 ports - one to connect to the
outside world, one to connect to the private VLAN, one to connect to the
public VLAN, and one to the nodes' control network interfaces. You
_could_ combine the private and public VLANs, but, as outlined in the
document I sent, this makes boss (which needs to be fairly secure, since
it's the source of all configuration information and commands) more
open to attack from ops, a machine on which we traditionally give all
users shells. Since you're building a security testbed, I would think
you'd want to keep the infrastructure as safe from attack as possible,
and not take this shortcut.
To actually get any security out of this arrangement, you'll need a
router that can do firewalling. I believe all Cisco IOS routers can do
this, but my experience with the router side of Cisco is very limited,
so you'd have to check with a sales rep about this.
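
Here is a sketch of the segmentation being described -- my reading of it,
not our actual firewall rules, and which services need to cross each
boundary is an assumption:

# (source, destination) pairs a firewalling router might permit; anything
# not listed is denied. Which services actually need to cross is assumed.
ALLOW = {
    ("node control", "private (boss)"),   # PXE/DHCP, node configuration
    ("private (boss)", "node control"),
    ("node control", "public (ops)"),     # user filesystems, logins
    ("public (ops)", "node control"),
    # deliberately absent: ("public (ops)", "private (boss)") -- shield boss
    # deliberately absent: ("node control", "outside") -- contain worm traffic
}

def allowed(src, dst):
    return (src, dst) in ALLOW

print(allowed("node control", "private (boss)"))   # True
print(allowed("public (ops)", "private (boss)"))   # False
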
Yeah, if you have to be budget-conscious, a PC could do this job. As you
suggest, I would only view this as a temporary thing, though.
--
/-----------------------------------------------------------
| Robert P Ricci <ricci@cs.utah.edu> | <ricci@flux.utah.edu>
| Research Associate, University of Utah Flux Group
| www.flux.utah.edu | www.emulab.net
\-----------------------------------------------------------
From lepreau@fast.cs.utah.edu Mon Oct 27 23:52:02 2003
Received: from slow.flux.utah.edu (slow.flux.utah.edu [155.98.63.200])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S6q2Lj023116
for <ricci@[155.98.60.2]>; Mon, 27 Oct 2003 23:52:02 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by slow.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S6pwPe092690
for <ricci@flux.utah.edu>; Mon, 27 Oct 2003 23:51:59 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (lepreau@localhost)
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id XAA14364;
Mon, 27 Oct 2003 23:51:48 -0700 (MST)
Message-Id: <200310280651.XAA14364@fast.cs.utah.edu>
From: Jay Lepreau <lepreau@cs.utah.edu>
To: Bob Braden <braden@ISI.EDU>, deter-isi@ISI.EDU
cc: ricci@flux.utah.edu, testbed-ops@emulab.net
Subject: Re: Hardware configuration for Emulab clone
In-Reply-To: <20031027172549.X95279@cs.utah.edu>; from Robert P Ricci on Mon, 27 Oct 2003 17:25:49 MST
Date: Mon, 27 Oct 2003 23:51:48 MST
X-Spam-Status: No, hits=-8 required=5 tests=ACADEMICS,GLOB_WHITELIST version=FluxMilter1.2
X-Scanned-By: MIMEDefang 2.26 (www . roaringpenguin . com / mimedefang)
Status: RO
Content-Length: 1548
Lines: 32

Bob:

> 5) DETER will purchase remote power strips and console terminal muxes.
> The DETER project would appreciate suggestions from the ISD staff for
> which equipment models to buy.
We use Cyclades serial expander boxes in one of our servers - by putting
them in a PC, we get very good control over who is allowed to access
which ones, when. We use Cyclom Ze boxes:
http://www.cyclades.com/products/8/z_series
... which let you get 128 serial lines into one PC.
Our software supports multiple terminal servers. We actually run
serial lines in two servers now, since we have >128 hosts.
We use two types of power controllers - 8-port APC Ethernet-connected
controllers, and 20-port serial controllers from BayTech. Since you'll
have serial lines, we recommend the BayTechs, because they are cheaper
per-port. The ones we have are RPC-27s:
http://www.baytech.net/cgi-private/prodlist?show=RPC27
I dis-recommend anything except the above two, although probably
others from the same vendors would be ok. That is because this type
of device can be idiosyncratic and cost you and us time. In
particular, the RPCs have little operating systems inside them with
idiosyncrasies and we had to evolve our software to cope.
Eg, we had to batch power requests because they have N second dead
times after processing a command. Don't want to go through the
same trial and error with another vendor/device.
All our hardware is listed on our site, with URLs to the vendor's pages.
http://www.emulab.net/docwrapper.php3?docname=hardware.html
From mailnull@bas.flux.utah.edu Tue Oct 28 00:26:14 2003
Received: from bas.flux.utah.edu (localhost [127.0.0.1])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7QELj023947
for <testbed-ops-hidden@bas.flux.utah.edu>; Tue, 28 Oct 2003 00:26:14 -0700 (MST)
(envelope-from mailnull@bas.flux.utah.edu)
Received: (from mailnull@localhost)
by bas.flux.utah.edu (8.12.9/8.12.5/Submit) id h9S7QEVF023946
for testbed-ops-hidden; Tue, 28 Oct 2003 00:26:14 -0700 (MST)
Received: from slow.flux.utah.edu (slow.flux.utah.edu [155.98.63.200])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7QELj023942
for <testbed-ops@[155.98.60.2]>; Tue, 28 Oct 2003 00:26:14 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by slow.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7QBPe092882
for <testbed-ops@flux.utah.edu>; Tue, 28 Oct 2003 00:26:11 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from ops.emulab.net (ops.emulab.net [155.101.129.74])
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id AAA14522
for <testbed-ops@fast.flux.utah.edu>; Tue, 28 Oct 2003 00:26:05 -0700 (MST)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by ops.emulab.net (8.12.9/8.12.6) with ESMTP id h9S7PZbD021125
for <testbed-ops@emulab.net>; Tue, 28 Oct 2003 00:25:35 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (lepreau@localhost)
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id AAA14518;
Tue, 28 Oct 2003 00:25:24 -0700 (MST)
Message-Id: <200310280725.AAA14518@fast.cs.utah.edu>
From: Jay Lepreau <lepreau@cs.utah.edu>
To: Bob Braden <braden@ISI.EDU>
Cc: testbed-ops@emulab.net, deter-isi@ISI.EDU
Subject: Re: Hardware configuration for Emulab clone
In-Reply-To: <200310272219.OAA28834@gra.isi.edu>; from Bob Braden on Mon, 27 Oct 2003 14:19:38 PST
Date: Tue, 28 Oct 2003 00:25:24 MST
X-Spam-Status: No, hits=-10 required=5 tests=ACADEMICS,GEEKWORDS1,GLOB_WHITELIST version=FluxMilter1.2
X-Scanned-By: MIMEDefang 2.26 (www . roaringpenguin . com / mimedefang)
Status: RO
Content-Length: 1522
Lines: 31

> A candidate for the Boot Server/Data Logger Equipment would be:
> ...
The boot server is our so-called "boss" (as in "master") server, and
you should make sure all the devices on it will work with FreeBSD. A
port of the Emulab servers to Linux could be done... but it won't be
by us. It would greatly complicate maintenance and upgrades and QA.
We also have a so-called "users" server (for user login accounts and
terminal service) and a logically separate fileserver (but that has
always been the same as the "users" machine, so there would probably
be small glitches in splitting that off). "users" is also a FreeBSD
machine; porting it to Linux would probably be much easier than boss,
and users would find it friendlier.
Arguments can be made both ways about the security of having
logins on a persistent server. But Emulab currently needs it,
including for a few non-login related things.
> An understanding of the needed modifications to Emulab software will
> become more evident as the project progresses. For example, it is very
> plausible that Emulab will need to be modified to allow the ability to
> mirror traffic from a given link(s) in the emulated topology to a given
> piece of monitoring equipment that can perform protocol analysis or
> data logging at link rate.
In fact, that's a good example. When people need that, we provide
it manually. Would be nice to provide more generally, but there
hasn't been sufficient demand. OTOH, what is easy to use often
determines what gets used.
From mailnull@bas.flux.utah.edu Tue Oct 28 00:45:41 2003
Received: from bas.flux.utah.edu (localhost [127.0.0.1])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7jeLj024385
for <testbed-ops-hidden@bas.flux.utah.edu>; Tue, 28 Oct 2003 00:45:40 -0700 (MST)
(envelope-from mailnull@bas.flux.utah.edu)
Received: (from mailnull@localhost)
by bas.flux.utah.edu (8.12.9/8.12.5/Submit) id h9S7jen4024384
for testbed-ops-hidden; Tue, 28 Oct 2003 00:45:40 -0700 (MST)
Received: from slow.flux.utah.edu (slow.flux.utah.edu [155.98.63.200])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7jeLj024380
for <testbed-ops@[155.98.60.2]>; Tue, 28 Oct 2003 00:45:40 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by slow.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9S7jePe092959
for <testbed-ops@flux.utah.edu>; Tue, 28 Oct 2003 00:45:40 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from ops.emulab.net (ops.emulab.net [155.101.129.74])
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id AAA14610
for <testbed-ops@fast.flux.utah.edu>; Tue, 28 Oct 2003 00:45:34 -0700 (MST)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by ops.emulab.net (8.12.9/8.12.6) with ESMTP id h9S7j4bD021384
for <testbed-ops@emulab.net>; Tue, 28 Oct 2003 00:45:04 -0700 (MST)
(envelope-from lepreau@fast.cs.utah.edu)
Received: from fast.cs.utah.edu (lepreau@localhost)
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id AAA14588;
Tue, 28 Oct 2003 00:44:51 -0700 (MST)
Message-Id: <200310280744.AAA14588@fast.cs.utah.edu>
From: Jay Lepreau <lepreau@cs.utah.edu>
To: Bob Braden <braden@ISI.EDU>, bob@jensar.us, Stephen_Schwab@NAI.com
Cc: testbed-ops@emulab.net, deter-isi@ISI.EDU
Subject: Re: Hardware configuration for Emulab clone
In-Reply-To: <200310272219.OAA28834@gra.isi.edu>; from Bob Braden on Mon, 27 Oct 2003 14:19:38 PST
Date: Tue, 28 Oct 2003 00:44:51 MST
X-Spam-Status: No, hits=-10 required=5 tests=ACADEMICS,GEEKWORDS2,GLOB_WHITELIST version=FluxMilter1.2
X-Scanned-By: MIMEDefang 2.26 (www . roaringpenguin . com / mimedefang)
Status: RO
Content-Length: 1745
Lines: 46

> [which blades to get]
> ...
> This would provide complete
> symmetry among all the 128 nodes.
> ...
> We are generally trying to obtain as much homogeneity as
> possible, but in the near term we won't need the maximum
> capacity so we can compromise to save money.

As I said in our phone call, strong homogeneity of nodes wrt their
links (link symmetry) is not generally needed, as Emulab abstracts over
that, and experimenters don't specify large completely uniform topologies.
They do care that nodes themselves (eg CPUs) be homogeneous.
The only downside of modest link asymmetry is that the mapper will
take a little longer to run, and it will be harder to "approximate
the mapping in your head," which is sometimes handy.
For Dummynet, you probably do want an even number of links of the same
speed on each node.
Steve S:
> In any
> event, any time our topology carries enough traffic to saturate
> the VLANs on the switch, the illusion of multiple simulated
> networks is going to break. Over-provisioning the switch is
> one way to avoid having to worry about how this affects the
> correctness of our experiments. But if we have to worry about
> this, then so be it.]
We've talked about changing the switch model fed to our resource
mapper to be hierarchical, ie adding a "blade" with higher intra-blade
BW than inter-blade. I would think this wouldn't be hard, but I think
Rob said it could be. If that were done, then we could accurately and
conservatively allocate resources.
However, Cisco BW probably depends on packet size.
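
A sketch of what that hierarchical model might look like -- the structure
and numbers are illustrative only, not the mapper's actual representation:

from collections import defaultdict

FABRIC_GBPS_PER_BLADE = 8.0      # assumed per-blade uplink to the switching fabric

def inter_blade_load(links, node_blade):
    """Sum the bandwidth each blade must push through its fabric connection.

    links: (node_a, node_b, gbps) tuples; node_blade: node -> blade id."""
    load = defaultdict(float)
    for a, b, gbps in links:
        if node_blade[a] != node_blade[b]:   # intra-blade traffic never hits the fabric
            load[node_blade[a]] += gbps
            load[node_blade[b]] += gbps
    return load

# Toy mapping: nodes 0-3 on blade 0, nodes 4-7 on blade 1, five 1 Gbps links.
node_blade = {n: n // 4 for n in range(8)}
links = [(0, 1, 1.0), (0, 4, 1.0), (1, 5, 1.0), (2, 6, 1.0), (3, 7, 1.0)]

for blade, gbps in inter_blade_load(links, node_blade).items():
    status = "ok" if gbps <= FABRIC_GBPS_PER_BLADE else "OVERSUBSCRIBED"
    print(f"blade {blade}: {gbps:.1f} Gbps across the fabric ({status})")
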
Bob Lindell:
> Either way, 48 GE ports is 48Gb/s FD. That will slightly over
> subscribe the blade to backplane interface.
What blade to backplane BW have you been told?
How sure are you?
From mailnull@bas.flux.utah.edu Tue Oct 28 22:56:31 2003
Received: from bas.flux.utah.edu (localhost [127.0.0.1])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9T5uULj074631
for <testbed-ops-hidden@bas.flux.utah.edu>; Tue, 28 Oct 2003 22:56:30 -0700 (MST)
(envelope-from mailnull@bas.flux.utah.edu)
Received: (from mailnull@localhost)
by bas.flux.utah.edu (8.12.9/8.12.5/Submit) id h9T5uUAL074630
for testbed-ops-hidden; Tue, 28 Oct 2003 22:56:30 -0700 (MST)
Received: from slow.flux.utah.edu (slow.flux.utah.edu [155.98.63.200])
by bas.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9T5uULj074626
for <testbed-ops@[155.98.60.2]>; Tue, 28 Oct 2003 22:56:30 -0700 (MST)
(envelope-from Stephen_Schwab@NAI.com)
Received: from fast.cs.utah.edu (fast.cs.utah.edu [155.99.212.1])
by slow.flux.utah.edu (8.12.9/8.12.5) with ESMTP id h9T5uQPe002020
for <testbed-ops@flux.utah.edu>; Tue, 28 Oct 2003 22:56:26 -0700 (MST)
(envelope-from Stephen_Schwab@NAI.com)
Received: from ops.emulab.net (ops.emulab.net [155.101.129.74])
by fast.cs.utah.edu (8.9.1/8.9.1) with ESMTP id WAA27611
for <testbed-ops@fast.flux.utah.edu>; Tue, 28 Oct 2003 22:56:21 -0700 (MST)
From: Stephen_Schwab@NAI.com
Received: from RelayDAL.nai.com (relaydal.nai.com [205.227.136.197])
by ops.emulab.net (8.12.9/8.12.6) with ESMTP id h9T5tobD042474
for <testbed-ops@emulab.net>; Tue, 28 Oct 2003 22:55:50 -0700 (MST)
(envelope-from Stephen_Schwab@NAI.com)
Received: from dalexwsout2.na.nai.com (dalexwsout2.na.nai.com [161.69.212.93] (may be forged))
by RelayDAL.nai.com (Switch-2.2.8/Switch-2.2.6) with SMTP id h9T5qMV15609;
Tue, 28 Oct 2003 23:52:22 -0600 (CST)
Received: from mail.na.nai.com(161.69.111.81) by dalexwsout2.na.nai.com via csmap
id 278c5c60_09d4_11d8_880c_00304811fc74_7761;
Tue, 28 Oct 2003 23:53:00 -0600 (CST)
Received: from losexmb1.corp.nai.org ([161.69.83.203]) by DALEXBR1.corp.nai.org with Microsoft SMTPSVC(5.0.2195.5329);
Tue, 28 Oct 2003 23:55:38 -0600
content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain;
charset="us-ascii"
Subject: shedding some light on the new Cisco 720Gb/s switch fabric
X-MimeOLE: Produced By Microsoft Exchange V6.0.6487.1
Date: Tue, 28 Oct 2003 21:55:37 -0800
Message-ID: <613FA566484CA74288931B35D971C77E13428C@losexmb1.corp.nai.org>
Thread-Topic: shedding some light on the new Cisco 720Gb/s switch fabric
Thread-Index: AcOd4UaDpbU7QPVxR5SqgTKHrLS9EQ==
To: <deter-isi@isi.edu>, <testbed-ops@emulab.net>
X-OriginalArrivalTime: 29 Oct 2003 05:55:38.0460 (UTC) FILETIME=[473899C0:01C39DE1]
X-Spam-Status: No, hits=-2.715 required=5 tests=ACADEMICS,NO_REAL_NAME version=FluxMilter1.2
X-Scanned-By: MIMEDefang 2.26 (www . roaringpenguin . com / mimedefang)
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by bas.flux.utah.edu id h9T5uULj074626
Status: RO
X-Status: A
Content-Length: 2269
Lines: 27

Hi,

I think I see the confusion -- it appears that Cisco dropped a new switch fabric into the 6500s by putting the switch fabric on the supervisor module.
If you search through this web page:
http://www.cisco.com/en/US/products/hw/switches/ps708/products_data_sheet09186a00800ff916.html
you can find a reference buried where it describes the switch fabrics.
I can't quite see how they wire this beast -- perhaps they physically re-cable the slot connectors from the internal 256 Gb/s switch fabric to the 720 Gb/s switch fabric on the supervisor module.
There is a reference somewhere else to the auto-sensing/auto-switching capabilities of the 720 Gb/s switch fabric -- so if you happen to plug in older 16 or 8 Gb/s blades, the switch fabric will still talk to them.
The WS-X6748-GE-TX blades are definitely designed to talk to the 720 Gb/s switch fabric. The way we will use them, it is unlikely that more than 24 Gb/s will ever be sourced or sunk on a blade, so 40 Gb/s will be enough headroom.
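
Checking that headroom claim with the numbers in this message (both
figures are from the thread, not confirmed specs):

fabric_per_blade_gbps = 2 * 20.0   # dual 20 Gb/s fabric connections per blade (claimed)
expected_peak_gbps = 24.0          # worst use we expect to make of one blade

print(f"spare: {fabric_per_blade_gbps - expected_peak_gbps:.0f} Gb/s "
      f"({expected_peak_gbps / fabric_per_blade_gbps:.0%} of the fabric connection used)")
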
But there is a gotcha: we didn't plan to order any WS-F6700-DFC3A daughter cards!