Commit 21b4e4fd authored by Robert Ricci

Add a description of the hardware

parent ed128c2e
@@ -8,3 +8,11 @@
(define (TODO what)
(bold (string-append "TODO: " what)))
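;; nodetype: typeset a boxed table describing one class of nodes.  The first
;; row shows the type name and node count; each remaining `properties' entry
;; should be a two-element (list label value) row giving one hardware property.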
(define nodetype
(lambda (typename howmany . properties)
(tabular #:style 'boxed #:sep (hspace 3)
(cons
(list (bold typename)
(string-append (number->string howmany) " nodes"))
properties))))
#lang scribble/manual
@(require "defs.rkt")
@title[#:tag "hardware"]{Hardware}
@margin-note{Apt does not yet allow users to select which cluster their
experiment is instantiated on---this feature is expected soon.}
Apt can allocate experiments on any one of several federated clusters.
@section[#:tag "aptcluster"]{Apt Cluster}
@margin-note{This cluster is in the process of being brought up.}
The main Apt cluster is housed in the University of Utah's Downtown Data
Center in Salt Lake City, Utah. It contains two classes of nodes:
@(nodetype "r320" 128
(list "CPU" "1x Xeon E5-2450 processor (8 cores, 2.1Ghz)")
(list "RAM" "16GB Memory (4 x 2GB RDIMMs, 1.6Ghz)")
(list "Disks" "4 x 500GB 7.2K SATA Drives (RAID5)")
(list "NIC" "1GbE Dual port embedded NIC (Broadcom)")
(list "NIC" "1 x Mellanox MX354A Dual port FDR CX3 adapter w/1 x QSA adapter")
)
@(nodetype "c6220" 64
(list "CPU" "2 x Xeon E5-2650v2 processors (8 cores each, 2.6Ghz)")
(list "RAM" "64GB Memory (8 x 8GB DDR-3 RDIMMs, 1.86Ghz)")
(list "Disks" "2 x 1TB SATA 3.5” 7.2K rpm hard drives")
(list "NIC" "4 x 1GbE embedded Ethernet Ports (Broadcom)")
(list "NIC" "1 x Intel X520 PCIe Dual port 10Gb Ethernet NIC")
(list "NIC" "1 x Mellanox FDR CX3 Single port mezz card")
)
All nodes are connected to three networks:
@itemlist[
@item{A 1 Gbps @italic{Ethernet} @bold{``control network''}---this network
is used for remote access, experiment management, etc., and is
connected to the public Internet. When you log in to nodes in your
experiment using @code{ssh}, this is the network you are using.
@italic{You should not use this network as part of the experiments you
run in Apt.}
}
@item{A @bold{``flexible fabric''} that can run at up to 56 Gbps and carries
@italic{either FDR Infiniband or Ethernet}. This fabric uses NICs and
switches with @hyperlink["http://www.mellanox.com/"]{Mellanox's}
VPI technology. This means that we can, on demand, configure each
port to be either FDR Infiniband or 40 Gbps (or even non-standard
56 Gbps) Ethernet. This fabric consists of seven edge switches
(Mellanox SX6036G) with 28 connected nodes each. There are two core
switches (also SX6036G), and each edge switch connects to both cores
with a 3.5:1 blocking factor (see the sketch after this list). This
fabric is ideal if you need @bold{very low latency, Infiniband, or a few
high-bandwidth Ethernet links}.
}
@item{A 10 Gbps @italic{Ethernet} @bold{``commodity fabric''}. On the
@code{r320} nodes, a port on the Mellanox NIC (permanently set to
Ethernet mode) is used to connect to this fabric; on the @code{c6220}
nodes, a dedicated Intel 10 Gbps NIC is used. This fabric is built
from two Dell Z9000 switches, each of which has 96 nodes connected
to it. It is ideal for creating @bold{large LANs}: each of the two
switches has full bisection bandwidth for its 96 ports, and there is a
3.5:1 blocking factor between the two switches.
}
]
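
To give a rough sense of what the 3.5:1 blocking factors above mean in
aggregate, the sketch below simply divides each fabric's total node-facing
bandwidth by the blocking factor. It uses only the figures quoted in this
list; the actual number and speed of the inter-switch links are not stated
here, so treat the results as rough aggregates rather than an exact switch
configuration.

@verbatim{
;; Back-of-the-envelope sketch of the aggregate uplink capacity implied by
;; a blocking factor; inputs are the figures from the list above.
(define (uplink-gbps nodes port-gbps blocking)
  (/ (* nodes port-gbps) blocking))

(uplink-gbps 28 56 3.5) ; flexible fabric: 1568 Gbps of node-facing capacity
                        ; per edge switch, so roughly 448 Gbps toward the cores
(uplink-gbps 96 10 3.5) ; commodity fabric: 960 Gbps of node-facing capacity
                        ; per Z9000, so roughly 274 Gbps between the switches
}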
@section[#:tag "igddc"]{IG-DDC Cluster}
@margin-note{This is the cluster that is currently used by default for all
experiments on Apt.}
This small cluster is an @hyperlink["http://www.instageni.net"]{InstaGENI
Rack} also housed in the University of Utah's Downtown Data Center. It
has nodes of only a single type:
@nodetype["dl360" 33
(list "CPU" "2x Xeon E5-2450 processors (8 cores each, 2.1Ghz)")
(list "RAM" "48GB Memory (6 x 8GB RDIMMs, 1.6Ghz)")
(list "Disk" "1 x 1TB 7.2K SATA Drive")
(list "NIC" "1GbE 4-port embedded NIC")
]