Commit 3502551f authored by Robert Ricci

Update Wisconsin and Clemson hardware

parent 554b1175
@@ -53,8 +53,8 @@
@clab-only{
@section[#:tag "cloudlab-wisconsin"]{CloudLab Wisconsin}
-The CloudLab cluster at the University of Wisconsin is being built in
-partnership with Cisco and Seagate. The initial cluster, which is is
+The CloudLab cluster at the University of Wisconsin is built in
+partnership with Cisco, Seagate, and HP. The cluster, which is in
Madison, Wisconsin, has 100 servers with a total of 1,600 cores connected
in a CLOS topology with full bisection bandwidth. It has 525 TB of storage,
including SSDs on every node.
@@ -80,6 +80,33 @@
(list "NIC" "Dual-port Cisco VIC1227 10Gb NIC (PCIe v3.0, 8 lanes")
(list "NIC" "Onboard Intel i350 1Gb"))
@(nodetype "C240M4" 10
(list "CPU" "Two Intel E5-2630 v3 8-core CPUs at 2.40 GHz (Haswell w/ EM64T)")
(list "RAM" "128GB ECC Memory (8x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs")
(list "Disk" "One 1 TB 7.2K RPM SAS 3.5\" HDD")
(list "Disk" "One 480 GB 6G SAS SSD")
(list "Disk" "Twelve 3 TB 3.5\" HDDs donated by Seagate")
(list "NIC" "Dual-port Cisco VIC1227 10Gb NIC (PCIe v3.0, 8 lanes")
(list "NIC" "Onboard Intel i350 1Gb"))
@(nodetype "c220g2" 163
(list "CPU" "Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)")
(list "RAM" "160GB ECC Memory (10x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs - 5 memory channels)")
(list "Disk" "Two 1.2 TB 10K RPM 6G SAS SFF HDDs")
(list "Disk" "One 480 GB 6G SAS SSD")
(list "NIC" "Dual-port Intel X520 10Gb NIC (PCIe v3.0, 8 lanes")
(list "NIC" "Onboard Intel i350 1Gb"))
@(nodetype "c240g2" 7
(list "CPU" "Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)")
(list "RAM" "160GB ECC Memory (10x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs - 5 memory channels)")
(list "Disk" "Two 1.2 TB 10K RPM 6G SAS SFF HDDs")
(list "Disk" "One 480 GB 6G SAS SSD")
(list "Disk" "Twelve 3 TB 3.5\" HDDs donated by Seagate")
(list "NIC" "Dual-port Intel X520 10Gb NIC (PCIe v3.0, 8 lanes")
(list "NIC" "Onboard Intel i350 1Gb"))
All nodes are connected to two networks:
@itemlist[
@@ -106,11 +133,11 @@
@clab-only{
@section[#:tag "cloudlab-clemson"]{CloudLab Clemson}
-The CloudLab cluster at Clemson University is being built in
-partnership with Dell. The initial cluster has 100
-servers with a total of 2,000 cores, 424TB of disk space, and
-26TB of RAM. All nodes have both Ethernet and Infiniband networks.
-It is located in Clemson, South Carolina.
+The CloudLab cluster at Clemson University has been built in
+partnership with Dell. The cluster so far has 186
+servers with a total of 4,400 cores, 596TB of disk space, and
+48TB of RAM. All nodes have 10Gb Ethernet, and about half have QDR
+Infiniband as well. It is located in Clemson, South Carolina.
More technical details can be found at @url[(@apturl "hardware.php#clemson")]
@@ -131,7 +158,21 @@
(list "NIC" "Dual-port Intel 10Gbe NIC (PCIe v3.0, 8 lanes")
(list "NIC" "Qlogic QLE 7340 40 Gb/s Infiniband HCA (PCIe v3.0, 8 lanes)"))
-All nodes are connected to three networks:
@(nodetype "c6320" 84
(list "CPU" "Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)")
(list "RAM" "256GB ECC Memory")
(list "Disk" "Two 1 TB 7.2K RPM 3G SATA HDDs")
(list "NIC" "Dual-port Intel 10Gbe NIC (X520)"))
@(nodetype "c4130" 2
(list "CPU" "Two Intel E5-2680 v3 12-core processors at 2.50 GHz (Haswell)")
(list "RAM" "256GB ECC Memory")
(list "Disk" "Two 1 TB 7.2K RPM 3G SATA HDDs")
(list "GPU" "Two Tesla K40m GPUs")
(list "NIC" "Dual-port Intel 1Gbe NIC (i350)")
(list "NIC" "Dual-port Intel 10Gbe NIC (X710)"))
+There are three networks at the Clemson site:
@itemlist[
@item{A 1 Gbps Ethernet @bold{``control network''}---this network
@@ -152,9 +193,11 @@
the two leaf switches.
}
-@item{A 40 Gbps QDR Infiniband @bold{``experiment network''}--each
-has one connection to this network, which is implemented using
-a large Mellanox chassis switch with full bisection bandwidth.}
+@item{A 40 Gbps QDR Infiniband @bold{``experiment network''}--for
+nodes with an Infiniband NIC, each has one connection to this
+network, which is implemented using a large Mellanox chassis switch
+with full bisection bandwidth.
+}
]
}
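The network list above distinguishes the shared control network from the experiment networks. In a profile, experiment-network topology is generally expressed by declaring links or LANs between node interfaces. A sketch of two nodes joined by such a LAN follows, assuming the geni-lib portal API and using made-up names ("lan0", "if1"); this is an illustration, not part of the documentation being changed here.

    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    lan = request.LAN("lan0")        # a LAN carried on the experiment network

    for i in (1, 2):
        node = request.RawPC("node%d" % i)
        iface = node.addInterface("if%d" % i)   # experiment-network interface
        lan.addInterface(iface)

    pc.printRequestRSpec(request)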