emulab / testbed-manual · Commits

Commit 3502551f
Authored Apr 18, 2016 by Robert Ricci
Update Wisconsin and Clemson hardware
Parent 554b1175

Showing 1 changed file (hardware.scrbl) with 54 additions and 11 deletions (+54 −11).
@@ -53,8 +53,8 @@
 @clab-only{
 @section[#:tag "cloudlab-wisconsin"]{CloudLab Wisconsin}
 
-The CloudLab cluster at the University of Wisconsin is being built in
-partnership with Cisco and Seagate. The initial cluster, which is
+The CloudLab cluster at the University of Wisconsin is built in
+partnership with Cisco, Seagate, and HP. The cluster, which is
 in Madison, Wisconsin, has 100 servers with a total of 1,600 cores connected
 in a CLOS topology with full bisection bandwidth. It has 525 TB of storage,
 including SSDs on every node.
@@ -80,6 +80,33 @@
     (list "NIC" "Dual-port Cisco VIC1227 10Gb NIC (PCIe v3.0, 8 lanes)")
     (list "NIC" "Onboard Intel i350 1Gb"))
 
+@(nodetype "C240M4" 10
+    (list "CPU" "Two Intel E5-2630 v3 8-core CPUs at 2.40 GHz (Haswell w/ EM64T)")
+    (list "RAM" "128GB ECC Memory (8x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs)")
+    (list "Disk" "One 1 TB 7.2K RPM SAS 3.5\" HDD")
+    (list "Disk" "One 480 GB 6G SAS SSD")
+    (list "Disk" "Twelve 3 TB 3.5\" HDDs donated by Seagate")
+    (list "NIC" "Dual-port Cisco VIC1227 10Gb NIC (PCIe v3.0, 8 lanes)")
+    (list "NIC" "Onboard Intel i350 1Gb"))
+
+@(nodetype "c220g2" 163
+    (list "CPU" "Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)")
+    (list "RAM" "160GB ECC Memory (10x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs - 5 memory channels)")
+    (list "Disk" "Two 1.2 TB 10K RPM 6G SAS SFF HDDs")
+    (list "Disk" "One 480 GB 6G SAS SSD")
+    (list "NIC" "Dual-port Intel X520 10Gb NIC (PCIe v3.0, 8 lanes)")
+    (list "NIC" "Onboard Intel i350 1Gb"))
+
+@(nodetype "c240g2" 7
+    (list "CPU" "Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)")
+    (list "RAM" "160GB ECC Memory (10x 16 GB DDR4 2133 MHz PC4-17000 dual rank RDIMMs - 5 memory channels)")
+    (list "Disk" "Two 1.2 TB 10K RPM 6G SAS SFF HDDs")
+    (list "Disk" "One 480 GB 6G SAS SSD")
+    (list "Disk" "Twelve 3 TB 3.5\" HDDs donated by Seagate")
+    (list "NIC" "Dual-port Intel X520 10Gb NIC (PCIe v3.0, 8 lanes)")
+    (list "NIC" "Onboard Intel i350 1Gb"))
 
 All nodes are connected to two networks:
 @itemlist[
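
The @(nodetype ...) calls added above rely on a helper that is defined elsewhere in this repository's Scribble sources and does not appear in this diff. Purely as a hedged sketch of what a helper with this calling convention might look like (the body below is an assumption, not the manual's actual implementation), it could render each entry as a small table using scribble/base:

#lang racket/base
;; HYPOTHETICAL sketch: the manual's real nodetype helper lives elsewhere in
;; this repository and is not shown in this diff. This is only one plausible
;; shape for a function that accepts the calls added above.
(require scribble/base)

;; name  : node type, e.g. "c220g2"
;; count : number of nodes of this type
;; specs : any number of (list label description) pairs, e.g. (list "CPU" "...")
(define (nodetype name count . specs)
  (tabular #:sep (hspace 2)
           (cons (list (bold name) (format "~a nodes" count))
                 (for/list ([spec (in-list specs)])
                   (list (bold (car spec)) (cadr spec))))))

Whatever the real helper does, each call only needs to evaluate to Scribble content (such as a table) so it can be spliced into hardware.scrbl through the @ syntax shown in the diff.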
@@ -106,11 +133,11 @@
 @clab-only{
 @section[#:tag "cloudlab-clemson"]{CloudLab Clemson}
 
-The CloudLab cluster at Clemson University is being built in
-partnership with Dell. The initial cluster has 100
-servers with a total of 2,000 cores, 424 TB of disk space, and
-26 TB of RAM. All nodes have both Ethernet and Infiniband networks.
-It is located in Clemson, South Carolina.
+The CloudLab cluster at Clemson University has been built in
+partnership with Dell. The cluster so far has 186
+servers with a total of 4,400 cores, 596 TB of disk space, and
+48 TB of RAM. All nodes have 10 Gbps Ethernet, and about half have QDR
+Infiniband as well. It is located in Clemson, South Carolina.
 
 More technical details can be found at @url[(@apturl "hardware.php#clemson")]
@@ -131,7 +158,21 @@
     (list "NIC" "Dual-port Intel 10Gbe NIC (PCIe v3.0, 8 lanes)")
     (list "NIC" "Qlogic QLE 7340 40 Gb/s Infiniband HCA (PCIe v3.0, 8 lanes)"))
 
-All nodes are connected to three networks:
+@(nodetype "c6320" 84
+    (list "CPU" "Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)")
+    (list "RAM" "256GB ECC Memory")
+    (list "Disk" "Two 1 TB 7.2K RPM 3G SATA HDDs")
+    (list "NIC" "Dual-port Intel 10Gbe NIC (X520)"))
+
+@(nodetype "c4130" 2
+    (list "CPU" "Two Intel E5-2680 v3 12-core processors at 2.50 GHz (Haswell)")
+    (list "RAM" "256GB ECC Memory")
+    (list "Disk" "Two 1 TB 7.2K RPM 3G SATA HDDs")
+    (list "GPU" "Two Tesla K40m GPUs")
+    (list "NIC" "Dual-port Intel 1Gbe NIC (i350)")
+    (list "NIC" "Dual-port Intel 10Gbe NIC (X710)"))
+
+There are three networks at the Clemson site:
 
 @itemlist[
 
 @item{A 1 Gbps Ethernet @bold{``control network''}---this network
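
The updated Clemson prose earlier in this commit quotes site-wide totals (186 servers, 4,400 cores, 48 TB of RAM) that also count hardware documented before this change. As a rough illustration of where such totals come from, the sketch below (a back-of-the-envelope check, not part of the manual) sums cores over just the two node types added in this hunk, using the socket and per-socket core counts from their spec lines:

#lang racket/base
;; Illustrative only: cores contributed by the two node types added in this
;; hunk, using the per-node specs listed above. The list layout and names
;; below are assumptions for this sketch, not part of the manual.
(define new-clemson-types
  '(("c6320" 84 2 14)   ; 84 nodes, two 14-core E5-2683 v3 CPUs each
    ("c4130"  2 2 12))) ; 2 nodes, two 12-core E5-2680 v3 CPUs each

(define (cores entry)
  (* (cadr entry) (caddr entry) (cadddr entry)))

(apply + (map cores new-clemson-types)) ; => 2400 cores from these two types

Those 2,400 cores, added to the 2,000 cores in the old prose, account for the 4,400-core total in the updated text.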
@@ -152,9 +193,11 @@
 the two leaf switches.
 }
 
-@item{A 40 Gbps QDR Infiniband @bold{``experiment network''}--each
-has one connection to this network, which is implemented using
-a large Mellanox chassis switch with full bisection bandwidth.}
+@item{A 40 Gbps QDR Infiniband @bold{``experiment network''}--for
+nodes with an Infiniband NIC, each has one connection to this
+network, which is implemented using a large Mellanox chassis
+switch with full bisection bandwidth.}
 ]
 }