Commit 8b41cb00 authored by Robert Ricci

Merge branch 'master' into link-samples-to-profiles

parents c59541ad ab5c5d1c
apt-manual/
cloudlab-manual/
phantomnet-manual/
powder-manual/
emulab-manual/
compiled/
pdf/
@@ -104,7 +104,7 @@
('apt "https://groups.google.com/forum/#!forum/apt-users")
('clab "https://groups.google.com/forum/#!forum/cloudlab-users")
('pnet "https://groups.google.com/forum/#!forum/phantomnet-users")
-('powder "https://groups.google.com/forum/#!forum/phantomnet-users")
+('powder "https://groups.google.com/forum/#!forum/powder-users")
('elab "https://groups.google.com/forum/#!forum/emulab-users")))
(define tb (lambda () (case (tb-mode)
@@ -49,7 +49,7 @@ All nodes are connected to two networks:
connected to the public Internet. When you log in to nodes in your
experiment using @code{ssh}, this is the network you are using.
@italic{You should not use this network as part of the experiments you
-run in Apt.}
+run in Emulab.}
}
@item{A 10/100Mb, 1/10Gb Ethernet
@@ -263,7 +263,9 @@ disk image has been loaded.) The second service, described by the
@tt{command}. In this example (as is common), the command refers directly
to a file saved by the immediately preceding @geni-lib["geni.rspec.pg.Install" 'id] service. This
behaviour works, because @(tb) guarantees that all @geni-lib["geni.rspec.pg.Install" 'id] services
-complete before any @geni-lib["rspec.pg.Execute" 'id] services are started.
+complete before any @geni-lib["rspec.pg.Execute" 'id] services are started. The command executes
+every time the node boots, so you can use it to start daemons, etc. that are necessary for your
+experiment.
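To make the ordering concrete, here is a minimal geni-lib sketch of the pattern described above: an Install service that unpacks a tarball, followed by an Execute service that runs a script from it on each boot. The tarball URL and script path are hypothetical placeholders.

import geni.portal as portal
import geni.rspec.pg as pg

request = portal.context.makeRequestRSpec()
node = request.RawPC("node1")

# Install runs once, after the disk image is loaded; it unpacks the
# (hypothetical) tarball into /local.
node.addService(pg.Install(url="http://example.org/software.tar.gz",
                           path="/local"))

# Execute runs every time the node boots, and will not start until all
# Install services have completed, so it can safely refer to the
# unpacked files.
node.addService(pg.Execute(shell="sh", command="/local/setup.sh"))

portal.context.printRequestRSpec(request)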
@section[#:tag "geni-lib-example-parameters"]{Profiles with user-specified parameters}
@@ -19,13 +19,10 @@
@section[#:tag "cloudlab-utah"]{CloudLab Utah}
The CloudLab cluster at the University of Utah is being built in partnership
-with HP. The consists of 315 64-bit ARM servers and 270 Intel Xeon-D severs.
-Each has 8 cores, for a total of 4,680 cores. The servers are
-built on HP's Moonshot platform.
-The cluster is housed in the University of Utah's Downtown
-Data Center in Salt Lake City.
-More technical details can be found at @url[(@apturl "hardware.php#utah")]
+with HP. It consists of 200 Intel Xeon E5 servers, 270 Intel Xeon-D servers,
+and 315 64-bit ARM servers, for a total of 6,680 cores. The cluster is housed
+in the University of Utah's Downtown Data Center in Salt Lake City.
@(nodetype "m400" 315 "64-bit ARM"
(list "CPU" "Eight 64-bit ARMv8 (Atlas/A57) cores at 2.4 GHz (APM X-GENE)")
@@ -55,6 +52,31 @@
We have plans to enable some users to allocate entire chassis; when
allocated in this mode, it will be possible to have complete administrator
control over the switches in addition to the nodes.
+In phase two we added 50 Apollo R2200 chassis each with four HPE ProLiant
+XL170r server modules. Each server has 10 cores for a total of 2000 cores.
+@(nodetype "xl170" 200 "Intel Broadwell, 10 core, 1 disk"
+(list "CPU" "Ten-core Intel E5-2640v4 at 2.4 GHz")
+(list "RAM" "64GB ECC Memory (4x 16 GB DDR4-2400 DIMMs)")
+(list "Disk" "Intel DC S3520 480 GB 6G SATA SSD")
+(list "NIC" "Two dual-port Mellanox ConnectX-4 25Gb NICs (PCIe v3.0, 8 lanes)")
+)
+Each server is connected via a 10Gbps control link (Dell switches) and a
+25Gbps experimental link to Mellanox 2410 switches in groups of 40 servers.
+Each of the five groups' experimental switches is connected to a Mellanox
+2700 spine switch at 5x100Gbps. That switch in turn interconnects with the
+rest of the Utah CloudLab cluster via 6x40Gbps uplinks to the HP FlexFabric
+12910 switch.
+A unique feature of the phase two nodes is the addition of eight ONIE-bootable
+"user allocatable" switches that can run a variety of Open Network
+OSes: six Dell S4048-ONs and two Mellanox MSN2410-BB2Fs. These switches and
+all 200 nodes are connected to two NetScout 3903 layer-1 switches, allowing
+flexible combinations of nodes and switches in an experiment.
+@margin-note{The layer-one network is still being deployed.}
}
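For profile authors: node types like those described in this chapter can be requested by name. A minimal geni-lib sketch, assuming the standard hardware_type node attribute (the node name is a placeholder):

import geni.portal as portal

request = portal.context.makeRequestRSpec()
node = request.RawPC("node1")
# Pin this node to the Utah xl170 class described above; any type name
# from this chapter ("m400", "c6420", and so on) works the same way.
node.hardware_type = "xl170"
portal.context.printRequestRSpec(request)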
@clab-only{
@@ -130,15 +152,40 @@
links to Internet2.
}
]
+Phase II added 260 new nodes, 36 with one or more GPUs:
+@(nodetype "c220g5" 224 "Intel Skylake, 20 core, 2 disks"
+(list "CPU" "Two Intel Xeon Silver 4114 10-core CPUs at 2.20 GHz")
+(list "RAM" "192GB ECC DDR4-2666 Memory")
+(list "Disk" "One 1 TB 7200 RPM 6G SAS HD")
+(list "Disk" "One Intel DC S3500 480 GB 6G SATA SSD")
+(list "NIC" "Dual-port Intel X520-DA2 10Gb NIC (PCIe v3.0, 8 lanes)")
+(list "NIC" "Onboard Intel i350 1Gb"))
+@(nodetype "c240g5" 32 "Intel Skylake, 20 core, 2 disks, GPU"
+(list "CPU" "Two Intel Xeon Silver 4114 10-core CPUs at 2.20 GHz")
+(list "RAM" "192GB ECC DDR4-2666 Memory")
+(list "Disk" "One 1 TB 7200 RPM 6G SAS HD")
+(list "Disk" "One Intel DC S3500 480 GB 6G SATA SSD")
+(list "GPU" "One NVIDIA 12GB PCI P100 GPU")
+(list "NIC" "Dual-port Intel X520-DA2 10Gb NIC (PCIe v3.0, 8 lanes)")
+(list "NIC" "Onboard Intel i350 1Gb"))
+@(nodetype "c4130" 4 "Intel Broadwell, 16 core, 2 disks, 4 GPUs"
+(list "CPU" "Two Intel Xeon E5-2667 8-core CPUs at 3.20 GHz")
+(list "RAM" "128GB ECC Memory")
+(list "Disk" "Two 960 GB 6G SATA SSDs")
+(list "GPU" "Four NVIDIA 16GB Tesla V100 SXM2 GPUs"))
}
@clab-only{
@section[#:tag "cloudlab-clemson"]{CloudLab Clemson}
The CloudLab cluster at Clemson University has been built
-partnership with Dell. The cluster so far has 186
-servers with a total of 4,400 cores, 596TB of disk space, and
-48TB of RAM. All nodes have 10GB Ethernet and QDR
+in partnership with Dell. The cluster so far has 260
+servers with a total of 6,736 cores, 1,272TB of disk space, and
+73TB of RAM. All nodes have 10Gb Ethernet and most have QDR
Infiniband. It is located in Clemson, South Carolina.
More technical details can be found at @url[(@apturl "hardware.php#clemson")]
@@ -176,6 +223,17 @@
(list "NIC" "Dual-port Intel 10Gbe NIC (X710)")
(list "NIC" "Qlogic QLE 7340 40 Gb/s Infiniband HCA (PCIe v3.0, 8 lanes)"))
+There are also two storage-intensive (270TB each!) nodes
+that should only be used if you need a huge amount of volatile
+storage. These nodes have only 10Gb Ethernet.
+@(nodetype "dss7500" 2 "Haswell, 12 core, 270TB disk"
+(list "CPU" "Two Intel E5-2620 v3 6-core CPUs at 2.40 GHz (Haswell)")
+(list "RAM" "128GB ECC Memory")
+(list "Disk" "Two 120 GB 6Gbps SATA SSDs")
+(list "Disk" "Forty-five 6 TB 7.2K RPM 6Gbps SATA HDDs")
+(list "NIC" "Dual-port Intel 10GbE NIC (X520)"))
There are three networks at the Clemson site:
@itemlist[
@@ -200,6 +258,21 @@
with full bisection bandwidth.
}
]
+Phase two added 18 Dell C6420 chassis each with four dual-socket Skylake-based
+servers. Each of the 72 servers has 32 cores for a total of 2304 cores.
+@(nodetype "c6420" 72 "Intel Skylake, 32 core, 2 disk"
+(list "CPU" "Two sixteen-core Intel Xeon Gold 6142 CPUs at 2.6 GHz")
+(list "RAM" "384GB ECC DDR4-2666 Memory")
+(list "Disk" "Two Seagate 1TB 7200 RPM 6G SATA HDs")
+(list "NIC" "Dual-port Intel X710 10GbE NIC")
+)
+Each server is connected via a 1Gbps control link (Dell D3048 switches) and a
+10Gbps experimental link (Dell S5048 switches).
+These Phase II machines do not include Infiniband.
}
@section[#:tag "apt-cluster"]{Apt Cluster}
@@ -2,16 +2,15 @@
* Overrides some of Scribble's default styling to match PhantomNet's colors
*/
-@import url(http://fonts.googleapis.com/css?family=Roboto+Slab:300,400,700);
-@import url(http://fonts.googleapis.com/css?family=Roboto:400,400italic,700,700italic,300,300italic);
+@import url(https://www.powderwireless.net/powder/fonts/raleway/stylesheet.css);
.navsettop, .navsetbottom, .tocset {
-background-color: #c00;
-color: white;
+background-color: #a2d2df;
+color: black;
}
.navsettop, .navsetbottom, .tocset td a {
-color: white;
+color: black;
}
.tocset td a.tocviewselflink {
@@ -24,15 +23,15 @@
}
.navsettop .nonavigation, .navsetbottom .nonavigation {
-color: #bbb;
+color: #666;
}
.navsettop a:hover, .navsetbottom a:hover {
-background: #7f3300;
+background: #dcf3ff;
}
.tocviewsublist, .tocviewsublistonly, .tocviewsublisttop, .tocviewsublistbottom {
-border-left: 1px solid #bbb;
+border-left: 1px solid #666;
}
.refcolumn {
@@ -45,19 +44,17 @@
*/
body, .main {
-font-family: 'Roboto', sans-serif;
-font-weight: lighter;
+font-family: 'Raleway', sans-serif;
}
h1, h2, h3, h4, h5, h6 {
-font-family: 'Roboto Slab', serif;
+font-family: 'Raleway Bold', serif;
}
.SAuthorListBox {
-font-family: 'Roboto Slab', serif;
-font-weight: lighter;
+font-family: 'Raleway Italic', serif;
}
.tocset td, .navsettop, .navsetbottom {
-font-family: 'Roboto', sans-serif;
+font-family: 'Raleway', sans-serif;
}
@@ -6,7 +6,7 @@
#:date (date->string (current-date))]{The POWDER Manual}
@author[
"Jacobus (Kobus) Van der Merwe" "Robert Ricci" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler" "Eric Eide"
"The Powder Team"
]
@;{
@@ -38,4 +38,5 @@ The Powder facility is built on top of
@include-section["advanced-topics.scrbl"]
@include-section["hardware.scrbl"]
@include-section["planned.scrbl"]
@include-section["powder-tutorial.scrbl"]
@include-section["getting-help.scrbl"]
@@ -3,53 +3,6 @@
@title[#:tag "users" #:version apt-version]{@(tb) Users}
-@apt-only{
-You may either use @(tb) as a @seclink["guest-users"]{guest} or as a
-@seclink["registered-users"]{registered user}.
-Using @(tb) as a guest is a great way to give it a try; if you find it
-useful and want to start using it for ``real work,'' you should
-@seclink["register"]{sign up for a (free) account}, because a guest account
-(1) won't let you hold your experiments for very long and (2) only allows
-you to use @seclink["virtual-machines"]{virtual machines}, which are not
-ideal for @seclink["repeatable-research"]{reproducing results}, since they
-don't have strong performance isolation from other users.
-@section[#:tag "guest-users"]{Guest Users}
-You may become a guest user simply by entering your email address on
-@(tb)'s @hyperlink[(apturl "instantiate.php")]{``Instantiate an
-Experiment''} page and picking a username. @(tb) will send you an email
-with a verification code - be sure to check your spam folder if you don't
-receive it within a few minutes.
-You'll remain logged in to @(tb) as long as you use the same browser and it
-retains its cookies. If you get logged out for any reason, simply enter the
-same email address and username again, and you'll be sent a new
-verification code.
-Guest users are limited in several ways:
-@itemlist[
-@item{Guests are only allowed to hold experiments for a short period of
-time---a few hours to start with, and they can extend this up to a day}
-@item{Access to some resources (such as bare metal and large VMs) is not
-allowed, meaning that some profiles which require these things are not
-available}
-@item{Experiments held by guest user are very heavily firewalled---no
-outgoing connections are allowed, and almost all incoming traffic is
-blocked}
-@item{Guest users are only allowed to have one active experiment at a time}
-@item{Guest users may not create profiles}
-]
-If you are going to use @(tb) for much serious work, we encourage you to
-@seclink["register"]{register for an account}.
-}
@section[#:tag "registered-users"]{Registered Users}
Registering for an account is @seclink["register"]{quick and easy}. Registering
doesn't cost anything; it's simply for accountability. We just ask that if
you're going to use @(tb) for anything other than light use, you tell us a bit