Commit 8b41cb00 authored by Robert Ricci

Merge branch 'master' into link-samples-to-profiles

parents c59541ad ab5c5d1c
apt-manual/
cloudlab-manual/
phantomnet-manual/
powder-manual/
emulab-manual/
compiled/
pdf/
...
@@ -104,7 +104,7 @@
 ('apt "https://groups.google.com/forum/#!forum/apt-users")
 ('clab "https://groups.google.com/forum/#!forum/cloudlab-users")
 ('pnet "https://groups.google.com/forum/#!forum/phantomnet-users")
-('powder "https://groups.google.com/forum/#!forum/phantomnet-users")
+('powder "https://groups.google.com/forum/#!forum/powder-users")
 ('elab "https://groups.google.com/forum/#!forum/emulab-users")))
 (define tb (lambda () (case (tb-mode)
...
@@ -49,7 +49,7 @@ All nodes are connected to two networks:
 connected to the public Internet. When you log in to nodes in your
 experiment using @code{ssh}, this is the network you are using.
 @italic{You should not use this network as part of the experiments you
-run in Apt.}
+run in Emulab.}
 }
 @item{A 10/100Mb, 1/10Gb Ethernet
...
@@ -263,7 +263,9 @@ disk image has been loaded.) The second service, described by the
 @tt{command}. In this example (as is common), the command refers directly
 to a file saved by the immediately preceding @geni-lib["geni.rspec.pg.Install" 'id] service. This
 behaviour works, because @(tb) guarantees that all @geni-lib["geni.rspec.pg.Install" 'id] services
-complete before any @geni-lib["rspec.pg.Execute" 'id] services are started.
+complete before any @geni-lib["geni.rspec.pg.Execute" 'id] services are started. The command executes
+every time the node boots, so you can use it to start daemons, etc. that are necessary for your
+experiment.
 @section[#:tag "geni-lib-example-parameters"]{Profiles with user-specified parameters}
...
@@ -19,13 +19,10 @@
 @section[#:tag "cloudlab-utah"]{CloudLab Utah}
 The CloudLab cluster at the University of Utah is being built in partnership
-with HP. The consists of 315 64-bit ARM servers and 270 Intel Xeon-D severs.
-Each has 8 cores, for a total of 4,680 cores. The servers are
-built on HP's Moonshot platform.
-The cluster is housed in the University of Utah's Downtown
-Data Center in Salt Lake City.
-More technical details can be found at @url[(@apturl "hardware.php#utah")]
+with HP. It consists of 200 Intel Xeon E5 servers, 270 Xeon-D servers, and
+315 64-bit ARM servers, for a total of 6,680 cores. The cluster is housed
+in the University of Utah's Downtown Data Center in Salt Lake City.
 @(nodetype "m400" 315 "64-bit ARM"
 (list "CPU" "Eight 64-bit ARMv8 (Atlas/A57) cores at 2.4 GHz (APM X-GENE)")
@@ -55,6 +52,31 @@
 We have plans to enable some users to allocate entire chassis; when
 allocated in this mode, it will be possible to have complete administrator
 control over the switches in addition to the nodes.
+In phase two we added 50 Apollo R2200 chassis, each with four HPE ProLiant
+XL170r server modules. Each server has 10 cores for a total of 2000 cores.
+@(nodetype "xl170" 200 "Intel Broadwell, 10 core, 1 disk"
+(list "CPU" "Ten-core Intel E5-2640v4 at 2.4 GHz")
+(list "RAM" "64GB ECC Memory (4x 16 GB DDR4-2400 DIMMs)")
+(list "Disk" "Intel DC S3520 480 GB 6G SATA SSD")
+(list "NIC" "Two Dual-port Mellanox ConnectX-4 25Gb NIC (PCIe v3.0, 8 lanes)")
+)
+Each server is connected via a 10Gbps control link (Dell switches) and a
+25Gbps experimental link to Mellanox 2410 switches in groups of 40 servers.
+Each of the five groups' experimental switches is connected to a Mellanox
+2700 spine switch at 5x100Gbps. That switch in turn interconnects with the
+rest of the Utah CloudLab cluster via 6x40Gbps uplinks to the HP FlexFabric
+12910 switch.
+A unique feature of the phase two nodes is the addition of eight ONIE
+bootable "user allocatable" switches that can run a variety of Open Network
+OSes: six Dell S4048-ONs and two Mellanox MSN2410-BB2Fs. These switches and
+all 200 nodes are connected to two NetScout 3903 layer-1 switches, allowing
+flexible combinations of nodes and switches in an experiment.
+@margin-note{The layer one network is still being deployed.}
 }
 @clab-only{
...
@@ -130,15 +152,40 @@
 links to Internet2.
 }
 ]
+Phase II added 260 new nodes, 36 with one or more GPUs:
+@(nodetype "c220g5" 224 "Intel Skylake, 20 core, 2 disks"
+(list "CPU" "Two Intel Xeon Silver 4114 10-core CPUs at 2.20 GHz")
+(list "RAM" "192GB ECC DDR4-2666 Memory")
+(list "Disk" "One 1 TB 7200 RPM 6G SAS HD")
+(list "Disk" "One Intel DC S3500 480 GB 6G SATA SSD")
+(list "NIC" "Dual-port Intel X520-DA2 10Gb NIC (PCIe v3.0, 8 lanes)")
+(list "NIC" "Onboard Intel i350 1Gb"))
+@(nodetype "c240g5" 32 "Intel Skylake, 20 core, 2 disks, GPU"
+(list "CPU" "Two Intel Xeon Silver 4114 10-core CPUs at 2.20 GHz")
+(list "RAM" "192GB ECC DDR4-2666 Memory")
+(list "Disk" "One 1 TB 7200 RPM 6G SAS HD")
+(list "Disk" "One Intel DC S3500 480 GB 6G SATA SSD")
+(list "GPU" "One NVIDIA 12GB PCI P100 GPU")
+(list "NIC" "Dual-port Intel X520-DA2 10Gb NIC (PCIe v3.0, 8 lanes)")
+(list "NIC" "Onboard Intel i350 1Gb"))
+@(nodetype "c4130" 4 "Intel Broadwell, 16 core, 2 disks, 4 GPUs"
+(list "CPU" "Two Intel Xeon E5-2667 8-core CPUs at 3.20 GHz")
+(list "RAM" "128GB ECC Memory")
+(list "Disk" "Two 960 GB 6G SATA SSD")
+(list "GPU" "Four NVIDIA 16GB Tesla V100 SMX2 GPUs"))
 }
 @clab-only{
 @section[#:tag "cloudlab-clemson"]{CloudLab Clemson}
 The CloudLab cluster at Clemson University has been built in
-partnership with Dell. The cluster so far has 186
-servers with a total of 4,400 cores, 596TB of disk space, and
-48TB of RAM. All nodes have 10GB Ethernet and QDR
+partnership with Dell. The cluster so far has 260
+servers with a total of 6,736 cores, 1,272TB of disk space, and
+73TB of RAM. All nodes have 10GB Ethernet and most have QDR
 Infiniband. It is located in Clemson, South Carolina.
 More technical details can be found at @url[(@apturl "hardware.php#clemson")]
@@ -176,6 +223,17 @@
 (list "NIC" "Dual-port Intel 10Gbe NIC (X710)")
 (list "NIC" "Qlogic QLE 7340 40 Gb/s Infiniband HCA (PCIe v3.0, 8 lanes)"))
+There are also two storage-intensive (270TB each!) nodes
+that should only be used if you need a huge amount of volatile
+storage. These nodes have only 10GB Ethernet.
+@(nodetype "dss7500" 2 "Haswell, 12 core, 270TB disk"
+(list "CPU" "Two Intel E5-2620 v3 6-core CPUs at 2.40 GHz (Haswell)")
+(list "RAM" "128GB ECC Memory")
+(list "Disk" "Two 120 GB 6Gbps SATA SSDs")
+(list "Disk" "45 6 TB 7.2K RPM 6Gbps SATA HDDs")
+(list "NIC" "Dual-port Intel 10Gbe NIC (X520)"))
 There are three networks at the Clemson site:
 @itemlist[
@@ -200,6 +258,21 @@
 with full bisection bandwidth.
 }
 ]
+Phase two added 18 Dell C6420 chassis, each with four dual-socket Skylake-based
+servers. Each of the 72 servers has 32 cores for a total of 2304 cores.
+@(nodetype "c6420" 72 "Intel Skylake, 32 core, 2 disk"
+(list "CPU" "Two Sixteen-core Intel Xeon Gold 6142 CPUs at 2.6 GHz")
+(list "RAM" "384GB ECC DDR4-2666 Memory")
+(list "Disk" "Two Seagate 1TB 7200 RPM 6G SATA HDs")
+(list "NIC" "Dual-port Intel X710 10Gbe NIC")
+)
+Each server is connected via a 1Gbps control link (Dell D3048 switches) and a
+10Gbps experimental link (Dell S5048 switches).
+These Phase II machines do not include Infiniband.
 }
 @section[#:tag "apt-cluster"]{Apt Cluster}
...
@@ -2,16 +2,15 @@
 * Overrides some of Scribble's default styling to match PhantomNet's colors
 */
-@import url(http://fonts.googleapis.com/css?family=Roboto+Slab:300,400,700);
-@import url(http://fonts.googleapis.com/css?family=Roboto:400,400italic,700,700italic,300,300italic);
+@import url(https://www.powderwireless.net/powder/fonts/raleway/stylesheet.css);
 .navsettop, .navsetbottom, .tocset {
-background-color: #c00;
-color: white;
+background-color: #a2d2df;
+color: black;
 }
 .navsettop, .navsetbottom, .tocset td a {
-color: white;
+color: black;
 }
 .tocset td a.tocviewselflink {
@@ -24,15 +23,15 @@
 }
 .navsettop .nonavigation, .navsetbottom .nonavigation {
-color: #bbb;
+color: #666;
 }
 .navsettop a:hover, .navsetbottom a:hover {
-background: #7f3300;
+background: #dcf3ff;
 }
 .tocviewsublist, .tocviewsublistonly, .tocviewsublisttop, .tocviewsublistbottom {
-border-left: 1px solid #bbb;
+border-left: 1px solid #666;
 }
 .refcolumn {
@@ -45,19 +44,17 @@
 */
 body, .main {
-font-family: 'Roboto', sans-serif;
-font-weight: lighter;
+font-family: 'Raleway', sans-serif;
 }
 h1, h2, h3, h4, h5, h6 {
-font-family: 'Roboto Slab', serif;
+font-family: 'Raleway Bold', serif;
 }
 .SAuthorListBox {
-font-family: 'Roboto Slab', serif;
-font-weight: lighter;
+font-family: 'Raleway Italic', serif;
 }
 .tocset td, .navsettop, .navsetbottom {
-font-family: 'Roboto', sans-serif;
+font-family: 'Raleway', sans-serif;
 }
@@ -6,7 +6,7 @@
 #:date (date->string (current-date))]{The POWDER Manual}
 @author[
-"Jacobus (Kobus) Van der Merwe" "Robert Ricci" "Leigh Stoller" "Kirk Webb" "Jon Duerig" "Gary Wong" "Keith Downie" "Mike Hibler" "Eric Eide"
+"The Powder Team"
 ]
 @;{
...
@@ -38,4 +38,5 @@ The Powder facility is built on top of
 @include-section["advanced-topics.scrbl"]
 @include-section["hardware.scrbl"]
 @include-section["planned.scrbl"]
+@include-section["powder-tutorial.scrbl"]
 @include-section["getting-help.scrbl"]
#lang scribble/manual
@(require "defs.rkt")
@title[#:tag "powder-tutorial" #:version apt-version]{@(tb) OAI Tutorial}
This tutorial will walk you through the process of creating a small LTE network on
@(tb) using OAI. Your copy of OAI will run on bare-metal machines
that are dedicated for your use for the duration of your experiment. You will
have complete administrative access to these machines, meaning that you have
full ability to customize and/or configure your installation of OAI.
OAI can simulate a UE and RAN, but in this tutorial, we will focus on using an
off-the-shelf Nexus 5 phone as the UE, communicating with an eNodeB that uses a
B210 USRP SDR as its radio.
@section{Objectives}
In the process of taking this tutorial, you will learn to:
@itemlist[
@item{Log in to @(tb)}
@item{Create your own LTE network by using a pre-defined profile}
@item{Access resources in the network that you create}
@item{Create an end-to-end connection from the UE, through the RAN, across an EPC, and out to the commodity Internet.}
@item{Clean up your experiment when finished}
@item{Learn where to get more information}
]
@section{Prerequisites}
This tutorial assumes that you have an existing account on @(tb). (Instructions
for getting an account can be found @seclink["register"]{here}.)
@section[#:tag "powder-login-body"]{Logging In}
The first step is to log in to @(tb); @(tb) is available to
researchers and educators who work in radio networking and have accepted the @(tb) AUP.
If you have
an account at one of its federated facilities, like
@link["https://www.emulab.net"]{Emulab} or
@link["http://cloudlab.us"]{CloudLab}, then you already have an account at
@(tb).
@screenshot["powder-front-page.png"]
@section[#:tag "powder-tutorial-body"]{Building Your Own OAI Network}
Once you have logged in to @(tb), you will ``instantiate'' a @seclink["profiles"]{``profile''}
to create an @seclink["experiments"]{experiment}. Profiles are @(tb)'s way of packaging up
configurations and experiments
so that they can be shared with others. Each experiment is separate:
the experiment that you create for this tutorial will be an instance of a profile provided by
the facility, but running on resources that are dedicated to you, which you
have complete control over. This profile uses local disk space on the nodes, so
anything you store there will be lost when the experiment terminates.
@margin-note{The OAI network we will build in this tutorial is very small, but @(tb)
will have city-scale infrastructure that can be used for larger-scale
experiments.}
For this tutorial, we will use a basic profile that brings up a small LTE network.
The @(tb) staff have built this profile by capturing
@seclink["disk-images"]{disk images} of a partially-completed OAI installation
and scripting the remainder of the install (customizing it for the specific
machines that will get allocated, the user that created it, the SIM card in the allocated phone, etc.)
See this manual's @seclink["profiles"]{section on profiles} for more
information about how they work.
@itemlist[#:style 'ordered
@instructionstep["Start Experiment"]{
@screenshot["powder-start-experiment-menu.png"]
After logging in, you are taken to your main status
@link["https://powderwireless.net/user-dashboard.php"]{dashboard}.
Select ``Start Experiment'' from
the ``Experiments'' menu.
}
@instructionstep["Select a profile"]{
@screenshot["powder-start-experiment.png"]
The ``Start an Experiment'' page is where you will select a profile
to instantiate. We will use the @bold{OAI-Real-Hardware} profile; if
it is not selected, follow
@link["https://powderwireless.net/p/PhantomNet/OAI-Real-Hardware"]{this link}
or click the ``Change Profile'' button, and select
``OAI-Real-Hardware'' from the list on the left.
Once you have the correct profile selected, click ``Next''.
@screenshot["powder-click-next.png"]
}
@instructionstep["Set parameters"
#:screenshot "powder-set-parameters.png"]{
Profiles in @(tb) can have @emph{parameters} that affect how they are
configured; for example, this profile has parameters that allow you to
specify whether the experiment will use real hardware over-the-air, across an attenuator, or with a simulated RAN.
For this tutorial, we will leave all parameters at their defaults and
just click ``next''.
}
@instructionstep["Click Finish!"
#:screenshot "powder-click-finish.png"]{
When you click the ``finish'' button, @(tb) will start
provisioning the resources that you requested.
@margin-note{You may optionally give your experiment a name---this
can be useful if you have many experiments running at once.}
}
@instructionstep["Powder instantiates your profile"]{
@(tb) will take a few minutes to bring up your copy of OAI, as
many things happen at this stage, including selecting suitable
hardware, loading disk images on local storage, booting bare-metal
machines, re-configuring the network topology, etc. While this is
happening, you will see this status page:
@screenshot["powder-waiting.png"]
@margin-note{Provisioning is done using the
@link["http://groups.geni.net/geni/wiki/GeniApi"]{GENI APIs}; it
is possible for advanced users to bypass the @(tb) portal and
call these provisioning APIs from their own code. A good way to
do this is to use the @link["https://geni-lib.readthedocs.org"]{@tt{geni-lib} library for Python.}}
As soon as a set of resources has been assigned to you, you will see
details about them at the bottom of the page (though you will not be
able to log in until they have gone through the process of imaging and
booting.) While you are waiting for your resources to become available,
you may want to have a look at the
@link["http://docs.powderwireless.net"]{@(tb)
user manual}, or use the ``Sliver'' button to watch the logs of the
resources (``slivers'') being provisioned and booting.
}
@instructionstep["Your network is ready!"
#:screenshot "powder-ready.png"]{
When the web interface reports the state as ``Booted'', your network
is provisioned, and you can proceed to the next section.
}
]
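The margin note above points out that provisioning goes through the GENI APIs, which exchange XML ``RSpec'' documents. As a minimal sketch of what a request looks like, the snippet below hand-builds a two-node request RSpec using only the Python standard library; the node names (@tt{epc}, @tt{enb}) are made up for illustration, and in practice you would let the @tt{geni-lib} library generate this document for you.

```python
# Minimal sketch of a GENI v3 *request* RSpec, hand-built with the Python
# stdlib. Node names ("epc", "enb") are illustrative only; real profiles
# should use geni-lib rather than constructing XML by hand.
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", RSPEC_NS)

def make_request(node_names):
    """Build a request RSpec asking for one bare-metal PC per name."""
    rspec = ET.Element(f"{{{RSPEC_NS}}}rspec", {"type": "request"})
    for name in node_names:
        node = ET.SubElement(rspec, f"{{{RSPEC_NS}}}node",
                             {"client_id": name, "exclusive": "true"})
        # sliver_type "raw-pc" requests a whole physical machine, not a VM.
        ET.SubElement(node, f"{{{RSPEC_NS}}}sliver_type", {"name": "raw-pc"})
    return ET.tostring(rspec, encoding="unicode")

print(make_request(["epc", "enb"]))
```

A real request would also carry disk-image and link elements; @tt{geni-lib}'s node and link objects emit those details for you.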
@section{Exploring Your Experiment}
Now that your experiment is ready, take a few minutes to look at various parts
of the @(tb) status page to help you understand what resources you've got and what
you can do with them.
@subsection{Experiment Status}
The panel at the top of the page shows the status of your experiment---you can
see which profile it was launched with, when it will expire, etc. The
buttons in this area let you make a copy of the profile (so that you can
@seclink["creating-profiles"]{customize it}), ask to hold on to the resources
for longer, or release them immediately.
@screenshot["powder-status.png"]
Note that the default lifetime for experiments on @(tb) is less than a day;
after this time, the resources will be reclaimed and their disk contents will
be lost. If you need to use them for longer, you can use the ``Extend'' button
and provide a description of why they are needed. Longer extensions require
higher levels of approval from @(tb) staff. You might also consider
@seclink["creating-profiles"]{creating a profile} of your own if you might need
to run a customized environment multiple times or want to share it with others.
You can click the title of the panel to expand or collapse it.
@subsection{Profile Instructions}
Profiles may contain written instructions for their use. Clicking on the title
of the ``Profile Instructions'' panel will expand (or collapse) it; in this
case, the instructions provide details on how to start the OAI services, along
with a link to further documentation on how this profile works and how to use it.
@screenshot["powder-instructions.png"]
@subsection{Topology View}
At the bottom of the page, you can see the topology of your experiment. This
profile has separate nodes for the OAI EPC and eNodeB. In addition, it has a VM
(``adb-tgt'') that lets you control the Nexus 5 UE. The names given for each
node are
the names assigned as part of the profile; this way, every time you instantiate
a profile, you can refer to the nodes using the same names, regardless of which
physical hardware was assigned to them. The green boxes around each node
indicate that they are up; click the ``Refresh Status'' button to initiate a
fresh check.
@screenshot["powder-topology-view.png"]
If an experiment has ``startup services'' (programs that run at the beginning
of the experiment to set it up), their status is indicated by a small icon in
the upper right corner of the node. You can mouse over this icon to see a
description of the current status.
It is important to note that most nodes in @(tb) have at least @italic{two}
network interfaces: one ``control network'' that carries public IP
connectivity, and one ``experiment network'' that is isolated from the Internet
and all other experiments. It is the experiment net that is shown in this
topology view. You will use the control network to @(ssh) into your nodes and
interact with them. This separation gives you more
freedom and control in the private experiment network, and sets up a clean
environment for @seclink["repeatable-research"]{repeatable research}.
@subsection[#:tag "powder-tutorial-list-view"]{List View}
The list view tab shows similar information to the topology view, but in a
different format. It shows the identities of the nodes you have been
assigned, and the full @(ssh) command lines to connect to them. In some
browsers (those that support the @tt{ssh://} URL scheme), you can click on the
SSH commands to automatically open a new session. On others, you may need to
cut and paste this command into a terminal window. Note that only public-key
authentication is supported, and you must have set up an @(ssh) keypair on your
account @bold{before} starting the experiment in order for authentication to
work.
@screenshot["powder-list-view.png"]
@subsection{Manifest View}
The third default tab shows a
@link["http://groups.geni.net/geni/wiki/GENIExperimenter/RSpecs#ManifestRSpec"]{manifest} detailing the hardware that has been assigned to you. This is the
@seclink["rspecs"]{``request'' RSpec} that is used to define the profile,
annotated with details of the hardware that was chosen to instantiate your
request. This information is available on the nodes themselves using the
@link["http://groups.geni.net/geni/wiki/GeniGet"]{@tt{geni-get}} command,
enabling you to do rich scripting that is fully aware of both the requested
topology and assigned resources.
@margin-note{Most of the information displayed on the @(tb) status page comes
directly from this manifest; it is parsed and laid out in-browser.}
@screenshot["powder-manifest-view.png"]
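Because the manifest is simply an RSpec XML document, the output of @tt{geni-get} is easy to script against. The sketch below pulls each node's SSH hostname out of a manifest using only the Python standard library; note that the @tt{MANIFEST} string here is a fabricated, heavily trimmed example rather than real Powder output, and on a node you would obtain the full document with @tt{geni-get manifest} instead of hard-coding it.

```python
# Sketch: extracting login hostnames from a manifest RSpec. The MANIFEST
# string is a made-up, trimmed-down example; on a real node you would run
# `geni-get manifest` to obtain the actual document.
import xml.etree.ElementTree as ET

MANIFEST = """\
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="manifest">
  <node client_id="epc" component_id="urn:publicid:IDN+example+node+pc1">
    <services>
      <login authentication="ssh-keys" hostname="pc1.example.net" port="22"/>
    </services>
  </node>
  <node client_id="enb1" component_id="urn:publicid:IDN+example+node+pc2">
    <services>
      <login authentication="ssh-keys" hostname="pc2.example.net" port="22"/>
    </services>
  </node>
</rspec>"""

NS = {"r": "http://www.geni.net/resources/rspec/3"}

def login_hosts(manifest_xml):
    """Map each node's client_id to the hostname you ssh to."""
    root = ET.fromstring(manifest_xml)
    return {
        node.get("client_id"): login.get("hostname")
        for node in root.findall("r:node", NS)
        for login in node.findall("r:services/r:login", NS)
    }

print(login_hosts(MANIFEST))
# → {'epc': 'pc1.example.net', 'enb1': 'pc2.example.net'}
```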
@subsection{Graphs View}
The final default tab shows a page of CPU load and network traffic
graphs for the nodes in your experiment. On a freshly-created
experiment, it may take several minutes for the first data to appear.
After clicking on the ``Graphs'' tab the first time, a small reload icon
will appear on the tab, which you can click to refresh the data and
regenerate the graphs. For instance, here is the load average graph for
an OAI experiment running this profile for over 6 hours. Scroll
past this screenshot to see the control and experiment network traffic
graphs. In your experiment, you'll want to wait 20-30 minutes before
expecting to see anything interesting.
@screenshot["powder-graph-view.png"]
@subsection[#:tag "powder-tutorial-actions"]{Actions}
In both the topology and list views, you have access to several actions that
you may take on individual nodes. In the topology view, click on the node to
access this menu; in the list view, it is accessed through the icon in the
``Actions'' column. Available actions include rebooting (power cycling) a node,
and re-loading it with a fresh copy of its disk image (destroying all data on
the node). While nodes are in the process of rebooting or re-imaging, they
will turn yellow in the topology view. When they have completed, they will
become green again. The @seclink["powder-tutorial-web-shell"]{shell} action
is described in more detail below.
@screenshot["powder-actions-menu.png"]
@subsection[#:tag "powder-tutorial-web-shell"]{Web-based Shell}