Commit 1bc93167 authored by David Johnson

Add documentation on the Cloudlab profile: running tests, etc.

parent 370c4b57

Using Capnet in an OpenStack-Capnet Cloudlab experiment
--------------------------------------------------------

(Note: it may be useful to read ``../README.rst`` first so that you
fully understand the automated test scripts you can run inside this
profile.)

All the Capnet components are preinstalled in an experiment instantiated
from the OpenStack-Capnet Cloudlab profile. Moreover, the OpenStack
Capnet configuration is fully completed for the network parameters you
selected during profile instantiation, so you don't have to configure
anything. Finally, the profile setup scripts also create one shared
Capnet virtual OpenStack network, as well as several test projects and
users (currently 4 user tenants (``tenant-$i``) and 4 service tenants
(``service-$i``)). The tenant user credentials are placed in
``/root/setup/``, in files like ``/root/setup/tenant-0-user-openrc.sh``
and ``/root/setup/service-0-user-openrc.sh``. This allows you to start
playing with some example workflow agents immediately.

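For example, to act as the first user tenant, you can source its
credentials and issue OpenStack client commands (a minimal sketch;
which client binaries are installed in the image may vary)::

    # Load tenant-0's OpenStack credentials into the environment.
    source /root/setup/tenant-0-user-openrc.sh
    # List the networks visible to this tenant (assumes the unified
    # "openstack" CLI is present; older images may only ship
    # per-service clients such as "neutron").
    openstack network list
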
The test scripts can be found in ``cloudlab/tests`` (or in
``/root/setup/capnet/networking-capnet/cloudlab/tests`` in a running
experiment). We're going to focus on ``test-hadoop.sh``,
``test-cleanup-tenant.sh``, and
``test-cleanup-sw-restart-controller.sh``. It is currently easiest to
run the tests as ``root``, although this is not strictly necessary.

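For instance, to become ``root`` and change into the checkout (Cloudlab
nodes typically allow passwordless ``sudo``)::

    sudo su -
    cd /root/setup/capnet/networking-capnet/cloudlab
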
The ``test-hadoop.sh`` script runs our Hadoop SaaS example of two Capnet
workflow agents, each from a different tenant, collaborating via
capabilities. The idea here is that the service tenant runs a workflow
agent that provides some kind of service (in this case, installing
Hadoop), and a user tenant allocates some nodes and creates a workflow
agent that uses the SaaS Capnet model to ask the service workflow agent
to configure its nodes. Once the service workflow agent has configured
the nodes and the flows between them, it returns control to the user
workflow agent, which cuts off the service agent's access; the user
agent then uses its freshly configured nodes to do a computation
(currently, a word count on a large file).

You can run ``test-hadoop.sh`` as follows::

    USAGE: test-hadoop.sh <testdir> <user-tenant> <service-tenant> <networkname>
                          <bridgename> <num-slaves> [<hadoop-args>]

The ``testdir`` argument specifies a directory where results from the
test (logfiles, OpenVSwitch flow tables, etc.) are placed. If this
directory doesn't exist, it will be created. ``user-tenant`` and
``service-tenant`` are project names; the user tenant will host nodes
and a workflow agent; the service tenant will host only a service
workflow agent that installs Hadoop. ``networkname`` is the virtual
shared OpenStack Capnet network you want the nodes and workflow agents
to be attached to; in this profile, it will be called ``capnetlan-1``.
Because the test scripts collect statistics, including OpenVSwitch flow
tables, they must be told which OpenVSwitch bridge hosts
``networkname``; in this profile, it is ``br-capnetlan-1``. Finally,
you must specify the number of slave VM nodes that will be instantiated
(recall that in addition to slave nodes, Hadoop requires a
``resourcemanager`` and a ``master`` node, so if you specify two
slaves, you'll wind up with four VMs). You could do that like this::

    cd /root/setup/capnet/networking-capnet/cloudlab
    mkdir test1
    tests/test-hadoop.sh test1 tenant-0 service-0 capnetlan-1 br-capnetlan-1 2

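When the test finishes, its results land in the directory you named (a
sketch; the exact file set depends on the test)::

    # Inspect the logs and flow-table dumps the test collected.
    ls -l test1/
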
Once you've run the test, you can run the cleanup scripts.
``test-cleanup-tenant.sh`` removes all nodes and workflow agents from a
given pre-created tenant. ``test-cleanup-sw-restart-controller.sh``
removes all flow rules from all switches; restarts ``neutron-server`` on
the ``ctl`` node; and restarts the ``neutron-plugin-capnet-agent``
processes on all physical nodes in the experiment. Restarting
``neutron-plugin-capnet-agent`` on the ``nm`` node also has the side
effect of restarting MUL and the Capnet controller, and recreating the
controller metadata files. So to clean up from the above example, you
would do::

    tests/test-cleanup-tenant.sh service-0
    tests/test-cleanup-tenant.sh tenant-0
    tests/test-cleanup-sw-restart-controller.sh br-capnetlan-1

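If you want to confirm that the switch state really was cleared, you
can dump the bridge's flow table yourself (assuming the standard Open
vSwitch CLI tools are installed on the node hosting the bridge)::

    # Show the OpenFlow rules currently installed on the Capnet bridge;
    # after cleanup, little beyond default rules should remain.
    ovs-ofctl dump-flows br-capnetlan-1
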
(See the following section to understand why you must currently clean up
the OpenVSwitch bridges and Neutron components!)

Monitoring and debugging your experiment
----------------------------------------

First, we enable Neutron debug logging on all nodes; Neutron logs are in
``/var/log/neutron/``. If something is going wrong with
``neutron-server`` or any of the Capnet Neutron agents, that's where
you'll want to look. The logs are quite verbose, but if you grep for
``error`` or ``exception``, you'll catch the obvious problems.

Second, the Capnet controller logs, metadata files, and logs from any
workflow agents you create are placed in ``/var/tmp`` on the ``nm``
node. Examining the metadata files (they start with the
``cnc.metadata`` prefix) might tell you if there's a problem with
communicating the OpenStack metadata to the Capnet controller. You
might want to tail the workflow agent logs, for instance, to watch
Hadoop being set up, and then to watch it being used to run wordcount.
The workflow agent logs start with prefixes like ``wfagent.service-0``
and ``wfagent.tenant-0``.

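For example, on the ``nm`` node (a sketch; the exact file names depend
on your tenants and how many agents you've created)::

    # Confirm the controller metadata files were created.
    ls /var/tmp/cnc.metadata*
    # Follow the user tenant's workflow agent log while a test runs.
    tail -f /var/tmp/wfagent.tenant-0*
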
Finally, since the Capnet controller does not yet handle port deletion,
you cannot sanely delete an OpenStack VM or Capnet workflow agent yet!
The controller will either crash or maintain inconsistent, insecure
state. This is why the ``test-cleanup-sw-restart-controller.sh`` script
exists. We plan to add this functionality as quickly as possible, of
course.

This is the source code (a ``geni-lib`` script, ``osp-capnet.py``) to create
a CloudLab profile to set up and run Capnet in an OpenStack cloud. This
profile lives at https://www.cloudlab.us/p/TCloud/OpenStack-Capnet . It
is basically the CloudLab OpenStack profile
...
that install Capnet and configure it based on the user's specified
profile parameters. It relies on this extension support to be present
in the core CloudLab OpenStack profile tarball.

To make the extension tarball, in this directory, do::

    tar -czvf setup-ext-capnet-vX.tar.gz setup-ext-capnet

Then, if you need to change the canonical, official one installed on
boss.emulab.net (the one the official profile references), get someone
with
...

Updating a running OpenStack-Capnet Cloudlab experiment
--------------------------------------------------------

If you want to update a running Cloudlab experiment to pull in recent
...