===============================
networking-capnet
===============================
An ML2 Neutron driver, agent, and associated extensions to manage Capnet
capability-based networks.
* Source: https://gitlab.flux.utah.edu/tcloud/networking-capnet
* Bugs: https://gitlab.flux.utah.edu/tcloud/networking-capnet/issues
Capnet
----------------
Capnet is an application of capability theory to network privileges. A
Capnet network is a least-privilege, Layer 2 network, where the right
to send data depends on whether the sender holds a capability to the receiver.
Installing
----------------
You must install several source packages:
* https://gitlab.flux.utah.edu/tcloud/capnet
This is the core Capnet SDN (OpenFlow) switch controller. It
depends on the two packages listed below, `libcap` and `openmul`.
* https://gitlab.flux.utah.edu/xcap/libcap
`libcap` is the user- and kernel-space capability library that
provides the core capability API we use. libcap allows its users to
register new object types that can be associated with capabilities,
and provides hooks to express semantics of capability operations on
those objects when rights to them are granted, revoked, etc.
Capnet's objects include nodes and flows.
* https://gitlab.flux.utah.edu/tcloud/openmul
We have a modified version of OpenMUL (http://www.openmul.org/) that
has some bug fixes as well as build modifications to allow openmul
applications to be built outside the source tree.
Then, install this package using `setup.py`. If your neutron package was
installed from a distribution package (and is thus installed in
`/usr/lib/python2.7/dist-packages`), you will need to use some extra args
to place the library in `dist-packages` instead of `site-packages` (the
usual default for manually-built or -installed packages). (This matters
because Neutron's ML2 plugin autoloads plugins from a well-known entry
point module namespace; and ML2 plugins can say they have a module to
add to a particular namespace. This autoloading mechanism cannot cross
the dist-packages and site-packages boundary, AFAIK.)
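If you're not sure which packaging scheme your Neutron install uses, a
quick check is to see where the `neutron` module actually lives (the
path below is just an example of what an Ubuntu distribution install
looks like):
$ python -c "import neutron, os; print(os.path.dirname(neutron.__file__))"
/usr/lib/python2.7/dist-packages/neutron
If the path ends in `dist-packages`, use the dist-packages-style install
commands below; otherwise the plain `--prefix` forms should suffice.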
(Also note that all the setup.py command lines below include
`--install-data /`; this ensures that the ml2_conf_capnet.ini sample
file is placed in `/etc/neutron/plugins/ml2/`.)
* Distribution-installed neutron packages, on Ubuntu:
$ cd networking-capnet
$ python setup.py install --install-layout=deb --install-data /
* Distribution-installed neutron packages, on some other linux:
$ cd networking-capnet
$ python setup.py install --install-lib /usr/lib/python2.7/dist-packages --prefix /usr --install-data /
* Manually-installed neutron packages in /usr:
$ cd networking-capnet
$ python setup.py install --prefix /usr --install-data /
* Manually-installed neutron packages in /usr/local (the default):
$ cd networking-capnet
$ python setup.py install --prefix /usr/local --install-data /
If you're developing the plugin, and want to reinstall, you'll do
something like this:
$ cd networking-capnet
$ rm -rf build/ networking_capnet.egg-info/ && python setup.py install --install-layout=deb --install-data / -v -f
to ensure the new code gets installed. Of course, you'll have to
restart the relevant Neutron servers and agents on the controller and
compute nodes (and the network manager machine, if you have one).
Getting started
---------------
Once installed, you must configure the Capnet ML2 plugin to be used by
Neutron. This involves several changes on all of your nodes:
$ crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin networking_capnet.plugins.ml2.plugin.CapnetMl2Plugin
(The simple CapnetMl2Plugin wrapper is necessary to extend the ML2
plugin with extra RPCs necessary for Capnet. The ML2 plugin doesn't
provide the ability to extend its set of RPC endpoints. Our wrapper
doesn't change any functionality, other than to add Capnet Workflow
Agent RPCs/notifications.)
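You can read the setting back to make sure it took effect:
$ crudini --get /etc/neutron/neutron.conf DEFAULT core_plugin
networking_capnet.plugins.ml2.plugin.CapnetMl2Plugin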
$ ML2TYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers`
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers "capnet,$ML2TYPES"
$ ML2TENANTTYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types`
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types "capnet,$ML2TENANTTYPES"
$ ML2MECHDRVS=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers`
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers "capnet,$ML2MECHDRVS"
$ ML2EXTDRVS=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers`
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers "capnet,$ML2EXTDRVS"
(Prepending `capnet` to `extension_drivers` is also what causes the
Capnet Neutron API extensions to get loaded via the ML2 extension
mechanism.)
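The exact resulting values depend on what your installation already had
configured, but `capnet` should now come first in each list; you can
spot-check with `crudini --get` (the `openvswitch` value below is just
an example of a pre-existing mechanism driver):
$ crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers
capnet,openvswitch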
You must also configure Neutron to use the Capnet Neutron API extensions
(which are API-level extensions of the Neutron "network" resource):
$ crudini --set /etc/neutron/neutron.conf DEFAULT api_extensions_path /usr/lib/python2.7/dist-packages/networking_capnet/extensions
(Note that the directory you supply here needs to be where the
`networking-capnet` library was installed; here I assume it was
installed in `dist-packages`.)
Then, on your networkmanager node, you need to enable our special
Neutron Interface Driver, so that the DHCP/L3/metering agents can
create interfaces and get them added to the correct bridge. You'll need
to do this on whichever node is running those agents (probably your
networkmanager node):
$ crudini --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
$ crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
$ crudini --set /etc/neutron/metering_agent.ini DEFAULT interface_driver networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
(This driver is backwards-compatible with the OVSInterfaceDriver; it is
just a bit more intelligent for Capnet networks, so it can figure out
which bridge to add each port to.)
Then, if you're on Ubuntu (or if not all ML2 config files in
`/etc/neutron/plugins/ml2` are loaded by default), do something like
this on your controller node:
$ echo 'CONFIG_FILE="/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf_capnet.ini"' >> /etc/default/neutron-server
(This is of course a gross hack, but the Ubuntu Neutron initscript is
only set up to allow a main neutron config file and a plugin config
file. It doesn't have an /etc/default variable you can set to pass in
arbitrary args. So, we exploit the CONFIG_FILE variable in
/etc/init.d/neutron-server, which normally adds the `--config-file
/etc/neutron/neutron.conf` option to the neutron-server command line,
and jam the ml2_conf_capnet.ini file in as an "additional" argument.
This may not work for you; be careful and check the
`/var/log/neutron/neutron-server.log` file if neutron fails to start.)
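Once neutron-server restarts cleanly, you can confirm that it actually
picked up the extra config file by checking its command line (the output
below is illustrative; the exact arguments depend on your initscript):
$ ps -ef | grep [n]eutron-server
... /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf_capnet.ini ...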
Next, you'll need to configure your physical network. The simplest
configuration assumes your physical nodes are all connected in a LAN or
VLAN. You want to create an OpenVSwitch bridge on each physical node
called `br-capnet-3` or similar:
$ ovs-vsctl add-br br-capnet-3
Then, put the physical NIC or virtual VLAN NIC device into the
`br-capnet-3` bridge:
$ ovs-vsctl add-port br-capnet-3 ethN
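You can sanity-check the bridge and its ports afterwards:
$ ovs-vsctl list-ports br-capnet-3
$ ovs-vsctl show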
Then, add this OpenVSwitch-based network configuration to the
`/etc/neutron/plugins/ml2/ml2_conf_capnet.ini` file on your controller
node by setting the `bridge_mappings` configuration item to
`capnet-phys-3:br-capnet-3`:
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
capnet bridge_mappings capnet-phys-3:br-capnet-3
Then tell the Capnet plugin how many Capnet tenant networks each Capnet
physical network can host (or don't specify a limit to allow infinitely
many):
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
capnet capnet_networks capnet-phys-3
This tells Neutron that there is a physical network that can host Capnet
virtual networks, and which OpenVSwitch bridge to use to configure it.
(You could optionally set the value to `capnet-phys-3:8` to allow a
maximum of 8 tenant networks to be created atop the physical
capnet-phys-3 network, as shown below.)
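For example, assuming the naming above, you could cap the physical
network at 8 tenant networks like so:
$ crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
capnet capnet_networks capnet-phys-3:8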
Now, because Capnet networks are special, we also have to tell the other
Neutron agents (dhcp, l3, metering, etc.) about the Capnet
bridge_mappings. The CapnetOVSInterfaceDriver that creates VNIC
interfaces for Neutron and moves them into the proper OVS bridges needs
this information:
$ crudini --set /etc/neutron/dhcp_agent.ini \
capnet bridge_mappings capnet-phys-3:br-capnet-3
$ crudini --set /etc/neutron/l3_agent.ini \
capnet bridge_mappings capnet-phys-3:br-capnet-3
$ crudini --set /etc/neutron/metering_agent.ini \
capnet bridge_mappings capnet-phys-3:br-capnet-3
Once you have at least one "physical" network available for Capnet, you
can create virtual networks like so:
$ neutron net-create capnet-1 --provider:network_type capnet
$ neutron subnet-create capnet-1 --name capnet-1-subnet 10.0.1.0/24
$ neutron router-create capnet-1-router
$ neutron router-interface-add capnet-1-router capnet-1-subnet
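To sanity-check the result, something like the following should report
`provider:network_type` as `capnet` (field names can vary a bit across
Neutron releases):
$ neutron net-show capnet-1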
You can supply additional Capnet arguments to the `neutron net-create`
command. For instance, if you have a custom workflow application in
`/foo/bar/baz` on your networkmanager node (or your controller node, if
you're running a combined networkmanager/controller node), you could
bind it to your Capnet network like this:
$ neutron net-create capnet-1 --provider:network_type capnet --capnet:workflow-app /foo/bar/baz
Even though the `neutron` client doesn't natively understand the
`--capnet:` options, it will put them into the request RPC and let the
server handle them.
In this case, you won't be able to connect the router to any other
networks, since Capnet doesn't yet model communication outside its
network.
You cannot change any of the Capnet network parameters via `neutron
net-update` yet; that potentially means restarting the workflow app and
doing something with any existing capability grants. We'll support that
later.
If you're hacking on this package, you might try a command like the
following to do a full install while preserving the key config file:
$ cp -p /etc/neutron/plugins/ml2/ml2_conf_capnet.ini /etc/neutron/plugins/ml2/ml2_conf_capnet.ini.bak ; \
rm -rf build/ networking_capnet.egg-info/ ; \
python setup.py install --install-layout=deb --install-data / -v -f ; \
diff -u /etc/neutron/plugins/ml2/ml2_conf_capnet.ini.bak /etc/neutron/plugins/ml2/ml2_conf_capnet.ini ; \
cp -p /etc/neutron/plugins/ml2/ml2_conf_capnet.ini.bak /etc/neutron/plugins/ml2/ml2_conf_capnet.ini
After that, you can do something like (on the controller node):
$ service neutron-server restart
or on the networkmanager and/or compute nodes:
$ service neutron-plugin-capnet-agent restart
Finally, on your compute nodes, you need to enable our special virt
driver (just a wrapper around the "generic" libvirt driver that handles
networks --- and vifs --- of VIF_TYPE_CAPNET) and its vif_driver. The
virt driver simply adds an option to the libvirt driver to provide a
different vif driver. The vif driver contains the support to plug
VM interfaces into the correct Capnet bridge.
$ crudini --set /etc/nova/nova-compute.conf DEFAULT compute_driver \
compute_capnet.virt.libvirt.driver.CapnetLibvirtDriver
$ crudini --set /etc/nova/nova-compute.conf libvirt vif_driver \
compute_capnet.virt.libvirt.vif.CapnetLibvirtVIFDriver
(This assumes your libvirt config is in /etc/nova/nova-compute.conf ---
it might also be in /etc/nova/nova.conf --- you'll need to check.)
Then, you must set the bridge_mappings accordingly on the compute node:
$ crudini --set /etc/nova/nova-compute.conf \
capnet bridge_mappings capnet-phys-3:br-capnet-3
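For the new compute_driver, vif_driver, and bridge_mappings settings to
take effect, restart the compute service (the service name here assumes
Ubuntu packaging):
$ service nova-compute restart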
This is the source code (a `geni-lib` script, `osp-capnet.py`) used to
create a CloudLab profile that sets up and runs Capnet in an OpenStack
cloud. This
profile lives at https://www.cloudlab.us/p/TCloud/OpenStack-Capnet . It
is basically the CloudLab OpenStack profile
(https://www.cloudlab.us/p/emulab-ops/OpenStack), but adds several
Capnet parameters, creates physical networks for Capnet to use, and also
comes with a set of extension scripts to the CloudLab OpenStack profile
that install Capnet and configure it based on the user's specified
profile parameters. It relies on this extension support being present
in the core CloudLab OpenStack profile tarball.
To make the extension tarball, in this directory, do:
$ tar -czvf setup-ext-capnet-vX.tar.gz setup-ext-capnet
Then, if you need to change the canonical, official tarball installed on
boss.emulab.net (the one the official profile references), get someone
with privs to handle that :).
#!/bin/sh
set -x
DIRNAME=`dirname $0`
# Gotta know the rules!
if [ `id -u` -ne 0 ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
# Grab our libs
. "$DIRNAME/../../setup-lib.sh"
if [ -f $SETTINGS ]; then
. $SETTINGS
fi
##
## First, we setup OVS stuff for Capnet physical networks.
##
$DIRNAME/setup-ovs-node.sh
##
## Second, however, we do all the Neutron config Capnet needs:
##
ML2TYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 type_drivers "capnet,$ML2TYPES"
ML2TENANTTYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 tenant_network_types "capnet,$ML2TENANTTYPES"
ML2MECHDRVS=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 mechanism_drivers "capnet,$ML2MECHDRVS"
ML2EXT=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers`
if [ ! -z "$ML2EXT" ] ; then
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 extension_drivers "capnet,$ML2EXT"
else
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 extension_drivers "capnet"
fi
crudini --set /etc/neutron/neutron.conf DEFAULT api_extensions_path \
/usr/lib/python2.7/dist-packages/networking_capnet/extensions
##
## Setup our physical Capnet connections
##
capnet_networks=""
bridge_mappings=""
for lan in $DATACAPNETLANS ; do
if [ -n "${capnet_networks}" ]; then
capnet_networks="${capnet_networks},"
fi
if [ -n "${bridge_mappings}" ]; then
bridge_mappings="${bridge_mappings},"
fi
. $OURDIR/info.${lan}
capnet_networks="${capnet_networks}${lan}"
bridge_mappings="${bridge_mappings}${lan}:${DATABRIDGE}"
done
crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
capnet bridge_mappings "$bridge_mappings"
crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
capnet capnet_networks "$capnet_networks"
crudini --set /etc/nova/nova-compute.conf DEFAULT compute_driver \
compute_capnet.virt.libvirt.driver.CapnetLibvirtDriver
crudini --set /etc/nova/nova-compute.conf libvirt vif_driver \
compute_capnet.virt.libvirt.vif.CapnetLibvirtVIFDriver
crudini --set /etc/nova/nova-compute.conf \
capnet bridge_mappings "$bridge_mappings"
crudini --set /etc/nova/nova-compute.conf \
capnet capnet_networks "$capnet_networks"
##
## Ok, restart Neutron Capnet ML2 plugin
##
service_restart neutron-plugin-capnet-agent
service_enable neutron-plugin-capnet-agent
#!/bin/sh
set -x
DIRNAME=`dirname $0`
# Gotta know the rules!
if [ `id -u` -ne 0 ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
# Grab our libs
. "$DIRNAME/../../setup-lib.sh"
if [ "$HOSTNAME" != "$CONTROLLER" ]; then
exit 0;
fi
if [ -f $SETTINGS ]; then
. $SETTINGS
fi
##
## First, we *don't* setup OVS stuff for Capnet physical networks.
## It's unnecessary for Neutron.
##
#$DIRNAME/setup-ovs-node.sh
##
## Second, however, we do all the Neutron config Capnet needs:
##
crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin capnet
#crudini --set /etc/neutron/neutron.conf \
# DEFAULT core_plugin networking_capnet.plugins.ml2.plugin.CapnetMl2Plugin
ML2TYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 type_drivers "capnet,$ML2TYPES"
ML2TENANTTYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 tenant_network_types "capnet,$ML2TENANTTYPES"
ML2MECHDRVS=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers`
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 mechanism_drivers "capnet,$ML2MECHDRVS"
ML2EXT=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers`
if [ ! -z "$ML2EXT" ] ; then
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 extension_drivers "capnet,$ML2EXT"
else
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
ml2 extension_drivers "capnet"
fi
crudini --set /etc/neutron/neutron.conf DEFAULT api_extensions_path \
/usr/lib/python2.7/dist-packages/networking_capnet/extensions
##
## Hack the initscript a bit, to add ml2_conf_capnet.ini as a config file.
##
echo 'CONFIG_FILE="/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf_capnet.ini"' >> /etc/default/neutron-server
##
## Ok, restart Neutron
##
service_restart neutron-server
#!/bin/sh
##
## This script builds and installs the necessary deps required for the
## Capnet controller, as well as the Capnet Neutron Plugin, and
## configures it all based on the Cloudlab Openstack Profile
## (https://www.cloudlab.us/p/capnet/OpenStack-Capnet).
##
set -x
DIRNAME=`dirname $0`
# Gotta know the rules!
if [ `id -u` -ne 0 ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
LIBCAP_REPO="https://gitlab.flux.utah.edu/xcap/libcap.git"
LIBCAP_BRANCH="master"
OPENMUL_REPO="https://gitlab.flux.utah.edu/tcloud/openmul.git"
OPENMUL_BRANCH="capnet"
CAPNET_REPO="https://gitlab.flux.utah.edu/tcloud/capnet.git"
CAPNET_BRANCH="master"
#CAPNET_PLUGIN_REPO="https://gitlab.flux.utah.edu/tcloud/networking-capnet.git"
CAPNET_PLUGIN_REPO="http://www.emulab.net/downloads/networking-capnet.tar.gz"
CAPNET_PLUGIN_BRANCH="master"
# Grab our libs
. "$DIRNAME/../../setup-lib.sh"
if [ "$HOSTNAME" != "$CONTROLLER" ]; then
exit 0;
fi
if [ -f $SETTINGS ]; then
. $SETTINGS
fi
#
# openstack CLI commands seem flakey sometimes on Kilo and Liberty.
# Don't know if it's WSGI, mysql dropping connections, an NTP
# thing... but until it gets solved more permanently, have to retry :(.
#
__openstack() {
__err=1
__debug=
__times=0
while [ $__times -lt 16 -a ! $__err -eq 0 ]; do
openstack $__debug "$@"
__err=$?
if [ $__err -eq 0 ]; then
break
fi
__debug=" --debug "
__times=`expr $__times + 1`
if [ $__times -gt 1 ]; then
echo "ERROR: openstack command failed: sleeping and trying again!"
sleep 8
fi
done
}
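# Usage is identical to the openstack CLI itself, e.g. (illustrative):
#   __openstack project list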
##
## First, we install Capnet at all of the nodes.
##
cd $OURDIR
mkdir capnet
cd capnet
maybe_install_packages git
maybe_install_packages pkg-config glib2.0-dev
maybe_install_packages automake1.11 swig2.0 python2.7-dev gawk libevent-dev libcurl3-dev
maybe_install_packages python-protobuf
#maybe_install_packages protobuf-c-compiler libprotobuf-c-dev
maybe_install_packages libcap-dev
# Ubuntu protobuf-c is too old; install all that from src
# First grab protobuf itself:
wget https://github.com/google/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz
tar -xzvf protobuf-2.6.1.tar.gz
cd protobuf-2.6.1
./configure --prefix=/usr/local
make && make install
ldconfig
cd ..
# Now protobuf-c
git clone https://github.com/protobuf-c/protobuf-c.git protobuf-c
cd protobuf-c && ./autogen.sh && cd ..
mkdir protobuf-c.obj && cd protobuf-c.obj
../protobuf-c/configure --prefix=/usr/local
make && make install
ldconfig
cd ..
#
# First, libcap.
#
git clone "$LIBCAP_REPO" libcap
cd libcap
git checkout "$LIBCAP_BRANCH"
./autogen.sh
cd ..
mkdir libcap.obj
cd libcap.obj
../libcap/configure --prefix=/opt/tcloud/libcap
make
make install
cd ..
#
# Second, our version of openmul.
#
git clone "$OPENMUL_REPO" openmul
cd openmul
git checkout "$OPENMUL_BRANCH"
./autogen.sh
cd ..
mkdir openmul.obj
cd openmul.obj
../openmul/configure --prefix=/opt/tcloud/mul
make
make install
cd ..
#
# Third, capnet controller.
#
git clone "$CAPNET_REPO" capnet
cd capnet
git checkout "$CAPNET_BRANCH"
./autogen.sh
cd ..
mkdir capnet.obj
cd capnet.obj
../capnet/configure --prefix=/opt/tcloud/capnet \
--with-libcap=/opt/tcloud/libcap --with-mul=/opt/tcloud/mul \
--with-protoc=/usr/local
make && make install
cd ..
#
# Finally, capnet Neutron plugin stuff.
#
echo "$CAPNET_PLUGIN_REPO" | grep -q tar\.gz
if [ $? = 0 ]; then
wget -O networking-capnet.tar.gz "$CAPNET_PLUGIN_REPO"
tar -xzf networking-capnet.tar.gz
else
git clone "$CAPNET_PLUGIN_REPO" networking-capnet
fi
cd networking-capnet
git checkout "$CAPNET_PLUGIN_BRANCH"
rm -rf build networking_capnet.egg-info
# Install the Ubuntu way, and straight into dist-packages (i.e. /).
# Otherwise it goes into site-packages and Neutron can't find us.
python setup.py install --install-layout=deb --install-data / -v -f
cd ..
##
##
##
#!/bin/sh
set -x
DIRNAME=`dirname $0`
# Gotta know the rules!