===============================
networking-capnet
===============================

An ML2 Neutron driver, agent, and associated extensions to manage Capnet
capability-based networks.

* Source: https://gitlab.flux.utah.edu/tcloud/networking-capnet
* Bugs: https://gitlab.flux.utah.edu/tcloud/networking-capnet/issues

Capnet
----------------

Capnet is an application of capability theory to network privileges.  A
Capnet network is a least-privilege, Layer 2 network, where the right to
send data depends on whether the sender has a capability to the receiver.



Installing
----------------

You must install several source packages:

* https://gitlab.flux.utah.edu/tcloud/capnet

  This is the core Capnet SDN (OpenFlow) switch controller.  It
  depends on the two packages listed below, ``libcap`` and ``openmul``.

* https://gitlab.flux.utah.edu/xcap/libcap

  ``libcap`` is the user- and kernel-space capability library that
  provides the core capability API we use.  libcap allows its users to
  register new object types that can be associated with capabilities,
  and provides hooks to express semantics of capability operations on
  those objects when rights to them are granted, revoked, etc.
  Capnet's objects include nodes and flows.

* https://gitlab.flux.utah.edu/tcloud/openmul

  We have a modified version of ``OpenMUL`` (http://www.openmul.org/) that
  has some bug fixes as well as build modifications to allow openmul
  applications to be built outside the source tree.

Then, install this package using ``setup.py``.  If your neutron package was
installed from a distribution package (and is thus installed in
``/usr/lib/python2.7/dist-packages``), you will need to use some extra args
to place the library in ``dist-packages`` instead of ``site-packages`` (the
usual default for manually-built or -installed packages).  (This matters
because Neutron's ML2 plugin autoloads plugins from a well-known entry
point module namespace; and ML2 plugins can say they have a module to
add to a particular namespace.  This autoloading mechanism cannot cross
the dist-packages and site-packages boundary, AFAIK.)

(Also note that all the ``setup.py`` command lines below include
``--install-data /``; this ensures that the ``ml2_conf_capnet.ini`` sample
file is placed in ``/etc/neutron/plugins/ml2/``.)

* Distribution-installed neutron packages, on Ubuntu::

    cd networking-capnet
    python setup.py install --install-layout=deb --install-data /

* Distribution-installed neutron packages, on some other Linux::

    cd networking-capnet
    python setup.py install --install-lib /usr/lib/python2.7/dist-packages --prefix /usr --install-data /

* Manually-installed neutron packages in ``/usr``::

    cd networking-capnet
    python setup.py install --prefix /usr --install-data /

* Manually-installed neutron packages in ``/usr/local`` (the default)::

    cd networking-capnet
    python setup.py install --prefix /usr/local --install-data /

If you're developing the plugin, and want to reinstall, you'll do
something like this::

    cd networking-capnet
    rm -rf build/ networking_capnet.egg-info/ && python setup.py install --install-layout=deb --install-data / -v -f

to ensure the new code gets installed.  Of course, you'll have to
restart the relevant Neutron servers and agents on the controller and
compute nodes (and on the network manager machine, if you have one).

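To double-check that the install landed where Neutron can see it, one
quick (if rough) test is to import the module with the same Python
interpreter Neutron uses and print its location; the path shown should
match whichever install variant you chose above::

    python -c "import networking_capnet; print(networking_capnet.__file__)"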

Getting started
---------------

Once installed, you must configure the Capnet ML2 plugin to be used by
Neutron.  This involves several changes on each of your physical
OpenStack nodes.  We'll follow the standard doc convention, and run
through the changes for controller, networkmanager, and compute nodes.

**Make sure to apply all items** in the `Common Configuration` section
below to all OpenStack compute nodes, as well as to the ``controller``
and ``networkmanager`` nodes!

**NOTE**: You'll need to run *all* of these commands as root, or via ``sudo``.

**NOTE**: If you have a modern OpenStack configuration, where the
``controller`` and ``networkmanager`` nodes are shared, run both the
``controller`` and ``networkmanager`` commands on the ``controller`` node.

**NOTE**: We'll use ``crudini`` to apply configuration changes to the
OpenStack configuration files, which are formatted in the INI style.
You can certainly apply the edits manually if you like.  On Ubuntu,
you can use ``apt-get install crudini`` to install.



Setting Up Physical Networks
----------------------------

**NOTE:** the configuration settings described in this section will be
referred to in the ``networkmanager`` and ``compute`` node
configuration sections; so keep them handy (e.g., in environment
variables, as shown below).

Before you begin editing the Neutron and Nova configuration files,
you'll want to plan out your physical network setup and figure out which
resources to allow Capnet to manage.

Capnet supports creation of multiple virtual Capnet networks atop a
single physical network connection, so unless you need more bandwidth,
you can use a single physical NIC.  Capnet can happily coexist with the
stock Neutron ``openvswitch`` ML2 driver, so you could enable both; and
if you have multiple physical interfaces per machine, all connected in a
LAN, you could allow ``openvswitch`` to manage some, and ``capnet`` to
manage others.

Finally, Capnet networks can be shared much more safely than regular
OpenStack networks.  When you declare an OpenStack network to be
``shared``, that allows multiple tenants to plug VMs into it.  In a
Capnet shared virtual network, VMs from multiple tenants can be plugged
in, but the tenants' users are in control of the capabilities (and thus
the communication paths) to and from those nodes --- a user must
explicitly create flow capabilities between their nodes and another
tenant's nodes --- so unless such capabilities are explicitly created,
there cannot be any communication.  (However, **note** that we have not
yet secured resource utilization in this shared mode, so it is possible,
for instance, for one high-utilization tenant's node to affect another
node's utilization.)

The simplest configuration assumes your physical nodes are all connected
in a LAN or VLAN.  You want to create an OpenVSwitch bridge on each
physical node called ``br-capnet-1`` or similar::

    ovs-vsctl add-br br-capnet-1

Then, put the physical NIC or virtual VLAN NIC device you've chosen for
Capnet to manage into the ``br-capnet-1`` bridge::

    ovs-vsctl add-port br-capnet-1 ethN

(where ``ethN`` is the name of the ethernet device you want Capnet to use
to build virtual Capnet networks).
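
You can sanity-check the bridge before moving on; the bridge and port
names below are just the examples from above, so substitute whatever you
actually created::

    ovs-vsctl show
    ovs-vsctl list-ports br-capnet-1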

Then, add this OpenVSwitch-based network configuration to the
``/etc/neutron/plugins/ml2/ml2_conf_capnet.ini`` file on your controller
node by setting the ``bridge_mappings`` configuration item to
``capnet-phys-1:br-capnet-1``::

    BRIDGE_MAPPINGS="capnet-phys-1:br-capnet-1"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
         capnet bridge_mappings $BRIDGE_MAPPINGS

Then tell the Capnet plugin how many Capnet tenant networks each Capnet
physical network can host (or omit the limit to allow arbitrarily many,
as we do here)::

    CAPNET_NETWORKS="capnet-phys-1"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
         capnet capnet_networks $CAPNET_NETWORKS

(You could optionally set the value to ``capnet-phys-1:8`` to allow a maximum
of 8 tenant networks to be created atop the physical ``capnet-phys-1`` network.)

This tells Neutron that there is a physical network that can host Capnet
virtual networks, and which OpenVSwitch bridge to use to configure it.
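
If you want to double-check what you just wrote, ``crudini --get`` will
read the values back (this assumes the single example mapping from
above; your values may differ)::

    # Should print capnet-phys-1:br-capnet-1
    crudini --get /etc/neutron/plugins/ml2/ml2_conf_capnet.ini capnet bridge_mappings
    # Should print capnet-phys-1
    crudini --get /etc/neutron/plugins/ml2/ml2_conf_capnet.ini capnet capnet_networks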


Common Configuration
--------------------

There is a significant amount of Capnet Neutron configuration that must
be applied to your ``controller``, ``networkmanager``, and ``compute``
nodes.  Run these commands on all your nodes **without restarting** any
Neutron processes running on them (we'll do that in the node-specific
configuration sections below).

Next, configure Neutron to use the Capnet Neutron API extensions
(which are API-level extensions of the Neutron "network" resource)::

    crudini --set /etc/neutron/neutron.conf DEFAULT api_extensions_path \
        /usr/lib/python2.7/dist-packages/networking_capnet/extensions

(Change ``/usr/lib/python2.7/dist-packages`` to wherever you installed
the ``networking-capnet`` package.)
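
A quick way to confirm the path is right is to list it; you should see
the Capnet extension modules there (adjust the prefix to match your
install location)::

    ls /usr/lib/python2.7/dist-packages/networking_capnet/extensions/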

Then set up the ML2 configuration to include Capnet::

210 211 212 213 214 215 216 217 218
    ML2TYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers`
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
        ml2 type_drivers "capnet,$ML2TYPES"
    ML2TENANTTYPES=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types`
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
        ml2 tenant_network_types "capnet,$ML2TENANTTYPES"
    ML2MECHDRVS=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers`
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
        ml2 mechanism_drivers "capnet,$ML2MECHDRVS"

Then, make sure that the Capnet Neutron API extensions get loaded (via
the ML2 extension mechanism)::

    ML2EXT=`crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers`
    if [ ! -z "$ML2EXT" ] ; then
        crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
            ml2 extension_drivers "capnet,$ML2EXT"
    else
        crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini \
            ml2 extension_drivers "capnet"
    fi

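At this point it's worth verifying that ``capnet`` actually ended up in
each driver list; the remaining entries will be whatever your deployment
already had configured::

    crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers
    crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
    crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers
    crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers
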
Tell the Capnet ML2 driver where it must look to find the Capnet Python
bindings module::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini capnet pythonpath \
        /opt/tcloud/capnet/lib/python2.7/site-packages

(assuming you installed ``capnet`` and its Python binding module in
``/opt/tcloud/capnet``).  You can test whether you've got the right path
here by doing something like (again, adjust your path as necessary)::

    PYTHONPATH=/opt/tcloud/capnet/lib/python2.7/site-packages
    python
    > import capnet
    > help(capnet)

You'll want to enable Neutron debug logging::

    crudini --set /etc/neutron/neutron.conf DEFAULT verbose True
    crudini --set /etc/neutron/neutron.conf DEFAULT debug True
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini DEFAULT verbose True
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini DEFAULT debug True


``controller`` Configuration
----------------------------

First, manually add the database tables you need for Capnet (eventually
we'll add an alembic migration path, but not just yet)::

    mysql neutron < networking-capnet/networking_capnet/db/create.sql

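To confirm the tables were created, you can list them; we assume here
(as a rough check) that the Capnet tables have ``capnet`` somewhere in
their names::

    mysql -e 'SHOW TABLES' neutron | grep -i capnet
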
Now you'll "edit" the configuration files using ``crudini``.

Set the core Neutron plugin to our wrapper::

    crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin capnet

(The simple ``networking_capnet.plugins.ml2.plugin.CapnetMl2Plugin``
(https://gitlab.flux.utah.edu/tcloud/networking-capnet/blob/master/networking_capnet/plugins/ml2/plugin.py)
wrapper is necessary to extend the ML2 plugin with extra RPCs necessary
for Capnet; the installation of the `networking-capnet` package dumps
this module into the correct place inside Neutron's core plugins area.
We need a wrapper because the ML2 plugin doesn't provide the ability to
extend its set of RPC endpoints.  Our wrapper doesn't change any ML2
functionality whatsoever, other than to add Capnet Workflow Agent
RPCs/notifications.)

You might need to modify the Neutron initscript a bit, to add
``ml2_conf_capnet.ini`` as an extra config file on the neutron-server
command line.  Below is one way to do it for Ubuntu 14; YMMV::

    echo 'CONFIG_FILE="/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf_capnet.ini"' >> /etc/default/neutron-server

(This is of course a gross hack, but the Ubuntu Neutron initscript is
only set up to allow a main neutron config file and a plugin config
file.  It doesn't have an /etc/default variable you can set to pass in
arbitrary args.  So, we exploit the CONFIG_FILE variable in
/etc/init.d/neutron-server, which normally adds the `--config-file
/etc/neutron/neutron.conf` option to the neutron-server command line,
and jam the ml2_conf_capnet.ini file in as an "additional" argument.
This may not work for you; be careful and check the
`/var/log/neutron/neutron-server.log` file if neutron fails to start.)

Finally, restart Neutron::

    service neutron-server restart

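If ``neutron-server`` comes back up cleanly, the Capnet pieces should be
mentioned in its log and in the extension list; a rough sanity check
(extension names and log contents will vary)::

    grep -i capnet /var/log/neutron/neutron-server.log | tail
    neutron ext-list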

``networkmanager`` Configuration
--------------------------------

First, we have to configure our Capnet ML2 agent and the other Neutron
agents (dhcp, l3, metering) to plug interfaces into OVS bridges using
the Capnet interface driver.  Our driver wraps the
``OVSInterfaceDriver``: when it plugs a virtual interface into a Capnet
network, it makes sure to plug it into the right Capnet OVS bridge; if
it is operating on a virtual interface that is not attached to a Capnet
network, it defaults to the standard OVS behavior::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini DEFAULT interface_driver \
        networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver \
        networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
    crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver \
        networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver
    crudini --set /etc/neutron/metering_agent.ini DEFAULT interface_driver \
        networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver

In OpenStack, the metadata that VMs request after DHCP can be served
either from the DHCP port for the virtual subnet associated with the VM,
or from the router (L3) port for that subnet.  The Capnet SDN
controller automatically installs flows for both DHCP and OpenStack
metadata traffic between each VM and the correct DHCP and metadata
server virtual ports.  Thus, we must tell the Capnet ML2 agent where the
metadata server is listening --- either on the dhcp port or the l3
port.  Typically this will be the l3 port, but you should check your
configuration to be sure; then apply the proper setting::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini capnet \
        neutron_metadata_service_location l3

Next, we need to customize the Neutron dnsmasq configuration.  Neutron
starts up a ``dnsmasq`` instance for each virtual OpenStack network.
Our custom Capnet dnsmasq driver
(https://gitlab.flux.utah.edu/tcloud/networking-capnet/blob/master/networking_capnet/agent/linux/dhcp.py)
wraps the default dnsmasq driver, and when the driver is starting up a
``dnsmasq`` process for a Capnet virtual network, it strips out the
dnsmasq option that causes it to forward DNS queries to an external
resolver.  We have to do this for Capnet networks at the moment because
they have no route to the outside world; and we don't want VM name
resolution to hang just because of this.

So, first set up the DHCP agent configuration with the Capnet dnsmasq driver::

    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver \
        networking_capnet.agent.linux.dhcp.CapnetDnsmasq
    crudini --set /etc/neutron/dhcp_agent.ini capnet dnsmasq_config_file \
        /etc/neutron/capnet-dnsmasq-neutron.conf

Then create ``/etc/neutron/capnet-dnsmasq-neutron.conf``.  **NOTE**:
make sure to carry over the MTU from your existing dnsmasq config file,
if you've set one.  (The configuration file we create below is similar
to the default, but it forces dnsmasq to advertise a reduced MTU of
``1450`` to support both GRE and VXLAN tunnels on the non-Capnet OVS
networks; technically, GRE needs an MTU of 1454 and VXLAN needs 1450.
If you only plan to use Capnet virtual networks in your setup, and not
OVS virtual networks, you won't care about this.)::

    cat <<EOF > /etc/neutron/capnet-dnsmasq-neutron.conf
    dhcp-option-force=26,1450
    log-queries
    log-dhcp
    no-resolv
    EOF

Now that you've set up the necessary configuration logic, you need to
tell the Neutron agents about the physical networks that Capnet will
manage.  Get out the ``BRIDGE_MAPPINGS`` and ``CAPNET_NETWORKS``
environment variables you created in the `Setting Up Physical Networks`
section above.  You'll configure several different files::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet capnet_networks "$CAPNET_NETWORKS"
    crudini --set /etc/neutron/dhcp_agent.ini \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"
    crudini --set /etc/neutron/l3_agent.ini \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"
    crudini --set /etc/neutron/metering_agent.ini \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"

Next, set some good defaults for these ``networkmanager``-node-specific
parameters.  ``hosts_controllers`` signifies that this physical node
will host a Capnet switch controller; ``multi_controller_mode`` would
signify that the SDN controller is running in distributed mode.  As of
this commit, we have not yet added a distributed mode to the Capnet SDN
controller, so set ``hosts_controllers`` only on the ``networkmanager``
node and leave ``multi_controller_mode`` disabled::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet hosts_controllers True
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet multi_controller_mode False

You also need to tell the Capnet driver the IP address of the node that
hosts the master controller.  Use the management IP address of the
``networkmanager`` node for that (consult the OpenStack documentation to
understand what we mean by the "management" network --- OpenStack
recommends that you create a private control/management network when you
install it)::

    MGMTIP="x.y.z.a"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet master_controller_ip "$MGMTIP"

Finally, we have theoretical support to run workflow agents on any node;
however, we have not added a feature (or a frontend mechanism) to allow
the user to choose which physical machine will host the workflow agent
being created.  So for now, only enable this option on the
``networkmanager`` node::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet hosts_workflow_apps True

Restart the Neutron Capnet ML2 agent and the other Neutron agents::

    service neutron-plugin-capnet-agent restart
    service neutron-plugin-capnet-agent enable
    service neutron-dhcp-agent restart
    service neutron-l3-agent restart
    service neutron-metadata-agent restart
    service neutron-plugin-openvswitch-agent restart

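Once the agents have restarted, they should re-register with Neutron; a
quick check (run anywhere you have admin credentials sourced) is to list
them --- the exact agent names depend on your packaging::

    neutron agent-list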


``compute`` node Configuration
------------------------------

Finally, on all your ``compute`` nodes, you need to apply some Capnet
configuration.

First, just like we did for the ``networkmanager`` node, we have to
configure our Capnet ML2 agent to plug interfaces into OVS bridges using
the Capnet interface driver::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini DEFAULT interface_driver \
        networking_capnet.agent.linux.interface.CapnetOVSInterfaceDriver

Second, you need to enable our special virt driver (just a wrapper around
the "generic" libvirt driver that handles networks --- and VIFs --- of
``VIF_TYPE_CAPNET``) and its VIF driver.  The virt driver simply adds an
option to the libvirt driver to provide a different VIF driver; the VIF
driver contains the support to plug VM interfaces into the correct
Capnet bridge.  So, enable both::

    crudini --set /etc/nova/nova-compute.conf DEFAULT compute_driver \
        compute_capnet.virt.libvirt.driver.CapnetLibvirtDriver
    crudini --set /etc/nova/nova-compute.conf libvirt vif_driver \
        compute_capnet.virt.libvirt.vif.CapnetLibvirtVIFDriver

Then, you must set the bridge_mappings accordingly on the ``compute`` nodes,
just like you did on your ``networkmanager`` node::

    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet capnet_networks "$CAPNET_NETWORKS"

You also need to place these settings into your ``nova-compute`` config
file.  (This assumes your libvirt config is in ``/etc/nova/nova-compute.conf``
--- it might also be in ``/etc/nova/nova.conf`` --- you'll need to check.)::

    crudini --set /etc/nova/nova-compute.conf \
        capnet bridge_mappings "$BRIDGE_MAPPINGS"
    crudini --set /etc/nova/nova-compute.conf \
        capnet capnet_networks "$CAPNET_NETWORKS"
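
As before, you can read the values back to double-check what was written
(the expected values are the ones you set above)::

    crudini --get /etc/nova/nova-compute.conf DEFAULT compute_driver
    crudini --get /etc/nova/nova-compute.conf libvirt vif_driver
    crudini --get /etc/nova/nova-compute.conf capnet bridge_mappings
    crudini --get /etc/nova/nova-compute.conf capnet capnet_networks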

Next, you need to tell ``nova-compute`` to use our special Neutron
API wrapper
(https://gitlab.flux.utah.edu/tcloud/networking-capnet/blob/master/compute_capnet/network/neutronv2/api.py).
All this does is tell the Nova interface-plugging code which Capnet
bridge to plug a VM's virtual NIC into.  If the virtual port is not a
Capnet port (e.g., it is an openvswitch virtual port), nothing
happens.  So apply that::

    crudini --set /etc/nova/nova.conf \
        DEFAULT network_api_class compute_capnet.network.neutronv2.api.API

You also need to tell the Capnet driver the IP address that is hosting
the master controller (as we described in the ``networkmanager``
Configuration section, it will be the management IP address of the
``networkmanager`` node)::

    MGMTIP="x.y.z.a"
    crudini --set /etc/neutron/plugins/ml2/ml2_conf_capnet.ini \
        capnet master_controller_ip "$MGMTIP"

Finally, restart and enable the Capnet Neutron plugin, and restart
``nova-compute``::

    service neutron-plugin-capnet-agent restart
    service neutron-plugin-capnet-agent enable
    service nova-compute restart
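
Once ``nova-compute`` is back up, a rough check from the controller
(with admin credentials sourced) is to make sure the compute service and
the Neutron agents all report as alive::

    nova service-list
    neutron agent-list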



Creating Capnet Virtual Networks
--------------------------------

Once you have at least one "physical" network available for Capnet, you
can create virtual networks like so::

    neutron net-create capnet-1 --provider:network_type capnet
    neutron subnet-create capnet-1 --name capnet-1-subnet 10.99.99.0/24
    neutron router-create capnet-1-router
    neutron router-interface-add capnet-1-router capnet-1-subnet

(and as we discussed in the `Setting Up Physical Networks` section
above, you'll probably want to add the ``--shared`` flag to your
``neutron net-create`` command to allow multiple tenants to interact
within the virtual Capnet network).

You can supply additional Capnet arguments to the `neutron net-create`
command.  For instance, if you have a custom workflow application in
`/foo/bar/baz` on your networkmanager node (or your controller node, if
you're running a combined networkmanager/controller node), you could
bind it to your Capnet network like this::

    neutron net-create capnet-1 --provider:network_type capnet --capnet:workflow-app /foo/bar/baz

Even though the `neutron` client doesn't natively understand the
`--capnet:` options, it will put them into the request RPC and let the
server handle them.

In this case, you won't be able to connect the router to any other
networks, since Capnet doesn't yet model communication outside its
network.

You cannot change any of the Capnet network parameters via `neutron
net-update` yet; supporting that would potentially mean restarting the
workflow app and doing something with any existing capability grants.
We'll support it later.



Using Capnet
------------

First, you'll want to create a Capnet-backed virtual network, where the
virtual network is built atop the physical Capnet network you configured
above (i.e., ``capnet-phys-1``)::

    PHYSNET="capnet-phys-1"
    NETNAME="whatever-net"
    neutron net-create $NETNAME --shared \
        --provider:physical_network $PHYSNET --provider:network_type capnet
    NETID=`neutron net-show $NETNAME | awk '/ id / {print $4}'`

    SUBNETNAME="whatever-subnet"
    SUBNET="10.12.0.0/255.255.0.0"
    ROUTER="10.12.1.1"
    ALLOCPOOL="start=10.12.2.1,end=10.12.254.254"
    neutron subnet-create $NETNAME --name $SUBNETNAME \
        --allocation-pool $ALLOCPOOL --gateway $ROUTER $SUBNET
    SUBNETID=`neutron subnet-show $SUBNETNAME | awk '/ id / {print $4}'`

    ROUTERNAME="whatever-router"
    neutron router-create $ROUTERNAME
    neutron router-interface-add $ROUTERNAME $SUBNETNAME

(Obviously, change the network metadata as you like.  Also, **note**
that we create a shared virtual network so that multiple tenants can
connect at once, as described above.)

Once you have a Capnet virtual network in your OpenStack cloud, you can
begin to add VMs and Capnet workflow agents to the network.  When you
create a workflow agent in Capnet, you can mark it as the "master" for
the tenant.  This means it automatically receives capabilities to all
the VMs owned by that tenant and attached to the network to which the
agent is being connected.  If you don't mark it as a master, it will not
receive those capabilities.

You can also mark an existing VM's port as a master workflow agent, or
reuse an existing VM as a workflow agent.  If you choose this option,
though, you'll have to install the Capnet Python binding and your agent
in that VM, or similar.  We haven't tested this feature, either,
although it should be supported.

If you don't use an existing VM's port as the workflow agent, when you
create the agent, a Linux network namespace will be created on your
network manager node (or whatever node you've configured with
`hosts_workflow_apps` in
``/etc/neutron/plugins/ml2/ml2_conf_capnet.ini``), and your agent
program will run in that namespace.  Thus, the path to the workflow
agent program you specify must already exist on the network manager node
--- you'll have to manually install any new agents you want to run (in
the future, we may allow them to be glance images so users can upload
them, but this isn't important right now).

**NOTE**: right now, the workflow agent will run as root --- this has
to do with the difficulty of passing ``CAP_NET_RAW`` to a non-binary
forked child on Linux.  Linux 4.3 has native support for allowing POSIX
capabilities to be passed to children (see
``capnet/tools/capnet-privexec.c``), but that's too recent to count on.
We'll fix it later; for now, just be aware.

You can explore the Capnet Neutron CLI extension commands by doing::

    neutron help | grep capnet

Then you can further explore each subcommand::

    neutron help capnet-wfagent-create

For instance, you could create a Hadoop service workflow agent (which
receives capabilities to VMs from a user tenant who wants Hadoop
installed and configured, and configures Hadoop software and network
flows between those VMs) like this::

    STENANT="service-0"
    STENANTID=`openstack project show $STENANT | awk ' / id / {print $4}'`
    WFANAME="service-0-hadoop"
    neutron capnet-wfagent-create --tenant-id $STENANTID --name $WFANAME \
        --master --wfapp-path /usr/bin/capnet-wfagent-service-tenant-hadoop-membrane \
        capnetlan-1

(assuming you had a project named `service-0`, and that workflow agent
installed on your network manager node).