Commit 8a20ab47 authored by David Johnson

Add a dev guide to the capnet Neutron ML2 driver and extensions.

Capnet Neutron ML2 Driver Structure
===================================

This plugin is structured in the style of other Neutron ML2 drivers.
The ML2 core provides database functions common to any L2 network
protocol type; ML2 drivers (e.g., the Capnet driver) handle
type-specific concerns and virtual network implementation. Furthermore,
the Capnet ML2 driver is compatible with the ``openvswitch`` ML2 driver,
which means you can mix virtual Capnet networks with virtual
``openvswitch`` networks.
The driver requires the Capnet Neutron extensions, which consist of
new server-side Neutron commands and new internal agent<->server RPC
methods.

Driver Core
-----------

The Capnet driver is decomposed into two basic parts: the `type` driver,
and the `mechanism` driver. The type driver
defines the Capnet network type and provides simple database-level
routines that check whether there are sufficient resources to create a
new Capnet virtual network, etc. The mechanism driver
follows the agent-based pattern common to ML2 and defines the core model
for the agent. The agent itself
runs as a daemon anywhere OpenStack might need to plug in a virtual port
to an OpenVSwitch bridge (i.e., the network manager node and compute
nodes, and also the controller if the controller is your network
manager). The agent is responsible for:
* running the Capnet controller (and the openmul core) to control the
OpenVSwitch bridges (``class CapnetControllerProcesses``);
* binding ports --- noticing new ports plugged in by other agents
(e.g., by the DHCP or L3 Neutron agents, or by the nova-compute agent;
plugging is decentralized in Neutron) and sending port binding updates
to the neutron server (``class CapnetNeutronAgent``);
* receiving notifications from the neutron server (e.g., when ports are
deleted or updated) (``class CapnetAgentRpcCallbacks``);
* receiving port binding notifications and communicating metadata to the
controller (``class CapnetAgentExtRpcCallbacks``);
* calling custom methods on the neutron server (``class CapnetExtPluginApi``)
to create ports for new workflow agents, update port bindings, and get
special Capnet information;
* running and stopping workflow agents as instructed (creating and
plugging (or unplugging and deleting) a virtual port (a Linux network
namespace and virtual Ethernet interface) into the appropriate
OpenVSwitch bridge (``class CapnetWorkflowAgentManager``));
* communicating appropriate port metadata into the Capnet controller so
it knows what capabilities, if any, should be given to new ports
(``class CapnetControllerMetadataManager``).
Each agent periodically polls any local Capnet OpenVSwitch bridges,
looking for new or deleted ports, and updates the bindings
(``class CapnetNeutronAgent``).
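The periodic port scan can be sketched roughly as follows. The class name
``CapnetNeutronAgent`` comes from this guide, but the method names, the
``bridge`` interface, and the callback shape are illustrative assumptions,
not the actual implementation:

```python
# Sketch of the agent's port-polling loop (assumed structure; the real
# agent also talks RPC to the neutron server instead of plain callbacks).

class CapnetNeutronAgent:
    def __init__(self, bridge):
        self.bridge = bridge          # object exposing list_ports() -> iterable of port IDs
        self.known_ports = set()

    def scan_ports(self):
        """Diff the bridge's current ports against the previous scan."""
        current = set(self.bridge.list_ports())
        added = current - self.known_ports
        removed = self.known_ports - current
        self.known_ports = current
        return added, removed

    def poll_once(self, report_binding, report_unbound):
        """One polling iteration: push binding updates for changed ports."""
        added, removed = self.scan_ports()
        for port_id in added:
            report_binding(port_id)   # would be a port-binding RPC in practice
        for port_id in removed:
            report_unbound(port_id)
        return added, removed
```

A real agent would run ``poll_once`` on a timer, one pass per local Capnet
bridge; the diff-against-last-scan approach is what lets it notice ports
plugged in by *other* agents.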
As described above, virtual network interface plugging and unplugging is
decentralized in Neutron --- for instance, nova-compute creates its own
ports and plugs them into the "right" place. This is sort of
inconvenient and odd for cases like ours, where it would seem most
natural for the agent to create the virtual Ethernet device and plug it
into the right OpenVSwitch bridge. But that's the model, and one can
imagine several good reasons for structuring it this way. In any case,
what this means for us is that to support multiple Capnet OpenVSwitch
bridges per physical node, the decentralized plugging code must always
know, for any particular virtual port, which bridge that port must be
plugged into. In the ``openvswitch`` ML2 agent, this is trivial, because all
virtual ports are plugged into the one true integration bridge
(typically ``br-int``). Not so for us. So we have special plugging
code that figures out which OpenVSwitch bridge a particular port must be
plugged into (``class CapnetOVSInterfaceDriver``).

The implementation of the Neutron API extensions is in
``networking-capnet/networking_capnet/extensions/``. This is
a mix of policy, database sanity checks, and agent notifications. It
makes use of a database logic mixin class.
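The bridge-selection logic in the special plugging code can be sketched as
below. ``CapnetOVSInterfaceDriver`` is the class name from this guide; the
port-to-bridge mapping, helper names, and fallback behavior are assumptions
for illustration only:

```python
DEFAULT_INTEGRATION_BRIDGE = "br-int"   # the openvswitch driver's single bridge

class CapnetOVSInterfaceDriver:
    """Sketch: pick the right OVS bridge for a port before plugging it."""

    def __init__(self, port_to_bridge):
        # port_to_bridge: hypothetical port-id -> bridge-name mapping; the
        # real driver would resolve this via the neutron server / Capnet DB.
        self.port_to_bridge = port_to_bridge

    def bridge_for_port(self, port_id):
        # Capnet ports live on per-network bridges; anything else falls
        # back to the one true integration bridge.
        return self.port_to_bridge.get(port_id, DEFAULT_INTEGRATION_BRIDGE)

    def plug(self, port_id, device_name):
        bridge = self.bridge_for_port(port_id)
        # Real code would create the veth pair here and then run the
        # equivalent of `ovs-vsctl add-port <bridge> <device_name>`.
        return (bridge, device_name)
```

The key difference from the stock ``openvswitch`` interface driver is the
lookup step: there is no single integration bridge to hardcode.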
The Neutron CLI extension descriptors for our extra Capnet commands are
in ``networking-capnet/networking_capnet/neutronclient/``.
Finally, the Capnet agent and Capnet Neutron extensions require
additional internal RPC: the neutron server implements the server-side
methods (called by the agent), and the agent invokes them through a
client-side method call interface.
The Capnet driver and extensions require several new Neutron database
tables to store Capnet-specific state and its relationships.

Nova Integration
----------------

Because interface plugging in the Neutron world is decentralized, we
have to add custom interface plugging code for Nova. However, Nova
doesn't have a config option to set a custom interface plug driver! So,
assuming that anyone who uses Capnet also will be using the ``libvirt``
Nova compute driver to run their VMs, we add a ``libvirt`` driver
wrapper (``networking-capnet/compute_capnet/virt/libvirt/``,
``class CapnetLibvirtDriver``) whose sole purpose is to add this option.
Then, we add an interface driver
(``class CapnetLibvirtVIFDriver``) that simply plugs the virtual
interface into the correct bridge. It is backwards-compatible with
regular ``openvswitch`` virtual interface plugging.
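The wrapper pattern described here boils down to plain delegation: override
only what you must, and pass everything else through. The sketch below uses
the ``CapnetLibvirtDriver`` name from this guide, but the method names and
delegation mechanics are assumptions, not Nova's actual driver interface:

```python
class CapnetLibvirtDriver:
    """Sketch: wrap the stock libvirt driver only to swap the VIF driver.

    The wrapper exists solely because Nova has no config option for a
    custom interface plug driver; everything except VIF plugging is
    delegated to the wrapped driver untouched.
    """

    def __init__(self, wrapped, vif_driver):
        self._wrapped = wrapped
        self.vif_driver = vif_driver    # e.g. a CapnetLibvirtVIFDriver

    def plug_vifs(self, instance, network_info):
        # The one behavior we override: route plugging through our driver.
        for vif in network_info:
            self.vif_driver.plug(instance, vif)

    def __getattr__(self, name):
        # Fall through to the real libvirt driver for everything else.
        return getattr(self._wrapped, name)
```

Subclassing would work equally well; delegation just makes the "sole
purpose is to add this option" intent explicit.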
``nova-compute`` must also use our special Neutron API wrapper
(``networking-capnet/compute_capnet/network/neutronv2/``, ``class
API``). This wrapper adds the "which bridge should I plug X interface
into" bit of information into the Nova/Neutron network info; it tells
the Nova interface-plugging code which Capnet bridge to plug the virtual
NIC for a VM into. If the virtual port is not a Capnet port (i.e. it is
an openvswitch virtual port), nothing different happens.
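The effect of the API wrapper can be sketched as a post-processing step over
the network info Nova receives. The dict shapes, the ``details``/``bridge``
keys, and the mapping source are assumptions for illustration; the real
``class API`` hooks into Nova's neutronv2 client instead:

```python
def annotate_network_info(network_info, capnet_bridges):
    """Sketch: add the target bridge to each Capnet VIF entry.

    network_info: list of simplified VIF dicts as Nova might see them.
    capnet_bridges: hypothetical port-id -> bridge-name mapping obtained
    from the Capnet Neutron extensions.
    """
    for vif in network_info:
        bridge = capnet_bridges.get(vif["id"])
        if bridge is not None:
            # Capnet port: tell the plugging code which bridge to use.
            vif["details"] = dict(vif.get("details", {}), bridge=bridge)
        # Non-Capnet (plain openvswitch) ports pass through unchanged,
        # which is what keeps the wrapper backwards-compatible.
    return network_info
```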