Commit 8de9d51a authored by Anmol Vatsa

additions to readme for MCCN instructions

parent c656d6fa
Capnet: An SDN Controller and Tools for Capability-based Networks
MCCN: An SDN Controller and Tools for Capability-based Networks
================================================================================
Capnet is a family of SDN-based tools that allow you to build networks
@@ -74,6 +74,21 @@ and `ping_sub.py` to set up bidirectional connectivity between a pair of
nodes.
###### For Multi-Cloud Capnet on the dist-capnet branch:
Use CloudLab profile: **multi-cloud-capability-network**
to deploy the basic MCCN configuration.
Link to profile: https://www.cloudlab.us/show-profile.php?uuid=0cac7064-a17a-11e8-b228-90e2ba22fee4
The profile uses two bare-metal nodes as "Cloud machines", connecting them through a Routing node (vanilla Ubuntu using kernel forwarding). Each "Cloud machine" represents one cloud with a single SDN and MCCN controller.
Use the etc/ovs-ns-net.sh script on each "Cloud" machine to create Linux network namespaces that act as cloud VMs, connecting them all within the cloud through an Open vSwitch:
$ etc/ovs-ns-net.sh create sw2 3 1 uplinkbr sw2 gateway
"gateway" passed as the last parameter creates a gateway/routing port on the switch that MCCN uses
to establish cross-cloud OpenFlow rules.
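If you want to sanity-check what the script created, the standard iproute2 and Open vSwitch tools work from the Cloud machine's root namespace (the exact namespace and bridge names depend on the arguments passed to the script above):

$ sudo ip netns list
$ sudo ovs-vsctl show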
You need a controller registry service running for controllers to
register and look up other controllers participating in the
Multi-Cloud Capnet protocol. The service is found in
@@ -82,39 +97,39 @@ Run the service:
$ FLASK_APP=reg_srv.py python -m flask run --host=0.0.0.0
The --host=0.0.0.0 allows connections from external networks, since we need
all controllers to be able to access it.
Run the service on any Cloud or Routing node, and make sure it is accessible by all cloud controllers.
Only one instance of the registry service is run, and all MCCN controllers connect to it.
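As a rough reachability check, you can hit the service from one of the Cloud machines. This assumes the Flask development server's default port 5000 (the command above does not pass an explicit --port), and it treats the URL path as a placeholder, since the endpoints exposed by reg_srv.py are not shown here:

$ curl http://<registry-ip>:5000/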
Run the controller:
$ sudo ../capnet.obj/controller/capnet-controller -s 127.0.0.1 -V 6934 -S /var/tmp -Z 16 -W 16 -L ALL -I <ip1:port1> --registry-service=<ip2:port2>
Here,
ip1:port1 is the address of the interface you want the
inter-controller communication socket to bind to.
ip2:port2 is the address of the controller registry service
that each controller contacts to register itself
and to look up other registered controllers to talk to.
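For example, a concrete invocation on a Cloud machine whose physical interface is 192.168.1.1 might look like the following; the addresses and ports are purely illustrative (the registry port assumes the Flask default of 5000), not values required by MCCN:

$ sudo ../capnet.obj/controller/capnet-controller -s 127.0.0.1 -V 6934 -S /var/tmp -Z 16 -W 16 -L ALL -I 192.168.1.1:9999 --registry-service=192.168.1.3:5000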
Experimenting with Capnet in OpenStack
--------------------------------------
If you want to try out Capnet "in the cloud", we've written an OpenStack
Neutron plugin to allow you to do exactly that. You can find that
plugin, and its installation, configuration, and usage instructions, at
https://gitlab.flux.utah.edu/tcloud/networking-capnet .
To make it easier, you can also sign up for a CloudLab account and
create an experiment using the `OpenStack-Capnet` profile
(https://www.cloudlab.us/p/TCloud/OpenStack-Capnet). This profile will
create a personal, fully configured OpenStack cloud for you on the
CloudLab cluster of your choice; it will also install the entire Capnet
software stack and set up some basic demo-ready features. There are
instructions in that repository that show you how to run an example
Capnet cooperative, multi-party network configuration where one
OpenStack project provides Hadoop software and network configuration as
a service to other tenants on-demand.
In the controller command above,
*ip1:port1* is the address of the interface you want the inter-controller communication socket to bind to.
*ip2:port2* is the address of the controller registry service that each controller contacts to register itself and to look up other registered controllers to talk to.
At this point you can open a Python console on the "master" node and use the Capnet client-side
API to exchange capabilities:
$ sudo ip netns exec sw2p1 python
Or run a workflow agent from within the network namespace of the "master" node.
If there are two clouds/cloud-machines, there will be two "master" nodes.
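Before driving the client API or an agent, it can help to confirm that the "master" namespace is wired up as expected (sw2p1 is just an example name, taken from the python command above; use whatever namespace the script created):

$ sudo ip netns exec sw2p1 ip addr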
The "Cloud machines" already have data paths setup between them through the Routing node.
But in order to get data paths working between the virtual clouds that we setup though the
etc/ovs-ns-net.sh, we need to add kernel forwarding rules in the "Routing node" for the
network address of the virtual clouds on each Cloud machine.
For example, if the physical network interfaces of the Cloud machines and the Routing node are on the
192.168.0.0/16 network (the default for the CloudLab profile), then forwarding rules for this network
are already established in the root network namespaces of the Cloud machines and the Routing node.
The etc/ovs-ns-net.sh script puts the virtual cloud on the 10.0.0.<seq>/24 network,
where <seq> is just the Cloud machine number: 10.0.0.1/24 for Cloud 1, 10.0.0.2/24 for Cloud 2, and so on.
The etc/ovs-ns-net.sh script also adds forwarding rules for the local virtual cloud network (for example 10.0.0.1/24
on Cloud 1) and for the other clouds' 10.0.0.<seq>/24 networks in each Cloud machine's root network namespace.
But **you need to add** forwarding rules for the 10.0.0.<seq>/24 networks in the Routing node's
root network namespace yourself, as sketched below.
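A minimal sketch of those rules on the Routing node, using the 10.0.0.<seq>/24 notation from above and a placeholder for each Cloud machine's physical address (substitute real values from your experiment, and repeat the route line once per Cloud machine); the sysctl line is only needed if the profile has not already enabled kernel forwarding:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo ip route add 10.0.0.<seq>/24 via <cloud-machine-<seq>-physical-ip>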
The etc/ovs-ns-net.sh script also adds the "gateway port" as the default gateway in each
of the VM network namespaces on the Cloud machines to complete the link.
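To confirm that the gateway port really is the default gateway inside a VM namespace, you can inspect its routing table from the Cloud machine (again, sw2p1 is only an example namespace name):

$ sudo ip netns exec sw2p1 ip route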
Authors
-------