- 05 May, 2016 5 commits
-
-
David Johnson authored
-
David Johnson authored
The default Neutron policy only returns the provider:* attributes from a get_networks() call if the caller is an admin. Capnet needs that attribute so it knows which Capnet bridge to put a virtual NIC into. It turns out that when a non-admin user adds a VM to an admin-owned shared network, Nova calls out to Neutron to collect network info for the VM -- but it does so as the tenant user, not with its admin powers. Since we have to know this attribute, we open up the policy a tiny bit and return provider:physical_network if the network is a shared network; that is the default Neutron policy bit we override here.

This is really the wrong thing to do, I suppose, because it leaks provider info through get_networks() for shared networks. But the alternative is to make a secondary get_networks() call in our Nova plugin with admin creds, and I don't have time for that right now. (The bit of our Nova plugin that requires this -- where the Nova agent collects the port's network info -- is in compute_capnet/network/neutronv2/api.py.)
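For reference, the override amounts to something like the sketch below. This is an illustration only: the policy file path and the "rule:admin_only or rule:shared" expression are assumptions based on a stock Liberty policy.json, not a copy of the actual change.

    # Illustration only: merge the relaxed rule into Neutron's policy file.
    # Path and rule expression are assumed from stock Liberty defaults.
    import json

    POLICY_FILE = "/etc/neutron/policy.json"
    OVERRIDE = {
        # default is "rule:admin_only"; adding "or rule:shared" exposes the
        # attribute for shared networks only
        "get_network:provider:physical_network": "rule:admin_only or rule:shared",
    }

    with open(POLICY_FILE) as f:
        policy = json.load(f)
    policy.update(OVERRIDE)
    with open(POLICY_FILE, "w") as f:
        json.dump(policy, f, indent=4, sort_keys=True)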
-
David Johnson authored
This is the allpairs wfa again, but this time the user-tenant-allpairs wfa looks up the "allpairs" service at the broker, and as it receives its own nodes, it sends those caps to the allpairs service's RP that it got from the broker. The idea is that you run each of these wfas in a separate tenant. For instance, run the service-tenant-allpairs in the new service-0 tenant:

    neutron capnet-wfagent-create --name service-0-wfa0 \
        --tenant-id <SERVICE_0_UUID> \
        --master --wfapp-path=/usr/bin/capnet-wfagent-service-tenant-allpairs

Then, in the new tenant-0 tenant, do

    neutron capnet-wfagent-create --name tenant-0-wfa0 \
        --tenant-id <TENANT_0_UUID> \
        --master --wfapp-path=/usr/bin/capnet-wfagent-user-tenant-allpairs

Then add some nodes into tenant-0, and they will be granted to the service-0 wfa0, which will add them to the allpairs mesh, i.e.

    nova boot --image trusty-server --flavor m1.small \
        --nic net-id=9bab982f-80d7-427f-a34b-0bf7d3dcd5bc t1

where net-id is the id of the Capnet network, and you have changed the OS_PROJECT, OS_USERNAME, OS_PASSWORD, and OS_TENANT env vars to send your resource request from the tenant-0 tenant. (I can't see that nova boot supports an admin injecting resources on behalf of a tenant, like neutron does.)
-
David Johnson authored
(I had taken this out during some debugging...)
-
David Johnson authored
This script, setup-capnet-basic.sh, can be run as many times as you want... it checks whether everything it creates already exists. By default we create 4 user/service pairs of projects, each with its own user. The idea is that each "user" project is where the project's user allocates nodes; its master wfa looks up another service wfa at the broker and gives the node caps to that service wfa. The projects and users are generically named for now... we still don't run any wfas by default.
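As an illustration of the "create only if missing" pattern (the CLI calls and the generic tenant-N/service-N names here are assumptions, not necessarily what the script does):

    # Hypothetical sketch: re-runnable project creation. "openstack project
    # show" exits non-zero when the project is missing, so we only create it
    # then. Names and the use of python-openstackclient are assumptions.
    import subprocess

    def ensure_project(name):
        missing = subprocess.call(
            ["openstack", "project", "show", name],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if missing:
            subprocess.check_call(["openstack", "project", "create", name])

    for i in range(4):
        ensure_project("tenant-%d" % i)
        ensure_project("service-%d" % i)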
-
- 04 May, 2016 2 commits
-
-
David Johnson authored
This was tonight's adventure. Holy cow. I feel like I need to go take a shower.

Apparently, despite the fact that Neutron and Nova have coexisted for many years, Nova VMs have a hostname that doesn't resolve to anything and that Neutron knows nothing about. This causes all kinds of local hangups (e.g., sudo, sshd UseDNS), but ok, whatever. Neutron has its own "DNS" names for the VMs; Nova has its own; and they don't share that info. It matters to us because we want tenants to be able to create VMs and wfagents with meaningful names, and have those names returned to the Capnet protocol as the node names.

The real way to solve this is to ask Nova for the VM name from the Capnet Neutron agent when that agent sees a port binding update. But we want to process those pretty fast, and calling out to Keystone/Nova for a lookup on the hot path is quite undesirable. So instead, since we already have a Capnet-specific binding update call that pushes the local OVS dpid and ofportno from the local node to the controller (which distributes it to whichever Capnet agents are writing metadata files), if we can "find" a Nova VM name locally, we add that to the binding, and the Neutron server then adds the name to the ports table in the DB. Currently we find it by whacking through the libvirt.xml files until we find the instance that owns the device_name we are sending the binding update for.

It seems the Neutron people have already staged in db schema changes to support their pending new DNS feature. That is coming, but it's not here yet; so we can use the new table field, but the Neutron-Nova DNS integration doesn't exist in Liberty. Eventually, Nova will tell Neutron the name of the port and other DNS information.
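A rough sketch of that local lookup, assuming the per-instance libvirt.xml layout Nova's libvirt driver writes and its domain-metadata namespace (the helper below is illustrative, not the actual agent code):

    # Illustration only: walk the per-instance libvirt.xml files, find the
    # domain whose <interface><target dev=.../> matches the device we are
    # sending a binding update for, and pull a display name out of the Nova
    # metadata element (falling back to the libvirt domain name).
    import glob
    import xml.etree.ElementTree as ET

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.0"

    def vm_name_for_device(dev_name, instances_dir="/var/lib/nova/instances"):
        for xml_path in glob.glob("%s/*/libvirt.xml" % instances_dir):
            root = ET.parse(xml_path).getroot()
            devs = [t.get("dev")
                    for t in root.findall("./devices/interface/target")]
            if dev_name in devs:
                name = root.find(
                    "./metadata/{%s}instance/{%s}name" % (NOVA_NS, NOVA_NS))
                return name.text if name is not None else root.findtext("name")
        return None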
-
David Johnson authored
-
- 03 May, 2016 1 commit
-
-
David Johnson authored
Users can configure the metadata service either to hook into the dhcp server port or into the router port. In the Cloudlab openstack profile, I guess I set it up to run the metadata proxy through the router port, so we just stick with that. This means that Capnet/OpenStack users in Cloudlab will always have to add a router to their Capnet networks.
-
- 02 May, 2016 4 commits
-
-
David Johnson authored
-
David Johnson authored
Hopefully this will help the initial packet not get lost in the ether. Timeout is 5 seconds for now... we can't wait long.
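If the mechanism is a bounded resend of the initial message -- an assumption on my part, the commit doesn't show the code -- it might look roughly like this:

    # Illustration only (assumed mechanism, not the actual change): resend an
    # initial message until a reply arrives, giving up after a short deadline.
    import socket
    import time

    def send_with_retry(sock, data, addr, deadline_s=5.0, interval_s=1.0):
        sock.settimeout(interval_s)
        deadline = time.monotonic() + deadline_s
        while time.monotonic() < deadline:
            sock.sendto(data, addr)
            try:
                return sock.recvfrom(4096)
            except socket.timeout:
                continue
        raise RuntimeError("no reply within %.1fs" % deadline_s)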
-
David Johnson authored
This is caused by a multithreaded separation of concerns. The generic agent main thread notices when ports are plugged on OVS switches and notifies the Neutron server of the dpid/ofport for the newly-plugged port_id. But when the agent creates and plugs a port for a new wfagent, it wasn't sending its wfagent/port binding msg (which tells Neutron that a given wfagent has been bound to a port on some client node) until after the wfapp was running. So the main thread would see the new port plug and send (and get back) generic port binding info that did not have the wfagent part of the binding, because it hadn't been made yet.

Now we send a wfagent bind notification prior to plugging the device for the new port, and then later send another wfagent bind notification to update the status once the wfapp is running.
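Roughly, the new ordering looks like this (every name below is invented purely to illustrate the ordering; the real agent code differs):

    # Invented names; this only illustrates the ordering fix described above.
    def create_wfagent_port(agent, port, wfagent):
        # 1. Tell the server about the wfagent/port binding *before* the
        #    device is plugged, so the main thread's generic port-plug
        #    notification can never race ahead of it.
        agent.send_wfagent_bind(port_id=port.id, wfagent_id=wfagent.id,
                                status="BUILDING")
        # 2. Plug the device; the main thread reports dpid/ofport as usual.
        agent.plug_device(port)
        # 3. Start the wfapp, then update the binding status.
        agent.launch_wfapp(wfagent)
        agent.send_wfagent_bind(port_id=port.id, wfagent_id=wfagent.id,
                                status="ACTIVE")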
-
David Johnson authored
-
- 01 May, 2016 3 commits
-
-
David Johnson authored
(Did I even test this thing? What the heck.)
-
David Johnson authored
(Also strip out some old code that I had thought about using, but won't.)
-
David Johnson authored
(And configure it in the cloudlab setup scripts relative to where we're installing the capnet controller and the protocol bindings.)
-
- 30 Apr, 2016 5 commits
-
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
Lots of little fixes... one important one is to make the Capnet physical LAN name 'capnetlan-N'. This means our OVS bridge (e.g., br-capnetlan-1) has a name under 16 chars, the Linux interface name limit (IFNAMSIZ is 16, including the trailing NUL). Hopefully this is a pretty complete configuration; debug/verbose modes are enabled by default for all our Neutron stuff (except the minor Nova plugin... that's not going to be any trouble). We don't autocreate any tenant Capnet networks thus far, although we could now that we have the allpairs workflow app. Next version. Since the compile times are so long (we have to build the main protobuf lib because the Ubuntu version is too old for proto-c), we trot out pssh for some of this.
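A quick check of the length arithmetic (purely illustrative):

    # The "br-" prefix plus the new LAN name stays within the 15 usable
    # characters an interface name allows (IFNAMSIZ = 16 with the NUL).
    name = "br-" + "capnetlan-1"
    assert len(name) == 14 and len(name) <= 15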
-
David Johnson authored
-
David Johnson authored
No wonder I couldn't work out (again) how the Nova vif driver was figuring out which physical OVS bridge to plug a Capnet VM NIC into... I had forgotten about this little guy. This is a simple wrapper for the Nova network API Neutron driver that adds a couple of Capnet-specific fields into the VIF info. This seemed to be the way to do it at the time... maybe there's a way to not have to do it, though.
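In spirit, the wrapper looks something like the sketch below. The class/method names come from Liberty-era Nova as I recall them, and the field handling is an assumption, not the actual compute_capnet code:

    # Illustration only: subclass Nova's Neutron network API driver and copy
    # the Capnet-relevant provider attribute into each VIF's details so the
    # vif driver can pick the right physical OVS bridge. Requires the relaxed
    # get_network policy so non-admin lookups return the attribute.
    from nova.network.neutronv2 import api as neutron_api

    class CapnetAPI(neutron_api.API):
        def _build_network_info_model(self, context, instance, *args, **kwargs):
            nw_info = super(CapnetAPI, self)._build_network_info_model(
                context, instance, *args, **kwargs)
            client = neutron_api.get_client(context)
            for vif in nw_info:
                net = client.show_network(vif['network']['id'])['network']
                details = vif.get('details') or {}
                details['physical_network'] = net.get('provider:physical_network')
                vif['details'] = details
            return nw_info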
-
- 29 Apr, 2016 1 commit
-
-
David Johnson authored
-