- 14 May, 2016 1 commit
-
-
David Johnson authored
Lots of little changes here (but the experiment configuration for the paper experiments is preserved)... now all the hadoop setup scripts and config files that are baked into the hadoop VM image are stored here in the source tree, and are also placed in the overall capnet ext tarball that osp-capnet.py references. Thus, no need for all the extra and conf tarballs. Now we only download hadoop and a wordfile (for reproducibility of input) from www.emulab.net when we create the hadoop image. The hadoop config files included here are the ones that we need and are working. During image creation, they get baked into a tarball in the image, and then extracted at VM runtime once the hadoop install scripts have unpacked the hadoop tarball. We wait until runtime to unpack hadoop because it's huge; but the conf dir we use is inside the unpacked dir, hence the need to wait to unpack our overlay conf tarball. The hadoop config files here are slightly different from Unni's (but of course they are the ones we used for the paper); there are changes so that the slaves can contact the tracker on the master (I think that's what it was); and more importantly, JVM and hadoop memory limit adjustments to make the wordcount case work for our experiments. I don't know how well they'll work for others... I might have inadvertently required that VMs have 4096MB of memory minimum :(. But that is ok for us.
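The memory-limit tuning mentioned above is of this general shape, sketched with stock Hadoop 2.x mapred-site.xml property names; the values shown are illustrative assumptions, not the settings actually baked into the image:

```xml
<!-- Illustrative only: standard Hadoop 2.x mapred-site.xml properties.
     The actual values used for the paper experiments are not reproduced here. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx820m</value>
</property>
```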
-
- 11 May, 2016 1 commit
-
-
David Johnson authored
-
- 10 May, 2016 10 commits
-
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
(Not sure about this... but it seems to be a good thing to do...)
-
David Johnson authored
-
David Johnson authored
-
- 09 May, 2016 10 commits
-
-
Josh Kunz authored
-
David Johnson authored
All recv() calls are polling right now... hopefully this is a good set of timestamps.
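A polling recv() of this sort can be sketched as follows. This is a generic non-blocking poll loop that records a timestamp per attempt, with an ordinary socket pair standing in for the capnet channel (the function name and parameters are illustrative, not the wfa's actual API):

```python
import socket
import time

def poll_recv(sock, bufsize=4096, interval=0.01, max_tries=100):
    """Poll a non-blocking socket until data arrives, recording a
    timestamp for each attempt (useful for latency measurements)."""
    sock.setblocking(False)
    timestamps = []
    for _ in range(max_tries):
        timestamps.append(time.time())
        try:
            data = sock.recv(bufsize)
            return data, timestamps
        except BlockingIOError:
            time.sleep(interval)  # nothing yet; spin and try again
    return None, timestamps

# Self-contained demo with a local socket pair.
a, b = socket.socketpair()
b.sendall(b"cap-msg")
data, stamps = poll_recv(a)
```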
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
:) :) :)
-
David Johnson authored
This is a bit weird. The user wfa receives node caps and passes them to the membrane until it hasn't received any for 60 seconds; then it starts trying to receive capabilities from the membrane; that is how it figures out that Hadoop has finished. The hadoop wfa doesn't send any caps back until it has finished setting up; then it sends all the flow caps it created back to the user wfa.
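The handoff described above amounts to a quiescence timeout: forward caps until the source goes quiet, then flip to receiving. A minimal sketch with Python queues standing in for the capability channels (the real wfa uses the capnet API, and the 60-second window is shortened here for the demo):

```python
import queue

def run_user_wfa(node_caps_in, to_membrane, from_membrane, quiet_timeout=60.0):
    """Forward node caps through the membrane until none arrive for
    quiet_timeout seconds, then switch to draining the flow caps the
    service wfa sends back (signalling that Hadoop setup is done)."""
    # Phase 1: forward node caps until the source goes quiet.
    while True:
        try:
            cap = node_caps_in.get(timeout=quiet_timeout)
        except queue.Empty:
            break                      # quiet period elapsed; switch phases
        to_membrane.put(cap)           # grant the node cap through the membrane
    # Phase 2: collect the flow caps the service wfa sends back.
    returned = []
    while True:
        try:
            returned.append(from_membrane.get(timeout=quiet_timeout))
        except queue.Empty:
            return returned            # membrane quiet again: setup complete

# Demo with a shortened timeout; the "service" side's replies are pre-loaded.
nodes, to_membrane, from_membrane = queue.Queue(), queue.Queue(), queue.Queue()
for n in ("n0", "n1"):
    nodes.put(n)
for f in ("flow0", "flow1"):
    from_membrane.put(f)
flows = run_user_wfa(nodes, to_membrane, from_membrane, quiet_timeout=0.05)
```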
-
- 08 May, 2016 6 commits
-
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
Also some Hadoop utility scripts, and wrappers to run jobs we care about for testing or whatever. These don't have the password login Unni had done in expect. Right now we don't need that. If we need that, we can add it as the "frontend" script and then use these for the rest. The other part of these scripts is the part that bakes the hadoop image, in ../setup-capnet-basic.sh .
-
David Johnson authored
This is a bit odd, because the hadoop service wfa does not know when it's done receiving node caps from the user/tenant wfa, and it has no way to signal the user when it's done setting up. Oh well.
-
- 07 May, 2016 2 commits
-
-
David Johnson authored
(later on we have to fixup the pre-baked key stuff, of course)
-
David Johnson authored
-
- 06 May, 2016 1 commit
-
-
David Johnson authored
-
- 05 May, 2016 9 commits
-
-
David Johnson authored
Capnet networks cannot get to the external world. However, the default Cloudlab/OpenStack dnsmasq arrangement (of course) specifies an external resolver. This slows all kinds of queries from the VMs, and slows bootup, while the local resolver waits for the remote one to time out. Dnsmasq in openstack doesn't give us per-network config ability, so we add some of our own. There is now a custom capnet dnsmasq config file sans external resolver; and the wrapper class strips out any --server CLI options that the base class might have added due to the dhcp/dnsmasq config file opts. It warns when it does this. We may not want that behavior in the future; hopefully we remember to get rid of it then. But there's no other way to allow recursive public resolution for non-capnet networks, and then disallow it for Capnet networks, without this.
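The option-stripping the wrapper does can be sketched like this. The function name is hypothetical, and the exact argument forms handled are assumptions; the idea is just filtering dnsmasq's --server options out of the assembled command line, with a warning:

```python
import logging

def strip_server_opts(cmd):
    """Remove dnsmasq --server options (both '--server=1.2.3.4' and the
    two-token '--server 1.2.3.4' form) from a command list, warning
    whenever one is dropped."""
    out, skip_next = [], False
    for arg in cmd:
        if skip_next:
            skip_next = False          # this token was --server's value
            continue
        if arg == "--server":
            skip_next = True
            logging.warning("stripping external resolver option: %s", arg)
            continue
        if arg.startswith("--server="):
            logging.warning("stripping external resolver option: %s", arg)
            continue
        out.append(arg)
    return out

# Demo: both --server forms are removed, everything else passes through.
cmd = ["dnsmasq", "--no-hosts", "--server=8.8.8.8",
       "--server", "1.1.1.1", "--conf-file=/etc/capnet-dnsmasq.conf"]
clean = strip_server_opts(cmd)
```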
-
David Johnson authored
-
David Johnson authored
For whatever reason, when you create VMs from the command line Nova client, the Neutron port "device_owner" field is "compute:None", instead of "compute:nova" when you create from the Dashboard. Sigh... why does stuff like this happen?
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
The default Neutron policy is that the provider:* attributes are only sent on a get_networks() call if the caller is an admin. Well, Capnet needs that attribute so it knows which Capnet bridge to put a virtual NIC into. And it turns out that if a non-admin user adds a VM to an admin-owned shared network, when Nova sets up the VM, it calls out to Neutron to collect network info for the VM -- but it must be doing it as the tenant user, not with its admin powers. Well, we have to know this attribute... so we open up the policy a tiny bit to send the provider:physical_network attribute if the network is a shared network. So we override that default Neutron policy bit here. This is really the wrong thing to do, I suppose, because it leaks provider info through get_networks() for shared networks. But the alternative is to make a secondary call in our Nova plugin to get_networks() with admin creds, and that I don't have time for right now. (The bit of our Nova plugin that requires this is in compute_capnet/network/neutronv2/api.py .)
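In policy.json terms, the relaxation described above would look roughly like this. The admin_only default for the provider attribute and the field-based "shared" rule follow the Neutron policy conventions of that era, but the exact rule names should be checked against the deployed policy.json:

```json
{
    "shared": "field:networks:shared=True",
    "get_network:provider:physical_network": "rule:admin_only or rule:shared"
}
```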
-
David Johnson authored
This is the allpairs wfa again, but this time, the user-tenant-allpairs wfa looks up the "allpairs" service at the broker, and as it receives its own nodes, it sends those caps to the allpairs service's RP that it got from the broker. The idea is that you run each of these wfas in a separate tenant. For instance, run the service-tenant-allpairs in the new service-0 tenant:

    neutron capnet-wfagent-create --name service-0-wfa0 \
        --tenant-id <SERVICE_0_UUID> \
        --master --wfapp-path=/usr/bin/capnet-wfagent-service-tenant-allpairs

Then, in the new tenant-0 tenant, do

    neutron capnet-wfagent-create --name tenant-0-wfa0 \
        --tenant-id <TENANT_0_UUID> \
        --master --wfapp-path=/usr/bin/capnet-wfagent-user-tenant-allpairs

Then add some nodes into tenant-0, and they will be granted to the service-0 wfa0, which will add them to the allpairs mesh, i.e.

    nova boot --image trusty-server --flavor m1.small \
        --nic net-id=9bab982f-80d7-427f-a34b-0bf7d3dcd5bc t1

where net-id is the id of the Capnet network, and you have changed the OS_PROJECT, OS_USERNAME, OS_PASSWORD, OS_TENANT env vars to send your resource request from the tenant-0 tenant (I can't see that nova boot supports an admin injecting resources on behalf of a tenant, like neutron does).
-
David Johnson authored
(I had taken this out during some debugging...)
-