- 19 Jul, 2016 1 commit
  - Josh Kunz authored
- 18 Jul, 2016 1 commit
  - Josh Kunz authored
- 24 Jun, 2016 3 commits
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
- 23 Jun, 2016 1 commit
  - Josh Kunz authored
- 16 Jun, 2016 1 commit
  - Josh Kunz authored
- 13 Jun, 2016 4 commits
  - Josh Kunz authored
  - Josh Kunz authored
  - David Johnson authored
  - David Johnson authored
- 14 May, 2016 1 commit
  - David Johnson authored
Lots of little changes here (but the experiment configuration for the paper experiments is preserved). All the hadoop setup scripts and config files that are baked into the hadoop VM image are now stored here in the source tree, and are also placed in the overall capnet ext tarball that osp-capnet.py references. Thus, no need for all the extra and conf tarballs. Now we only download hadoop and a wordfile (for reproducibility of input) from www.emulab.net when we create the hadoop image.

The hadoop config files included here are the ones that we need and that are working. During image creation, they get baked into a tarball in the image, and then extracted at VM runtime once the hadoop install scripts have unpacked the hadoop tarball. We wait until runtime to unpack hadoop because it's huge; but the conf dir we use is inside the unpacked dir, hence the need to wait before unpacking our overlay conf tarball.

The hadoop config files here are slightly different than Unni's (but of course they are the ones we used for the paper): there are changes so that the slaves can contact the tracker on the master (I think that's what it was), and more importantly, JVM and hadoop memory limit adjustments to make the wordcount case work for our experiments. I don't know how well they'll work for others... I might have inadvertently required that VMs have 4096MB of memory minimum :(. But that is ok for us.
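The bake-then-extract flow described above can be sketched roughly as follows. This is a hypothetical illustration only: the function names, tarball path, and `conf` layout are assumptions, not the actual Capnet scripts.

```python
# Hypothetical sketch of the overlay-conf flow: bake the in-tree hadoop
# config files into a tarball at image-creation time, then extract them
# at VM runtime only *after* the (huge) hadoop tarball has been unpacked,
# because the conf dir we overlay lives inside the unpacked hadoop tree.
# All names/paths here are illustrative, not the real Capnet scripts.
import os
import tarfile

def bake_conf(conf_dir, tarball):
    """At image creation: bundle the in-tree hadoop config files into a
    tarball that ships inside the VM image."""
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(conf_dir, arcname=".")

def overlay_conf(hadoop_home, tarball):
    """At VM runtime, after the hadoop install scripts have unpacked the
    hadoop tarball: extract the overlay into hadoop's conf dir, which only
    exists once hadoop itself is unpacked."""
    conf_dir = os.path.join(hadoop_home, "conf")
    with tarfile.open(tarball, "r:gz") as tar:
        tar.extractall(conf_dir)
```

The key ordering constraint is the one the message calls out: `overlay_conf` cannot run until the hadoop tree exists, so the overlay extraction is deferred to runtime even though the tarball is baked in at image creation.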
- 11 May, 2016 1 commit
  - David Johnson authored
- 10 May, 2016 10 commits
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
(Not sure about this... but it seems to be a good thing to do...)
  - David Johnson authored
  - David Johnson authored
- 09 May, 2016 10 commits
  - Josh Kunz authored
  - David Johnson authored
All recv() calls are polling right now... hopefully this is a good set of timestamps.
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
:) :) :)
  - David Johnson authored
This is a bit weird. The user wfa receives node caps and passes them to the membrane until it hasn't received any for 60 seconds; then it starts trying to receive capabilities from the membrane, which is how it figures out that Hadoop has finished. The hadoop wfa doesn't send any caps back until it has finished setting up; then it sends all the flow caps it created back to the user wfa.
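The two-phase logic in this message could be sketched roughly as below. This is not the real Capnet wfa API: the `rp`/`membrane` objects, a polling `recv()` that returns `None` when nothing is pending, and the function names are all assumptions made for illustration.

```python
# Hypothetical sketch of the user wfa's two phases: (1) forward node caps
# through the membrane until none arrive for 60s; (2) poll the membrane
# for the flow caps the hadoop wfa sends back once its setup finishes.
# The rp/membrane objects and recv() semantics are assumed, not Capnet's API.
import time

IDLE_TIMEOUT = 60.0  # seconds with no node caps before switching phases

def run_user_wfa(rp, membrane, idle_timeout=IDLE_TIMEOUT):
    # Phase 1: receive node caps and pass them through the membrane.
    last_recv = time.time()
    while time.time() - last_recv < idle_timeout:
        cap = rp.recv()           # polling recv: None if nothing pending
        if cap is not None:
            membrane.send(cap)    # hand the node cap to the hadoop wfa
            last_recv = time.time()
        else:
            time.sleep(0.1)
    # Phase 2: no node caps for the idle window, so hadoop has everything
    # it is going to get; collect the flow caps it sends back when done.
    flow_caps = []
    while True:
        cap = membrane.recv()
        if cap is None:
            break                 # a real wfa would likely keep polling here
        flow_caps.append(cap)
    return flow_caps
```

The idle timeout stands in for an explicit end-of-stream signal, which (as the later 08 May message notes) the protocol doesn't have.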
- 08 May, 2016 6 commits
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
  - David Johnson authored
Also some Hadoop utility scripts, and wrappers to run the jobs we care about for testing and the like. These don't have the password login Unni had done in expect; right now we don't need that. If we need it later, we can add it as the "frontend" script and then use these for the rest. The other part of these scripts, the part that bakes the hadoop image, is in ../setup-capnet-basic.sh.
  - David Johnson authored
This is a bit odd, because the hadoop service wfa does not know when it's done receiving node caps from the user/tenant wfa, and it has no way to signal the user when it's done setting up. Oh well.
- 07 May, 2016 1 commit
  - David Johnson authored
(Later on we have to fix up the pre-baked key stuff, of course.)