1. 17 Aug, 2017 1 commit
  2. 25 Oct, 2016 7 commits
  3. 19 Oct, 2016 1 commit
  4. 17 Oct, 2016 5 commits
  5. 16 Oct, 2016 4 commits
    • Process WFA logs to detect success/failure; maybe send email. · a46a911b
      David Johnson authored
      We look for Traceback strings as error indicators, and for specific
      strings that indicate success.  Then, if certain email-related env
      vars are set, we'll send email on failure and optionally on success,
      with key bits of the logfiles attached.  The env vars you would set are
      
      TESTEMAIL=you@mail.wherever
      TESTMAILFAILURE=1
      TESTMAILSUCCESS=1
      
      You must set TESTEMAIL to get any mail.  If you set TESTMAILFAILURE
      to 1, you'll get failure notifications.  If you set TESTMAILSUCCESS
      to 1, you'll get notifications for successful tests as well.
      
      Now I don't have to stay up all night babysitting these test runs.
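      
      As a rough sketch of this logic (a minimal sh sketch, not the actual
      script: the logfile argument, the 100-line excerpt, and the mail(1)
      invocation are all assumptions):
      
      #!/bin/sh
      # Classify a WFA logfile and maybe send email, per the env vars above.
      LOGFILE="$1"
      
      # Traceback strings indicate an error; otherwise assume success
      # (the real check also matches specific success strings).
      if grep -q 'Traceback' "$LOGFILE"; then
          STATUS=failure
      else
          STATUS=success
      fi
      
      # No TESTEMAIL, no mail at all.
      if [ -n "$TESTEMAIL" ]; then
          if [ "$STATUS" = failure ] && [ "$TESTMAILFAILURE" = 1 ]; then
              # The "key bits" of the logfile: here, just the last 100 lines.
              tail -n 100 "$LOGFILE" | mail -s "WFA test FAILED" "$TESTEMAIL"
          elif [ "$STATUS" = success ] && [ "$TESTMAILSUCCESS" = 1 ]; then
              tail -n 100 "$LOGFILE" | mail -s "WFA test succeeded" "$TESTEMAIL"
          fi
      fi
      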
    • Handle test dirs vs test names correctly. · 265047ce
      David Johnson authored
      (Note the new extra argument to test-hadoop.sh.)
    • Disable the openstack-slothd collector; fix #6. · c2f35500
      David Johnson authored
      For whatever reason, this seems to be the trigger for issue #6.  We
      are certainly running with a ton of VMs, so the Ceilometer history that
      the collector grabs for CloudLab is very detailed... it would not
      surprise me if the collector runs around the clock in these experiments.
      
      I no longer think this is really our problem, but I'll leave that bug
      open in case I ever have time to investigate.  It certainly
      explains the nondeterministic locks I was seeing in Neutron.
  6. 13 Oct, 2016 2 commits
  7. 29 Sep, 2016 3 commits
  8. 22 Sep, 2016 1 commit
  9. 19 Sep, 2016 2 commits
  10. 16 Sep, 2016 2 commits
  11. 15 Sep, 2016 1 commit
  12. 13 Jun, 2016 2 commits
  13. 14 May, 2016 1 commit
    • Simplify the hadoop image creation and track modified conf files. · 06e1878b
      David Johnson authored
      Lots of little changes here (but the configuration for the paper
      experiments is preserved)... all the hadoop setup scripts and
      config files that are baked into the hadoop VM image are now stored
      here in the source tree, and are also placed in the overall capnet ext
      tarball that osp-capnet.py references.  Thus there is no need for all
      the extra and conf tarballs.  Now we only download hadoop and a
      wordfile (for reproducibility of input) from www.emulab.net when we
      create the hadoop image.
      
      The hadoop config files included here are the ones that we need and
      that are working.  During image creation, they get baked into a tarball
      in the image, and then extracted at VM runtime once the hadoop install
      scripts have unpacked the hadoop tarball.  We wait until runtime to
      unpack hadoop because it's huge; and since the conf dir we use lives
      inside the unpacked tree, our overlay conf tarball must also wait until
      then to be unpacked.
      
      The hadoop config files here are slightly different from Unni's (but of
      course they are the ones we used for the paper): there
      are changes so that the slaves can contact the tracker on the master (I
      think that's what it was), and more importantly, JVM and hadoop memory
      limit adjustments to make the wordcount case work for our experiments.
      I don't know how well they'll work for others... I might have
      inadvertently required that VMs have a minimum of 4096MB of memory :(.
      But that is ok for us.
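      
      A minimal sketch of the bake-then-overlay sequence described above
      (all paths and tarball names, and the Hadoop etc/hadoop conf layout,
      are assumptions for illustration, not the actual script contents):
      
      # At image-creation time: bundle the tracked conf files from the
      # source tree into a tarball that gets baked into the VM image.
      tar -czf /root/hadoop-conf-overlay.tar.gz -C conf .
      
      # At VM runtime: unpack the (huge) hadoop tarball first, since the
      # conf dir we overlay lives inside the unpacked tree...
      tar -xzf /root/hadoop-2.x.y.tar.gz -C /opt
      # ...then extract our conf overlay on top of the stock conf dir.
      tar -xzf /root/hadoop-conf-overlay.tar.gz -C /opt/hadoop-2.x.y/etc/hadoop
      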
  14. 11 May, 2016 1 commit
  15. 10 May, 2016 2 commits
  16. 09 May, 2016 1 commit
  17. 08 May, 2016 1 commit
    • A simple port of Unni's Hadoop expect scripts to sh. · 25f12d8c
      David Johnson authored
      Also includes some Hadoop utility scripts, and wrappers to run the jobs
      we care about for testing.
      
      These don't implement the password login Unni had done in expect; right
      now we don't need that.  If we ever do, we can add it as the "frontend"
      script and then use these for the rest.
      
      The other part of these scripts is the image-baking step, in
      ../setup-capnet-basic.sh.
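      
      As an illustration of the kind of wrapper this adds, a hypothetical
      sh script that pushes a wordfile into HDFS and runs the stock
      wordcount example (paths and the examples jar name are assumptions):
      
      #!/bin/sh
      HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}
      # Stage the input wordfile in HDFS, overwriting any previous copy.
      $HADOOP_HOME/bin/hadoop fs -mkdir -p /input
      $HADOOP_HOME/bin/hadoop fs -put -f /root/wordfile /input/
      # Run the wordcount job we use for testing.
      $HADOOP_HOME/bin/hadoop jar \
          $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
          wordcount /input /output
      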
  18. 07 May, 2016 2 commits
  19. 06 May, 2016 1 commit