1. 17 Oct, 2016 2 commits
  2. 16 Oct, 2016 4 commits
    • Process WFA logs to detect success/failure; maybe send email. · a46a911b
      David Johnson authored
      We look for Traceback strings as error indicators, and specific
      strings to indicate success.  Then, if certain email-related env
      vars are set, we'll send email on failure and maybe on success,
      with key bits of the logfiles attached.  The env vars you would set are:
      
      TESTEMAIL=you@mail.wherever
      TESTMAILFAILURE=1
      TESTMAILSUCCESS=1
      
      You must set TESTEMAIL to get any mail.  If you set TESTMAILFAILURE
      to 1, you'll get failure notifications.  If you set TESTMAILSUCCESS
      to 1, you'll also get successful test notifications.
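
      For instance, a minimal sketch (the test driver invocation is
      illustrative; see test-hadoop.sh for its actual arguments):

        export TESTEMAIL=you@mail.wherever
        export TESTMAILFAILURE=1
        export TESTMAILSUCCESS=1
        ./test-hadoop.sh ...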
      
      Now I don't have to stay up all night babysitting these test runs.
    • Handle test dirs vs test names correctly. · 265047ce
      David Johnson authored
      (Note the new extra argument to test-hadoop.sh.)
    • Disable the openstack-slothd collector; fix #6. · c2f35500
      David Johnson authored
      For whatever reason, this seems to be the trigger for issue #6.  We
      certainly are running with a ton of VMs, so the Ceilometer history that
      the collector grabs for Cloudlab is very detailed... it would not
      surprise me if the collector runs around the clock in these experiments.
      
      I no longer think this is really our problem... but I'll leave that bug
      open in case I ever have time to investigate.  Still, this certainly
      explains the nondeterministic locks I was seeing in Neutron.
  3. 13 Oct, 2016 2 commits
  4. 29 Sep, 2016 3 commits
  5. 22 Sep, 2016 1 commit
  6. 19 Sep, 2016 2 commits
  7. 16 Sep, 2016 2 commits
  8. 15 Sep, 2016 1 commit
  9. 13 Jun, 2016 2 commits
  10. 14 May, 2016 1 commit
    • Simplify the hadoop image creation and track modified conf files. · 06e1878b
      David Johnson authored
      Lots of little changes here (but the experiment configuration for the
      paper experiments is preserved)... now all the hadoop setup scripts and
      config files that are baked into the hadoop VM image are stored here in
      the source tree, and then are also placed in the overall capnet ext
      tarball that osp-capnet.py references.  Thus, there's no longer any
      need for all the extra and conf tarballs.  Now we only download
      hadoop and a wordfile (for
      reproducibility of input) from www.emulab.net when we create the hadoop
      image.
      
      The hadoop config files included here are the ones that we need and are
      working.  During image creation, they get baked into a tarball in the
      image, and then extracted at VM runtime once the hadoop install scripts
      have unpacked the hadoop tarball.  We wait until runtime to unpack
      hadoop because it's huge.  But the conf dir we use is inside the
      unpacked dir, hence the need to wait to unpack our overlay conf tarball.
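
      Roughly, a sketch of that ordering (paths and tarball names here are
      assumptions, and we assume the overlay tarball's contents are rooted
      at the conf dir):

        # Unpack the (huge) hadoop tarball only at VM runtime...
        tar -xzf /tmp/hadoop.tar.gz -C /opt
        # ...then extract the conf overlay into the conf dir that now exists.
        hadoop_dir=$(ls -d /opt/hadoop-* | head -1)
        tar -xzf /tmp/hadoop-conf.tar.gz -C "$hadoop_dir/conf"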
      
      The hadoop config files here are slightly different from Unni's (but of
      course they are the ones we used for the paper); there
      are changes so that the slaves can contact the tracker on the master (I
      think that's what it was), and more importantly, JVM and hadoop memory
      limit adjustments to make the wordcount case work for our experiments.
      I don't know how well they'll work for others... I might have
      inadvertently required that VMs have 4096MB of memory minimum :(.  But
      that is ok for us.
  11. 11 May, 2016 1 commit
  12. 10 May, 2016 2 commits
  13. 09 May, 2016 1 commit
  14. 08 May, 2016 1 commit
    • A simple port of Unni's Hadoop expect scripts to sh. · 25f12d8c
      David Johnson authored
      Also some Hadoop utility scripts, and wrappers to run jobs we care about
      for testing or whatever.
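
      The kind of one-liner these wrappers encapsulate looks like this
      (jar path and HDFS paths are assumptions, not what the scripts use):

        # Run the stock wordcount example over an HDFS input dir.
        hadoop jar "$HADOOP_HOME"/hadoop-examples-*.jar wordcount \
            /user/ubuntu/input /user/ubuntu/output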
      
      These don't have the password login Unni had done in expect.  Right now
      we don't need that.  If we need it later, we can add it as the "frontend"
      script and then use these for the rest.
      
      The other part of these scripts, the part that bakes the hadoop image,
      is in ../setup-capnet-basic.sh.
  15. 07 May, 2016 2 commits
  16. 06 May, 2016 1 commit
  17. 05 May, 2016 4 commits
    • Add a Capnet dhcp Dnsmasq wrapper to stop DNS recursive resolution. · fc2350ec
      David Johnson authored
      Capnet networks cannot get to the external world.  However, the default
      Cloudlab/OpenStack dnsmasq arrangement (of course) specifies an external
      resolver.  This slows all kinds of queries from the VMs, and slows bootup,
      while the local resolver waits for the remote one to time out.
      
      Dnsmasq in openstack doesn't offer per-network config ability, so we
      add some of our own.  There is now a custom capnet dnsmasq config file
      sans external resolver, and the wrapper class strips out any --server
      CLI options that the base class might have added due to the dhcp/dnsmasq
      config file opts.  It warns when it does this.
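
      The custom config boils down to something like this (the file path is
      an assumption; the wrapper points dnsmasq at whatever we actually use):

        # no-resolv stops dnsmasq from reading /etc/resolv.conf; with no
        # server= lines it never forwards, so external lookups fail fast
        # instead of waiting on an unreachable resolver.
        printf 'no-resolv\n' > /etc/neutron/dnsmasq-capnet.conf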
      
      We may not want that behavior in the future; hopefully we remember to
      get rid of it then.  But without this, there's no other way to allow
      recursive public resolution for non-Capnet networks while disallowing
      it for Capnet networks.
    • Autocreate Capnet networks, and user/service tenant projects. · 62951cfb
      David Johnson authored
      This script, setup-capnet-basic.sh, can be run as many times as you
      want... it checks to see if everything it creates already exists.
      
      We create 4 user/service project/user tandems by default.
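
      The idempotent-creation pattern is roughly this (names are made up;
      see setup-capnet-basic.sh for the real ones):

        for i in 1 2 3 4; do
            # Only create what "show" says doesn't exist yet.
            openstack project show "capnet-user-$i" >/dev/null 2>&1 \
                || openstack project create "capnet-user-$i"
            openstack user show "capnet-user-$i" >/dev/null 2>&1 \
                || openstack user create --project "capnet-user-$i" \
                    "capnet-user-$i"
        done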
      
      The idea is that each "user" project is where the project user allocates
      nodes; its master wfa looks up another service wfa at the broker, and
      gives node caps to the service wfa.
      
      The projects and users are generically named for now... we still don't
      run any wfas by default.
  18. 03 May, 2016 1 commit
    • Setup the OpenStack metadata service flows, depending on OS config. · 11d036b4
      David Johnson authored
      Users can configure the metadata service to hook into either the
      dhcp server port or the router port.  In the Cloudlab openstack
      profile, I guess I set it up to run the metadata proxy through the
      router port.  So we just stick with that.
      
      This means that capnet/openstack users in cloudlab will always have to
      add a router to their capnet networks.
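
      I.e., with the stock neutron CLI (names are placeholders):

        # Attach a router so the metadata proxy (which runs via the router
        # port) is reachable from the capnet network.
        neutron router-create capnet-router
        neutron router-interface-add capnet-router capnet-subnet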
  19. 01 May, 2016 1 commit
  20. 30 Apr, 2016 2 commits
    • Bring Cloudlab Capnet scripts up to speed; working; faster. · 7e96fcaf
      David Johnson authored
      Lots of little fixes... one important one is to make the Capnet physical
      LAN name 'capnetlan-N'... this means our OVS bridge (i.e.,
      br-capnetlan-1) has a name under 16 chars; Linux limits interface names
      to 15 characters (IFNAMSIZ is 16, including the trailing NUL).
      
      Hopefully this is a pretty complete configuration; debug/verbose modes
      are enabled by default for all our Neutron stuff (except the minor Nova
      plugin... that's not going to be any trouble).  We don't autocreate any
      tenant Capnet networks thus far, although we could now that we have the
      allpairs workflow app.  Next version.
      
      Since the compile times are so long (we have to build the protobuf main
      lib because the Ubuntu version is too old for proto-c), we trot out pssh
      for some of this.
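
      E.g., something like (hostfile and build dir are assumptions, and we
      assume the remote user can install):

        # Build and install protobuf on all nodes in parallel; -t 0
        # disables pssh's per-host timeout since the build is long.
        pssh -h hosts.txt -t 0 \
            'cd /tmp/protobuf && ./configure && make && make install'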
  21. 29 Apr, 2016 1 commit