1. 14 May, 2016 1 commit
    • David Johnson's avatar
      Simplify the hadoop image creation and track modified conf files. · 06e1878b
      David Johnson authored
      Lots of little changes here (but the experiment configuration for the
      paper experiments is preserved)... now all the hadoop setup scripts and
      config files that are baked into the hadoop VM image are stored here in
      the source tree, and then are also placed in the overall capnet ext
      tarball that osp-capnet.py references.  Thus, no need for all the extra
      and conf tarballs.  Now we only download hadoop and a wordfile (for
      reproducibility of input) from www.emulab.net when we create the hadoop
      image.
      The hadoop config files included here are the ones that we need and are
      working.  During image creation, they get baked into a tarball in the
      image, and then extracted at VM runtime once the hadoop install scripts
      have unpacked the hadoop tarball.  We wait until runtime to unpack hadoop
      because it's huge.  But the conf dir we use is in the unpacked dir,
      hence the need to wait to unpack our overlay conf tarball.
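      The unpack ordering described above can be sketched roughly as follows
      (the function name and archive layout are hypothetical illustrations,
      not the actual setup scripts): the big hadoop tarball is extracted
      first, and only then is the conf overlay tarball extracted over it, so
      the overlay's files land inside (and replace files in) the freshly
      unpacked hadoop dir.

```python
import tarfile

def unpack_with_overlay(hadoop_tar, conf_overlay_tar, dest):
    """Unpack the large hadoop tarball first, then extract the small
    conf overlay tarball on top of it.  The overlay's conf files live
    under the same paths as the stock ones, so extracting second makes
    them win.  (Names and paths here are hypothetical.)"""
    # Step 1: the huge hadoop tarball, deferred to VM runtime.
    with tarfile.open(hadoop_tar) as t:
        t.extractall(dest)
    # Step 2: the conf dir only exists after hadoop is unpacked, which
    # is why the overlay extraction has to wait until this point.
    with tarfile.open(conf_overlay_tar) as t:
        t.extractall(dest)
```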
      The hadoop config files here are slightly different from Unni's (but of
      course they are the ones we used for the paper); there
      are changes so that the slaves can contact the tracker on the master (I
      think that's what it was); and more importantly, JVM and hadoop memory
      limit adjustments to make the wordcount case work for our experiments.
      I don't know how well they'll work for others... I might have
      inadvertently required that VMs have 4096MB of memory minimum :(.  But
      that is ok for us.
  2. 11 May, 2016 1 commit
  3. 10 May, 2016 10 commits
  4. 09 May, 2016 10 commits
  5. 08 May, 2016 6 commits
    • David Johnson's avatar
      Whoops, no host key checking. · 8974bfbc
      David Johnson authored
    • David Johnson's avatar
      Again? · ff885e37
      David Johnson authored
    • David Johnson's avatar
      Bug. · f4259588
      David Johnson authored
    • David Johnson's avatar
      Bug. · 3a3f729f
      David Johnson authored
    • David Johnson's avatar
      A simple port of Unni's Hadoop expect scripts to sh. · 25f12d8c
      David Johnson authored
      Also some Hadoop utility scripts, and wrappers to run jobs we care about
      for testing or whatever.
      These don't have the password login Unni had done in expect.  Right now
      we don't need that.  If we need that, we can add it as the "frontend"
      script and then use these for the rest.
      The other part of these scripts is the part that bakes the hadoop image,
      in ../setup-capnet-basic.sh .
    • David Johnson's avatar
      Hadoop workflow agent tandem. · 51100be1
      David Johnson authored
      This is a bit odd, because the hadoop service wfa does not know when
      it's done receiving node caps from the user/tenant wfa, and it has no
      way to signal the user when it's done setting up.  Oh well.
  6. 07 May, 2016 2 commits
  7. 06 May, 2016 1 commit
  8. 05 May, 2016 9 commits
    • David Johnson's avatar
      Add a Capnet dhcp Dnsmasq wrapper to stop DNS recursive resolution. · fc2350ec
      David Johnson authored
      Capnet networks cannot get to the external world.  However, the default
      Cloudlab/OpenStack dnsmasq arrangement (of course) specifies an external
      resolver.  This slows all kinds of queries from the VMs, and slows bootup,
      while the local resolver waits for the remote one to time out.
      Dnsmasq in openstack doesn't give us per-network config ability, so we
      add some of our own.  There is now a custom capnet dnsmasq config file
      sans external resolver; and the wrapper class strips out any --server
      CLI options that the base class might have added due to the dhcp/dnsmasq
      config file opts.  It warns when it does this.
      We may not want that behavior in the future; hopefully we remember to
      get rid of it then.  But there's no other way to allow recursive public
      resolution for non-capnet networks, and then disallow it for Capnet
      networks, without this.
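      The option-stripping part of the wrapper might look roughly like this
      (a minimal sketch; the helper name and exact CLI handling are
      assumptions, not the actual Neutron driver code):

```python
import logging

def strip_server_opts(cmd):
    """Drop any --server options from a dnsmasq command line, warning
    for each one removed, so the Capnet dnsmasq instance cannot forward
    queries to an external resolver.  (Hypothetical helper illustrating
    what the wrapper class does to the base class's command line.)"""
    out = []
    i = 0
    while i < len(cmd):
        arg = cmd[i]
        if arg.startswith('--server'):
            # Handle both '--server=8.8.8.8' and '--server 8.8.8.8'.
            if arg == '--server' and i + 1 < len(cmd):
                i += 1  # also skip the option's value
            logging.warning('capnet dnsmasq: stripping %s', arg)
        else:
            out.append(arg)
        i += 1
    return out
```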
    • David Johnson's avatar
      Extend the Nova VM name hack in 9ef9388c. · 875d06d0
      David Johnson authored
      For whatever reason, when you create VMs from the command line Nova
      client, the Neutron port "device_owner" field is "compute:None",
      instead of "compute:nova" when you create from the Dashboard.
      Sigh... why does stuff like this happen?
    • David Johnson's avatar
      Send provider:physical_network attr from get_networks if net is shared. · 0162f233
      David Johnson authored
      The default Neutron policy is that the provider:* attributes are only
      sent on a get_networks() call if the caller is an admin.  Well, Capnet
      needs that attribute so it knows which Capnet bridge to put a virtual
      NIC into.  And it turns out that if a non-admin user adds a VM to an
      admin-owned shared network, when Nova sets up the VM, it calls out to
      Neutron to collect network info for the VM -- but it must be doing it as
      the tenant user -- not with its admin powers.  Well, we have to know
      this attribute... so we open up the policy a tiny bit to send the
      provider:physical_network attribute if the network is a shared network.
      So we override that default Neutron policy bit here.
      This is really the wrong thing to do, I suppose, because it leaks
      provider info through get_networks for shared networks.  But the
      alternative is to make a secondary call in our Nova plugin to
      get_networks() with admin creds, and I don't have time for that right
      now.
      (The bit of our Nova plugin that requires this is in
      compute_capnet/network/neutronv2/api.py , where the Nova agent collects
      the port's network info.)
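      The policy override might look roughly like the following policy.json
      fragment (the exact rule string is an assumption for illustration, not
      copied from the commit; Neutron's default for this rule is
      "rule:admin_only"):

```json
{
    "get_network:provider:physical_network": "rule:admin_only or field:networks:shared=True"
}
```

      This keeps the other provider:* attributes admin-only and only relaxes
      the one attribute Capnet needs, and only for shared networks.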
    • David Johnson's avatar
      Add a broker-aware allpairs service master and tenant master. · 48420535
      David Johnson authored
      This is the allpairs wfa again, but this time, the user-tenant-allpairs
      wfa looks up the "allpairs" service at the broker, and as it receives
      its own nodes, it sends those caps to the allpairs service's RP that it
      got from the broker.
      The idea is that you run each of these wfas in a separate tenant.  For
      instance, run the service-tenant-allpairs in the new service-0 tenant:
        neutron capnet-wfagent-create --name service-0-wfa0 \
          --tenant-id <SERVICE_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-service-tenant-allpairs
      Then, in the new tenant-0 tenant, do
        neutron capnet-wfagent-create --name tenant-0-wfa0 \
          --tenant-id <TENANT_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-user-tenant-allpairs
      Then add some nodes into tenant-0, and they will be granted to the
      service-0 wfa0, which will add them to the allpairs mesh, i.e.
        nova boot --image trusty-server --flavor m1.small \
          --nic net-id=9bab982f-80d7-427f-a34b-0bf7d3dcd5bc t1
      where net-id is the id of the Capnet network, and you have changed
      the OS_PROJECT, OS_USERNAME, OS_PASSWORD, OS_TENANT env vars to send
      your resource request from the tenant-0 tenant (I can't see that nova
      boot supports an admin injecting resources on behalf of a tenant, like
      neutron does).
    • David Johnson's avatar
      The Capnet agent should wipe switches when it restarts. · 58168bb9
      David Johnson authored
      (I had taken this out during some debugging...)