1. 10 May, 2016 7 commits
  2. 09 May, 2016 10 commits
  3. 08 May, 2016 6 commits
    • Whoops, no host key checking. · 8974bfbc
      David Johnson authored
    • Again? · ff885e37
      David Johnson authored
    • Bug. · f4259588
      David Johnson authored
    • Bug. · 3a3f729f
      David Johnson authored
    • A simple port of Unni's Hadoop expect scripts to sh. · 25f12d8c
      David Johnson authored
      Also some Hadoop utility scripts, and wrappers to run jobs we care about
      for testing or whatever.
      
      These don't have the password login Unni had done in expect.  Right now
      we don't need that.  If we need that, we can add it as the "frontend"
      script and then use these for the rest.
      
      The other part of these scripts is the piece that bakes the hadoop
      image, in ../setup-capnet-basic.sh.
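      For a rough idea of the shape these sh wrappers take -- hostname, user,
      and jar path below are illustrative placeholders, not the actual script
      contents -- a job wrapper boils down to something like:

        # sketch only: run a Hadoop example job on the master over ssh
        ssh -o StrictHostKeyChecking=no ubuntu@hadoop-master \
          "hadoop jar /usr/share/hadoop/hadoop-examples.jar wordcount /input /output"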
    • Hadoop workflow agent tandem. · 51100be1
      David Johnson authored
      This is a bit odd, because the hadoop service wfa does not know when
      it's done receiving node caps from the user/tenant wfa, and it has no
      way to signal the user when it's done setting up.  Oh well.
  4. 07 May, 2016 2 commits
  5. 06 May, 2016 1 commit
  6. 05 May, 2016 10 commits
    • Add a Capnet dhcp Dnsmasq wrapper to stop DNS recursive resolution. · fc2350ec
      David Johnson authored
      Capnet networks cannot get to the external world.  However, the default
      Cloudlab/OpenStack dnsmasq arrangement (of course) specifies an external
      resolver.  This slows all kinds of queries from the VMs, and slows bootup,
      while the local resolver waits for the remote one to time out.
      
      Dnsmasq in OpenStack doesn't offer per-network config ability, so we
      add some of our own.  There is now a custom capnet dnsmasq config file
      sans external resolver (sketched below); and the wrapper class strips
      out any --server CLI options that the base class might have added due
      to the dhcp/dnsmasq config file opts.  It warns when it does this.
      
      We may not want that behavior in the future; hopefully we remember to
      get rid of it then.  But without this, there is no other way to allow
      recursive public resolution for non-Capnet networks while disallowing
      it for Capnet networks.
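      For illustration, a Capnet-only dnsmasq config along those lines just
      avoids naming any upstream server at all (option names below are stock
      dnsmasq; the exact file contents are a sketch, not the shipped config):

        # capnet dnsmasq config (sketch): answer local DHCP/DNS only
        no-resolv          # don't read /etc/resolv.conf for upstream servers
        # ...and no server=<addr> lines, so there is nothing to forward to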
    • Extend the Nova VM name hack in 9ef9388c. · 875d06d0
      David Johnson authored
      For whatever reason, when you create VMs from the command line Nova
      client, the Neutron port "device_owner" field is "compute:None",
      instead of "compute:nova" when you create from the Dashboard.
      
      Sigh... why does stuff like this happen?
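      An easy way to see the difference (the port UUID below is a placeholder)
      is to look at the port's device_owner field:

        neutron port-show <PORT_UUID> | grep device_owner
        # CLI-booted VM:       | device_owner | compute:None |
        # Dashboard-booted VM: | device_owner | compute:nova |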
    • f5a72aca
    • Send provider:physical_network attr from get_networks if net is shared. · 0162f233
      David Johnson authored
      The default Neutron policy is that the provider:* attributes are only
      sent on a get_networks() call if the caller is an admin.  Well, Capnet
      needs that attribute so it knows which Capnet bridge to put a virtual
      NIC into.  And it turns out that if a non-admin user adds a VM to an
      admin-owned shared network, when Nova sets up the VM, it calls out to
      Neutron to collect network info for the VM -- but it must be doing it as
      the tenant user -- not with its admin powers.  Well, we have to know
      this attribute... so we open up the policy a tiny bit to send the
      provider:physical_network attribute if the network is a shared network.
      
      So we override that default Neutron policy bit here.
      
      This is really the wrong thing to do, I suppose, because it leaks
      provider info through get_networks for shared networks.  But the
      alternative is to make a secondary call in our Nova plugin to
      get_networks() with admin creds, and that I don't have time for right
      now.
      
      (The bit of our Nova plugin that requires this -- where the Nova agent
      collects the port's network info -- is in
      compute_capnet/network/neutronv2/api.py.)
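      The override itself amounts to loosening one entry in Neutron's
      policy.json -- the stock policy defines a "shared" rule
      (field:networks:shared=True) and guards the provider attributes with
      admin_only -- so the changed line looks roughly like this (rule text is
      a sketch based on the stock policy, not a copy of the change):

        "get_network:provider:physical_network": "rule:admin_only or rule:shared",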
    • Add a broker-aware allpairs service master and tenant master. · 48420535
      David Johnson authored
      This is the allpairs wfa again, but this time, the user-tenant-allpairs
      wfa looks up the "allpairs" service at the broker, and as it receives
      its own nodes, it sends those caps to the allpairs service's RP that it
      got from the broker.
      
      The idea is that you run each of these wfas in a separate tenant.  For
      instance, run the service-tenant-allpairs in the new service-0 tenant:
      
        neutron capnet-wfagent-create --name service-0-wfa0 \
          --tenant-id <SERVICE_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-service-tenant-allpairs
      
      Then, in the new tenant-0 tenant, do
      
        neutron capnet-wfagent-create --name tenant-0-wfa0 \
          --tenant-id <TENANT_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-user-tenant-allpairs
      
      Then add some nodes into tenant-0, and they will be granted to the
      service-0 wfa0, which will add them to the allpairs mesh, e.g.
      
        nova boot --image trusty-server --flavor m1.small \
          --nic net-id=9bab982f-80d7-427f-a34b-0bf7d3dcd5bc t1
      
      where net-id is the id of the Capnet network, and you have changed
      the OS_PROJECT, OS_USERNAME, OS_PASSWORD, OS_TENANT env vars to send
      your resource request from the tenant-0 tenant (nova boot doesn't
      appear to support an admin injecting resources on behalf of a tenant
      the way neutron does).
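      For example, before the nova boot above, switching to the tenant-0
      credentials looks something like this (standard OpenStack client
      variable names; the user and password values are placeholders):

        export OS_TENANT_NAME=tenant-0
        export OS_PROJECT_NAME=tenant-0
        export OS_USERNAME=<TENANT_0_USER>
        export OS_PASSWORD=<TENANT_0_PASSWORD>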
    • The Capnet agent should wipe switches when it restarts. · 58168bb9
      David Johnson authored
      (I had taken this out during some debugging...)
    • Autocreate Capnet networks, and user/service tenant projects. · 62951cfb
      David Johnson authored
      This script, setup-capnet-basic.sh, can be run as many times as you
      want... it checks to see if everything it creates already exists.
      
      We create 4 user/service project/user tandems by default.
      
      The idea is that each "user" project is where the project user allocates
      nodes, and its master wfa looks up another service wfa at the broker,
      and gives node caps to the service wfa.
      
      The projects and users are generically named for now... we still don't
      run any wfas by default.
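      The "run it as many times as you want" part is just the usual
      check-then-create idiom; e.g., for one of the generically named
      projects (command form is a sketch, not a copy of the script):

        # create the project only if it doesn't already exist
        openstack project show tenant-0 >/dev/null 2>&1 \
          || openstack project create tenant-0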
  7. 04 May, 2016 2 commits
    • Make VM and wfagent names appear as node names in the Capnet Protocol. · 9ef9388c
      David Johnson authored
      This was tonight's adventure.  Holy cow.  I feel like I need to go take
      a shower.  Apparently, despite the fact that Neutron and Nova have
      coexisted for many years, Nova VMs have a hostname that doesn't resolve
      to anything, and that Neutron knows nothing about.  This causes all
      kinds of local hangups (e.g., sudo, ssh UseDNS), but ok, whatever.
      Neutron has its own "DNS" names for the VMs; Nova has its own.  They
      don't share info.
      
      For us, it matters because we want the tenants to be able to create VMs
      and wfagents with meaningful names, and have those names returned to the
      Capnet Protocol as the node names.
      
      Well, Neutron and Nova do not share this information.  The real way to
      solve this is to ask Nova for the VM name from the Capnet Neutron agent,
      when said agent sees a port binding update.  But we want to process
      those pretty fast, and calling out to Keystone/Nova for a lookup on the
      hot path is quite undesirable.  So instead, since we already have a
      Capnet-specific binding update call that pushes local OVS dpid and
      ofportno from the local node to the controller (which distributes it to
      whichever Capnet agents are writing metadata files), if we can "find" a
      nova VM name locally, we add that to the binding, and then the Neutron
      server adds the name to the ports table in the DB.  Currently, we find
      this by whacking through the libvirt.xml until we find the instance that
      owns the device_name we are sending the binding update for.
      
      It seems that the Neutron people have already staged in db schema
      changes to support their pending new DNS feature.  This is coming, but
      it's not here yet.  So we can use the new table field, but the
      Neutron-Nova DNS thing doesn't exist in Liberty.  Eventually, Nova will
      tell Neutron the name of the port and other DNS information.
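      At a shell level, the libvirt.xml whacking is roughly equivalent to the
      following (the per-instance XML path is where Nova's libvirt driver
      keeps it; the device name below is a placeholder):

        # sketch: find the instance whose domain XML owns the tap device
        # we are sending the binding update for
        DEV=tapXXXXXXXX-XX
        grep -l "$DEV" /var/lib/nova/instances/*/libvirt.xml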
  8. 03 May, 2016 1 commit
    • Setup the OpenStack metadata service flows, depending on OS config. · 11d036b4
      David Johnson authored
      Users can configure the metadata service to hook into either the
      dhcp server port or the router port.  In the Cloudlab openstack
      profile, I guess I set it up to run the metadata proxy through the
      router port.  So we just stick with that.
      
      This means that Capnet/OpenStack users in Cloudlab will always have to
      add a router to their Capnet networks.
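      The two arrangements correspond to the standard Neutron agent options;
      the router-port setup described above looks roughly like this (values
      here are a sketch of that arrangement, not the actual profile config):

        # /etc/neutron/dhcp_agent.ini
        enable_isolated_metadata = False   # don't serve metadata from the dhcp port

        # /etc/neutron/l3_agent.ini
        enable_metadata_proxy = True       # proxy metadata through the router port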
  9. 02 May, 2016 1 commit