1. 05 May, 2016 5 commits
    • David Johnson's avatar
      Send provider:physical_network attr from get_networks if net is shared. · 0162f233
      David Johnson authored
      The default Neutron policy is that the provider:* attributes are only
      sent on a get_networks() call if the caller is an admin.  But Capnet
      needs that attribute so it knows which Capnet bridge to put a virtual
      NIC into.  And it turns out that if a non-admin user adds a VM to an
      admin-owned shared network, when Nova sets up the VM, it calls out to
      Neutron to collect network info for the VM -- but it must be doing
      that as the tenant user, not with its admin powers.  Since we have to
      know this attribute, we open up the policy a tiny bit to send the
      provider:physical_network attribute if the network is a shared network.
      
      So we override that default Neutron policy bit here.
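
      Concretely, the override is a one-line tweak to the policy for that
      attribute.  A sketch of what it might look like in Neutron's
      policy.json (in stock Liberty the default is "rule:admin_only", and
      "shared" is a stock rule matching networks with shared=True; the exact
      line in our tree may read slightly differently):

        "get_network:provider:physical_network": "rule:admin_only or rule:shared",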
      
      This is really the wrong thing to do, I suppose, because it leaks
      provider info through get_networks for shared networks.  But the
      alternative is to make a secondary call in our Nova plugin to
      get_networks() with admin creds, and that I don't have time for right
      now.
      
      (The bit of our Nova plugin that requires this is in
      compute_capnet/network/neutronv2/api.py .)
      0162f233
    • David Johnson's avatar
      Add a broker-aware allpairs service master and tenant master. · 48420535
      David Johnson authored
      This is the allpairs wfa again, but this time, the user-tenant-allpairs
      wfa looks up the "allpairs" service at the broker, and as it receives
      its own nodes, it sends those caps to the allpairs service's RP that it
      got from the broker.
      
      The idea is that you run each of these wfas in a separate tenant.  For
      instance, run the service-tenant-allpairs in the new service-0 tenant:
      
        neutron capnet-wfagent-create --name service-0-wfa0 \
          --tenant-id <SERVICE_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-service-tenant-allpairs
      
      Then, in the new tenant-0 tenant, do
      
        neutron capnet-wfagent-create --name tenant-0-wfa0 \
          --tenant-id <TENANT_0_UUID> \
          --master --wfapp-path=/usr/bin/capnet-wfagent-user-tenant-allpairs
      
      Then add some nodes into tenant-0, and they will be granted to the
      service-0 wfa0, which will add them to the allpairs mesh, e.g.
      
        nova boot --image trusty-server --flavor m1.small \
          --nic net-id=9bab982f-80d7-427f-a34b-0bf7d3dcd5bc t1
      
      where net-id is the id of the Capnet network, and you have changed
      the OS_PROJECT, OS_USERNAME, OS_PASSWORD, OS_TENANT env vars to send
      your resource request from the tenant-0 tenant (I can't see that nova
      boot supports an admin injecting resources on behalf of a tenant, like
      neutron does).
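
      For example (a hypothetical snippet -- the exact variable names depend
      on your openrc file, and the values here are placeholders):

        export OS_PROJECT=tenant-0
        export OS_TENANT=tenant-0
        export OS_USERNAME=<TENANT_0_USERNAME>
        export OS_PASSWORD=<TENANT_0_PASSWORD>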
      48420535
    • David Johnson's avatar
      The Capnet agent should wipe switches when it restarts. · 58168bb9
      David Johnson authored
      (I had taken this out during some debugging...)
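
      (Roughly, "wiping" a switch here means clearing its flow table -- the
      equivalent, from the node hosting the bridge, of something like the
      following, where the bridge name is a placeholder and the agent
      presumably does this over OpenFlow rather than via the CLI:)

        ovs-ofctl del-flows <CAPNET_BRIDGE>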
      58168bb9
    • David Johnson's avatar
      Autocreate Capnet networks, and user/service tenant projects. · 62951cfb
      David Johnson authored
      This script, setup-capnet-basic.sh, can be run as many times as you
      want... it checks to see if everything it creates already exists.
      
      We create 4 user/service project/user tandems by default.
      
      The idea is that each "user" project is where the project user allocates
      nodes, and its master wfa looks up another service wfa at the broker,
      and gives node caps to the service wfa.
      
      The projects and users are generically named for now... we still don't
      run any wfas by default.
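
      A rough sketch of the kind of existence check the script does for each
      project/user tandem (the names and exact CLI calls here are
      illustrative, not necessarily what the script uses):

        if ! openstack project show tenant-0 >/dev/null 2>&1 ; then
            openstack project create tenant-0
            openstack user create --project tenant-0 \
                --password <TENANT_0_PASSWORD> tenant-0-user
        fi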
      62951cfb
  2. 04 May, 2016 2 commits
    • David Johnson's avatar
      Make VM and wfagent names appear as node names in the Capnet Protocol. · 9ef9388c
      David Johnson authored
      This was tonight's adventure.  Holy cow.  I feel like I need to go take
      a shower.  Apparently, despite the fact that Neutron and Nova have
      coexisted for many years, Nova VMs have a hostname that doesn't resolve
      to anything, and that Neutron knows nothing about.  This causes all
      kinds of local hangups (e.g., sudo, ssh UseDNS), but ok, whatever.
      Neutron has its own "DNS" names for the VMs; Nova has its own.  They
      don't share info.
      
      For us, it matters because we want the tenants to be able to create VMs
      and wfagents with meaningful names, and have those names returned to the
      Capnet Protocol as the node names.
      
      Well, Neutron and Nova do not share this information.  The real way to
      solve this is to ask Nova for the VM name from the Capnet Neutron agent,
      when said agent sees a port binding update.  But we want to process
      those pretty fast, and calling out to Keystone/Nova for a lookup on the
      hot path is quite undesirable.  So instead, since we already have a
      Capnet-specific binding update call that pushes local OVS dpid and
      ofportno from the local node to the controller (which distributes it to
      whichever Capnet agents are writing metadata files), if we can "find" a
      nova VM name locally, we add that to the binding, and then the Neutron
      server adds the name to the ports table in the DB.  Currently, we find
      this by whacking through the libvirt.xml until we find an instance that
      owns the device_name we are sending the binding update for.
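
      A sketch of that local lookup (purely illustrative -- it assumes the
      usual /var/lib/nova/instances/<uuid>/libvirt.xml layout and that the
      VM's display name shows up in the domain's <nova:name> metadata; the
      actual plugin code may differ):

        dev=tapXXXXXXXX-XX   # the device_name from the binding update
        for f in /var/lib/nova/instances/*/libvirt.xml ; do
            grep -q "$dev" "$f" || continue
            # Pull the Nova display name out of the instance metadata.
            sed -n 's|.*<nova:name>\(.*\)</nova:name>.*|\1|p' "$f"
            break
        done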
      
      It seems that the Neutron people have already staged in db schema
      changes to support their pending new DNS feature.  This is coming, but
      it's not here yet.  So we can use the new table field, but the
      Neutron-Nova DNS thing doesn't exist in Liberty.  Eventually, Nova will
      tell Neutron the name of the port and other DNS information.
      9ef9388c
  3. 03 May, 2016 1 commit
    • David Johnson's avatar
      Setup the OpenStack metadata service flows, depending on OS config. · 11d036b4
      David Johnson authored
      Users can configure the metadata service to either hook into the
      dhcp server port, or into the router port.  In the Cloudlab openstack
      profile, I guess I set it up to run the metadata proxy through the
      router port.  So we just stick with that.
      
      This means that capnet/openstack users in cloudlab will always have
      to add a router to their capnet networks.
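
      For example (router and subnet names here are placeholders):

        neutron router-create tenant-0-router
        neutron router-interface-add tenant-0-router <CAPNET_SUBNET_ID>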
      11d036b4
  4. 02 May, 2016 4 commits
  5. 01 May, 2016 3 commits
  6. 30 Apr, 2016 5 commits
  7. 29 Apr, 2016 1 commit