1. 29 Mar, 2010 1 commit
  2. 18 Mar, 2010 1 commit
  3. 09 Mar, 2010 1 commit
  4. 03 Mar, 2010 1 commit
  5. 24 Feb, 2010 1 commit
  6. 12 Feb, 2010 1 commit
  7. 04 Feb, 2010 2 commits
    • Jonathon Duerig
    • Leigh B Stoller
      Big cleanup of GeniComponent stuff. Moved Resolve() into GeniComponent · b63cb055
      Leigh B Stoller authored
      since it has to be aware of the CM version. Add a Version() call to
      GeniAuthority which goes and asks the CM what version it is exporting.
      Based on that, we know how to do a resolve of a component. Refactored
      the code that was used in GeniAggregate when creating tunnels, since
      that is where we have to Resolve components. This also turns up in
      cooked mode.
      Continuing to move towards a URN-only world. If a GeniAuthority or a
      GeniComponent does not have the URN set locally in the DB, go back to
      the clearinghouse and get it. Error if it is not known, and go bang on
      the remote site to update and rerun register_resources.
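      The version-aware Resolve described above can be sketched roughly as
      follows. This is a speculative illustration only: Version() and
      Resolve() are named in the commit, but the signatures, version
      numbering, and uuid fallback are assumptions.

```python
# Sketch: discover what API version the CM exports, then resolve the
# component by URN on newer CMs, falling back to uuid on older ones.
# All signatures here are illustrative assumptions, not the real code.

def resolve_component(component, authority):
    version = authority.Version()   # ask the CM what version it exports
    if version >= 2.0 and component.urn:
        return authority.Resolve(component.urn)
    # Older CMs do not understand URNs; resolve by uuid instead.
    return authority.Resolve(component.uuid)
```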
  8. 03 Feb, 2010 1 commit
  9. 06 Jan, 2010 1 commit
    • Leigh B. Stoller
      Slice expiration changes. The crux of these changes: · 5c63cf86
      Leigh B. Stoller authored
      1. You cannot unregister a slice at the SA before it has expired. This
         will be annoying at times, but the alphanumeric namespace for slice
         names is probably big enough for us.
      2. To renew a slice, the easiest approach is to call the Renew method
         at the SA, get a new credential for the slice, and then pass that
         to renew on the CMs where you have slivers.
      The changes address the problem of slice expiration.  Before this
      change, when registering a slice at the Slice Authority, there was no
      way to give it an expiration time. The SA just assigns a default
      (currently one hour). Then when asking for a ticket at a CM, you can
      specify a "valid_until" field in the rspec, which becomes the sliver
      expiration time at that CM. You can later (before it expires) "renew"
      the sliver, extending the time. Both the sliver and the slice will
      expire from the CM at that time.
      Further complicating things is that credentials also have an
      expiration time in them so that credentials are not valid forever. A
      slice credential picks up the expiration time that the SA assigned to
      the slice (mentioned in the first paragraph).
      A problem is that this arrangement allows you to extend the expiration
      of a sliver past the expiration of the slice that is recorded at the
      SA. This makes it impossible to expire slice records at the SA since
      if we did, and there were outstanding slivers, you could get into a
      situation where you would have no ability to access those slivers
      (though an admin can always kill off the sliver).
      Remember, the SA cannot know for sure if there are any slivers out
      there, especially if they can exist past the expiration of the slice.
      The solution:
      * Provide a Renew call at the SA to update the slice expiration time.
        Also allow for an expiration time in the Register() call.
        The SA will need to abide by these three rules:
        1. Never issue slice credentials which expire later than the
           corresponding slice
        2. Never allow the slice expiration time to be moved earlier
        3. Never deregister slices before they expire [*].
      * Change the CM to not set the expiration of a sliver past the
        expiration of the slice credential; the credential expiration is an
        upper bound on the valid_until field of the rspec. Instead, one must
        first extend the slice at the SA, get a new slice credential, and
        use that to extend the sliver at the CM.
      * For consistency with the SA, the CM API will be changed so that
        RenewSliver() becomes RenewSlice(), and it will require the
        slice credential.
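      The two-step renew flow described in this commit can be sketched as
      a hypothetical client. The method names Renew and RenewSlice come
      from the commit message; the proxy objects and argument shapes are
      illustrative assumptions, not the actual SA/CM API.

```python
# Sketch of the renew flow: extend the slice at the SA first, then use
# the fresh slice credential to extend each sliver at the CMs.
# Proxy objects and argument shapes are illustrative assumptions.

def renew_slice_and_slivers(sa, cms, slice_urn, slice_credential, new_expiration):
    # 1. Renew at the Slice Authority. The SA returns a new slice
    #    credential; per the rules above, it never expires later than
    #    the slice itself.
    new_credential = sa.Renew(slice_urn, slice_credential, new_expiration)

    # 2. Pass the new credential to RenewSlice at each CM holding a
    #    sliver. The CM caps sliver expiration at the credential
    #    expiration, so the slivers can now be extended.
    for cm in cms:
        cm.RenewSlice(slice_urn, new_credential)
    return new_credential
```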
  10. 02 Dec, 2009 1 commit
    • Leigh B. Stoller
      Checkpoint. · f83ba977
      Leigh B. Stoller authored
      * More URN issues dealt with.
      * Sliver registration and unregistration (CM to SA).
      * More V2 status stuff.
      * Other fixes.
  11. 06 Nov, 2009 1 commit
  12. 02 Nov, 2009 1 commit
  13. 30 Oct, 2009 1 commit
  14. 29 Oct, 2009 1 commit
  15. 28 Oct, 2009 1 commit
  16. 26 Oct, 2009 1 commit
  17. 01 Oct, 2009 1 commit
  18. 18 Sep, 2009 1 commit
  19. 16 Sep, 2009 1 commit
  20. 04 Aug, 2009 1 commit
  21. 19 Jul, 2009 1 commit
    • Leigh B. Stoller
      Temp fix for the problem of tunnels not working, which was caused by · 4a64493c
      Leigh B. Stoller authored
      a missing url in the certificate for component (node). Why was that?
      Well, when I create a sliver, I use the same uuid as the node, and
      then later when I need to find the node for that sliver, I look for it
      in the nodes table using that uuid. This was bad because for each
      sliver I create a new certificate pair, and thus a new uuid. This
      overwrites the original certificate bound to that node, except the new
      certificate is not created with a URL. This is bad all around, but
      with uuids being replaced by URNs and so close to the demo, I am not
      going to fix this properly, but rather just avoid the problem by
      reusing the existing certificate for the node when creating a sliver.
      Revisit later this week.
  22. 15 Jul, 2009 1 commit
  23. 09 Jul, 2009 1 commit
    • Leigh B. Stoller
      Two big changes · e6c90969
      Leigh B. Stoller authored
      1. Allow use of the shared nodes via the "exclusive" tag in the rspec.
      2. Switch to using the mapper in GetTicket() and in RedeemTicket() as
         per this email I sent:
      * GetTicket(): New rspec comes in and I build a virtual topology as I
       parse the rspec. Basically, virt_nodes and virt_lans table entries,
       which are stored into the DB. The rspec can include wildcards or
       specific nodes; I use the "fixed" slot of the virt_nodes table. Note
       that I am not yet handling fixed ifaces. Nice thing about this is
       that all of the show exp tools work.
       I run the (new) mapper on it in "solution" mode. This does two
       things: 1) wildcards are mapped, and 2) it verifies that the rspec is
       mappable on the local hardware. Solution mode does not actually
       change the DB, but rather it spits out an XML file that I parse
       (note, we eventually will pass the rspec through, but I am not ready
       for that yet). I then allocate the nodes to the holding area, update
       the rspec, create a ticket, and return it.
      * RedeemTicket(): I run the new mapper again, only this time in real
       mode with -update. This is basically a redo of the run above since
       all the nodes are reserved already, but the DB is actually filled
       out this time. I then create the slivers and such.
       The other difference is that instead of creating the vlans by hand,
       I can now run snmpit -t to do the work for me. Ditto for tear down
       with snmpit -r.
       Another bonus is that I can add (missing) IP addresses during the
       initial rspec parse, and the nodes now boot and have their
       interfaces configured. Virtual interfaces too, including the ones
       inside of virtual nodes.
      * All of the above works with shared nodes too:
      "<rspec xmlns=\"http://protogeni.net/resources/rspec/0.1\"> " +\
      " <node virtual_id=\"geni1\" "+\
      "       virtualization_type=\"emulab-vnode\" " +\
      "       virtualization_subtype=\"emulab-openvz\" " +\
      "       exclusive=\"0\"> " +\
      "   <interface virtual_id=\"virt0\"/> " +\
      " </node>" +\
      " <node virtual_id=\"geni2\" "+\
      "       virtualization_type=\"emulab-vnode\" " +\
      "       virtualization_subtype=\"emulab-openvz\" " +\
      "       exclusive=\"0\"> " +\
      "   <interface virtual_id=\"virt0\"/> " +\
      " </node>" +\
      " <link virtual_id=\"link0\"> " +\
      "  <interface_ref " +\
      "            virtual_interface_id=\"virt0\" " +\
      "            virtual_node_id=\"geni1\" " +\
      "            /> " +\
      "  <interface_ref " +\
      "            virtual_interface_id=\"virt0\" " +\
      "            virtual_node_id=\"geni2\" " +\
      "            /> " +\
      " </link> " +\
      The shared nodes boot, and you can ping on the experimental networks.
      * UpdateTicket and UpdateSliver need work as per the mail I sent the
       other day about the state of the sliver between the ticket and the
       sliver operations.
      * Collocation specifications are ignored since we do not have any way
       to specify this to assign when wildcards are used. Rob, I am
       wondering if assign has any tricks we can take advantage of.
      * Still need to commit all the snmpit changes and get that hooked into
       the CM.
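      The shared-node rspec shown in this commit can also be generated
      programmatically; here is a sketch using Python's ElementTree, with
      element and attribute names taken from the example rspec (the
      builder function itself is illustrative, not part of the commit).

```python
import xml.etree.ElementTree as ET

RSPEC_NS = "http://protogeni.net/resources/rspec/0.1"

def shared_node(rspec, virtual_id):
    # A non-exclusive (shared) OpenVZ node, per the example rspec.
    node = ET.SubElement(rspec, "node", {
        "virtual_id": virtual_id,
        "virtualization_type": "emulab-vnode",
        "virtualization_subtype": "emulab-openvz",
        "exclusive": "0",
    })
    ET.SubElement(node, "interface", {"virtual_id": "virt0"})
    return node

rspec = ET.Element("rspec", {"xmlns": RSPEC_NS})
shared_node(rspec, "geni1")
shared_node(rspec, "geni2")

# Link the two nodes' virt0 interfaces, as in the example.
link = ET.SubElement(rspec, "link", {"virtual_id": "link0"})
for node_id in ("geni1", "geni2"):
    ET.SubElement(link, "interface_ref", {
        "virtual_interface_id": "virt0",
        "virtual_node_id": node_id,
    })

xml = ET.tostring(rspec, encoding="unicode")
```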
  24. 05 Jun, 2009 1 commit
  25. 31 Mar, 2009 1 commit
  26. 30 Mar, 2009 1 commit
  27. 18 Mar, 2009 1 commit
    • Leigh B. Stoller
      A set of changes to allow real jails using our existing jails · 02083681
      Leigh B. Stoller authored
      support. Can even create multiple jailed nodes on the same physical
      node. Sorry, no sharing of physical nodes yet (between slices).
      Also no link support yet; coming later.
      The syntax is an extension of the current hack syntax:
              " <node uuid=\"" + node_uuid + "\" " +\
              "       nickname=\"geni1\" "+\
              "       phys_nickname=\"geni1\" "+\
              "       virtualization_type=\"emulab-vnode\" " +\
              "       virtualization_subtype=\"emulab-jail\"> " +\
              " </node>"
      This only works on sites that already can do jails.
  28. 04 Mar, 2009 1 commit
    • Leigh B. Stoller
      Change EMULAB-COPYRIGHT to GENIPUBLIC-COPYRIGHT, for future expansions · 6c8d30fc
      Leigh B. Stoller authored
      to the Geni Public License at http://www.geni.net/docs/GENIPubLic.pdf,
      whose expansion at this time is:
      Permission is hereby granted, free of charge, to any person obtaining
      a copy of this software and/or hardware specification (the "Work") to
      deal in the Work without restriction, including without limitation the
      rights to use, copy, modify, merge, publish, distribute, sublicense,
      and/or sell copies of the Work, and to permit persons to whom the Work
      is furnished to do so, subject to the following conditions:
      The above copyright notice and this permission notice shall be
      included in all copies or substantial portions of the Work.
      THE WORK IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
      OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
      MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
      NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
      BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
      ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
      CONNECTION WITH THE WORK OR THE USE OR OTHER DEALINGS IN THE WORK.
  29. 02 Mar, 2009 1 commit
    • Leigh B. Stoller
      A bunch of changes for a "standalone" clearinghouse. Presently this · 60f04310
      Leigh B. Stoller authored
      is really a hugely stripped-down Emulab boss install, using a very
      short version of install/boss-install to get a few things into place.
      I refactored a few things in both the protogeni code and the Emulab
      code, and whacked a bunch of makefiles and configure stuff. The result
      is that we only need to install about 10-12 files from the Emulab
      code, plus the protogeni code. Quite manageable, if you don't mind
      that it requires FreeBSD 6.X ... Still, I think it satisfies the
      requirement that we have a packaged clearinghouse that can be run
      standalone from a running Emulab site.
  30. 19 Feb, 2009 1 commit
  31. 29 Jan, 2009 1 commit
    • Leigh B. Stoller
      Add two calls to CM interface; SliverStatus() and SliceStatus() which · 85ea76eb
      Leigh B. Stoller authored
      allow you to find out the status of your sliver. As you can guess, one
      takes a sliver credential and the other takes the slice credential.
      See the test scripts test/sliverstatus.py and test/slicestatus.py. Both
      calls return the same thing. In pythonese:
      {'status': 'notready',
       'details': {'de9803c2-773e-102b-8eb4-001143e453fe': 'notready'}}
      The 'status' is a summary of the entire aggregate; if all nodes are
      ISUP then the status is 'ready'.
      The 'details' member is a hash of per-sliver status, indexed by the
      sliver uuid.
      Probably need to define a better set of status indicators, but this is
      the one that was bothering me the most.
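      The summary rule described above ('ready' only when every sliver is
      ready, i.e. ISUP) can be sketched as a minimal illustration; the
      function name is an assumption, while the return structure mirrors
      the pythonese shown above.

```python
# Build the {'status': ..., 'details': ...} return value described
# above: aggregate status is 'ready' only when all slivers are 'ready'.

def summarize_status(details):
    all_ready = all(state == 'ready' for state in details.values())
    return {'status': 'ready' if all_ready else 'notready',
            'details': details}
```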
  32. 23 Jan, 2009 1 commit
  33. 03 Dec, 2008 1 commit
  34. 04 Nov, 2008 1 commit
  35. 27 Oct, 2008 1 commit
  36. 16 Oct, 2008 1 commit
  37. 02 Oct, 2008 1 commit
  38. 25 Sep, 2008 1 commit
  39. 18 Sep, 2008 1 commit