1. 05 Nov, 2004 1 commit
  2. 25 Aug, 2004 1 commit
  3. 24 Jun, 2004 1 commit
    • Mike Hibler authored · 976133e4
      Improve the client-side install. With these changes, it should now be
      possible to:
      
      	gmake client
      	sudo gmake client-install
      
      on an FBSD4, FBSD5, RHL7.3, or RHL9.0 client node.
      
      There are still some dependencies that are not explicit and that would
      prevent a build/install from working on a "clean" OS.  Two that I know of
      are: you must install our version of the elvin libraries, and you must
      install boost.
  4. 10 May, 2004 1 commit
  5. 26 Apr, 2004 1 commit
    • Mike Hibler authored · 297019fb
      Cleanup Makefiles:
      1. "make clean" will just remove stuff built in the process of a regular build
      2. "make distclean" will also clean out configure generated files.
      
      This is how it was always supposed to be; there was just some bitrot.
  6. 20 Apr, 2004 1 commit
    • Mike Hibler authored · 361ee691
      Improve the client-install. You can now do a "make client-install" from
      the top level.  This will build all the necessary binaries and then install
      them.  This works on FBSD4 and RHL7.3.  It still doesn't work on FBSD5
      (newer compiler that no longer supports a style of use of __FUNCTION__ in
      the event lib) or RHL9 (the event lib needs the SSL lib, which has a bad
      dependency on Kerberos).  Notes:
      
      - requires that the elvin libraries be installed on nodes (they are) to
        build the event agents; requires that linuxthreads be installed on FBSD
        (it is now) to build imagezip (which is installed, but is not strictly
        necessary)
      
      - installed event-agents and other binaries are stripped
      
      - added a few missing files to the source tree for bsd (healthd.conf)
        and linux (healthd.conf, rc.local)
      
      - the only thing that doesn't get rebuilt in /usr/local/etc/emulab is
        healthd; I couldn't quickly find how it gets built
      
      - uses a scaled down version of libtb with no DB functions (since mysql
        isn't installed on nodes).  N.B. DO NOT DO A CLIENT INSTALL FROM YOUR
        REGULAR OBJ TREE OR ELSE YOU MAY WIND UP WITH A NEUTERED VERSION OF
        libtb.a!
      
      The build-as-well-as-install semantics are counter to the regular install
      targets, but this is what we gotta do for now.  Once the TB source builds
      under Linux and newer BSDs, we could undo this and just require that people
      do a regular "make" followed by "make client-install".  OTOH, there should
      be no reason to require installation of mysql and other server-side packages
      just to build clients (or make them sit through the compilation of assign),
      so maybe we will keep the client build special.
  7. 06 Apr, 2004 4 commits
  8. 05 Apr, 2004 1 commit
  9. 23 Feb, 2004 1 commit
  10. 09 Feb, 2004 2 commits
  11. 06 Feb, 2004 1 commit
  12. 05 Aug, 2003 2 commits
    • Leigh B. Stoller
    • Leigh B. Stoller authored · 212cc781
      The rest of the sync server additions:
      * Parser: Added new tb command to set the name of the sync server:
      
      	tb-set-sync-server <node>
      
        This initializes the sync_server slot of the experiment entry to the
        *vname* of the node that should run the sync server for that
        experiment. In other words, the sync server is per-experiment, runs
        on a node in the experiment, and the user gets to choose which node
        it runs on.
      
      * tmcd and client side setup. Added new syncserver command which
        returns the name of the syncserver and whether the requesting node
        is the lucky one to run the daemon:
      
          SYNCSERVER SERVER='nodeG.syncserver.testbed.emulab.net' ISSERVER=1
      
        The name of the syncserver is written to /var/emulab/boot/syncserver
        on the nodes so that clients can easily figure out where the server
        is.
      
        Aside: The ready bits are now ignored (no DB accesses are made) for
        virtual nodes; they are forced to use the new sync server.
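
        As a rough sketch, a client could pick apart the reply shown
        above like this (the parsing code is an assumption for
        illustration, not part of the commit; only the reply format is):

          #include <stdio.h>
          #include <string.h>

          /*
           * Parse "SYNCSERVER SERVER='host' ISSERVER=N".
           * Hypothetical helper; names and layout are illustrative only.
           */
          int
          parse_syncserver(const char *reply, char *server, size_t len,
                           int *isserver)
          {
              char buf[256];

              if (sscanf(reply, "SYNCSERVER SERVER='%255[^']' ISSERVER=%d",
                         buf, isserver) != 2)
                  return -1;
              strncpy(server, buf, len - 1);
              server[len - 1] = '\0';
              return 0;
          }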
      
      * New os/syncd directory containing the daemon and the client. The
        daemon is pretty simple. It waits for TCP (and UDP, although that
        path is not complete yet) connections, and reads in a little
        structure that gives the name of the "barrier" to wait for, and an
        optional count of clients in the group (this would be used by the
        "master" who initializes barriers for clients). The socket is saved
        (no reply is made, so the client is blocked) until the count reaches
        zero. Then all clients are released by writing back to the
        sockets, and the sockets are closed. Obviously, the number of
        clients is limited by the number of FDs (open sockets), hence the
        need for a UDP variant, but that will take more work.
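
        A minimal sketch of that little structure and the
        release-at-zero bookkeeping; the struct layout, field names, and
        logic below are assumptions, not the actual os/syncd source:

          #include <unistd.h>

          #define MAXNAME 64      /* barrier names are < 64 bytes (see -n) */
          #define MAXWAIT 128     /* bounded by open FDs, hence the UDP plan */

          struct barrier_req {    /* assumed wire format */
              char name[MAXNAME];
              int  count;         /* 0 = plain waiter, >0 = initialize (-i) */
          };

          struct barrier {
              char name[MAXNAME];
              int  count;         /* goes negative until initialized */
              int  initialized;
              int  socks[MAXWAIT];/* blocked clients: saved, unanswered */
              int  nsocks;
          };

          static void
          barrier_arrive(struct barrier *b, const struct barrier_req *req,
                         int sock)
          {
              int i;

              if (req->count > 0) {
                  b->count += req->count;   /* master adds the waiter count */
                  b->initialized = 1;
              } else
                  b->count--;               /* may go negative; that is fine */

              b->socks[b->nsocks++] = sock; /* no reply yet: caller blocks */

              if (b->initialized && b->count == 0) {
                  /* Release everyone: write back and close the sockets. */
                  for (i = 0; i < b->nsocks; i++) {
                      (void) write(b->socks[i], "done\n", 5);
                      close(b->socks[i]);
                  }
                  b->nsocks = 0;
                  b->initialized = 0;
              }
          }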
      
        The client has a simple command line interface:
      
          usage: emulab-sync [options]
          -n <name>         Optional barrier name; must be less than 64 bytes long
          -d                Turn on debugging
          -s server         Specify a sync server to connect to
          -p portnum        Specify a port number to connect to
          -i count          Initialize named barrier to count waiters
          -u                Use UDP instead of TCP
      
          The client figures out the server by looking for the file created
          above by libsetup (/var/emulab/boot/syncserver). If you do not
          specify a barrier "name", it uses an internal default. Yes, the
          server can handle multiple barriers (differently named of course)
          at once (non-overlapping clients obviously).
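
          A sketch of that lookup; only the file path comes from the
          commit, the helper below is assumed:

            #include <stdio.h>
            #include <string.h>

            #define BOOTFILE "/var/emulab/boot/syncserver"

            /* Hypothetical helper: read the sync server's name. */
            static int
            get_syncserver(char *server, int len)
            {
                FILE *fp;

                if ((fp = fopen(BOOTFILE, "r")) == NULL)
                    return -1;
                if (fgets(server, len, fp) == NULL) {
                    fclose(fp);
                    return -1;
                }
                fclose(fp);
                server[strcspn(server, "\n")] = '\0'; /* strip newline */
                return 0;
            }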
      
          Clients can wait on a barrier before it is "initialized."  The count
          on the barrier just goes negative until someone initializes the
          barrier using the -i option, which adds the specified count.
          Therefore, the master does not have to arrange to get there
          "first." As an example, consider a master and one client:
      
      	nodeA> /usr/local/etc/emulab/emulab-sync -n mybarrier
      	nodeB> /usr/local/etc/emulab/emulab-sync -n mybarrier -i 1
      
          Node A waits until Node B initializes the barrier (gives it a
          count).  The count is the number of *waiters*, not including the
          master. The master is also blocked until all of the waiters have
          checked in.
      
          I have not made any provision for timeouts or crashed clients.  Let's
          see how it goes.