1. 23 Dec, 2003 1 commit
  2. 15 Oct, 2003 1 commit
    • Uniform syslog'ing. Change everything I could find to use a syslog facility · cc6d6fa7
      Mike Hibler authored
      as defined in the defs-* file (e.g., "TBLOGFACIL=local2").  The default is
      "local5", which is what we are set up to use, so you shouldn't need to mess
      with your defs- file!
      
      Perl scripts just get this value configured in when configure is run.
      C programs get the value in two ways.  Programs that are intimate with
      the testbed infrastructure and include "config.h" just get it from
      that file.  Programs that we sometimes use outside the Emulab build
      environment (e.g., frisbee, capture) and that don't include config.h
      get the value via a "-DLOG_TESTBED=..." in the GNUmakefile build line.
      If the value isn't set, it defaults to what it used to be (usually LOG_USER).
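
      A minimal sketch of that fallback pattern, assuming a program built
      without config.h and without the -DLOG_TESTBED flag (the ident string
      "capture" is just an example):

          #include <syslog.h>

          /* Default to the old behavior when the build didn't set a facility. */
          #ifndef LOG_TESTBED
          #define LOG_TESTBED LOG_USER
          #endif

          int main(void) {
              openlog("capture", LOG_PID, LOG_TESTBED);
              syslog(LOG_INFO, "logging via the configured facility");
              closelog();
              return 0;
          }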
      
      Still to do: healthd, hmcd (whose build doesn't seem to be completely
      integrated), and plabdaemon.in (since it's icky python :-)
  3. 07 Jul, 2002 1 commit
  4. 04 Apr, 2002 1 commit
    • First round of ssl'ification of tmcd/tmcc. This needs to be looked at · ffe40d2e
      Leigh B. Stoller authored
      by smarter brains than me (I have asked Dave to look it over). Anyway ...
      
      I added a top-level ssl directory which has a bunch of goo for
      creating certificates and keys.  I currently create a Certificate
      Authority, a server certificate, and a client certificate. The private
      keys for all three are unencrypted, so no password is required. All
      key/cert combos can be installed on boss. The client side needs the
      key/cert pair (in one file), and the CA cert (no key!). There are
      install targets to do this. NOTE, you do not want to create/install
      these without being careful, since you could instantly invalidate all
      the clients!
      
      I have added the necessary SSL routines to tmcd/tmcc. See the ssl.c
      and ssl.h files. I have set it up so that all you need to do is
      uncomment three lines in the makefile, and accept, connect, read, write,
      and close are redirected to SSL'ified versions in ssl.c. The current
      security model is that the client and server both "demand" certificate
      verification from the other side (as opposed to just server side
      verification). tmcd reads in server.pem, while tmcc reads in
      client.pem. Both read in the emulab.pem (CA cert with no private
      key).
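
      A rough sketch of that mutual-verification setup with the OpenSSL API
      (illustrative only, not the actual ssl.c; error checking omitted; the
      file names client.pem and emulab.pem are the ones described above):

          #include <openssl/ssl.h>

          SSL_CTX *client_ctx_init(void) {
              SSL_library_init();
              SSL_load_error_strings();
              SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());

              /* Client key+cert pair live in one file, per the notes above. */
              SSL_CTX_use_certificate_chain_file(ctx, "client.pem");
              SSL_CTX_use_PrivateKey_file(ctx, "client.pem", SSL_FILETYPE_PEM);

              /* CA cert (no private key) used to verify the peer. */
              SSL_CTX_load_verify_locations(ctx, "emulab.pem", NULL);

              /* "Demand" certificate verification from the other side. */
              SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER |
                                 SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL);
              return ctx;
          }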
      
      Initial testing indicates I have done this at least partially
      correctly. Whoever invented this stuff has a really twisted mind
      though. There are some questions at the top of ssl.c that need to be
      answered.
      
      Oh, also redid all the syslog stuff throughout tmcd.
  5. 05 Mar, 2002 1 commit
  6. 28 Feb, 2002 1 commit
  7. 26 Feb, 2002 1 commit
  8. 24 Feb, 2002 1 commit
  9. 14 Jan, 2002 1 commit
  10. 10 Jan, 2002 1 commit
  11. 07 Jan, 2002 1 commit
    • Checkpoint first working version of Frisbee Redux. This version · 86efdd9e
      Leigh B. Stoller authored
      requires the Linux threads package to give us kernel-level pthreads.
      
      From: Leigh Stoller <stoller@fast.cs.utah.edu>
      To: Testbed Operations <testbed-ops@fast.cs.utah.edu>
      Cc: Jay Lepreau <lepreau@cs.utah.edu>
      Subject: Frisbee Redux
      Date: Mon, 7 Jan 2002 12:03:56 -0800
      
      Server:
      The server is multithreaded. One thread takes in requests from the
      clients and adds each request to a work queue. The other thread processes
      the work queue in FIFO order, spitting out the desired block ranges. A
      request is a chunk/block/blockcount tuple, and most of the time the clients
      are requesting complete 1MB chunks. The exception of course is when
      individual blocks are lost, in which case the clients request just those
      subranges.  The server is totally asynchronous; it maintains a list of who
      is "connected", but that's just to make sure we can time the server out
      after a suitable inactive period. The server really only cares about the work
      queue; as long as the queue is non-empty, it spits out data.
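
      A bare-bones sketch of that two-thread work queue using pthreads
      (illustrative, not the actual frisbee source; names and error handling
      are simplified):

          #include <pthread.h>
          #include <stdlib.h>

          /* A request is a chunk/block/blockcount tuple, as described above. */
          struct request {
              unsigned int chunk, block, blockcount;
              struct request *next;
          };

          static struct request *qhead, *qtail;
          static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

          /* Called by the thread that takes in client requests. */
          void enqueue(unsigned int chunk, unsigned int block, unsigned int count) {
              struct request *req = malloc(sizeof(*req));
              req->chunk = chunk; req->block = block; req->blockcount = count;
              req->next = NULL;
              pthread_mutex_lock(&qlock);
              if (qtail) qtail->next = req; else qhead = req;
              qtail = req;
              pthread_cond_signal(&qcond);
              pthread_mutex_unlock(&qlock);
          }

          /* Called by the thread that spits out block ranges, in FIFO order. */
          struct request *dequeue(void) {
              pthread_mutex_lock(&qlock);
              while (qhead == NULL)
                  pthread_cond_wait(&qcond, &qlock);
              struct request *req = qhead;
              qhead = req->next;
              if (qhead == NULL) qtail = NULL;
              pthread_mutex_unlock(&qlock);
              return req;
          }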
      
      Client:
      The client is also multithreaded. One thread receives data packets and
      stuffs them in a chunkbuffer data structure. This thread also requests more
      data, either to complete chunks with missing blocks, or to request new
      chunks. Each client can read ahead up to 2 chunks, although with multiple
      clients it might actually be much further ahead, since it also receives chunks
      that other clients requested. I set the number of chunk buffers to 16,
      although this is probably unnecessary as I will explain below. The other
      thread waits for chunkbuffers to be marked complete, and then invokes the
      imageunzip code on that chunk. Meanwhile, the receiving thread is busily getting
      more data and requesting/reading ahead, so that by the time the unzip is
      done, there is another chunk to unzip. In practice, the main thread never
      goes idle after the first chunk is received; there is always a ready chunk
      for it. Perfect overlap of I/O! In order to prevent the clients from
      getting overly synchronized (and causing all the clients to wait until the
      last client is done!), each client randomizes its block request order. This is
      why we can retain the original frisbee name; clients end up catching random
      blocks flung out from the server until they have all the blocks.
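
      That randomization could be as simple as a Fisher-Yates shuffle of the
      request order (a hypothetical sketch, not the actual client code):

          #include <stdlib.h>

          /* Shuffle the order in which this client will issue its requests,
             so clients don't all converge on the same blocks at once. */
          void shuffle_order(unsigned int *order, int n) {
              for (int i = n - 1; i > 0; i--) {
                  int j = rand() % (i + 1);
                  unsigned int tmp = order[i];
                  order[i] = order[j];
                  order[j] = tmp;
              }
          }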
      
      Performance:
      The single node speed is about 180 seconds for our current full image.
      Frisbee V1 compares at about 210 seconds. The two node speeds were 181 and
      174 seconds. The amount of CPU used for the two node run ranged from 1% to
      4%, typically averaging about 2% while I watched it with "top".
      
      The main problem on the server side is how to keep boss (1GHz with a Gbit
      ethernet) from spitting out packets so fast that 1/2 of them get dropped. I
      eventually settled on a static 1ms delay every 64K of packets sent. Nothing
      to be proud of, but it works.
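
      A sketch of that static pacing (a hypothetical helper, not the server's
      actual send path; the 64K/1ms constants are the ones mentioned above):

          #include <sys/types.h>
          #include <sys/socket.h>
          #include <unistd.h>

          #define BURST_BYTES (64 * 1024)

          static size_t sent_since_delay;

          /* Send a packet, pausing 1ms after every 64K so a fast server
             doesn't overrun the clients. */
          ssize_t paced_send(int sock, const void *buf, size_t len) {
              ssize_t n = send(sock, buf, len, 0);
              if (n > 0) {
                  sent_since_delay += (size_t)n;
                  if (sent_since_delay >= BURST_BYTES) {
                      usleep(1000);   /* static 1ms delay */
                      sent_since_delay = 0;
                  }
              }
              return n;
          }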
      
      As mentioned above, the number of chunk buffers is 16, although only a few
      of them are used in practice. The reason is that the network transfer speed
      is perhaps 10 times faster than the decompression and raw device write
      speed. To know for sure, I would have to figure out the per-byte transfer
      rate for 350MB via the network, versus the time to decompress and write the
      1.2GB of data to the raw disk. With such a big difference, it's only
      necessary to ensure that you stay 1 or 2 chunks ahead, since you can
      request 10 chunks in the time it takes to write one of them.