1. 12 Jan, 2004 1 commit
    • Another bug fix: the newly added $ns ip-connect instproc had a bug. · b113d029
      Shashi Guruprasad authored
      The code originally tried to do a normal $ns connect between traffic
      agents attached to simnodes on the same pnode. The problem I had of
      course forgotten is that a partitioned topology is quite disconnected,
      which means a packet is often forced to exit the pnode and come back
      to it. In other words, a direct intra-pnode path does not exist. The
      fix is to always use the IP-address-based routes. A similar problem
      exists in pdns as well; however, since IP-address-based routing is not
      used there, there is no simple fix unless I work on it!
      
      The 416-node topology testbed/nse416 is working all right. It mapped
      to 20 pnodes, and as soon as a whole bunch of traffic started up, 7
      pnodes couldn't keep up with real time and triggered an experiment
      modify. The modify happened 3 times before max_retries in my
      re-swapping code was reached. This needs more measurement and tuning,
      as well as event-rate-based re-swapping.
  2. 07 Jan, 2004 1 commit
    • Yet another bugfix, plus code to send an NSESWAP event when nse · ac01c40b
      cannot keep up with real time.
      Shashi Guruprasad authored
      
      The bug affected encapsulated simulator packets that had to cross
      multiple physical nodes before arriving at the destination simulator
      traffic agent. It didn't affect live packets from traffic sources
      on real PCs.
      
      The NSESWAP event is now sent via the tevc command. The nse scheduler
      waits until the slop factor (the threshold on the difference between
      the real-time clock and an event's dispatch time) is exceeded
      multiple times in a second before sending the NSESWAP event;
      currently 5 times in 1 second. However, this needs more careful
      thought and will get modified later. When is it really necessary to
      declare that an nse is overloaded? That is, what is the right slop
      factor, and how many times can we tolerate the slop factor being
      exceeded while keeping end-to-end performance within a certain
      percentage of what is expected?
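      The overload test described above can be sketched roughly as follows.
      This is an illustrative Python sketch, not the nse scheduler code;
      the class and parameter names, and the default threshold and window
      values, are assumptions.

```python
# Hypothetical sketch of the overload test: count how many times per
# second the event-dispatch lag exceeds the slop threshold, and signal
# NSESWAP once it has been exceeded 5 times within one second.
from collections import deque

class OverloadDetector:
    def __init__(self, slop_threshold=0.01, max_exceed=5, window=1.0):
        self.slop_threshold = slop_threshold  # tolerated lag, in seconds
        self.max_exceed = max_exceed          # exceedances allowed per window
        self.window = window                  # window length, in seconds
        self.exceed_times = deque()           # timestamps of recent exceedances

    def on_dispatch(self, now, event_time):
        """Return True if an NSESWAP event should be sent."""
        slop = now - event_time               # how late the event fired
        if slop <= self.slop_threshold:
            return False
        self.exceed_times.append(now)
        # Drop exceedances older than the window.
        while self.exceed_times and now - self.exceed_times[0] > self.window:
            self.exceed_times.popleft()
        return len(self.exceed_times) >= self.max_exceed
```

      The open questions above map onto the two tunables: slop_threshold
      (what counts as falling behind) and max_exceed (how often that can
      happen before the simulation is declared overloaded).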
  3. 05 Jan, 2004 1 commit
    • nse would fail if there were only rlinks on a simnode. · 0807f0e5
      Shashi Guruprasad authored
      The problem wouldn't occur if a normal duplex link was created first.
      The reason is that creating an rlink would also internally create an
      IPTap agent, and when this agent was attached to the node, classifier
      entries were added to the port demultiplexer for all the IP addresses
      of the simnode. Unfortunately, if an rlink was created before
      anything else, the IP address would not be set until the next Tcl
      statement, causing nse to exit prematurely.
  4. 15 Dec, 2003 1 commit
    • Distributed NSE changes: simulation resources are now mapped to · d266bd71
      more than one PC if required.
      Shashi Guruprasad authored
      The simnode_capacity column in the node_types table determines how
      many sim nodes can be packed onto one PC. The packing factor can also
      be set smaller than simnode_capacity via tb-set-colocate-factor.
      
      - No frontend code changes. To summarize:
        $ns make-simulated {
          ...
        }
        is still the easy way to put a whole bunch of Tcl code into
        simulation.
        One unrelated fix in the frontend code is to the xmlencode()
        function, which previously knocked newlines off columns in the XML
        output. This affected nseconfigs, since it is one of the few
        columns with embedded newlines. Also changed the event type and
        event object type in traffic.tcl from TRAFGEN/MODIFY to
        NSE/NSEEVENT.
      
      - More Tcl code in a new directory tbsetup/nseparse
        -> Runs on ops similar to the main parser. This is invoked
           from assign_wrapper in the end if there are simnodes
        -> Partitions the Tcl code into multiple Tcl specifications
           and updates the nseconfigs table via xmlconvert
        -> Comes with a lot of caveats. Arbitrary Tcl code, such as
           user-specified objects or procedures, will not be re-generated.
           For example, if a user wanted a procedure included in the Tcl
           code for all partitions, there is no way for code in nseparse to
           do that. Besides that, it needs to be tested more thoroughly.
      
      - xmlconvert has a new option -s. When invoked with this option,
        the experiments table is not allowed to be modified. Also,
        virtual tables are just updated (as opposed to deleting
        all rows in the first invocation before inserting new rows)
      
      - nse.patch has all the IP address related changes committed in
        version 1.11, plus 2 other changes: 1) MTU discovery support in
        the ICMP agent, and 2) the "$ns rlink" mechanism for links between
        sim nodes and real nodes.
      
      - nseinput.tcl includes several client side changes to add IP
        routes in NSE and the kernel routing table for packets crossing
        pnodes. Also made the parsing of tmcc command output more robust to
        new changes. Other client side changes, in libsetup.pm and other
        scripts that run nse, are also in this commit.
      
      - Besides the expected changes in assign_wrapper for simulated nodes,
        the interfaces and veth_interfaces tables are updated with
        routing table identifiers (rtabid). The tmcd changes are already
        committed. This field is used only by sim hosts on the client side.
        Of course, they can be used by jails as well if desired.
  5. 05 Nov, 2003 2 commits
  6. 16 Oct, 2003 1 commit
    • Distributed nse changes · 1630611a
      Shashi Guruprasad authored
      1) IP address based routes can now be added
         - The IP address is set on a link object
         - An "$ns rlink" is used to instantiate links
           that get cut and cross physical partitions
         - Traffic agents that are across physical
           partitions (i.e. different instances of nse)
           are connected by a new "$ns ip-connect"
           mechanism
         - A new Node instproc "add-route-to-ip" adds
           IP address based routes.
         - Changed ns multicast addressing to use 3 bits
           instead of the default 1
         - Currently, the classifier does a lookup on a
           complete 32 bit IP and if a target to route to
           is not found, uses a 24 bit IP mask. It does not
           try to match IP prefixes of all lengths. I'll add
           that later if necessary
      2) NS packets that cross partitions are encapsulated in
         IPPROTO_ENCAP IP packets.
      3) Raw IP sockets used to inject packets into the network
         now take a rtabid parameter so that packets can be
         routed according to different routing tables
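      The classifier lookup described in item 1 (an exact 32-bit match,
      then a single 24-bit-mask fallback, with no general longest-prefix
      matching) can be sketched as follows. This is an illustrative Python
      sketch, not nse code; all names are hypothetical.

```python
# Hypothetical sketch of the two-stage route lookup: try the full
# 32-bit destination first, then fall back to its /24 network.

def ip_to_int(ip):
    """Convert dotted-quad notation to a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

class Classifier:
    def __init__(self):
        self.targets = {}  # int IP (host or /24 network) -> next-hop target

    def add_route(self, ip, target):
        self.targets[ip_to_int(ip)] = target

    def lookup(self, dst):
        key = ip_to_int(dst)
        if key in self.targets:                    # full 32-bit match
            return self.targets[key]
        return self.targets.get(key & 0xFFFFFF00)  # 24-bit mask fallback
```

      Matching prefixes of all lengths would require a trie or a list of
      masks tried longest-first, which is the extension deferred above.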
      
      Tested with 2 test cases, one with UDP/CBR traffic
      and another with default NS TCP/FTP traffic. Setup was done
      manually. As I do testbed integration, there may be more changes.
      Here's the test setup:
      
         2.2    2.3   1.2      1.3   3.2      3.3
      n0 --------- n1 ----------- n2 ------------ n3
      
      n0,n1 are on one physical node and n2,n3 are on another. The n1-n2 link
      is cut.
      
      A TCP example:
      
      ---------------------physnode0---------------------
      set ns [new Simulator]
      $ns use-scheduler RealTime
      
      set n0 [$ns node]
      set n1 [$ns node]
      
      $ns duplex-link $n0 $n1 10Mb 5ms DropTail
      [$ns link $n0 $n1] set-ip 10.1.2.2
      [$ns link $n1 $n0] set-ip 10.1.2.3
      
      set rl0 [$ns rlink $n1 10.1.1.3 2Mb 40ms DropTail]
      $rl0 set-ip 10.1.1.2
      
      set tcp0 [new Agent/TCP]
      # The last parameter specifies the port
      $ns attach-agent $n0 $tcp0 20
      $ns ip-connect $tcp0 10.1.3.3 20
      set ftp0 [new Application/FTP]
      $ftp0 attach-agent $tcp0
      
      $n0 add-route-to-ip 10.1.3.3 10.1.2.3
      $n1 add-route-to-ip 10.1.3.3 10.1.1.3
      
      $ns at 1.0 "$ftp0 start"
      $ns at 10.0 "$ftp0 stop"
      -----------------end physnode0---------------------
      
      ---------------------physnode1---------------------
      set ns [new Simulator]
      $ns use-scheduler RealTime
      
      set n2 [$ns node]
      set n3 [$ns node]
      
      $ns duplex-link $n2 $n3 10Mb 5ms DropTail
      [$ns link $n2 $n3] set-ip 10.1.3.2
      [$ns link $n3 $n2] set-ip 10.1.3.3
      
      set rl1 [$ns rlink $n2 10.1.1.2 2Mb 40ms DropTail]
      $rl1 set-ip 10.1.1.3
      
      set tcpsink0 [new Agent/TCPSink]
      $ns attach-agent $n3 $tcpsink0 20
      $ns ip-connect $tcpsink0 10.1.2.2 20
      
      $n3 add-route-to-ip 10.1.2.2 10.1.3.2
      $n2 add-route-to-ip 10.1.2.2 10.1.1.2
      -----------------end physnode1---------------------
  7. 06 Jun, 2003 1 commit
  8. 30 Jan, 2003 1 commit
    • Fixed an error made by ISI during last year's rewrite of the · cbe7fc8b
      RTSched code.
      Shashi Guruprasad authored
      The code appeared to be right but was actually causing errors to
      accumulate. This fixes the high error rates in the bandwidth and
      loss rate numbers reported in OSDI. There are also code
      optimizations, made after profiling, that reduce the RTSched
      overhead.
      
      Another source of error was send/consume/request/reply, which used
      to be given a very rough estimate of the CPU speed (600 instead of
      601.37 MHz, for example). The latter comes from the boot-up
      calibration in FreeBSD, which is supposed to be accurate to within
      10 microseconds on a 486. Using 600 instead of 601.37 causes an
      error of 0.22% in the measurement. That is about 1.3 ms for an RTT
      of 600 ms. The error is worse when send/consume are used to
      calculate throughputs: the longer the measurement period, the worse
      it becomes. I have committed changes in them as well. Defining the
      macro CPU_SPEED_IN_KHZ will get you kernels that take CPU_SPEED
      parameters in kHz instead of MHz, so you can specify 851940 instead
      of 850 for a pc850.
      
      boss:/tftpboot/x86/{send,consume,request,reply}.cpuinkhz lets you
      specify CPU_SPEED in kHz.
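      The error arithmetic above can be checked directly. This is an
      illustrative Python calculation, not Emulab code; the exact figure
      rounds to about 0.23%, i.e. roughly the 0.22% quoted above,
      depending on whether the error is taken relative to the calibrated
      or the assumed speed and how it is rounded.

```python
# Dividing TSC cycle counts by an assumed 600 MHz, when the calibrated
# rate is 601.37 MHz, stretches every measured time interval.

actual_mhz = 601.37   # boot-time calibrated speed
assumed_mhz = 600.0   # rough value handed to send/consume/request/reply

# Relative timing error from using the wrong cycle rate
rel_error = (actual_mhz - assumed_mhz) / actual_mhz
print(f"relative error: {rel_error * 100:.2f}%")    # prints 0.23%

rtt_ms = 600.0
print(f"error on a 600 ms RTT: {rtt_ms * rel_error:.2f} ms")  # prints 1.37 ms
```

      Because the error is multiplicative, a throughput measured over a
      long period inherits the same relative error over the entire span,
      which is why send/consume throughput numbers suffer most.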
  9. 15 Nov, 2002 1 commit
  10. 09 Nov, 2002 1 commit
    • Updated nse.patch and tbnexthop.{cc,h} with all the recent nse changes. · fc580b5a
      Shashi Guruprasad authored
      tbnexthop.{cc,h} now contains setsockopts to install "ipfw fwd" rules.
      
      <netinet/ip_fw.h> changed between FreeBSD 4.3 and 4.5. Because boss
      runs 4.3, compiling the ipfw code on boss and running it on an
      experimental node doesn't work. Therefore, I now have a local copy
      of the 4.5 version of the file checked into CVS.
      
      nseinput.tcl now finds the CPU cycle speed from /var/run/dmesg.boot
      and passes it to nse's RT scheduler, which keeps track of real time
      using the TSC. The same info can be obtained via PERFMON ioctls, but
      the kernel's boot-time measurement of the CPU cycle speed is more
      accurate than what perfmon can report.
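      Extracting the calibrated speed from /var/run/dmesg.boot might look
      like the following sketch. This is illustrative Python, not the
      actual nseinput.tcl code, and the dmesg line format is an assumption
      based on typical FreeBSD 4.x boot output.

```python
# Hypothetical sketch: pull the calibrated CPU frequency out of a
# FreeBSD boot-message line such as
#   CPU: Pentium III/Pentium III Xeon/Celeron (601.37-MHz 686-class CPU)
import re

def cpu_mhz_from_dmesg(text):
    """Return the calibrated CPU speed in MHz, or None if not found."""
    m = re.search(r"\((\d+(?:\.\d+)?)-MHz", text)
    return float(m.group(1)) if m else None
```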
  11. 07 Oct, 2002 1 commit
  12. 06 Oct, 2002 1 commit
  13. 19 Jun, 2002 1 commit
  14. 13 Jun, 2002 1 commit
  15. 12 Jun, 2002 1 commit
  16. 20 Mar, 2002 1 commit