Commit 6d9b210e authored by Pramod R Sanaga

Sigcomm paper directory and a list of notes.

parent f779f41b
Cloud model:
1) Each node has a separate bandwidth pipe to every possible destination.
- If a node cap is being applied to some or all of a node's destinations,
then the simple cloud model fails when the sum of the bandwidths of all flows used by the
application exceeds the node cap.
- The cloud model does not take into account multiple flows between the same pair
of hosts - it probably results in lower throughput
a) if the real-world flows are limited by the bandwidth-delay product and not
by congestion, or
b) if the bottleneck link has more than a modest level of statistical
multiplexing. In this case, creating two flows gives 2x throughput - whereas the
cloud only gives 'x' total throughput for any number of flows between a pair of hosts.
- It also does not work when the bottleneck is the last-mile link on the
downstream side. Two streams from two different nodes might get bottlenecked at the
destination node - we do not handle this case at the moment.
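The node-cap failure above is easy to state as a sketch. This is illustrative only (the function name and numbers are made up, not taken from our model code): the cloud model's independent per-destination pipes stay consistent only while the application's total flow bandwidth stays under the node cap.

```python
# Sketch: detecting when the simple cloud model breaks under a node cap.
# All names and numbers are illustrative, not from the actual model code.

def cloud_model_valid(flow_bw_kbps, node_cap_kbps):
    """The per-destination cloud model only holds while the sum of the
    application's flow bandwidths stays under the node cap."""
    return sum(flow_bw_kbps) <= node_cap_kbps

# Three flows, each measured at 400 kbps, against a 1 Mbps node cap:
# the independent pipes would over-promise 1200 kbps total.
print(cloud_model_valid([400, 400, 400], 1000))  # -> False
```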
2) Shared config:
- This is a simplification which is applicable mostly to slow DSL and commodity
internet nodes.
- Heavily used Inet2 nodes on Planetlab might also exhibit this behaviour -
if their last mile access is congested.
a) However, DSL/Inet nodes have relatively low statistical multiplexing. The
only flows congesting the link are our own experiment flows.
-- Usually the RTT variation is considerable even without active IPerf
measurement by us, so we can easily identify this type of bottleneck.
b) Congested Inet2 nodes have high statistical multiplexing, but the RTT remains
relatively stable unless measurement flows are introduced -- it is tricky to identify
which destinations share the bottleneck - Heisenberg effects.
- The trouble here is that we have no idea how far out the bottleneck
link is and how many destination routes pass through that bottleneck.
- Assuming we identify and group destinations sharing a bottleneck, how do we
set the bandwidth for each group?
- We assume high multiplexing, so each new flow should get the same amount
of throughput.
- What about flows going to different destinations with different RTTs?
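The per-group allocation described above can be sketched as follows. This is an assumption-laden illustration, not our actual grouping code: under the high-multiplexing assumption, each flow crossing the shared bottleneck gets an equal share of the group's bandwidth.

```python
# Sketch: setting per-flow bandwidth once destinations sharing a bottleneck
# have been grouped. Under the high-multiplexing assumption each new flow
# through the bottleneck gets the same share. Illustrative only.

def per_flow_share(bottleneck_bw_kbps, num_flows):
    """Equal split of the group's bottleneck bandwidth across its flows."""
    if num_flows == 0:
        return 0.0
    return bottleneck_bw_kbps / num_flows

# A 6 Mbps shared bottleneck with 3 active flows -> 2 Mbps each.
print(per_flow_share(6000, 3))  # -> 2000.0
```

Note that an equal split ignores RTT bias among competing TCP flows, which is exactly the open question raised above for flows to destinations with different RTTs.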
Potential problems for evaluation:
1) We will have to compare a run of (IPerf mesh and/or Bittorrent) on Planetlab to a run
with our static model, which uses Flexmon measurements. If the available bandwidth
conditions change between the Flexmon measurement and our Planetlab run, then we'll have
a hard time figuring out whether the difference is because of our model or transient
traffic conditions on the Planetlab paths.
2) Flexmon measurements are taken one path at a time. There is a chance that these
throughput values are going to be lower when all the streams are run at once on
Planetlab - of course this could point out the bottleneck links we missed using
our UDP probes when there wasn't any significant cross-traffic.
Asymmetric paths and bandwidth emulation:
When the paths being emulated have relatively low bandwidths ( < 30 Mbps I think - not
completely sure ) and the asymmetry is > 3:1, then simply setting up Dummynet pipes with
the bandwidth and delay values is going to cause anomalies (up to 70% throughput loss) in
the forward-path TCP when there are TCP flows on the reverse path.
The remedy to this is twofold. The key observation is that most Internet links have
high capacities; the available bandwidth varies because of the amount of
cross-traffic. Even links with very low available bandwidth transmit packets at a high
link rate. In contrast, if a Dummynet pipe is set up with the low available bandwidth
as its capacity, then it is going to cause large queuing delays for the ACK packets of
the forward-path TCP flows. This inflates the RTT of the forward TCP flow and results
in a drastic throughput drop compared to the expected forward throughput.
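The RTT inflation above comes straight from queue drain time. A quick back-of-the-envelope sketch (the queue size and rates are illustrative, not measured values): ACKs stuck behind reverse-path TCP data are delayed by the time it takes the pipe to drain that data at its configured rate.

```python
# Sketch of why a low-capacity reverse pipe inflates the forward flow's RTT.
# ACKs queue behind reverse-path TCP data; the added delay is the queue
# drain time at the pipe's configured rate. Numbers are illustrative.

def queue_drain_ms(queue_bytes, pipe_rate_mbps):
    """Time (ms) to drain queue_bytes at the pipe's configured rate."""
    return queue_bytes * 8 / (pipe_rate_mbps * 1e6) * 1000.0

queue = 50 * 1024  # 50 KB of reverse-path data ahead of the ACKs

# Pipe clocked at a 1 Mbps *available* bandwidth: roughly 410 ms per ACK.
print(queue_drain_ms(queue, 1))
# The same queue drained at a 50 Mbps *link* rate: roughly 8 ms.
print(queue_drain_ms(queue, 50))
```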
In order to correct this problem, we introduce a new parameter, "BACKFILL", during
the Dummynet pipe configuration. All the experiment links are set up with 50 Mbps (OC-1)
capacity. The amount of backfill on a link is the link capacity minus the available
bandwidth. We introduce a CBR packet flow into the Dummynet pipe to act as cross-traffic
at this "BACKFILL" rate.
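The BACKFILL arithmetic is just the capacity gap. A minimal sketch of it, per the description above (the fixed 50 Mbps capacity is from the text; the function name and clamping to zero are my assumptions):

```python
# Sketch: the BACKFILL rate for a pipe, per the description above.
# Link capacity is fixed at 50 Mbps, the rate used for all experiment
# links; available bandwidth comes from measurement. Illustrative only.

LINK_CAPACITY_KBPS = 50_000

def backfill_kbps(available_bw_kbps):
    """CBR cross-traffic rate filling the gap between the fixed link
    capacity and the path's measured available bandwidth."""
    return max(LINK_CAPACITY_KBPS - available_bw_kbps, 0)

# A path measured at 3 Mbps available bandwidth gets 47 Mbps of backfill.
print(backfill_kbps(3000))  # -> 47000
```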
We modified Dummynet to introduce this "BACKFILL" directly into the bandwidth
queues. So the cross-traffic is inserted at a fine granularity and is then removed
from the queues and simply ignored, i.e. it is not forwarded to any of the
experiment nodes. Therefore, it does not interfere with any other experiment
traffic other than serving to shape traffic on a particular Dummynet pipe. Our
modifications do not result in any noticeable increase in load on the delay nodes.
Another aspect of accurate bandwidth emulation is setting up the queue
sizes. To emulate real-world routers, the queues are sized in bytes, rather
than slots. The size of a particular queue depends on the available bandwidth and RTT
of the path - it is set to restrict the maximum window size of the TCP flow(s) on
the path.
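The byte-based queue sizing above can be sketched as roughly one bandwidth-delay product of the path; the exact sizing policy in our setup may differ, so treat this as an assumption:

```python
# Sketch: byte-based queue sizing from available bandwidth and RTT,
# i.e. roughly one bandwidth-delay product, which caps the window of
# the TCP flow(s) on the path. The exact policy is an assumption here.

def queue_size_bytes(avail_bw_kbps, rtt_ms):
    """Bandwidth-delay product of the path, in bytes."""
    return int(avail_bw_kbps * 1000 / 8 * rtt_ms / 1000)

# 2 Mbps available bandwidth at 80 ms RTT -> a 20 KB queue.
print(queue_size_bytes(2000, 80))  # -> 20000
```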