An Investigation into Transport Protocols and Data Transport ...

8.1. Methodology

Various other constraints on TCP network performance were also removed, such as the default low limits on the TCP socket buffer sizes. The txqueuelen and max_backlog values were left at acceptable default values (under Linux 2.6) of 1,000 and 300 respectively. Default device driver settings were also used; these were found to be sufficiently provisioned for these tests.
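A minimal sketch of this tuning on a Linux 2.6 host might look as follows; the 16 MB buffer ceiling and the interface name eth0 are illustrative assumptions, not the thesis's exact figures:

```shell
# Raise the default low limits on TCP socket buffer sizes
# (16 MB maximum is an assumed value for illustration).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# Left at Linux 2.6 defaults, shown here only for reference:
ip link set eth0 txqueuelen 1000          # default of 1,000
sysctl -w net.core.netdev_max_backlog=300 # default of 300
```

These commands require root privileges and take effect immediately, without a reboot.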

8.1.3 Overview

In order to minimise the effects of local hosts' queues and flow interactions, only a single TCP flow was injected from each source machine into the testbed through the dummynet router using iperf [TQD+03]. All flows were run with slow start enabled upon the start of each transfer to give a representative test of real users competing for bandwidth when conducting bulk transport along the network path.

A second flow was initiated at a random time after the first flow such that the perturbation of the second flow's slow start covered a range of times within a congestion epoch of the first flow. This was to ensure that a representative range of convergence times and cwnd dynamics was captured.
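The two-flow launch could be scripted along the following lines; the hostname sink, the 600 s duration, and the 60 s offset ceiling are assumptions for illustration, not the thesis's actual harness:

```shell
# One iperf TCP flow per source host; the second flow starts after a
# random delay so its slow start lands at varying points within the
# first flow's congestion epoch.
iperf -s &                    # receiver, on the sink host

iperf -c sink -t 600 &        # first flow (run from source host 1)
sleep $(( RANDOM % 60 ))      # random offset; $RANDOM is bash-specific
iperf -c sink -t 600 &        # second flow (run from source host 2)
```

Repeating the experiment with fresh random offsets samples the whole congestion epoch rather than a single phase of it.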

As defined in [Cla82b], all TCP receivers were run with delayed acking on. As Linux 2.6 kernels were used, the effects of quick-acking (see Section 5.2.4) were left enabled. All TCP senders were configured with Appropriate Byte Counting (see Section 5.2.4).
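On Linux 2.6 these ACK-related behaviours need only one explicit knob, since delayed ACKs and quick-acking are kernel defaults; a sketch, assuming ABC mode 1 (one segment per acked segment):

```shell
# Appropriate Byte Counting on the senders (Linux 2.6 sysctl;
# the value 1 is an assumption, mode 2 allows two-segment growth).
sysctl -w net.ipv4.tcp_abc=1
```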

Each individual test was run for at least ten minutes. This duration gave a good representation of the number of congestion epochs required for all of the New-TCP algorithms. In the case of tests involving Standard TCP, individual tests were run for up to an hour, as the long epoch time of Standard TCP, especially at large BDPs, requires more time to reach steady state.
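The need for hour-long Standard TCP runs can be made concrete with back-of-envelope arithmetic: after a halving, cwnd regrows by one segment per RTT, so one epoch takes roughly W/2 RTTs. The 1 Gbit/s rate, 100 ms RTT, and 1460-byte MSS below are illustrative assumptions, not the testbed's actual parameters:

```shell
# Time for Standard TCP to recover cwnd after a halving,
# growing one segment per RTT (illustrative figures only).
RATE_BPS=1000000000   # assumed 1 Gbit/s path
RTT_MS=100            # assumed 100 ms round-trip time
MSS=1460              # typical Ethernet MSS in bytes

BDP_PKTS=$(( RATE_BPS * RTT_MS / 1000 / (MSS * 8) ))
EPOCH_RTTS=$(( BDP_PKTS / 2 ))             # cwnd climbs from W/2 back to W
EPOCH_S=$(( EPOCH_RTTS * RTT_MS / 1000 ))

echo "BDP: $BDP_PKTS packets; epoch: ~$EPOCH_RTTS RTTs (~$EPOCH_S s)"
```

Under these assumptions a single epoch lasts on the order of seven minutes, so a ten-minute run captures barely one epoch, whereas the New-TCP algorithms cycle through many epochs in the same time.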
