

MSS segments to be transmitted back-to-back.

There is also a throughput increase from a 40960 to a 52428 byte window size. This increase is also caused by the size of the socket buffers. The segment flow is still primarily two MSSs in-between acknowledgments for these window sizes. Both window sizes have bytes for two MSS segments ready in the socket buffer when the acknowledgment arrives.

[Figure 15 TCP throughput, Sparc10 SBA-200: (a) TCP throughput dependent on window and user data size; (b) throughput dependent on window size; (c) CPU utilization for transmit and receive. Curves for 4096, 8192, 16384, 24576, 32768, 40960 and 52428 byte windows; axes in throughput [Mbit/s], CPU utilization [%], user data size [byte] and window size [byte].]

With a 52428 byte window size, the sender has buffer space for more than 5*MSS bytes. It sometimes manages to transmit 3 MSSs before the acknowledgment arrives. In short, the window is better utilized on connections with a 52428 byte window size, which results in a higher throughput.

For large window sizes and large user data sizes there is no degradation in throughput as with the SBA-100 adapters. With the SBA-200 adapter, the receiver is less loaded than the sender. Thus, the read system calls return before the entire user receive buffer is filled, and the socket layer therefore calls TCP more often to check whether a window update should be returned.

7.1 Throughput peak patterns

For small window sizes there are throughput peaks which are even more characteristic than the peaks observed with the SBA-100 interfaces on the Sparc10s. For example, with an 8 kbyte window the variation can be as high as 10 Mbit/s for different user data sizes. This is caused by a mismatch between user data size and window size, which directly affects the segment flow. Figure 15 shows that the throughput pattern to some extent repeats every 4096 bytes. With the SBA-200 interface, the high byte-dependent overhead of the segment processing is reduced. This implies a relative increase in the fixed overhead per segment, and the number of segments will have a larger impact on throughput performance. Window and user data sizes that result in the smallest number of segments are expected to give the highest performance. This is reflected in the throughput peaks for user data sizes of an integer multiple of 4096 bytes. With the SBA-100 adapters, the byte-dependent processing was much higher, and the throughput was therefore less dependent on the number of transmitted segments.

For large window sizes there are no throughput peak patterns. For these window sizes, the number of outstanding unacknowledged bytes seldom reaches the window size. The segment flow is primarily maximum-sized segments of 9148 bytes, with an acknowledgment returned for every other segment.

8 Summary and conclusions

In this paper we have presented TCP/IP throughput measurements over a local area ATM network from FORE Systems. The purpose of these measurements was to show how and why end-system hardware and software parameters influence the achievable throughput. Both the host architecture and the host network interface (driver + adapter), as well as software-configurable parameters such as window and user data size, affect the measured throughput.
