
[Figure 9: Goodput of ICTCP (with minimal receive window at 2MSS) and TCP under the case that the total data amount from all sending servers is a fixed value. Y-axis: goodput (Mbps), 0-1000; x-axis: number of senders in parallel, 4-48; curves: TCP and ICTCP at total data of 2MB and 8MB.]

[Figure 10: Ratio of timeout for ICTCP (with minimal receive window at 2MSS) and TCP under the case that the total data amount from all sending servers is a fixed value. Y-axis: ratio of rounds with TCP timeout (%), 0-100; x-axis: number of senders in parallel, 4-48; curves: TCP and ICTCP at total data of 2MB and 8MB.]

The goodput and timeout ratios are shown in Figures 9 and 10. From Figure 9 we observe that the number of sending servers needed to trigger incast congestion is similar for 2MB and 8MB of total traffic. ICTCP greatly improves goodput and keeps timeouts well under control. Note that we show the case for ICTCP with a minimal receive window at 2MSS and skip the 1MSS case, as the timeout ratio is again 0% for 1MSS.
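The experiment just described, a fixed total amount of data split evenly across N senders and scored per round for goodput and timeouts, can be driven by a small harness like the sketch below. The sender addresses, port, length-prefix request protocol, and the RTO-based timeout heuristic are our own assumptions for illustration, not details given in the paper.

```python
import socket
import threading
import time

# Hypothetical sender pool; the testbed has up to 48 sending servers under
# one Gigabit Ethernet switch. Addresses and port are assumptions.
SENDERS = [("10.0.0.%d" % i, 5001) for i in range(2, 50)]
MIN_RTO = 0.2  # 200 ms, the default TCP minimum retransmission timeout

def fetch(addr, nbytes, received, idx):
    """Request nbytes from one sender and record how much arrived."""
    with socket.create_connection(addr) as s:
        s.sendall(nbytes.to_bytes(8, "big"))  # length-prefix request (assumed)
        got = 0
        while got < nbytes:
            chunk = s.recv(65536)
            if not chunk:
                break
            got += len(chunk)
        received[idx] = got

def incast_round(n_senders, total_bytes):
    """Run one fixed-total-data round; return (goodput_mbps, timed_out)."""
    per_sender = total_bytes // n_senders
    received = [0] * n_senders
    threads = [threading.Thread(target=fetch,
                                args=(SENDERS[i], per_sender, received, i))
               for i in range(n_senders)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    goodput_mbps = sum(received) * 8 / elapsed / 1e6
    # Heuristic: if the round took at least one minimum RTO longer than the
    # 1 Gbps line-rate transfer time, it almost surely hit a TCP timeout.
    line_rate_time = total_bytes * 8 / 1e9
    return goodput_mbps, elapsed > line_rate_time + MIN_RTO
```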

6.3 Incast with high throughput background traffic

In the previous experiments, we did not explicitly generate long-term background traffic for the incast experiments. In the third scenario, we generate a long-term TCP connection as background traffic to the same receiving server; it occupies 900Mbps before the incast traffic starts.
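A minimal sketch of this setup, run from one sending server: keep a long-lived TCP flow streaming to the receiver, wait for it to ramp up, then launch the incast rounds. The receiver address, port, and ramp-up delay are assumptions for illustration.

```python
import socket
import threading
import time

def background_flow(addr, stop):
    """Stream data continuously to hold one long-term TCP connection busy."""
    payload = b"\0" * 65536
    with socket.create_connection(addr) as s:
        while not stop.is_set():
            s.sendall(payload)

stop = threading.Event()
bg = threading.Thread(target=background_flow,
                      args=(("10.0.0.1", 5002), stop), daemon=True)
bg.start()
time.sleep(5)   # let the background flow ramp up (~900 Mbps observed here)
# ... run the incast rounds while the background flow is active ...
stop.set()
```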

The goodput and timeout ratios of TCP and ICTCP are shown in Figures 11 and 12. Comparing Figure 11 with Figure 2, the throughput achieved by TCP before incast congestion is slightly lower. ICTCP also achieves slightly lower throughput when the number of sending servers is small.

[Figure 11: Goodput of ICTCP (with minimal receive window at 2MSS) and TCP under the case with a background long-term TCP connection. Y-axis: goodput (Mbps), 0-900; x-axis: number of senders in parallel, 4-48; curves: TCP and ICTCP at per-sender data of 64, 128, and 256 kbytes.]

[Figure 12: Ratio of timeout for ICTCP (with minimal receive window at 2MSS) and TCP under the case with a background long-term TCP connection. Y-axis: ratio of rounds with TCP timeout (%), 0-100; x-axis: number of senders in parallel, 4-48; curves: TCP and ICTCP at per-sender data of 64, 128, and 256 kbytes.]

Comparing Figure 12 with Figure 7, the timeout ratio with ICTCP becomes slightly higher when a high-throughput background connection is ongoing. This is because the available bandwidth is smaller, which affects the initiation of new connections. We also obtained experimental results for a background UDP connection at 200Mbps, and ICTCP again performs well. We omit the results under background UDP due to space limitations.
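This effect follows from ICTCP's receiver-side bandwidth budget: the design estimates available bandwidth as BW_A = max(0, alpha * C - BW_T) with alpha = 0.9, and grants receive-window increases only out of that budget. The sketch below restates the gating idea; the function names, default MSS and RTT values, and the printed example are our own illustration.

```python
ALPHA = 0.9            # fraction of capacity ICTCP budgets for window increases
LINK_CAPACITY = 1e9    # C: 1 Gbps receiver link

def available_bandwidth(measured_throughput_bps):
    """ICTCP's estimate: BW_A = max(0, alpha * C - BW_T)."""
    return max(0.0, ALPHA * LINK_CAPACITY - measured_throughput_bps)

def may_increase_window(bw_a_bps, mss_bytes=1460, rtt_s=0.001):
    """Grant a 1-MSS receive-window increase only if the extra rate it
    implies (one more MSS per RTT) fits within the remaining budget."""
    extra_rate = mss_bytes * 8 / rtt_s
    return bw_a_bps >= extra_rate

# With a 900 Mbps background flow, almost no budget is left, so window
# increases for newly started incast connections are deferred:
print(may_increase_window(available_bandwidth(900e6)))   # False
print(may_increase_window(available_bandwidth(100e6)))   # True
```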

6.4 Fairness and long term performance of ICTCP

To evaluate the fairness of ICTCP across multiple connections, we generate 5 ICTCP flows to the same receiving server under the same switch. The flows are started sequentially at 20s intervals, each with a duration of 100s. The achieved goodput of these 5 ICTCP flows is shown in Figure 13. We observe that the fairness of ICTCP across multiple connections is very good, and the total goodput of the connections is close to the link capacity of 1Gbps. Note that the goodput here is much larger than that shown in Figure 6, since the traffic volume is much larger.
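A standard way to quantify this fairness observation is Jain's fairness index over the per-flow goodputs, computed as below; the sample rates are hypothetical values for five flows sharing a 1Gbps link, not measurements from Figure 13.

```python
def jains_index(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 is perfectly fair."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

# Hypothetical per-flow goodputs (Mbps) while all 5 flows overlap:
print(round(jains_index([198, 201, 196, 203, 199]), 4))   # ~0.9999
```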
