
Département Réseau, Sécurité et Multimédia – Activity Report 2008

Loss Synchronization and Router Buffer Sizing with High-Speed Versions of TCP

Research Staff: David Ros – Ph.D. Student: Sofiane Hassayoun
Keywords: TCP, congestion control, high-speed networks
Applications: high-speed networks, Grid networking

Introduction

The Transmission Control Protocol (TCP) [1-2] is the most widely used transport protocol in IP networks. Indeed, many measurement studies show that more than 90% of Internet traffic is carried by TCP. TCP is the protocol of choice of most data applications (web, e-mail, file transfer, …), and it is even used for multimedia applications such as audio and video streaming.

In order to react to packet losses, TCP performs congestion control; that is, a TCP sender cuts its data sending rate (at least) by half whenever it detects that packets are lost. The goal of this rate-control mechanism is to help alleviate network congestion, which is implicitly assumed to be the cause of packet loss. In the absence of losses, the sender steadily increases its rate; the goal in this case is to use as much of the available bandwidth as possible.

In very high-speed networks, drastic rate reductions, which are inherent to the congestion-control mechanisms, often result in poor performance. This is because TCP senders cannot fully utilize the link capacity and, moreover, after a loss is detected they may take a long time to reach a high sending rate again.

In recent years, a good deal of research work has thus been devoted to studying and improving the performance of TCP over very high-speed links.
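The increase/decrease behaviour described above can be sketched as a toy AIMD (additive-increase, multiplicative-decrease) model. This is a minimal illustration, not TCP itself: real TCP operates on a congestion window in segments and includes slow start, timeouts and other mechanisms; the loss epochs below are invented for the example.

```python
# Toy AIMD model of the rate control described above: the congestion
# window grows additively each RTT and is halved when a loss occurs.
# `loss_epochs` is an assumed, illustrative set of loss rounds.

def aimd_window(rtts, loss_epochs, cwnd=1.0, incr=1.0):
    """Return the congestion-window trajectory over `rtts` rounds."""
    trajectory = []
    for t in range(rtts):
        if t in loss_epochs:
            cwnd = max(cwnd / 2.0, 1.0)  # multiplicative decrease on loss
        else:
            cwnd += incr                 # additive increase per RTT
        trajectory.append(cwnd)
    return trajectory

trace = aimd_window(rtts=10, loss_epochs={5})
print(trace)
```

The sawtooth shape this produces is exactly why, at very high speeds, recovering after a halving can take a long time: with an additive increase of one unit per RTT, climbing back from half of a very large window requires many round trips.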
Several variants of the protocol have been proposed in order to adapt TCP's congestion-control mechanisms to Gb/s speeds and beyond.

Realization

Drop synchronization between TCP flows happens whenever two or more flows experience packet loss in a short time interval. This phenomenon has been the object of several studies and proposals because of its potential performance implications. Indeed, perfectly synchronized losses among flows would result in TCP senders reducing their windows in unison, hence in poor throughput and low link utilization. In practice, however, such highly correlated loss patterns are rarely observed in the Internet. Factors such as fluctuations in round-trip times (RTTs) and high levels of statistical multiplexing tend to break the synchronization among flows.

Nonetheless, given that previous studies on synchronization have focused on "low-speed" TCP, one may wonder whether packet drops become more correlated when high-speed variants of TCP are used. Indeed, it has been conjectured [3] that higher synchronization might be a side effect of the increased aggressiveness of the congestion-control algorithms in those variants.

We have thus performed a preliminary study of the relation between drop synchronization and buffer sizes when high-speed TCPs are used. By means of ns-2 simulations, we explored the dependence of synchronization on the TCP version. We used finely tuned simulation scenarios, designed to avoid synchronization as much as possible; the idea was to capture drop correlations induced mainly by the version of TCP in use.

One of our main motivations for looking into this problem was that, contrary to the common case in the general Internet, Grid networks are prone to synchronization.
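The definition of drop synchronization above (two or more flows losing packets within a short interval) can be made concrete with a small sketch. The matching rule, the interval `delta` and the timestamps below are all illustrative assumptions; an actual study would extract loss-event times from simulation traces.

```python
# Hypothetical illustration of drop synchronization: a loss event of
# flow A counts as synchronized if flow B also loses a packet within
# `delta` seconds of it. Timestamps here are invented for the example.

def synchronized_losses(losses_a, losses_b, delta=0.01):
    """Count loss events of flow A matched by a loss of flow B within delta."""
    return sum(
        1 for ta in losses_a
        if any(abs(ta - tb) <= delta for tb in losses_b)
    )

losses_a = [1.000, 2.500, 4.020]   # loss-event times of flow A (s)
losses_b = [1.004, 3.100, 4.015]   # loss-event times of flow B (s)
print(synchronized_losses(losses_a, losses_b))
```

With these invented timestamps, the events around t = 1.0 s and t = 4.0 s fall within `delta` of each other, while the event at t = 2.5 s is isolated.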
Moreover, the fact that such networks are prime candidates for deploying high-speed versions of TCP makes the subject of the study yet more compelling.

We evaluated three high-speed protocols: HSTCP, H-TCP and BIC (the latter has been adopted in the standard Linux kernel); SACK TCP was used as a sort of "low-speed" benchmark. In order to assess the impact of the bottleneck buffer size, we explored a wide range of buffers, going from very small ones (50 packets) to very large ones (150,000 packets).

The figure below shows the CDF (cumulative distribution function) of the so-called global

Extract of Pracom's Annual Report 2008
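For reference, a CDF like the one described above can be computed empirically from per-loss-event measurements. The metric and the sample values below are invented for illustration; they are not the report's results.

```python
# Sketch of an empirical CDF, as could be used for a per-loss-event
# synchronization metric (e.g. the fraction of flows hit by each loss
# event). The sample values are made up for illustration only.

def empirical_cdf(samples):
    """Return sorted (value, P[X <= value]) pairs for the samples."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

sync_ratios = [0.1, 0.25, 0.25, 0.5, 0.9]   # assumed per-event ratios
for value, prob in empirical_cdf(sync_ratios):
    print(f"P[sync <= {value}] = {prob:.2f}")
```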
