Give-to-Get: Free-riding-resilient Video-on-Demand in P2P Systems

J.J.D. Mol, J.A. Pouwelse, M. Meulpolder, D.H.J. Epema, and H.J. Sips∗
Department of Computer Science, Delft University of Technology
P.O. Box 5031, 2600 GA Delft, The Netherlands

ABSTRACT
Centralised solutions for Video-on-Demand (VoD) services, which stream pre-recorded video content to multiple clients who start watching at the moments of their own choosing, are not scalable because of the high bandwidth requirements of the central video servers. Peer-to-peer (P2P) techniques, which let the clients distribute the video content among themselves, can be used to alleviate this problem. However, such techniques may introduce the problem of free-riding, with some peers in the P2P network not forwarding the video content to others if there is no incentive to do so. When the P2P network contains too many free-riders, an increasing number of the well-behaving peers may not achieve high enough download speeds to maintain an acceptable service. In this paper we propose Give-to-Get, a P2P VoD algorithm which discourages free-riding by letting peers favour uploading to other peers who have proven to be good uploaders. As a consequence, free-riders are only tolerated as long as there is spare capacity in the system.
Our simulations show that even if 20% of the peers are free-riders, Give-to-Get continues to provide good performance to the well-behaving peers. In particular, they show that Give-to-Get performs very well for short videos, which dominate the current VoD traffic on the Internet.

Keywords: multicasting, video streaming, video-on-demand, peer-to-peer, free-riding.

1. INTRODUCTION
Multimedia content such as movies and TV programs can be downloaded and viewed from remote servers using one of three methods. First, with off-line downloading, a pre-recorded file is transferred completely to the user before he starts watching it. Secondly, with live streaming, the user immediately watches the content while it is being broadcast to him. The third method, which holds the middle between off-line downloading and live streaming and which is the one we will focus on, is Video-on-Demand (VoD). With VoD, pre-recorded content is streamed to the user, who can start watching the content at the moment of his own choosing, from the beginning of the video.

VoD systems have proven to be immensely popular. Web sites serving television broadcasts (e.g., BBC Motion Gallery) or user-generated content (e.g., YouTube) draw huge numbers of users. However, to date, virtually all of these systems are centralised, and as a consequence, they require high-end servers and expensive Internet connections to serve their content to their large user communities. Employing decentralised systems such as peer-to-peer (P2P) networks for VoD instead of central servers seems a logical step, as P2P networks have proven to be an efficient way of distributing content over the Internet.
The focus of this paper lies in providing a distributed alternative to video sites like YouTube, which serve short, user-generated content. The first contribution of this paper is a P2P VoD algorithm called "Give-to-Get" which uses simple constructs, which is easy to implement and deploy, and which we show to have good performance.

In P2P networks, free-riding is a well-known problem.1, 2 A free-rider in a P2P network is a peer who consumes more resources than it contributes, and more specifically, in the case of VoD, it is a peer who downloads data but uploads little or no data in return. The burden of uploading is on the altruistic peers, who may be too few in number to provide all peers with an acceptable quality of service. In both live streaming and VoD, peers require a minimal download speed to sustain playback, and so free-riding is especially harmful as the altruistic peers alone may not be able to provide all the peers with sufficient download speeds. Solutions to the free-riding problem have been proposed for off-line downloading3 and live streaming.4–6 However, for P2P VoD, no solution yet exists which takes free-riding into consideration. The second contribution of this paper is the design and analysis of the mechanism that makes Give-to-Get free-riding-resilient. In Give-to-Get, peers have to forward (give) the chunks received from a peer to others in order to get more chunks from that peer.
By preferring to serve good forwarders, free-riders are excluded in favour of well-behaving peers. When bandwidth in the P2P system becomes scarce, the free-riders will experience a significant drop in the quality of service they receive. Free-riders will thus be able to obtain video data only if there is spare capacity in the system.

∗ {j.j.d.mol, j.a.pouwelse, m.meulpolder, d.h.j.epema, h.j.sips}@tudelft.nl


Since Give-to-Get is essentially about data distribution, we have designed it to be video-codec agnostic: Give-to-Get can be implemented using any video codec. Give-to-Get splits the video stream into chunks of fixed size, which are to be played at a constant bitrate. Any P2P VoD system also has to make sure that the chunks needed for playback in the immediate future are present. For this purpose, we define a prebuffering policy for downloading the first chunks of a file before playback starts, as well as an ordering of the requests for the downloads of the other chunks. We will use the incurred chunk loss rate and the required prebuffering time as the metrics to evaluate the performance of Give-to-Get, which we will report separately for the well-behaving peers and for the free-riders.

The remainder of this paper is organized as follows. In Section 2, we further specify the problem we address, followed by a description of the Give-to-Get algorithm in Section 3. Next, we present our experiments and their results in Section 4. In Section 5, we discuss related work. Finally, we draw conclusions and discuss future work in Section 6.

2. PROBLEM DESCRIPTION
The problem we address in this paper is the design of a P2P VoD algorithm which discourages free-riding. A free-rider is a peer which consumes more resources than it contributes to the P2P system. We will assume that a peer will not try to cheat the system in a different way, such as being a single peer emulating several peers (also called a Sybil attack7), or several peers colluding. In this section, we will describe how a P2P VoD system operates in our setting in general.
We assume the algorithm to be designed for P2P VoD to be video-codec agnostic, and we will consider the video to be a constant-bitrate stream with unknown boundary positions between the consecutive frames. Similarly to BitTorrent,3 we assume that the video file to be streamed is split up into chunks of equal size, and that every peer interested in watching the stream tries to obtain all chunks. Due to the similarities with BitTorrent, we will use its terminology to describe both our problem and our proposed solution.

A P2P VoD system consists of peers which are downloading the video (leechers) and of peers which have finished downloading and upload for free (seeders). The system starts with at least one seeder. We assume that a peer is able to obtain the addresses of a number of random other peers, and that connections are possible between any pair of peers. To provide all leechers with the video data in a P2P fashion, a multicast tree has to be used for every chunk of the video. Such a multicast tree can be built explicitly or emerge implicitly as the union of paths over which a certain chunk travelled to each peer. While in traditional application-level multicasting, the same multicast tree is created for all chunks and is changed only when peers arrive or depart, we allow the multicast trees of different chunks to be different based on the dynamic behaviour of the peers. These multicast trees are not created ahead of time, but rather come into being while chunks are being propagated in the system.

A peer typically behaves in the following manner: it joins the system as a leecher and contacts other peers in order to download chunks of a video. After a prebuffering period, the peer starts playback. When the video has finished playing, the peer will depart. If the peer is done downloading the video before playback is finished, it will stay as a seeder until it departs.
We assume that peers can arrive at any time, but that they will start playing the video from the beginning and at a constant speed. Similar to other P2P VoD algorithms like BiToS and popular centralised solutions like YouTube, we do not consider seeking or fast-forwarding. Give-to-Get can be extended to support these operations, but such extensions are outside the scope of this paper. Rewinding can be supported in the player itself without help of Give-to-Get.

3. GIVE-TO-GET
In this section, we will explain Give-to-Get (G2G). First, we will describe how a peer maintains information about other peers in the system in its so-called neighbour list. Then, the way the video pieces are forwarded from peer to peer is discussed. Next, we show in which order video pieces are transferred between peers. Fourth, we will discuss the differences with the related BitTorrent3 and BiToS8 protocols. Finally, we will discuss our performance metrics.

3.1 Neighbour Management
The system we consider consists of peers which are interested in receiving the video stream (leechers) and peers which have obtained the complete video stream and are willing to share it for free (seeders). We assume a peer is able to obtain addresses of other peers uniformly at random. Mechanisms to implement this could be centralised, with a server keeping track of who is in the network, or decentralised, for example, by using epidemic protocols or DHT rings. We view this peer discovery problem as orthogonal to our work, and so beyond the scope of this paper. From the moment a peer joins the system, it will obtain and maintain a list of 10 neighbours in its neighbour list. When a peer is unable to contact 10 neighbours, it will periodically try to discover new neighbours.
Once a peer becomes a seeder, it will disconnect from other seeders to avoid maintaining useless connections.
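
To make the bookkeeping concrete, one periodic maintenance pass over the neighbour list could be organised as follows. This is a minimal sketch under assumed names (refresh_neighbours, known_peers); the paper itself does not prescribe an implementation.

```python
import random

def refresh_neighbours(neighbours, known_peers, is_seeder, target=10):
    """One periodic pass of the neighbour maintenance described in Section 3.1.

    neighbours  -- set of peer ids we are currently connected to
    known_peers -- dict mapping peer id -> True if that peer is a seeder
    is_seeder   -- whether we have finished downloading ourselves
    """
    # A seeder drops its connections to other seeders: such links carry no data.
    if is_seeder:
        neighbours = {p for p in neighbours if not known_peers.get(p, False)}

    # Top the list back up to `target` entries with uniformly random known peers.
    candidates = [p for p in known_peers if p not in neighbours]
    random.shuffle(candidates)
    for p in candidates[: max(0, target - len(neighbours))]:
        neighbours.add(p)
    return neighbours
```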


Figure 1. The Give-to-Get unchoking, feedback, and piece-picking systems.
(a) Unchoking algorithm:
    choke(all neighbours)
    N ⇐ all interested neighbours
    sort N on forwarding rank
    for i = 1 to min(|N|, 3 + n) do
        unchoke(N[i])
        b ⇐ ∑_{k=1}^{i} (our upload speed to N[k])
        if i ≥ 3 and b > UPLINK ∗ 0.9 then
            break
        end if
    end for
(b) The feedback connections for an individual peer.
(c) The high-, mid- and low-priority sets in relation to the playback position. The chunks in the grey areas, if requested, will not arrive before their deadline.

3.2 Chunk Distribution
The video data is split up into chunks of equal size. As G2G is codec agnostic, the chunk boundaries do not necessarily coincide with frame boundaries. Peers obtain the chunks by requesting them from their neighbours. A peer keeps its neighbours informed about the chunks it has, and decides which of its neighbours is allowed to make requests. A neighbour which is allowed to make requests is unchoked. When a chunk is requested by a neighbour, the peer appends it to the send queue for the corresponding connection. Chunks are uploaded using subchunks of 1 Kbyte to avoid delays in the delivery of control messages, which are sent with a higher priority.

Every δ seconds, a peer decides which neighbours are unchoked based on information gathered over the last δ seconds. The neighbours which have shown the best performance will be unchoked, as well as a randomly chosen neighbour (optimistic unchoking). G2G employs a novel unchoking algorithm, described in pseudocode in Figure 1(a). A peer p ranks its neighbours according to decreasing forwarding rank, which is a value representing how well a neighbour is forwarding chunks. The calculation of the forwarding rank is explained below.
Peer p unchokes the three highest-ranked neighbours. Since peers are judged by the amount of data they forward, it is beneficial to make efficient use of the available upload bandwidth. To help saturate the uplink, subsequently more neighbours are unchoked until the uplink bandwidth necessary to serve the unchoked peers reaches 90% of p's uplink. At most n additional neighbours are unchoked this way to avoid serving too many neighbours at once, which would decrease the performance of the individual connections. The optimal value for n likely depends on the available bandwidth and latency of p's network connections. In our experiments, we use n = 2. To search for better children, p round-robins over the rest of the neighbours and optimistically unchokes a different one of them every 2δ seconds. If the optimistically unchoked neighbour proves to be a good forwarder and ends up at the top, it will automatically be kept unchoked. New connections are inserted uniformly at random in the list of neighbours. The duration of 2δ seconds turns out to be enough for a neighbour to prove its good behaviour. By having a peer upload chunks to only the best forwarders, its neighbours are encouraged to forward the data as much as possible. Peers are not obliged to forward data, but may not be able to receive video data once other peers start to compete for it. This results in a system where free-riders are tolerated only if there is sufficient bandwidth left to serve them.
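
A direct transcription of the unchoking rule of Figure 1(a) into Python could look like the sketch below; the inputs (forwarding_rank, upload_speed_to, uplink) are placeholders for state a real peer would measure, and the optimistic unchoke is handled separately.

```python
def select_unchoked(interested, forwarding_rank, upload_speed_to, uplink, n=2):
    """Return the neighbours to unchoke for the next delta-second round.

    interested      -- list of neighbours that want chunks from us
    forwarding_rank -- dict: neighbour -> forwarding rank (higher is better)
    upload_speed_to -- dict: neighbour -> our measured upload speed to it
    uplink          -- our total uplink capacity
    n               -- extra unchoke slots used to saturate the uplink (n = 2 in the paper)
    """
    ranked = sorted(interested, key=lambda q: forwarding_rank.get(q, 0), reverse=True)
    unchoked, used_bandwidth = [], 0.0
    for i, q in enumerate(ranked[: 3 + n], start=1):
        unchoked.append(q)
        used_bandwidth += upload_speed_to.get(q, 0.0)
        # The three best forwarders are always served; beyond that, neighbours are
        # only added until roughly 90% of our uplink is committed.
        if i >= 3 and used_bandwidth > 0.9 * uplink:
            break
    return unchoked
```

The neighbour chosen by optimistic unchoking every 2δ seconds would be served on top of the set returned here.
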
A peer p ranks its neighbours based on the number of chunks they have forwarded during the last δ seconds. Our ranking procedure consists of two steps. First, the neighbours are sorted according to the decreasing numbers of chunks they have forwarded to other peers, counting only the chunks they originally received from p. If two neighbours have an equal score in the first step, they are sorted in the second step according to the decreasing total number of chunks they have forwarded to other peers. Either step alone does not suffice as a ranking mechanism. If neighbours are ranked solely based on the total number of chunks they upload, good uploaders will be unchoked by all their neighbours, which causes only the best uploaders to receive data and the other peers to starve. On the other hand, if neighbours are ranked solely based on the number of chunks they receive from p and forward to others, peers which are optimistically unchoked by p have a hard time becoming one of the top-ranked forwarders. An optimistically unchoked peer q would have to receive chunks from p and hope for q's neighbours to request exactly those chunks often enough. The probability that q replaces the other top forwarders ranked by p is too low.
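
The two-step ranking amounts to sorting on a composite key. In the sketch below, the two counters are assumed to be filled from the grandchild feedback described next.

```python
def rank_neighbours(neighbours, forwarded_from_us, forwarded_total):
    """Sort neighbours from best to worst forwarder.

    forwarded_from_us -- dict: neighbour -> chunks it forwarded that it originally got from us
    forwarded_total   -- dict: neighbour -> chunks it forwarded to other peers in total
    """
    # Primary key: chunks of ours it passed on; tie-break: total chunks it forwarded.
    return sorted(
        neighbours,
        key=lambda q: (forwarded_from_us.get(q, 0), forwarded_total.get(q, 0)),
        reverse=True,
    )
```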


Peer p has to know which chunks were forwarded by its children to others. To obtain this information, it cannot ask its children directly, as they could make false claims. Instead, p asks its grandchildren about the behaviour of its children. The children of p keep p updated about the peers they are forwarding to. Peer p contacts these grandchildren, and asks them which chunks they received from p's children. This allows p to determine both the forwarding rates of its children and the numbers of chunks they forwarded which were originally provided by p. Because peer p ranks its children based on the (amount of) data they forward, the children of p have an incentive to forward as much as possible to obtain a high rank. Figure 1(b) shows an example of the flow of feedback information. Peer p has unchoked two other peers, amongst which peer c. Peer c has peer g unchoked. Information about the amount of video data uploaded by c to g is communicated over the dashed arrow back to p. Peer p can subsequently rank child c based on this information. Note that a node c has no incentive to lie about the identities of its children, because only its actual children will provide feedback about c's behaviour.
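
A possible way for p to turn grandchild reports into the two counters used for ranking is sketched below; the report format (child, grandchild, chunk id) is our own invention for illustration, not a protocol message defined here.

```python
from collections import defaultdict

def tally_feedback(reports, chunks_we_sent):
    """Aggregate grandchild reports into per-child forwarding counters.

    reports        -- iterable of (child, grandchild, chunk_id) tuples: a grandchild
                      reports that it received chunk_id from child in the last delta seconds
    chunks_we_sent -- dict: child -> set of chunk ids we uploaded to that child
    """
    forwarded_total = defaultdict(int)    # all chunks the child passed on
    forwarded_from_us = defaultdict(int)  # only the chunks it originally received from us
    for child, _grandchild, chunk_id in reports:
        forwarded_total[child] += 1
        if chunk_id in chunks_we_sent.get(child, set()):
            forwarded_from_us[child] += 1
    return forwarded_from_us, forwarded_total
```
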
3.3 Chunk Picking
A peer obtains chunks by issuing a request for each chunk to other peers. A peer thus has to decide in which order it will request the chunks it wants to download; this is called chunk picking. When a peer p is allowed by one of its neighbours to make requests to it, it will always do so if the neighbour has a chunk p wants. We associate with every chunk and every peer a deadline, which is the latest point in time the chunk has to be present at the peer for playback. As long as p has not yet started playback, the deadline of every chunk at p is infinite. Peer p wants chunk i from a neighbour q if the following conditions are met: a) q has chunk i, b) p does not have chunk i and has not previously requested it, and c) it is likely that chunk i arrives at p before its deadline. Peer p will never request pieces it does not want. Because peers keep their neighbours informed about the chunks they have, the first two rules are easy to check. To estimate whether a chunk will arrive on time, p keeps track of the response time of requests. This response time is influenced by the link delay between p and q as well as the amount of traffic from p and q to other peers. Peers can submit multiple requests in parallel in order to fully utilise the links with their neighbours.

When deciding the order in which chunks are picked, two things have to be kept in mind. First, it is necessary to provide the chunks in-order to the video player. Secondly, to achieve a good upload rate, it is necessary to obtain enough chunks which are wanted by other peers. The former favours downloading chunks in-order, the latter favours downloading rare chunks first. To balance between these two requirements, G2G employs a hybrid solution by prioritizing the chunks that have yet to be played back. Let m be the playback position of peer p, or 0 if p has not yet started playback. Peer p will request chunk i on the first match in the following list of sets of chunks (see Figure 1(c) and the sketch below):

• High priority: m ≤ i < m + h. If p has already started playback, it will pick the lowest such i; otherwise, it will pick i rarest-first.
• Mid priority: m + h ≤ i < m + (µ + 1)h. Peer p will choose such an i rarest-first.
• Low priority: m + (µ + 1)h ≤ i. Peer p will choose such an i rarest-first.

In these definitions, h and µ are parameters which dictate the amount of clustering of chunk requests in the part of the video yet to be played back. A peer picks rarest-first based on the availability of chunks at its neighbours. Among chunks of equal rarity, i is chosen uniformly at random. During playback, the chunks with a tight deadline are downloaded first and in-order (the high-priority set). The mid-priority set makes it easier for the peer to complete the high-priority set in the future, as the (beginning of the) current mid-priority set will be the high-priority set later on. This lowers the probability of having to do in-order downloading later on. The low-priority set will download the rest of the chunks using rarest-first, both to collect chunks which will be forwarded often because they are wanted by many and to increase the availability of the rarest chunks. Also, the low-priority set allows a peer to collect as much of the video as fast as possible.
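
The priority scheme can be condensed into a single selection routine. In this sketch, `rarity` stands for the number of neighbours known to hold each chunk, and `candidates` is assumed to already satisfy conditions a) to c) above; the function name is ours.

```python
import random

def pick_chunk(m, h, mu, playing, candidates, rarity):
    """Pick the next chunk to request, following the high/mid/low-priority sets.

    m          -- current playback position (chunk index), 0 before playback starts
    h, mu      -- high-priority set size and mid-priority multiplier
    playing    -- whether playback has started
    candidates -- wanted chunk indices available at the unchoking neighbour
    rarity     -- dict: chunk index -> number of neighbours that have it
    """
    def rarest(chunks):
        least = min(rarity[i] for i in chunks)
        return random.choice([i for i in chunks if rarity[i] == least])

    high = [i for i in candidates if m <= i < m + h]
    mid = [i for i in candidates if m + h <= i < m + (mu + 1) * h]
    low = [i for i in candidates if i >= m + (mu + 1) * h]

    if high:
        # In-order once playback has started, rarest-first while still prebuffering.
        return min(high) if playing else rarest(high)
    if mid:
        return rarest(mid)
    if low:
        return rarest(low)
    return None  # nothing wanted from this neighbour right now
```
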
3.4 Differences between Give-to-Get, BitTorrent and BiToS
In our experiments, we will compare the performance of G2G to that of BiToS.8 BiToS is a P2P VoD algorithm which, like G2G, is inspired by BitTorrent. For BiToS, we will use the optimal settings as derived in the paper in which BiToS is introduced.8 The major differences between G2G, BiToS and BitTorrent lie in the chunk-picking policy, the choking policy and the prebuffering policy. In BiToS, two priority sets are used: the high-priority set and the remaining-pieces set. The high-priority set is defined to be 8% of the video length. Peers request pieces from the high-priority set 80% of the time, and from the remaining-pieces set 20% of the time. The rarest-first policy is used in both cases, with a bias towards earlier chunks if they are equally rare. In contrast, G2G uses three priority sets. In-order downloading is used in the high-priority set once playback has started, and in the mid- and low-priority sets, the rarest-first policy chooses at random between pieces of equal rarity. BitTorrent is not designed for VoD and thus does not define priority sets based on the playback position.


The choking policy determines which neighbours are allowed to request pieces, which defines the flow of chunks through the P2P network. BiToS, like BitTorrent, is based on tit-for-tat, while G2G is not. In tit-for-tat, a peer a will allow a peer b to make requests for chunks if b proved to be one of the top uploaders to a. In contrast, a peer a in G2G will allow b to make requests if b proves to be one of the top forwarders to others (this set of others can include a). Tit-for-tat works well in an off-line download setting (such as BitTorrent) where peers have enough chunks they can exchange. However, it is less suitable for VoD, because peers in VoD bias their interests towards the chunks shortly after their playback position, and are not interested in chunks before their playback position. Two peers consequently either have overlapping interests if their playback positions are close, or the peer with the earliest playback position is interested in the other's chunks but not vice versa. One-sided interests are the bane of tit-for-tat systems. In BiToS, peers download pieces outside their high-priority set 20% of the time, relaxing the one-sided interests problem somewhat, as all peers have a mutual interest in obtaining the end of the video this way. The choking policy in G2G consists of unchoking neighbours which have proven to be good forwarders. Consequently, the requirement to exchange data between peers, and thus to have interests in each other's chunks, is not present in G2G.

3.5 Performance Metrics
For a peer to view a video clip in a VoD fashion, two conditions must be met to provide a good quality of service. First, the start-up delay must be small, and secondly, the chunk loss must be low to provide good playback quality. If either of these conditions is not met, it is likely the user would have been better off downloading the whole clip before viewing it. In G2G, we say a chunk is lost if a peer cannot request it in time from one of its neighbours, or if the chunk was requested but did not arrive in time.

The concept of buffering the beginning of the video clip before starting playback is common to most streaming video players. We will assume that once prebuffering is finished and playback is started, the video will not be paused or slowed down. In general, the amount of prebuffering is a trade-off between having a short waiting period and having low chunk loss during playback.

We define the prebuffering time as follows. First, a peer waits until it has the first h chunks (the initial high-priority set) available, to prevent immediate chunk loss. Then, it waits until the expected remaining download time is less than the duration of the video. The expected remaining download time is extrapolated from the download speed so far, with a 20% safety margin. This margin allows for short or small drops in download rate later on, and will also create an increasing buffer when the download rate does not drop. Drops in download rate can occur when a parent of a peer p adopts more children or replaces p with a different child due to p's rank or to optimistic unchoking. In the former case, the uplink of p's parent has to be shared by more peers, and in the latter case, p stops receiving anything from that particular parent. When and how often this will occur depends on the behaviour of p and its neighbours, and is hard to predict. The safety margin in the prebuffering was added to protect against such behaviour. It should be noted that other VoD algorithms which use BitTorrent-like unchoking mechanisms (such as BiToS8) are likely to suffer from the same problem. In order to keep the prebuffering time reasonable, the average upload speed of a peer in the system should thus be at least the video bitrate plus a 20% margin. If there is less upload capacity in the system, peers both have a hard time obtaining all chunks and are forced to prebuffer longer to ensure the download will be finished before the playback is.
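
Read as code, the prebuffering rule could be checked periodically along the following lines; the variable names and the exact placement of the 20% margin are our reading of the text, not the authors' implementation.

```python
def can_start_playback(have_first_h_chunks, downloaded_bytes, total_bytes,
                       elapsed, video_duration, margin=0.2):
    """Decide whether prebuffering is complete.

    have_first_h_chunks -- True once the initial high-priority set has arrived
    downloaded_bytes    -- bytes of the video obtained so far
    total_bytes         -- size of the whole video in bytes
    elapsed             -- seconds spent downloading so far
    video_duration      -- length of the video in seconds
    margin              -- safety margin on the estimated remaining download time (20%)
    """
    if not have_first_h_chunks or downloaded_bytes == 0 or elapsed <= 0:
        return False
    rate = downloaded_bytes / elapsed                    # average download speed so far
    remaining = (total_bytes - downloaded_bytes) / rate  # extrapolated remaining time
    return remaining * (1 + margin) < video_duration
```
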
Once a peer has started playback, it requires chunks at a constant rate. The chunk loss is the fraction of chunks that have not arrived before their deadlines, averaged over all the playing peers. When reporting the chunk loss, a 5-second sliding-window average will be used to improve the readability of the figures. Neither BiToS nor BitTorrent defines a prebuffering policy, so to be able to make a fair comparison between BiToS and G2G in our experiments, we will use the same prebuffering policy for BiToS as for G2G. BiToS uses a larger high-priority set (8% of the video length, or 24 seconds for the 5-minute video we will use in our experiments), making it unfair to let peers wait until their full high-priority set is downloaded before playback is started. Instead, like in G2G, we wait until the first h = 10 seconds are downloaded.
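
For concreteness, the chunk-loss metric at a single playing peer could be computed as sketched below, assuming per-chunk deadlines and arrival times are recorded; the averaging over all playing peers and the plotting are omitted.

```python
def chunk_loss(deadlines, arrivals, now, window=5.0):
    """Fraction of chunks lost in the last `window` seconds at one playing peer.

    deadlines -- dict: chunk index -> playback deadline (seconds)
    arrivals  -- dict: chunk index -> arrival time; absent if the chunk never arrived
    now       -- current time; only deadlines in (now - window, now] are considered
    """
    due = [i for i, d in deadlines.items() if now - window < d <= now]
    if not due:
        return 0.0
    lost = sum(1 for i in due if i not in arrivals or arrivals[i] > deadlines[i])
    return lost / len(due)
```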


4. EXPERIMENTS
In this section we will present our experimental setup as well as the results of two experiments. In the first experiment, we measure the default behaviour with well-behaving peers, and in the second experiment we let part of the system consist of free-riders. In both cases, we will compare the performance of G2G and BiToS.

4.1 Experimental Setup
Our experiments were performed using a discrete-event simulator, emulating a network of 500 peers which are all interested in receiving the video stream. The network is assumed to have no bottlenecks except at the peers. Packets are sent using TCP with a 1500-byte MTU, and their delivery is delayed in case of congestion in the uplink of the sender or the downlink of the receiver. Each simulation starts with one initial seeder, and the rest of the peers arrive according to a Poisson process. Unless stated otherwise, peers arrive at a rate of 1.0/s, and depart when playback is finished. When a peer is done downloading the video stream, it will therefore become a seeder until the playback is finished and the peer departs.

We will simulate a group of peers with asymmetrical bandwidth capacities, which is typical for end-users on the Internet. Every peer has an uplink capacity chosen uniformly at random between 0.5 and 1.0 Mbit/s. The downlink capacity of a peer is always four times its uplink capacity. The round-trip times between peers vary between 100 ms and 300 ms†. A peer reconsiders the behaviour of its neighbours every δ = 10 seconds, which is a balance between keeping the overhead low and allowing neighbour behaviour changes (including free-riders) to be detected. The high-priority set size h is defined to be the equivalent of 10 seconds of video. The mid-priority set size is µ = 4 times the high-priority set size.

† These figures are realistic for broadband usage within the Netherlands. The results are nevertheless representative, because the overhead of G2G is low compared to the video bandwidth.

A 5-minute video of 0.5 Mbit/s is cut up into 16 Kbyte chunks (i.e., 4 chunks per second on average) and is distributed from the initial seeder, which has a 2 Mbit/s uplink. We will use the prebuffering time and the chunk loss as the metrics to assess the performance of G2G.
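
For reference, the parameters of this setup gathered in one place (a summary in code form; the key names are ours, not the simulator's interface):

```python
SIMULATION = {
    "peers": 500,                         # all interested in the same video
    "arrival_rate_per_s": 1.0,            # Poisson arrivals (0.2 and 10.0 also tested)
    "uplink_mbit_s": (0.5, 1.0),          # uniform per peer; downlink is 4x the uplink
    "rtt_ms": (100, 300),
    "delta_s": 10,                        # unchoke re-evaluation interval
    "high_priority_s": 10,                # h, expressed in seconds of video
    "mid_priority_multiplier": 4,         # mu
    "video_length_s": 300,                # 5-minute clip
    "video_bitrate_mbit_s": 0.5,
    "chunk_size_kbyte": 16,               # roughly 4 chunks per second
    "initial_seeder_uplink_mbit_s": 2.0,
    "mtu_bytes": 1500,
}
```
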
The actual frame loss depends on the video codec and the dependency between encoded frames. Because G2G is codec-agnostic, it does not know the frame boundaries in the video stream, and thus we cannot use frame loss as a metric. In all experiments, we will compare the performance of G2G and a BiToS system. We will present the results of a representative sample run to show typical behaviour. We will use the same arrival patterns, neighbour sets and peer capabilities when comparing the performance of G2G and BiToS.

4.2 Default Behaviour
In the first experiment, peers depart only when their playback is finished, and there are no free-riders. We do three runs, letting peers arrive at an average rate of 0.2/s, 1.0/s, and 10.0/s, respectively, and in each case we compare the performance of G2G to that of BiToS when using the same arrival pattern and network configuration. In Figure 2, the number of playing peers in the system is shown as well as the average percentage of chunk loss. In the left and middle graphs, peers start departing before all of them have arrived, resulting in a more or less stable number of peers for a certain period that ends when all peers have arrived. In the right graph, all peers have arrived within approximately 50 seconds, after which the number of peers is stable until all of them are done playing. As long as the initial seed is the only seed in the system, peers experience some chunk loss. Because all peers are downloading the video, there is much competition for bandwidth. Once some peers are done downloading the video, they can seed it to others, and after a short period of time no peer experiences any chunk loss at all. In effect, the seeders form a content distribution network aided by the peers which continue to forward chunks to each other.

Figure 2. The average chunk loss and the number of playing peers for peers arriving at 0.2/s, 1.0/s, and 10.0/s, respectively.

Figure 3 shows the distribution of the chunk loss for each arrival rate across the peers, sorted decreasingly. At all three rates, the chunk loss is concentrated on a small number of peers, but much more so for G2G than for BiToS. The graphs for G2G are mostly below those of BiToS, implying that when considering chunk loss, most peers are better off using G2G. A more sophisticated playback policy, such as allowing the video stream to pause for rebuffering, could potentially alleviate the heavy losses that occur for some peers using either algorithm.

Figure 3. The distribution of the chunk loss over the peers for peers arriving at 0.2/s, 1.0/s, and 10.0/s, respectively.


Figure 4. The cumulative distribution of the prebuffering time for peers arriving at 0.2/s, 1.0/s, and 10.0/s, respectively.

Figure 4 shows the cumulative distribution of the required prebuffering time, which increases with the arrival rate. At higher arrival rates, it takes an increasing amount of time before the initial pieces are spread across the P2P network.
4.3 Free-riders

In the second experiment, we add free-riders to the system by having 20% of the peers not upload anything to others. Figure 5 shows the average chunk loss separately for the well-behaving peers and the free-riders. The well-behaving peers are affected by the presence of the free-riders, and experience a higher chunk loss than in the previous experiment. A slight performance degradation is to be expected, as well-behaving peers occasionally upload to free-riders as part of the optimistic unchoking process. Without any means to detect free-riders before serving them data, losing some performance to free-riders is unavoidable. When using G2G, the well-behaving peers lose significantly fewer chunks than with BiToS, and the free-riders suffer higher amounts of chunk loss.

The required prebuffering times for both groups are shown in Figure 6. Both groups require more prebuffering time than in the previous experiment: the well-behaving peers require 33 seconds when using G2G, compared to 14 seconds when free-riders are not present. Because the free-riders have to wait for bandwidth to become available, they either start early and suffer high chunk loss, or they have a long prebuffering time. In the shown run, the free-riders required 89 seconds of prebuffering on average when using G2G.
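The setup of this experiment can be summarised in a few lines: peers arrive at the chosen average rate, and a random 20% of them are marked as free-riders that never upload. The sketch below is only our outline of that setup; the exponential inter-arrival times, the class and parameter names, and the peer count are illustrative assumptions rather than details taken from the simulator.

import random
from dataclasses import dataclass

@dataclass
class Peer:
    arrival_time: float
    is_freerider: bool  # a free-rider never uploads chunks to others

def generate_peers(n, arrival_rate, freerider_fraction=0.2, seed=1):
    rng = random.Random(seed)
    peers, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)  # exponential inter-arrival times (assumed)
        peers.append(Peer(t, rng.random() < freerider_fraction))
    return peers

# Example: an illustrative swarm with arrivals at 1.0/s on average.
swarm = generate_peers(n=500, arrival_rate=1.0)
print(sum(p.is_freerider for p in swarm), "free-riders out of", len(swarm))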
5. RELATED WORK

In P2Cast,9 P2VoD,10 and OBN,11 peers are grouped according to similar arrival times. The groups forward the stream within the group or between groups, and turn to the source if no eligible supplier can be found. In all three algorithms, a peer can decide how many children it will adopt, which makes the algorithms vulnerable to free-riding.


Our G2G algorithm borrows the idea of bartering for chunks of a file from BitTorrent,3 a very popular P2P protocol for off-line downloading. In BitTorrent, the content is divided into equally sized chunks which peers exchange on a tit-for-tat basis. In deployed BitTorrent networks, the observed amount of free-riding is low,12 although Locher et al.2 have shown that with a specially designed client, free-riding in BitTorrent is possible without a significant performance impact for the free-riding peer. Of course, the performance of the network as a whole suffers if too many peers free-ride. The BitTorrent protocol has been adapted for VoD by both BASS13 and BiToS.8 In BASS,13 all peers connect to a streaming server and use the P2P network to help each other in order to shift the burden off the source; BASS is thus a hybrid between a centralised and a distributed VoD solution. The differences between G2G and BiToS are described in Section 3.4. Annapureddy et al.14 propose a comprehensive VoD protocol in which the video stream is divided into segments, which are further divided into blocks (chunks). Peers download segments sequentially from each other and spend a small amount of additional bandwidth prefetching the next segment. Topology management is introduced to connect peers which are downloading the same segment, and network coding is used within each segment to improve resilience. However, their algorithm assumes peers are honest, and free-riding is therefore not considered.

Using indirect information as in G2G to judge other peers is not new. EigenTrust15 defines a central authority to keep track of all trust relationships in the network. Lian et al.16 generalise the use of indirect information to combat collusion in tit-for-tat systems.
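To make the contrast with plain tit-for-tat concrete, the sketch below ranks the peers a node uploads to by how many chunks they are reported, second-hand, to have forwarded onward, and keeps one optimistic slot for unproven peers. This is only our illustration of the indirect-information idea; G2G's actual ranking and choking rules are those given in Section 3, and every name below is invented for the example.

import random

def select_unchoked(candidates, forwarding_reports, slots=3):
    # forwarding_reports[p]: number of chunks peer p is reported (by the peers
    # it uploads to) to have forwarded onward. All names here are illustrative.
    ranked = sorted(candidates, key=lambda p: forwarding_reports.get(p, 0), reverse=True)
    unchoked = ranked[:slots]
    # One optimistic slot lets unproven peers (and, unavoidably, some
    # free-riders) receive data and prove themselves.
    rest = ranked[slots:]
    if rest:
        unchoked.append(random.choice(rest))
    return unchoked

# Example: 'a' and 'b' have forwarded the most and are unchoked; one of the
# remaining peers gets the optimistic slot.
reports = {"a": 120, "b": 95, "c": 0, "d": 40, "e": 3}
print(select_unchoked(list(reports), reports, slots=2))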
As is common in streaming video players, G2G buffers the beginning of the video clip (the prefix) before playback is started. Sen et al.17 propose a scheme to let proxy servers do this prefix caching for a centralised VoD service, thus smoothing the path from server to client.

6. CONCLUSIONS AND FUTURE WORK

We have presented Give-to-Get (G2G), a P2P VoD algorithm which discourages free-riding and introduces a novel chunk-picking policy. We have evaluated the performance of Give-to-Get by conducting tests under various conditions and by comparing its performance to that of BiToS.8 If free-riders are present, they suffer heavy performance penalties as long as upload bandwidth in the system is scarce. Once enough peers have downloaded the full video, there is enough upload bandwidth to sustain the free-riders without compromising the well-behaving peers. We plan to enhance G2G by implementing seek and fast-forward functionality, by performing a sensitivity analysis on the parameters of G2G, and by adding a trust system to enhance the quality of the feedback from peers in G2G and to detect collusion attacks. In addition, we plan to incorporate G2G into Tribler,18 our social-based P2P system, to allow real-world measurements of G2G's performance.

REFERENCES

1. E. Adar and B. Huberman, "Free Riding on Gnutella," First Monday 5(10), 2000.
2. T. Locher, P. Moor, S. Schmid, and R. Wattenhofer, "Free Riding in BitTorrent is Cheap," in HotNets-V, 2006.
3. B. Cohen, "BitTorrent," http://www.bittorrent.com/.
4. I. Keidar, R. Melamed, and A. Orda, "EquiCast: Scalable Multicast with Selfish Users," in PODC 2006, pp. 63–71.
5. H. Li, A. Clement, E. Wong, J. Napper, I. Roy, L. Alvisi, and M. Dahlin, "BAR Gossip," in OSDI '06, pp. 191–206.
6. J. Mol, D. Epema, and H. Sips, "The Orchard Algorithm: P2P Multicasting without Free Riding," in P2P2006, pp. 275–282.
7. J. Douceur, "The Sybil Attack," in IPTPS '02.
8. A. Vlavianos, M. Iliofotou, and M. Faloutsos, "BiToS: Enhancing BitTorrent for Supporting Streaming Applications," in IEEE Global Internet Symposium, 2006.
9. Y. Guo, K. Suh, J. Kurose, and D. Towsley, "P2Cast: P2P Patching Scheme for VoD Services," in WWW2003, pp. 301–309.
10. T. Do, K. Hua, and M. Tantaoui, "P2VoD: Providing Fault Tolerant Video-on-Demand Streaming in Peer-to-Peer Environment," in Proc. of the IEEE Intl. Conf. on Communications, 3, pp. 1467–1472, 2004.
11. C. Liao, W. Sun, C. King, and H. Hsiao, "OBN: Peering Finding Suppliers in P2P On-demand Streaming Systems," in ICPADS, 1, pp. 8–15, 2006.
12. N. Andrade, M. Mowbray, A. Lima, G. Wagner, and M. Ripeanu, "Influences on Cooperation in BitTorrent Communities," in Proc. ACM SIGCOMM, pp. 111–115, 2005.
13. C. Dana, D. Li, D. Harrison, and C.-N. Chuah, "BASS: BitTorrent Assisted Streaming System for Video-on-Demand," in MMSP 2005, pp. 1–4.
14. S. Annapureddy, S. Guha, and C. Gkantsidis, "Is High-Quality VoD Feasible using P2P Swarming?," in WWW2007, pp. 903–911.
15. P. Ganesan and M. Seshadri, "The EigenTrust Algorithm for Reputation Management in P2P Networks," in WWW2003, pp. 446–457.
16. Q. Lian, Y. Peng, M. Yang, Z. Zhang, Y. Dai, and X. Li, "Robust Incentives via Multi-level Tit-for-tat," in IPTPS '06.
17. S. Sen, J. Rexford, and D. Towsley, "Proxy Prefix Caching for Multimedia Streams," in Proc. of IEEE INFOCOM, 3, pp. 1310–1319, 1999.
18. J. Pouwelse, P. Garbacki, J. Wang, A. Bakker, J. Yang, A. Iosup, D. Epema, M. Reinders, M. van Steen, and H. Sips, "Tribler: A Social-Based Peer-to-Peer System," in Concurrency and Computation: Practice and Experience (to appear).
