Architecture

Census operates in epochs, which are fixed-length time intervals (30 seconds is a typical length). Each epoch has a fixed and consistent membership view which only changes on epoch transitions. Also, each epoch has one member that acts as the leader, multicasting an item informing about membership changes at the end of the epoch. Within each epoch, application data can be multicast according to the group memberships. Multicast trees are built based on a network coordinate system like Vivaldi [9]. The authors present three versions of the algorithm, starting with a simplified single-region setup (scaling to more than 10,000 nodes), then introducing the multi-region system, and finally adding the partial-knowledge extension for particularly large or dynamic systems.

In the first version, there is a single leader processing all membership events. A joining node identifies itself to the leader and provides its network coordinates (for network proximity management) and an optional identity certificate. The leader informs the joining node about the current epoch number and a few members from whom the latter obtains the current membership information. When leaving, a node sends a departure request; failed nodes are detected by others. Nodes report major changes of their network location, enabling relocations for continuous network proximity.

For larger deployments, the system is divided into multiple regions based on network proximity. Each region has a region leader which is responsible for membership activities within the region. Towards the end of an epoch, each region leader sends a report for its region to the global leader, who then creates the item for the next epoch. When a node's network coordinates change, it may move to another region by sending a move request to the region leader. The number of regions is not fixed. Instead, the system starts with one region, and whenever a region's size reaches a certain threshold, it is split. If a region shrinks below a certain threshold, it merges with a neighboring region.

Even with multiple regions, each node still has the full membership view. This results in a high bandwidth requirement for all participating nodes, particularly in very large and/or dynamic (i.e., memberships change frequently) systems. To solve this problem, there is a partial-knowledge deployment option in which participants have a full membership view only for their own region, and only summary information about the other regions. The summary includes the region leader, the region's size and centroid, and some region members.
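To make the single-leader join handshake above concrete, the following Python sketch models a leader that registers a joiner and returns the current epoch number together with a few contact members. All class and field names, as well as the choice of three contacts, are illustrative assumptions rather than details from the paper.

```python
# Illustrative sketch of the single-leader join handshake; names and the
# number of contact members (3) are assumptions, not from the paper.
import random
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    node_id: int
    coords: tuple[float, float]          # Vivaldi-style network coordinates
    certificate: Optional[bytes] = None  # optional identity certificate

@dataclass
class Leader:
    epoch: int = 0
    members: dict[int, Node] = field(default_factory=dict)

    def handle_join(self, joiner: Node) -> tuple[int, list[Node]]:
        # Register the joiner and hand back the current epoch number plus
        # a few existing members from whom it fetches the full view.
        contacts = random.sample(list(self.members.values()),
                                 k=min(3, len(self.members)))
        self.members[joiner.node_id] = joiner
        return self.epoch, contacts

    def handle_departure(self, node_id: int) -> None:
        self.members.pop(node_id, None)

    def end_of_epoch(self) -> None:
        # Here the leader would multicast the item describing all
        # membership changes before the view switches consistently.
        self.epoch += 1
```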
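Similarly, the threshold-based region maintenance and the partial-knowledge summary can be pictured as follows. The concrete thresholds, the split rule (halving across the longer coordinate axis at the centroid), and the merge policy are assumptions for illustration; the paper only states that regions split and merge on size thresholds and lists the contents of a region summary.

```python
# Hypothetical region maintenance; thresholds, split rule, and merge
# policy are assumed for illustration.
from dataclasses import dataclass

SPLIT_AT = 200   # assumed size thresholds, not from the paper
MERGE_AT = 50

@dataclass
class Region:
    members: dict[int, tuple[float, float]]  # node id -> coordinates

    def centroid(self) -> tuple[float, float]:
        xs = [c[0] for c in self.members.values()]
        ys = [c[1] for c in self.members.values()]
        return sum(xs) / len(xs), sum(ys) / len(ys)

@dataclass
class RegionSummary:
    # Per the paper, a summary holds the region leader, the region's
    # size and centroid, and some region members.
    leader_id: int
    size: int
    centroid: tuple[float, float]
    sample_members: list[int]

def maintain(regions: list[Region]) -> list[Region]:
    out: list[Region] = []
    for r in regions:
        if len(r.members) < SPLIT_AT:
            out.append(r)
            continue
        # Split through the centroid across the longer coordinate axis.
        cx, cy = r.centroid()
        xs = [c[0] for c in r.members.values()]
        ys = [c[1] for c in r.members.values()]
        axis, cut = (0, cx) if max(xs) - min(xs) >= max(ys) - min(ys) else (1, cy)
        low = {n: c for n, c in r.members.items() if c[axis] < cut}
        high = {n: c for n, c in r.members.items() if c[axis] >= cut}
        out += [Region(low), Region(high)]
    merged: list[Region] = []
    for r in out:
        # Merge an undersized region with a neighbor (here simply the
        # previous region in the list).
        if len(r.members) < MERGE_AT and merged:
            merged[-1].members.update(r.members)
        else:
            merged.append(r)
    return merged
```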
Multicast

Multicast is based on multiple distribution trees which are built deterministically and on the fly using the consistent membership view. Thus, each node computes where to forward messages to, using a deterministic algorithm. As a consequence, the tree is rebuilt every epoch, independently of the previous one. Multiple trees allow for redundancy using m-of-n erasure-coded fragments, i.e., there are n trees (typically 4 to 16) and a message can be reconstructed if at least m fragments have been received. If more than n − m fragments are missing, a node requests the missing fragments from random nodes of its membership view.

The trees within each region are interior-node-disjoint, i.e., each node is an interior node of at most one tree. This ensures that a failed node cannot break more than one distribution tree. Trees are built recursively based on node locality. In each step, the current sub-region is split through the centroid across the longest edge, choosing one node on each side as a child. Before building the trees, all nodes are colored (round robin), and one tree is built per color, using only nodes with that color as interior nodes. With multiple regions, inter-region trees are built the same way, just containing only one representative of each color in each region.

The bandwidth requirement is minimized by letting only one parent send the membership update for a particular child, and only m parents send a fragment. Latency is optimized using the membership knowledge: a fragment is only sent if it is on one of the m fastest paths to the child.
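The deterministic, per-color tree construction might be sketched as follows. The sketch builds one tree over the nodes of a single color (its interior nodes); in the full scheme, nodes of the other colors would additionally be attached as leaves. The child-selection rule (the node closest to the centroid on each side) is an assumption; the paper only requires one child per side of the centroid split.

```python
# Sketch of the deterministic, locality-based tree construction; the
# child-selection rule is an assumption.
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    node_id: int
    children: list["TreeNode"] = field(default_factory=list)

def color_nodes(node_ids: list[int], n_trees: int) -> dict[int, int]:
    # Round-robin coloring over a deterministically sorted member list,
    # so that every node computes the identical assignment.
    return {nid: i % n_trees for i, nid in enumerate(sorted(node_ids))}

def build_tree(root_id: int, rest: list[int],
               coords: dict[int, tuple[float, float]]) -> TreeNode:
    # rest: remaining same-colored nodes of the current sub-region;
    # nodes of other colors would be attached as leaves afterwards.
    root = TreeNode(root_id)
    if not rest:
        return root
    xs = [coords[n][0] for n in rest]
    ys = [coords[n][1] for n in rest]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # Split through the centroid across the longer coordinate axis.
    axis, cut = (0, cx) if max(xs) - min(xs) >= max(ys) - min(ys) else (1, cy)
    halves = ([n for n in rest if coords[n][axis] < cut],
              [n for n in rest if coords[n][axis] >= cut])
    for half in halves:
        if not half:
            continue
        # One node per side becomes a child; pick the one closest to the
        # centroid (assumed rule) and recurse on the remainder of the half.
        child = min(half, key=lambda n: (coords[n][0] - cx) ** 2
                                        + (coords[n][1] - cy) ** 2)
        root.children.append(
            build_tree(child, [n for n in half if n != child], coords))
    return root
```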
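Finally, the m-of-n redundancy can be demonstrated with a toy erasure code based on polynomial interpolation over a prime field (a Reed-Solomon-style construction). This is purely illustrative: the paper does not specify which erasure code Census uses, only that any m of the n fragments must suffice to reconstruct the message.

```python
# Toy m-of-n erasure code: the message defines a degree-(m-1) polynomial
# over GF(PRIME); fragments are evaluations at x = 1..n, and any m
# fragments determine the polynomial (and thus the message) uniquely.
PRIME = 2**31 - 1  # prime modulus, far larger than any byte value

def lagrange_eval(points: list[tuple[int, int]], x0: int) -> int:
    # Evaluate the unique polynomial through `points` at x0 (mod PRIME).
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

def encode(message: bytes, n: int) -> list[tuple[int, int]]:
    # Systematic encoding: fragments 1..m carry the message bytes
    # themselves, fragments m+1..n are parity evaluations.
    m = len(message)
    points = [(i + 1, b) for i, b in enumerate(message)]
    return points + [(x, lagrange_eval(points, x)) for x in range(m + 1, n + 1)]

def decode(fragments: list[tuple[int, int]], m: int) -> bytes:
    # Any m received fragments suffice; with fewer, a node would request
    # the missing ones from random members of its view (see above).
    assert len(fragments) >= m, "need at least m fragments"
    return bytes(lagrange_eval(fragments[:m], x) for x in range(1, m + 1))

# Example: n = 8 trees, a message spread over m = 4 fragments.
frags = encode(b"ping", n=8)
assert decode(frags[2:6], m=4) == b"ping"   # any 4 of the 8 fragments
```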
