Tech Report - University of Virginia

Fig. 3: Floor Plan. Fig. 4: Validation (initial contention window versus bit rate in Mbps). Fig. 5: Infrastructure (throughput in Mbps and jitter in ms of the proposed and original algorithms, for clients A and B and the aggregate).

In this configuration, the bit rate set consists of 13, 26, 39, 52, 78, 104, 117, and 130 Mbps. In our experiments, the basic rate was 13 Mbps, with two spatial streams enabled over a 20 MHz bandwidth and the short guard interval (SGI) disabled in IEEE 802.11n. In the experiments, α in Formula 1 was set to 100 and the weighted window t_w for temporal fairness was set to 100 ms.

The traffic transmission lasted 20 seconds for each run of the experiment. The iperf manual reports that 10 seconds are enough to measure the network throughput; to eliminate the variation at the startup and completion of the experiment, we trimmed the 5 seconds at the beginning and the 5 seconds at the end of the measurement, which still left 10 seconds of measurement. We repeated each scenario six times, and the final data in the figures were averaged over the multiple runs.

2) Implementation Validation: We validated the correctness of our implementation of Formula 1 by evaluating the initial contention window computed against the bit rate. Figure 4 shows the variation of the contention window with the bit rate: as can be observed in the figure, the initial contention window shrinks as the bit rate increases, and the measured values numerically match the relation between the initial contention window and the bit rate given in Formula 1.

3) Infrastructure Mode: This experiment is designed to evaluate the performance of the proposed Overlapped Contention in infrastructure mode with an access point and two client nodes, as in Figure 3. Client A supported 130 Mbps and client B supported 52 Mbps. The transmissions went from the clients to the access point, which was configured to run the iperf server to receive the UDP traffic from the clients. We measured the jitter and throughput of the UDP packets reported by the iperf server, where jitter is defined as the latency variation. The results are shown in Figure 5.

The left two plots respectively show the throughputs of the proposed algorithm and the original algorithm in CSMA/CA, and the right two plots illustrate the jitter performance of the two algorithms. Two observations are obvious: 1) the proposed opportunistic access algorithm generally outperforms the original one in both throughput and jitter, because the opportunistic access spent much less time on backoff with smaller contention windows than the original access whenever nodes were at bit rates higher than the basic rate; and 2) although clients A and B have identical throughputs under the original algorithm, as expected, the opportunistic access gives them different throughputs based on their channel conditions, and the aggregate network throughput is thereby significantly improved, by more than 50% in the measured case.
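For reference, the jitter value reported by iperf for UDP traffic is a smoothed variation of per-packet transit times (in the sense of RFC 3550), which matches the definition of jitter as latency variation used above. The following is a minimal sketch of that computation, not the code used in the experiments; the timestamp lists in the example are hypothetical.

    def interarrival_jitter(send_times, recv_times):
        """Smoothed interarrival jitter (RFC 3550 style), as iperf reports for UDP flows."""
        jitter = 0.0
        prev_transit = None
        for sent, received in zip(send_times, recv_times):
            transit = received - sent                  # one-way transit time of this packet
            if prev_transit is not None:
                deviation = abs(transit - prev_transit)
                jitter += (deviation - jitter) / 16.0  # exponential smoothing per RFC 3550
            prev_transit = transit
        return jitter * 1000.0                         # report in milliseconds, as in Fig. 5

    # Hypothetical example: packets sent every 1 ms, received with varying delay.
    sent = [i * 0.001 for i in range(5)]
    delays = [0.002, 0.004, 0.003, 0.005, 0.003]
    received = [s + d for s, d in zip(sent, delays)]
    print(interarrival_jitter(sent, received))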
4) Ad-hoc Mode: We also conducted a performance evaluation on an ad-hoc topology, as shown by the placement of the circles in Figure 3. In our experiments, all nodes used the default routing protocol implemented in the mac80211 module of the Linux kernel; mac80211 is the wireless management layer in the Linux kernel. Two traffic flows went from C to D and from B to E. C and D were placed so that they could not hear each other, so their flow had to be routed through A; the hops from C to D had a bit rate of 52 Mbps, and the hop from B to E was at 130 Mbps. As a result, the experimental topology was a two-hop mesh network. In the experiments, the iperf flows started at C and B at the same time to transmit UDP traffic to D and E, respectively. As in the infrastructure topology, the jitter and throughput were measured.

Figure 6 shows the measurement results. The left part of the figure illustrates the aggregate network throughputs of the proposed opportunistic access and the original algorithm in CSMA/CA; it shows about a 30% throughput improvement in this specific case. The right two plots depict the jitters of the two flows. As we can observe, when the opportunistic access was enabled, the jitter of flow C → D only slightly increased, while that of flow B → E was significantly reduced, because the channel condition of B → E was better than that of C → D and B therefore transmitted more frequently.

5) Impact of t_w: As discussed in Section IV-C, the size of the weighted window, t_w, has a significant impact on the network performance. It determines the degree of opportunism: if t_w is large, the contention metric is not updated until after a long period, and a user with good channel conditions can opportunistically access the channel for a long time. This, however, comes at the cost of delaying the users with poor channel conditions and may hurt real-time applications that require small delay and jitter. We numerically evaluated the impact of t_w on the throughput and jitter in the infrastructure
