Opportunistic Random Access with Temporal Fairness in Wireless Networks

Chong Tang*, Jagadeesh Balasubramani†, Lixing Song*, Gordon Pettey*, Shaoen Wu*, Saâd Biaz†
*School of Computing, The University of Southern Mississippi
{chong.tang, lixing.song, gordon.pettey}@eagles.usm.edu, shaoen.wu@usm.edu
†Computer Science and Software Engineering Department, Auburn University
jzb0047@tigermail.auburn.edu, biazsaa@auburn.edu

Technical Report

Abstract—User diversity arises when wireless users experience diverse channel conditions. This paper proposes three opportunistic random access algorithms that favor users with the best channel conditions in channel access. These algorithms operate on the contention mechanism of CSMA/CA and compute the contention window based on channel conditions. In one algorithm, the contention windows of all users share the same lower bound of zero. In another, the contention windows of users with different channel conditions are segmented without overlapping. The third algorithm adopts a normal distribution to select the backoff value in the contention window. These algorithms are also enhanced to provide temporal fairness and avoid starving the users with poor channel conditions. We implemented a Linux-based testbed for a real-world performance evaluation and also developed the algorithms in the NS3 simulator to conduct comprehensive and controllable experiments. Extensive evaluations show that the proposed opportunistic access significantly improves network throughput, delay, and jitter over the current default CSMA/CA method.

I. INTRODUCTION

In wireless networks, channel conditions are determined by many factors such as fading, mobility, shadowing, and location. Because of spatial differences, wireless users often experience different channel conditions, which is referred to as user or spatial diversity in wireless communication. Due to spatial diversity, a wireless user with an excellent channel condition may be able to transmit data at the highest bit rate while another user with a poor link may not be able to transmit any data even at the lowest rate. Channel condition variations also lead to time diversity: a user may have a high bit rate link now, but may have to use a low rate thereafter.

The variations of channel conditions are often considered detrimental in traditional wireless communication because 1) they are unpredictable and 2) each user is treated with no difference. In recent years, opportunistic approaches have been attempted to exploit the inherent randomness of the wireless channel to improve wireless network performance and utilization, including opportunistic rate adaptation [11], transmission [1], scheduling [12]–[14] and routing [15]–[18]. Opportunistic protocols exploit user diversity by granting higher priority to users with good channel conditions and/or time diversity by extending the use of the channel while it is in good condition.

In this paper, we propose and evaluate three opportunistic media access schemes that exploit user diversity in random access wireless networks.
These schemes give the user with the best channel conditions in a random access wireless network the largest probability of accessing the shared channel at a given moment, but they do not starve the users with poor links. In the long run, each user probabilistically obtains a throughput proportional to its channel conditions in terms of bit rate.

The main contributions of this work are:
• three distributed opportunistic random access algorithms;
• the development of a Linux-based testbed by prototyping the algorithms in an open source implementation;
• extensive performance evaluation on the testbed and in the NS3 [25] simulator.

The rest of this paper is organized as follows. The related contention scheme of CSMA/CA is briefly reviewed in Section II. Then, Section III presents the problems that motivate this work. Next, Section IV discusses the three proposed opportunistic access algorithms. Section V presents the implementation of a testbed, experiment settings, and performance evaluations on the testbed and the NS3 simulator. Some observations are discussed in Section VI, followed by the related work in Section VII. Finally, Section VIII concludes this paper.

II. OVERVIEW OF CSMA/CA

Nowadays, most random access networks such as IEEE 802.11 [6] adopt CSMA/CA [4] as the core media access mechanism. The opportunistic algorithms proposed in this work are therefore based on this mechanism. CSMA/CA is a contention-based MAC protocol. In a typical CSMA/CA network, regardless of ad-hoc or infrastructure mode, all nodes that have data to transmit on the shared wireless link must first undergo a contention procedure. Only the node that wins the contention can transmit, while all others freeze their contention procedure until the winner completes its transmission. The contention is regulated by a Binary Exponential Backoff process. Each node maintains a contention window that has a lower bound of "0" and an upper bound of CW that starts with an initial value of CW_min and may exponentially increase up to a maximum of CW_max. A node that is ready to transmit randomly selects a backoff value CW_bf from (0, CW] using a uniform distribution. The node keeps sensing the channel. If the channel is busy, the backoff value is frozen until the channel becomes idle; otherwise, it is decremented by one at every (idle) time slot. When the backoff value reaches "0", the node starts its transmission. If the transmission fails, the current contention window upper bound CW is doubled unless it has reached the maximum CW_max. Then another backoff procedure is repeated with the updated contention window.
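For concreteness, the contention procedure just described can be sketched in a few lines. This is only an illustrative sketch: slot timing, inter-frame spaces, ACKs, and retry limits are omitted; channel_idle and transmit are hypothetical callbacks rather than part of any real driver API; and the reset to CW_min on success follows standard CSMA/CA practice.

```python
import random

CW_MIN, CW_MAX = 15, 1023               # assumed 802.11-style window bounds

def backoff_cycle(channel_idle, transmit, cw=CW_MIN):
    """One CSMA/CA contention cycle (simplified sketch)."""
    backoff = random.randint(1, cw)     # uniform pick from (0, CW]
    while backoff > 0:
        if channel_idle():              # count down only in idle slots;
            backoff -= 1                # the counter freezes otherwise
    if transmit():                      # backoff reached 0: transmit
        return CW_MIN                   # success: window returns to CW_min
    return min(2 * cw, CW_MAX)          # failure: double CW and contend again
```

The value returned is the contention window to use for the next cycle, mirroring the binary exponential growth described above.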


III. PROBLEM STATEMENT AND MOTIVATION

The motivation of this work is to exploit user diversity in random access wireless networks. This motivation stems from a few observed problems described below.

A. Impact of Access on Network Performance

The access mechanism in a wireless network with user diversity has a significant impact on the network performance and channel utilization. Let us examine a network with a base station and two client nodes, A and B. Suppose A has poor channel conditions supporting a bit rate of R_l and B has good channel conditions supporting a bit rate of R_h. In one extreme case where only A accesses the channel, the network throughput is A's bit rate R_l, assuming no packet failure. In the other extreme case where only B uses the channel, the network throughput is B's bit rate R_h. Otherwise, if A and B share the channel, the network throughput will be some value between R_l and R_h. Namely, R_h and R_l are respectively the upper and lower bounds of the network throughput. Therefore, to improve the network throughput and channel utilization, B should be favored in accessing the channel, which is referred to as opportunistic access.

B. Equal Probability Access in CSMA/CA

From the brief review of CSMA/CA in Section II, regardless of the channel conditions, all nodes probabilistically have equal opportunities to access the channel because they uniformly select the backoff value from the same initial contention windows. In a two-node network with CSMA/CA, a node A with a poor channel condition may beat a node B with a good channel condition because both nodes have equal probability in channel access. However, intuitively and from the discussion in the last section (Section III-A), it is more beneficial to allow B to transmit under these conditions for two reasons. First, B would use the wireless channel more efficiently because it takes less time to transmit the same amount of data at a higher bit rate than A. This opportunism can lead to higher utilization, efficiency, and throughput for the overall network. Second, because of the inherent temporal variations of the wireless channel, B's presently good channel conditions may not remain as good by the time it wins the channel later.
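To make the two-node example concrete, consider what each access policy implies for aggregate throughput. Equal-probability access gives both nodes roughly the same packet count, so the aggregate approaches the harmonic mean of the two rates, while an access scheme that equalizes airtime approaches the arithmetic mean; both values fall between R_l and R_h as stated above. The small calculation below uses illustrative rates only (13 and 130 Mbps, two rates from the testbed rate set used later) and ignores MAC/PHY overhead and collisions.

```python
def equal_packet_share(rates):
    """Aggregate throughput when every node delivers the same number of
    equally sized packets (CSMA/CA throughput fairness): the harmonic mean."""
    return len(rates) / sum(1.0 / r for r in rates)

def equal_airtime_share(rates):
    """Aggregate throughput when every node gets the same share of channel
    time (temporal fairness): the arithmetic mean."""
    return sum(rates) / len(rates)

rates = [13.0, 130.0]                  # Mbps, illustrative values only
print(equal_packet_share(rates))       # ~23.6 Mbps: dragged toward the slow node
print(equal_airtime_share(rates))      # 71.5 Mbps: midway between R_l and R_h
```

The gap between the two numbers is what opportunistic access tries to recover by letting the fast node transmit more often.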
C. Opportunistic Transmission vs. Opportunistic Access

Unlike opportunistic access, which exploits user diversity in wireless networks, opportunistic transmission takes advantage of time diversity due to the temporally dynamic characteristics of the wireless channel. In opportunistic transmission, after a node wins the contention, it tries to transmit aggressively while its channel remains good because the condition may degrade later. In CSMA/CA, a node transmits only one frame when it wins the channel, while in opportunistic transmission schemes such as OAR [1] and MOAR [3], more than one frame may be transmitted when a node wins access. The number of frames to be transmitted is determined by the channel conditions. In OAR and MOAR, the number of frames transmitted after a node wins the contention is calculated as the ratio of the current bit rate over the basic rate. For example, if the current bit rate of a node is 11 Mbps and the basic rate is 2 Mbps, then the ratio is ⌊11/2⌋ = 5, so the node will transmit five packets before the channel is released for contention.

Each communication cycle in random access wireless networks can be considered as two phases: access (contention) followed by transmission. From the above discussion, opportunistic transmission focuses on the transmission phase but not the access phase. Although opportunistic transmission improves the network performance by exploiting time diversity, it does not guarantee that the node with the best channel conditions has the best chance to win the channel. However, the node with the best channel conditions deserves the chance to use the channel because its channel may degrade later. To improve the utilization of scarce wireless resources, we are motivated to design opportunistic access algorithms that probabilistically grant the channel to the node that has the best channel condition, namely the node that is most likely to generate the largest instantaneous network throughput.

With the observations discussed above, to improve network efficiency and channel utilization, we are motivated to design distributed opportunistic access algorithms that probabilistically favor the users with the best channel conditions in winning the channel contention.

IV. DESIGN OF OPPORTUNISTIC RANDOM ACCESS

To achieve opportunistic random access, we propose three algorithms based on the contention mechanism of CSMA/CA. Two of these algorithms calculate the contention window for each node based on its instantaneous channel conditions. The third algorithm selects a backoff value from the contention window with a normal distribution, rather than a uniform distribution as in CSMA/CA. In addition, opportunistic access inherently tends to starve nodes with poor channel conditions. This section elaborates on these algorithms and on how to address the starvation problem with temporal fairness.

A. Contention Window Based Opportunistic Access

Two algorithms achieve opportunistic random access by determining contention windows based on channel conditions.


1) Overlapped Contention: In the first approach, the contention windows of all nodes share the same lower bound of "0" as in CSMA/CA, but have different initial upper bounds that are determined by the channel conditions in terms of achievable bit rates. Therefore, the contention windows of all nodes overlap in range, as shown on the left of Figure 1. The initial upper bound CW is inversely proportional to the ratio of the current achievable bit rate over the basic bit rate and is computed as in Formula 1 whenever a node is ready to contend for the channel for a new transmission:

CW = ⌈α × (R_b / R_i) × CW_base⌉    (1)

where R_i refers to the current achievable bit rate of a particular node i, R_b denotes the basic rate in a bit rate set, and CW_base is a constant base value, e.g. 15 in IEEE 802.11n. Note that R_b/R_i may be very small; for example, in IEEE 802.11n, 6.5 Mbps / 600 Mbps is almost 0.01. Therefore, a coefficient α is introduced to make sure that the computed window for the highest bit rate is no less than a certain small value so that access remains random. From this formula, intuitively, a high bit rate, namely good channel conditions, leads to a small CW and thereby a larger probability of winning the channel contention with the uniform selection of a backoff value from the contention window. The computed CW can then be used by the Binary Exponential Backoff procedure of CSMA/CA to realize opportunistic access.

[Fig. 1: Illustration of Contention Window]

Note that, in the Overlapped Contention, a node with a low bit rate still has a chance to beat another node with a high bit rate: the lower rate node may still select a smaller backoff value because their contention windows have the same lower bound of "0".

2) Segmented Contention: To strictly grant a higher channel access priority to the users with better channel conditions, another algorithm separates the contention windows of nodes at different bit rates, as illustrated on the right of Figure 1. In this approach, the initial contention window size is still computed as in Formula 1. However, unlike the Overlapped Contention, which keeps the same lower bound of "0" for all contention windows, this algorithm differentiates the lower bounds of the contention windows for different channel conditions. A higher bit rate results in an upper bound of the contention window smaller than the lower bound of a node at a lower bit rate. For example, on the right of Figure 1, W_max^i and W_max^{i+1} respectively denote the computed initial upper bounds of the contention windows of bit rates R_i and R_{i+1} according to Formula 1. Then, the lower bound of the contention window CW_i of bit rate R_i is assigned the value one larger than W_max^{i+1}, the upper bound of CW_{i+1}, i.e. the window is [W_max^{i+1} + 1, W_max^i]. This segments the contention of nodes with different channel conditions in that a node at bit rate R_i can never get a backoff value smaller than a node at bit rate R_{i+1}.
This approach can be considered semi-probabilistic in that (1) the access of nodes at the same bit rate is random, since they have the same initial window from which to randomly draw a backoff value, but (2) the access of nodes at different rates is deterministically prioritized, because lower rate nodes can never get a smaller backoff value than higher rate nodes. This approach provides tight opportunism by grouping nodes with similar channel conditions into the same random access team, at the cost of randomness across teams. Note that this approach leads to a significant problem: starvation of nodes with poor conditions. This problem will be addressed in Section IV-C.

B. Normal Distribution Based Backoff Selection

In the Overlapped Contention, although nodes with different channel conditions obtain different initial contention windows, each node still uses a uniform distribution to select the backoff value from its contention window. The third proposed opportunistic access approach consists of using a normal, rather than a uniform, distribution to select the backoff value once the contention window is determined as in the Overlapped Contention. With the expectation of the normal distribution set to a proper value within the contention window and a proper standard deviation, a node with a higher bit rate has a significantly greater probability of obtaining a smaller backoff value than a lower bit rate node. Specifically, the expectation of the normal distribution of a node at rate R is computed as:

E = ⌈CW / 2⌉    (2)

where CW is computed as in Formula 1. Let us examine an example where a network has two nodes, A at rate R and B at rate R/2. Then the expectations of the normal distributions at A and B are respectively set to N/4 and N/2 (where N denotes B's contention window size, twice that of A) to select their backoff values, as shown in Figure 2. As a result, in the long run, A will select a smaller backoff value than B because the selected values are most likely near the expectation of the normal distribution.

[Fig. 2: Normal Distribution Backoff]
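The three selection rules above can be summarized in a short sketch. This is a minimal illustration under stated assumptions (CW_base = 15, α = 1.7 as in the simulations, a known rate set, and a clamped Gaussian whose standard deviation is left as a tuning parameter); the helper names are ours, not the authors' implementation.

```python
import math
import random

ALPHA = 1.7       # coefficient alpha in Formula 1 (simulation setting)
CW_BASE = 15      # assumed base window, e.g. 15 in IEEE 802.11n

def initial_cw(rate, basic_rate):
    """Formula 1: the window shrinks as the achievable bit rate grows."""
    return math.ceil(ALPHA * (basic_rate / rate) * CW_BASE)

def overlapped_backoff(rate, basic_rate):
    """Overlapped Contention: every window starts at 0; uniform selection."""
    return random.randint(0, initial_cw(rate, basic_rate))

def segmented_backoff(rate, basic_rate, rate_set):
    """Segmented Contention: the window of rate R_i begins one above the
    upper bound computed for the next higher rate R_{i+1}."""
    higher = [r for r in rate_set if r > rate]
    low = initial_cw(min(higher), basic_rate) + 1 if higher else 0
    return random.randint(low, max(low, initial_cw(rate, basic_rate)))

def normal_backoff(rate, basic_rate, stddev=2.0):
    """Normal selection: mean E = ceil(CW/2) (Formula 2); the sample is
    clamped into [0, CW], a detail the scheme leaves open."""
    cw = initial_cw(rate, basic_rate)
    value = int(round(random.gauss(math.ceil(cw / 2), stddev)))
    return min(max(value, 0), cw)
```

As a worked example with the simulation settings and the basic rate taken as 6 Mbps, initial_cw(54, 6) = ⌈1.7 × 6/54 × 15⌉ = 3 while initial_cw(6, 6) = ⌈25.5⌉ = 26, so a 54 Mbps node draws its backoff from a far smaller window than a 6 Mbps node.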


C. Temporal Fairness to Avoid Starvation

The equal probability access of CSMA/CA, regardless of channel conditions, leads to an anomaly in which all nodes obtain identical throughputs in the long run [8], which is called throughput fairness. Clearly, the nodes at lower bit rates hurt the throughputs of the nodes at higher bit rates as well as the overall throughput of the network. This fairness is not "fair" in the temporal use of the channel among nodes: with identical throughputs, the node at the lowest bit rate in a network uses the channel for the longest time. On the other hand, as discussed in Section III, although opportunistic access can improve the network throughput by always favoring the users with the best channel conditions, it may starve the users with poor channel conditions. A solution to both problems, identical throughputs and starvation, is to achieve temporal fairness among nodes [19], defined as each node having approximately the same amount of time in using the channel.

To achieve temporal fairness, we propose to use a bit rate normalized average throughput as the metric for computing the initial contention window. Each node tracks its average throughput T, updated over an exponentially weighted window t_w. Suppose node K is the transmitter at a certain moment; T is updated at each node k that has packets ready for transmission in each time slot with a low-pass filter as:

T_k[m+1] = (1 − 1/t_w) × T_k[m] + (1/t_w) × R_k[m]    if k = K
T_k[m+1] = (1 − 1/t_w) × T_k[m]                        if k ≠ K

The bit rate normalized average throughput for node k having bit rate R_k in the m-th window is defined as:

T_normalized[k, m] = T_k[m] / R_k    (3)

This value is used to compute the initial contention window. As a result, Formula 1 is updated as:

CW = α × T_normalized[k, m] × CW_base    (4)

The contention with Formulas 3 and 4 maintains two important features: temporal fairness and opportunism. Temporal fairness is achieved because the bit rate normalized average throughput can be interpreted as the temporal quota of a node in a transmission period. This is clear if we rewrite the definition of T_normalized[k, m] as (t_c × T_k[m] / R_k) / t_c over a period of length t_c: t_c × T_k[m] is the average amount of transmitted data in bits, so t_c × T_k[m] / R_k is the transmission time, and (t_c × T_k[m] / R_k) / t_c is therefore the percentage of time that node k obtains for transmission in a period of t_c. The opportunism in Formula 4 is driven by the bit rate. If a node has a high bit rate, its T_normalized[k, m] tends to be small, and according to Formula 4 a small T_normalized[k, m] leads to a small contention window CW and a better chance to win the channel. If a node uses the channel for too long, it ends up with a large average throughput T_k[m] and thereby a large T_normalized[k, m], which enlarges its contention window and decreases its chance of winning the channel.

Note that the size of the weighted window, t_w, is associated with the latency requirements of applications. If t_w is large, it allows the node with the best channel condition to use the channel for a long duration, but may hurt other nodes running applications that require low latency. If it is small, the channel is frequently switched among nodes of different bit rates and the overall performance degrades.
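The update rule and Formula 4 can be condensed into a few lines; the sketch below is only a summary of the definitions above (per-slot bookkeeping, queueing, and the class weight discussed next are omitted, and the rounding and minimum-window guard are our assumptions).

```python
import math

ALPHA = 1.7      # coefficient from Formula 4 (simulation setting)
CW_BASE = 15     # assumed base contention window
T_W = 50.0       # weighted window t_w, in update steps (50 ms in simulations)

def update_avg_throughput(T, k, transmitter, rate_k):
    """Low-pass filter of Section IV-C: only the node that transmitted in
    this slot adds its bit rate into its average throughput."""
    if k == transmitter:
        T[k] = (1 - 1 / T_W) * T[k] + (1 / T_W) * rate_k
    else:
        T[k] = (1 - 1 / T_W) * T[k]

def contention_window(T_k, rate_k):
    """Formulas 3 and 4: normalize the average throughput by the bit rate
    (the node's temporal quota) and scale the base window by it."""
    t_normalized = T_k / rate_k                 # Formula 3
    cw = ALPHA * t_normalized * CW_BASE         # Formula 4
    return max(1, math.ceil(cw))                # guard: keep at least one slot
```

A node that has recently used a lot of airtime sees its T_k grow, so its window expands and its access probability drops, which is exactly the temporal-fairness behavior described above.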
Another concern is the support of QoS. If multiple classes of applications are involved, each class has different requirements, especially regarding latency. Then a weight parameter φ_c for each class of application is needed in updating the average throughput: T_k[m+1] = (1 − 1/t_w) × T_k[m] + (1/t_w) × φ_c × R_k[m].

V. PERFORMANCE EVALUATION

We extensively evaluated the performance of the proposed opportunistic access algorithms with both prototyping and simulation.

A. Evaluation with Prototyping

To evaluate the opportunistic access performance in the real world, we implemented a Linux-based testbed. The implementation includes only the first algorithm, Overlapped Contention, because the other two require modifying the lower bound of the backoff window, which is "sealed" at 0 in closed firmware to which we do not have access.

1) Implementation Platform and Experiment Settings: Our prototype testbed consists of six embedded computing nodes based on the Alix 3dc SBC, which has a 512 MHz AMD LX800 CPU, 256 MB RAM, and 8 GB SD card storage, and runs Linux kernel version 3.3.1. Each node is equipped with a miniPCI IEEE 802.11a/b/g/n WiFi adapter, a MikroTik R52Hn based on the Atheros AR9220 chipset, which is well supported by the open source Linux driver ath9k [10]. Each interface is connected to a pair of external high-gain antennas. The implementation is based on the Linux wireless modules [9].

The testbed was deployed on the second floor of the Bobby Chain Science Technology Building on campus, with the floor plan shown in Figure 3. The building consists of laboratories, offices and classrooms. Both infrastructure (WLAN) and infrastructureless (ad-hoc) modes were evaluated. In infrastructure mode, one of the nodes was configured to work as the access point, represented by the black triangle in Figure 3, and two client nodes were placed at the locations of circles A and B in the figure. The five hexagons in Figure 3 show the infrastructureless mesh topology. We conducted all our experiments at midnight to minimize external interference such as people walking by, because all classes were dismissed and people were away from work. There are three non-overlapping channels (1, 6, 11) available in IEEE 802.11; we chose channel 1 since channels 6 and 11 were heavily used by surrounding networks. As for the software tools, we used iperf [26] to generate and collect UDP traffic from the clients to the access point. We did not choose TCP traffic because we wanted to focus on evaluating the MAC access performance.


The network was set to IEEE 802.11n mode with two transmission streams and 20 MHz bandwidth. With this configuration, the bit rate set consists of 13, 26, 39, 52, 78, 104, 117, and 130 Mbps. In our experiments, the basic rate was 13 Mbps with two streams enabled over 20 MHz bandwidth and the short guard interval (SGI) disabled in IEEE 802.11n. In the experiments, α in Formula 1 was set to 100 and the weighted window t_w for temporal fairness was set to 100 ms.

[Fig. 3: Floor Plan]  [Fig. 4: Validation]  [Fig. 5: Infrastructure]

The traffic transmission lasted 20 seconds for each experiment run. The iperf manual reports that 10 seconds are enough to measure the network throughput. To eliminate the variation at the startup and completion of the experiment, we trimmed the 5 seconds at the beginning and the 5 seconds at the end of the measurement, leaving 10 seconds of measurement. We repeated each scenario six times and the final data in the figures are averaged over the multiple runs.

2) Implementation Validation: We validated the correctness of our implementation of Formula 1 by evaluating the computed initial contention window against the bit rate. Figure 4 shows the variation of the contention window with the bit rate. As can be observed in this figure, the initial contention window shrinks as the bit rate increases. The figure shows that the implementation numerically matches the relation between the initial contention window and the bit rate given by Formula 1.

3) Infrastructure Mode: This experiment evaluates the performance of the proposed Overlapped Contention in infrastructure mode with an access point and two client nodes, as in Figure 3. Node A supported 130 Mbps and node B supported 52 Mbps. The transmissions were from the clients to the access point. In this experiment, the access point was configured to run the iperf server to receive the UDP traffic from the clients, and we measured the jitter and throughput of UDP packets reported by the iperf server. Jitter is defined as the latency variation. The results are shown in Figure 5. The left two plots respectively show the throughputs of the proposed algorithm and of the original CSMA/CA algorithm; the right two plots illustrate the jitter performance of these two algorithms. Two observations stand out: 1) the proposed opportunistic access algorithm generally outperforms the original one in both throughput and jitter, because opportunistic access spends much less time on backoff with smaller contention windows than the original access when nodes are at bit rates higher than the basic rate; 2) although clients A and B have identical throughputs with the original algorithm, as expected, opportunistic access gives them different throughputs based on their channel conditions, and the aggregate network throughput is thereby significantly improved, by more than 50% in the measured case.

4) Ad-hoc Mode: We also conducted a performance evaluation on an ad-hoc topology as shown by the node placement in Figure 3.
In our experiments, all nodes used the default routing protocol implemented in the mac80211 module of the Linux kernel; mac80211 is the wireless management layer in the Linux kernel. Two traffic flows were set up, from C to D and from B to E. C and D were placed so that they could not hear each other, so their flow had to be relayed by A. The hops from C to D had a bit rate of 52 Mbps and the hop from B to E ran at 130 Mbps. As a result, the experimental topology formed a two-hop mesh network. In the experiments, iperf flows started at B and C at the same time to transmit UDP traffic to E and D, respectively. As in the infrastructure topology, jitter and throughput were measured.

Figure 6 shows the measurement results. The left part of the figure illustrates the aggregate network throughputs of the proposed opportunistic access and of the original CSMA/CA algorithm; it shows about a 30% throughput improvement in this specific case. The right two plots depict the jitter of the two flows. When opportunistic access was enabled, the jitter of flow C→D increased only slightly, but that of flow B→E was significantly reduced, because the channel condition of B→E was better than that of C→D and B therefore transmitted more frequently.

5) Impact of t_w: As discussed in Section IV-C, the size of the weighted window, t_w, has a significant impact on network performance. It effectively determines the degree of opportunism: if it is large, the contention metric is not updated for a long period and the user with good channel conditions can opportunistically access the channel for a long time, but at the cost of delaying the users with poor conditions, which may hurt real-time applications that require small delay and jitter.


We numerically evaluated the impact of t_w on throughput and jitter in infrastructure mode with two clients: A at 130 Mbps and B at 39 Mbps. t_w took the values of 50, 80, and 100 ms in the evaluation.

Figure 7 shows the impact on throughput. When t_w was doubled from 50 to 100 ms, the throughput of A improved by about 40% while that of B was reduced by only about 15%, and the aggregate network throughput improved by about 20%. This is because, as the opportunism increases with t_w, A got more chances and B got fewer chances to use the channel.

Figure 8 shows the impact on jitter, which is the opposite of the impact on throughput. When t_w was doubled, the jitter of A was reduced from 5 ms to about 3.5 ms while the jitter of B increased from 7.7 ms to 10 ms, again because A got more chances and B got fewer chances to use the channel. However, both jitters remain much lower than what real-time applications require.

B. Evaluation with Simulation

We implemented all three proposed algorithms in the NS3 simulator [25] and comprehensively evaluated their performance with extensive simulations. Our algorithms were compared with the default CSMA/CA without opportunism in each case. The evaluation began with a simple infrastructure topology having one access point and two clients, then studied the impact of hidden terminals on opportunistic access, compared against opportunistic transmission in the presence of mobility, and finally evaluated the performance on a multi-hop mesh network. All experiments were performed with constant bit rate (CBR) UDP traffic at a rate of 10 Mbps for 5 minutes with a packet size of 1 KB. The results are averaged over 50 runs for each case. In the simulations, α in Formula 1 was set to 1.7 and the weighted window t_w was set to 50 ms. In the following performance figures, "Original" represents the default CSMA/CA algorithm, "Overlapped" the Overlapped Contention, "Segmented" the Segmented Contention, and "Normal" the Normal Distribution Based Backoff Selection.

1) Triple-node Infrastructure Topology: We first evaluated the throughput and jitter performance of opportunistic access against the original algorithm in a simple topology: one access point and two client nodes, all within radio range of each other. The client nodes transmitted packets to the access point at different bit rates. One client node was set to 54 Mbps, while the other node varied its bit rate over the IEEE 802.11 rate set consisting of 6, 12, 24, 36, and 48 Mbps.

Throughput Performance: Figure 9 plots the throughput performance as one client changes its bit rate to imitate changing channel conditions. The X-axis represents the different bit rates and the Y-axis represents the throughputs of the opportunistic access algorithms and of the original CSMA/CA. From the figure, opportunistic access improves the throughput by approximately 30-100% over the original algorithm, depending on channel conditions.

Delay Performance: Figure 10 plots the measured average delay at different bit rates. The opportunistic access shows a significant improvement in delay compared to the original access. This is because the high bit rate nodes in opportunistic access spend very little time in contention and transmit packets rapidly.
The lower bit rate nodes experience a comparatively much larger delay because their backoff counters may terminate at the same time and collide, and the contention window size is doubled every time a collision is encountered. Yet the overall network delay improves significantly over the default CSMA/CA method. To show this more clearly, we further studied the delay performance as the traffic rate increases (Figure 11). As expected, the delay increases with the CBR traffic rate because more packets are transmitted. Although the variation in individual packet delay between high bit rate nodes and low bit rate nodes is large, the overall network performance of opportunistic access is improved.

Jitter Performance: Figure 12 shows the measured jitter at different bit rates. Jitter is the delay variation between two consecutive successful packet receptions, and the result plotted is the average delay variation over all received packets. Surprisingly, the opportunistic access outperforms the original one in jitter as well. This is because, with opportunistic access, the high bit rate nodes obtain much smaller initial contention windows than in the default CSMA/CA and the selected backoff in the contention is thereby significantly reduced. As a result, although lower bit rate nodes experience larger jitter than higher bit rate nodes, the overall jitter across the network is improved because of the shortened time spent on contention with opportunistic access.

[Fig. 6: Ad-hoc]  [Fig. 7: Impact of t_w on Throughput]  [Fig. 8: Impact of t_w on Jitter]


[Fig. 9: Throughput]  [Fig. 10: Average Delay]  [Fig. 11: Traffic Rate vs. Delay]
[Fig. 12: Average Jitter]  [Fig. 13: Average Backoff]  [Fig. 14: Hidden Terminal Problem]

Average Backoff Time: To further investigate how opportunistic access improves throughput and jitter, we collected the backoff time, in time slots, of each transmission. Figure 13 shows the average backoff time per successful transmission for each access algorithm. The average backoff slot time per packet is calculated as the sum of the individual packet backoff time slots over the total number of packets transmitted successfully. From the figure, the opportunistic access algorithms spend much less time in backoff than the original access; at 48 Mbps the original access spends up to 400% more. The overall network performance improves because 1) the equal probability access of the original algorithm results in an almost constant average backoff, and 2) the opportunistic access algorithms spend less time on backoff, namely contention.

Discussion on Opportunistic Access Algorithms: Among the three proposed opportunistic access schemes, the Segmented Contention in general yields the best performance in Figures 9 through 13. This is because the channel access of nodes at different bit rates is deterministically prioritized when their contention windows are segmented: nodes with lower rates never get a smaller backoff value than ones with higher bit rates. The Overlapped Contention and the Normal Distribution Based Backoff result in almost identical performance, because both have a similar expectation when selecting backoff values for nodes at the same bit rate, and because both keep contention windows starting at "0", so higher bit rate nodes may still occasionally be beaten by nodes at lower bit rates.

2) Impact of the Hidden Terminal Problem: In this case, the network still had one access point and two client nodes, with one at 54 Mbps and the other varying its bit rate, but the clients could not hear each other. RTS/CTS was disabled to fully expose the hidden terminal problem. Figure 14 plots the measured throughputs at different rates. The hidden terminal problem hurts the performance of all algorithms. The figure shows that the overall network throughput is still improved by opportunistic access, although the improvement is reduced to some degree by the hidden terminal problem. To show the effect of hidden terminals more clearly, we further studied the packet loss ratio of the network.
The packet loss ratio is the difference between the total number of transmitted packets and the total number of received packets, divided by the total number of transmitted packets. Figure 16 shows that the packet loss ratio is considerably reduced with opportunistic access, which explains its performance improvement over the original access.

3) With RTS/CTS: Although opportunistic access performs better than the original scheme, the measured throughput gain in the presence of hidden terminals comes mainly from the high bit rate nodes. This is because the high bit rate nodes win the channel more frequently due to their smaller initial contention windows, suppressing the channel access of lower bit rate nodes. Hence the channel should be reserved by a node before its access to provide fairness among the nodes, and we enabled the RTS/CTS reservation mechanism for this purpose.


As expected, opportunistic access outperforms the original one in throughput and yields throughputs proportional to the nodes' bit rates because of the channel reservation mechanism and the long-term temporal fairness provided by our scheme. The packet loss ratio of our scheme is also considerably reduced in the presence of RTS/CTS channel reservation, as shown in Figure 17, which increases the overall network performance of opportunistic access.

4) Comparison with Opportunistic Transmission under Mobility and Auto Rate: We also evaluated the performance in an infrastructure topology with multiple flows and mobility. We tested topologies with a number of nodes varying from 2 to 6, with one flow between each node and the access point. The maximum transmission range of a node was set to 30 m and mobility was enabled such that each node moved randomly within a bounded area of 50 m × 50 m at a speed of 5 m/s. Nodes sometimes move out of range, so packet losses occur more frequently. Because of mobility, the supported bit rate has to be adapted dynamically; the RRAA [20] rate adaptation algorithm was enabled for this purpose. In addition, we implemented the Opportunistic Auto Rate (OAR) protocol as a representative of opportunistic transmission algorithms to compare against opportunistic access in this setting.

Figure 18 shows the throughput performance of the opportunistic access algorithms, the original access and OAR with mobility and RRAA in the case of multiple flows. The X-axis represents the number of flows and the Y-axis represents the throughput of the entire network. From the figure, the opportunistic access (our algorithms) and the opportunistic transmission (OAR) perform similarly when the network has only a couple of nodes, but opportunistic access gradually outperforms opportunistic transmission as the number of mobile nodes increases. This is because of the difference in opportunism between access and transmission. With a growing number of mobile users, user diversity increases as well. OAR does not exploit user diversity: it selects users to transmit uniformly, so it does not favor the user with the best channel condition each time. On the contrary, opportunistic access exploits user diversity. With more users, it is more likely that some node is at a very high bit rate, and because opportunistic access always favors the user with the highest bit rate, the overall network performance improves. In addition, in this environment the mobility and rate adaptation introduce fast channel variations. This may result in transmission failures when OAR opportunistically transmits a burst of frames and the channel degrades before the burst finishes. Because opportunistic access transmits only one frame per contention, its short transmission time is resilient to these fast variations. Moreover, the variations generate new user diversity in the network that opportunistic access can exploit for higher network performance.
The performance figure also shows that the overall throughput increases slowly as the number of flows grows because of the increasing contention among the flows.

[Fig. 15: Hidden Terminal Problem with RTS/CTS]  [Fig. 16: Packet Loss Ratio]  [Fig. 17: Packet Loss Ratio with RTS/CTS]

5) Higher Performance of Opportunistic Transmission over Opportunistic Access: In this case, the network is still configured as an infrastructure topology with multiple flows, but without mobility or auto rate algorithms. The number of flows is increased from 2 to 6 by gradually adding one node at a time. All nodes transmit packets to the access point. Because of the stable channel conditions and lower overhead, the opportunistic transmission protocol OAR transmits multiple frames when it gains access to the channel, whereas opportunistic access relies on one frame transmission per contention and has comparatively more overhead. Although OAR shows higher performance than the opportunistic access schemes, as shown in Figure 19, opportunistic access still performs better than the default CSMA/CA method.

6) Fairness Index Measurement: The Jain fairness index is the most commonly used metric to measure fairness from the individual flow throughputs in wireless networks. Basically, the Jain fairness index rates the individual throughput variation with respect to the number of flows: if resources are allocated equally, the index takes its maximum value of 1, and it decreases as the allocation becomes more unequal. The Jain fairness index is given by

I_J = (Σ_{i=1}^{n} r_i)² / (n × Σ_{i=1}^{n} r_i²)    (5)

where r_i is the resource allocated to user i and n is the total number of users.
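Formula 5 is straightforward to compute; the sketch below simply reproduces it for a list of per-flow throughputs (the numeric values are illustrative only, not measurements).

```python
def jain_fairness(throughputs):
    """Jain fairness index (Formula 5): 1.0 for a perfectly equal
    allocation, approaching 1/n when a single flow takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(r * r for r in throughputs))

print(jain_fairness([5.0, 5.0, 5.0]))   # 1.0   (equal shares)
print(jain_fairness([9.0, 1.0, 1.0]))   # ~0.49 (one flow dominates)
```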


Figure 20 shows that the fairness of the opportunistic access schemes increases as the number of flows increases. This is because opportunistic access reduces the contention time of high bit rate nodes, giving them the opportunity to send more packets, while the lower bit rate nodes keep their regular channel access time. The fairness of opportunistic transmission is also compared with that of opportunistic access, showing that the resources are allocated equally over the long term.

[Fig. 18: Mobility]  [Fig. 19: No Mobility]  [Fig. 20: Fairness]

7) Multi-hop Mesh Topology: In this scenario, 30 nodes were randomly and statically distributed in a 150 m × 150 m area and bit rates (12, 24, 36, 48, 54 Mbps) were assigned to the hops in a uniform distribution. Since the radio range was 30 m, these nodes form a multi-hop mesh network. Source and destination pairs of flows were preselected among the nodes and the number of flows was increased from 1 to 9. The OLSR [27] protocol was used to route the packets over the network.

[Fig. 21: Multi-hop Mesh Topology]

The network throughput performance is plotted in Figure 21. The opportunistic access shows an average throughput improvement of up to 40% over the original access. This is because the original access uses the same initial contention window upper bound regardless of bit rate, whereas opportunistic access obtains a smaller initial contention window upper bound than the original access whenever the bit rate is larger than the basic rate. As a result, opportunistic access spends less overhead time on contention than the original access when the channel condition of a flow is good. As the number of flows grows, the throughputs decrease because of the increased collisions among flows. The Overlapped Contention and the original access boost their throughputs in the case of 9 flows because their uniform selection of backoff values from overlapping contention windows helps dilute the collisions.

VI. DISCUSSION

This section presents some issues with opportunistic access observed during the performance evaluation. We also briefly discuss the tradeoff between opportunistic transmission and opportunistic access.

A. Performance Outage in Chain Topology

During the performance evaluation with simulation, we tested a 5-hop chain network where the sender and receiver were placed at the extremities of the chain with 4 nodes in between. Each node had a buffer of 130 frames. The bit rates of these 5 hops were set in four different patterns as shown in Figure 22. Pattern A has ascending bit rates (12, 24, 36, 48, 54 Mbps) along the transmission path, B has descending bit rates (54, 48, 36, 24, 12 Mbps), C has a hybrid ordering with lower rates first (12, 24, 54, 36, 48 Mbps), and D has a hybrid ordering with high bit rates first (36, 48, 54, 12, 24 Mbps).

[Fig. 22: Four Chain Topologies]

Figure 23 plots the throughput performance of all algorithms for the four bit rate distribution patterns.
From the figure, the opportunistic access is significantly outperformed by the original access in the descending-order case (B) and slightly outperformed in the hybrid order that has high bit rates first (D).


This performance outage occurs because opportunistic access injects more packets into the early hops than the low bit rate nodes on the downstream hops can drain out of the network. Therefore, many packets have to be dropped at the intermediate hops. This problem exists to some degree whenever the bit rate bottleneck is downstream on a path, but opportunistic access or transmission exacerbates it.

[Fig. 23: Throughput in Chain Topologies]  [Fig. 24: Random Ad-hoc Topology]

B. Random Ad-hoc Topology

In this experiment, we evaluated the performance of a random topology of 50 nodes initially placed at random positions within a 150 m × 150 m area. The bit rates (12, 24, 36, 48, 54 Mbps) were assigned to the nodes in a uniform distribution and the radio range was 30 m. Mobility was enabled such that the nodes moved at a speed of 5 m/s in random directions. Because of mobility, the auto rate algorithm (RRAA) was enabled to adapt dynamically to the channel conditions and to reduce losses due to the hidden terminal problem. Source and destination pairs were preselected such that the number of flows increased from 1 to 10, and the OLSR protocol was used to route the packets. The opportunistic access performance degrades considerably under mobility and auto rate, as shown in Figure 24. This is because opportunistic access does not give the OLSR protocol enough time to compute its MPR (Multi-Point Relay) set and routing table: opportunistic access injects packets into the network more frequently due to its shorter contention time. The positions of the nodes also change randomly over a wide area due to mobility, which may require multi-hop routing for packets to reach their destinations. For these two reasons, the OLSR protocol is not able to build its routing table quickly enough, so network congestion and packet losses occur and the performance degrades.

C. Increased Collisions

In the proposed opportunistic access, high bit rate nodes tend to have small contention windows. Although the backoff values of these nodes are selected randomly, the initial contention window of very high bit rate nodes is so small that the probability of these nodes selecting the same backoff value increases, especially when a network has many such nodes. This exacerbates collisions because the backoff counters at these nodes terminate at the same time. Although the binary exponential backoff mechanism can address this problem, network efficiency is degraded as channel resources are wasted by frequent collisions, and network throughput and delay suffer as well. Further effort is required to mitigate this problem.

D. Opportunistic Access and Transmission

Although the performance evaluation showed a case in which opportunistic access outperformed the opportunistic transmission protocol OAR under fast channel variations due to mobility, opportunistic transmission may perform better in some cases. One case is when user diversity is light; then opportunistic access does not differ much from the original access.
Since opportunistic transmission sends multiple frames per contention period, it incurs on average a smaller overhead per frame than opportunistic access. Another possible case is when the channel is stable enough to sustain the complete transmission of multiple frames in opportunistic transmission.

VII. RELATED WORK

Hwang and Cioffi [24] proposed an opportunistic CSMA/CA that also targets user diversity in WLANs. Their algorithm instructs a node to transmit in a specific time slot according to the signal-to-noise ratio (SNR) of its channel. To avoid nodes with the same SNR colliding in the same time slot, a random number is introduced when the time slot is determined. Zhai et al. [11], [21] proposed an opportunistic media access control protocol that focuses on the opportunistic scheduling of traffic from a node to its neighbors based on their channel conditions: the traffic to the neighbor with the best channel conditions is scheduled first for transmission.


Opportunistic transmission based on multiple rates was demonstrated to yield a significant improvement in network performance in IEEE 802.11 networks [1]–[3], where a node opportunistically transmits multiple frames, instead of the traditional single frame, if its bit rate is high. These algorithms rely on rate adaptation schemes such as RBAR [5] to estimate the channel conditions in terms of bit rate; a sender then calculates the number of frames to transmit based on the adapted bit rate so as to maintain temporal fairness among nodes. Opportunistic transmission occurs after the channel access contention, which is still conducted with the traditional non-opportunistic approach of equal probability access. It exploits the time diversity of a node, not the user diversity of a network.

IEEE 802.11e [7] categorizes network traffic into various types and specifies a different contention window size for each traffic type, such that the highest priority traffic, e.g. real-time video/audio, is assigned the smallest window. In this way, higher priority traffic has a chance to be delivered before lower priority types. Vaidya et al. [22], [23] proposed dynamically varying the contention window to achieve distributed proportional scheduling in IEEE 802.11 WLANs. Based on the weight assigned to each traffic flow, the contention window is calculated such that a more heavily weighted flow has a smaller contention window and therefore more chances to use the channel and deliver more data than less weighted flows.

VIII. CONCLUSION

This work proposes three opportunistic random access algorithms to exploit user diversity in wireless networks. These algorithms probabilistically or semi-deterministically enable users to access the shared wireless channel based on their channel conditions, such that the user with the highest achievable bit rate is favored. To avoid starving users with poor channel conditions, a slow filtering scheme is proposed to maintain temporal fairness among nodes. With extensive experiments on a Linux-based testbed we developed and in the NS3 network simulator, the proposed opportunistic access significantly improves network throughput, delay, and jitter.

IX. ACKNOWLEDGMENTS

The authors thank the National Science Foundation for supporting this work through awards #1041292 and #1041095.
REFERENCES

[1] B. Sadeghi, V. Kanodia, A. Sabharwal, E. Knightly, "Opportunistic media access for multirate ad hoc networks," MobiCom, 2002.
[2] B. Sadeghi, V. Kanodia, A. Sabharwal, E. Knightly, "OAR: An opportunistic auto-rate media access protocol for ad hoc networks," Wireless Networks, vol. 11, pp. 39-53, 2005.
[3] V. Kanodia, A. Sabharwal, E. Knightly, "MOAR: A multi-channel opportunistic auto-rate media access protocol for ad hoc networks," Proc. First International Conference on Broadband Networks, 2004.
[4] V. Bharghavan, A. Demers, S. Shenker, L. Zhang, "MACAW: A media access protocol for wireless LANs," SIGCOMM, 1994.
[5] G. Holland, N. Vaidya, P. Bahl, "A rate-adaptive MAC protocol for multi-hop wireless networks," MobiCom, 2001.
[6] IEEE LAN/MAN Standards, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, March 1999.
[7] IEEE LAN/MAN Standards, Part 11, Amendment 8: Medium Access Control (MAC) Quality of Service Enhancements, 2005.
[8] M. Heusse, F. Rousseau, G. Berger-Sabbatel, A. Duda, "Performance anomaly of 802.11b," INFOCOM, 2003.
[9] Vipin M., Srikanth S., "Analysis of open source drivers for IEEE 802.11 WLANs," ICWCSC, 2010.
[10] The ath9k driver, http://linuxwireless.org/en/users/Drivers/ath9k
[11] H. Zhai, Y. Fang, M. C. Yuang, "Opportunistic media access control and rate adaptation for wireless ad hoc networks," ICC, 2004.
[12] H. Yomo, P. Popovski, "Opportunistic scheduling for wireless network coding," ICC, 2007.
[13] X. Liu, E. K. P. Chong, N. B. Shroff, "Opportunistic transmission scheduling with resource-sharing constraints in wireless networks," IEEE JSAC, vol. 19, no. 10, pp. 2053-2064, 2001.
[14] Y. Liu, E. W. Knightly, "Opportunistic fair scheduling over multiple wireless channels," INFOCOM, 2003.
[15] S. Biswas, R. Morris, "Opportunistic routing in multi-hop wireless networks," ACM SIGCOMM Computer Communication Review, vol. 34, no. 1, pp. 69-74, 2004.
[16] S. Biswas, R. Morris, "ExOR: Opportunistic multi-hop routing for wireless networks," SIGCOMM, 2005.
[17] S. Chachulski, M. Jennings, S. Katti, D. Katabi, "Trading structure for randomness in wireless opportunistic routing," SIGCOMM, 2007.
[18] L. Pelusi, A. Passarella, M. Conti, "Opportunistic networking: Data forwarding in disconnected mobile ad hoc networks," IEEE Communications Magazine, vol. 44, no. 11, pp. 134-141, 2006.
[19] G. Tan, J. V. Guttag, "Time-based fairness improves performance in multi-rate WLANs," USENIX Annual Technical Conference, 2004.
[20] S. H. Y. Wong, H. Yang, S. Lu, V. Bharghavan, "Robust rate adaptation for 802.11 wireless networks," MobiCom, 2006.
[21] J. Wang, H. Zhai, Y. Fang, M. C. Yuang, "Opportunistic media access control and rate adaptation for wireless ad hoc networks," IEEE ICC, 2004.
[22] N. Vaidya, A. Dugar, S. Gupta, P. Bahl, "Distributed fair scheduling in a wireless LAN," IEEE Transactions on Mobile Computing, vol. 4, no. 6, pp. 616-629, Nov.-Dec. 2005.
[23] N. Vaidya, P. Bahl, S. Gupta, "Distributed fair scheduling in a wireless LAN," MobiCom, 2000.
[24] C. Hwang, J. M. Cioffi, "Opportunistic CSMA/CA for achieving multi-user diversity in wireless LAN," IEEE Transactions on Wireless Communications, vol. 8, no. 6, pp. 2972-2982, June 2009.
[25] NS3 Network Simulator, http://www.nsnam.org
[26] Iperf, http://sourceforge.net/projects/iperf/
[27] IETF, RFC 3626: Optimized Link State Routing Protocol (OLSR).
