
a conditional distribution $p(r^K \mid a^{K-1})$. Second, we need not perform the expectations over the rates. Hence, (8) becomes
$$C_T^K(\bar{\gamma}) = \sup_{r_{\mathrm{all}}} T_{\mathrm{ave}}(\bar{\gamma}, r_{\mathrm{all}}^K), \tag{9a}$$
$$T_{\mathrm{ave}}(\bar{\gamma}, r_{\mathrm{all}}^K) = \mathbb{E}_{a^{K-1}}\, \mathbb{E}_{h_K \mid a^{K-1}, r^{K-1}}\!\left[ T_K(\bar{\gamma}, r^K, h_K) \right]. \tag{9b}$$

The global solution can be computed off-line and used for rate adaptation over every $K$ packets. However, the size of $r_{\mathrm{all}}$ grows quickly with $K$, resulting in a high complexity of $O(|\mathcal{A}|^K)$. Unless a simple optimization procedure can be found, which is not expected for general channels, obtaining an optimum solution is computationally difficult. Another problem, though less severe, is the large storage required for the rates based on all possible $a^{K-1}$.
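To make the scaling concrete, the following minimal sketch (assuming a binary ACK/NACK feedback alphabet) counts the entries the off-line rate table must store, one rate per possible ACK history:

```python
def num_rate_entries(alphabet_size: int, K: int) -> int:
    """Rates the off-line table must store: one rate per possible
    ACK history a^{k-1}, for each packet k = 1, ..., K."""
    return sum(alphabet_size ** (k - 1) for k in range(1, K + 1))

# With binary ACK/NACK feedback (|A| = 2) the table size, like the
# joint optimization over r_all, grows as O(|A|^K).
for K in (4, 8, 16):
    print(K, num_rate_entries(2, K))
```
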

V. SUB-OPTIMAL RATE ADAPTATION

The high computational complexity of finding globally optimal rates motivates us to look for a sub-optimal approach. The alternative solution should ideally yield a simple optimization criterion, preferably be easily calculated online, and, most importantly, yield high throughput.

A. Successive Optimization

First, we need to massage the throughput capacity into a more enlightening form. The total throughput capacity can be decomposed using (3) in (9), so that each summand corresponds to the throughput on a packet-by-packet basis:
$$K \times C_T^K(\bar{\gamma}) = \sup_{R_1}\left\{ f(R_1) + \mathbb{E}_{A_1}\!\left[ \sup_{R_2} f(R_2, r^1, a^1) + \mathbb{E}_{A_2 \mid a^1}\!\left[ \sup_{R_3} f(R_3, r^2, a^2) + \cdots \right] \right] \right\} \tag{10}$$
where $f(R_k, r^{k-1}, a^{k-1}) \triangleq \mathbb{E}_{h_k \mid r^{k-1}, a^{k-1}}\!\left[ T(\bar{\gamma}, R_k, h_k) \right]$. Here, the sup operator has been brought into the expectation for packet 2 and onwards as a result of the causality of the ACKs.

As seen from (10), since $a_1 = A_1$ depends on $R_1$, the choice of $R_1$ affects the throughput of packets $2, 3, \cdots$. Similarly, $R_2$ affects the throughput of later packets via $A_2$, and so on. Hence, the optimal choice of $R_k$ affects all subsequent packets through $A_k$. By making the (invalid) assumption that $A_k$ is independent of $R_k$, the optimization from packet to packet in (10) is decoupled. This yields a solution in which $R_k$ is optimized by treating previous rates and partial channel information as given parameters. We then maximize the average throughput for the current packet, without regard to how it will affect future throughput. The rate chosen may not reveal sufficient information about the channel via the ACK/NACK and hence may be detrimental to the overall throughput.

We call this sub-optimal rate adaptation solution successive optimization, given formally as
$$R_k^{\mathrm{sub}} = \arg\sup_{R_k} \mathbb{E}_{h_k \mid a^{k-1}, r^{k-1}}\!\left[ T(\bar{\gamma}, R_k, h_k) \right]. \tag{11}$$

Although sub-optimal, (11) still allows us to include the information about past rates and ACKs for rate adaptation. Furthermore, the successive optimization can be viewed as a method to select a rate for the current packet without regard to how past rates and the partial CSI were obtained. Hence, we may even use an arbitrary rate adaptation strategy (chosen for some other reason) before switching to this solution.
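As a concrete illustration, the per-packet selection in (11) can be sketched as follows, assuming a discretized channel belief and a hypothetical throughput model in which a packet at rate $R$ is decoded iff $\log_2(1 + \bar{\gamma}|h|^2) \geq R$ (the paper's actual $T(\bar{\gamma}, R_k, h_k)$ may differ):

```python
import math

def successive_rate(belief, rates, snr_db=0.0):
    """Pick R_k maximizing E_{h_k | a^{k-1}, r^{k-1}}[T(gamma, R_k, h_k)].

    belief : list of (h, prob) pairs, a discrete belief over channel amplitudes
    rates  : candidate rates (bits/symbol)
    The throughput model is an illustrative assumption: a packet sent at
    rate R earns R bits iff log2(1 + gamma * h^2) >= R, else 0.
    """
    gamma = 10.0 ** (snr_db / 10.0)

    def expected_throughput(R):
        return sum(p * (R if math.log2(1.0 + gamma * h * h) >= R else 0.0)
                   for h, p in belief)

    return max(rates, key=expected_throughput)

# A belief concentrated on a strong channel favors a higher rate.
belief = [(0.2, 0.1), (1.0, 0.3), (2.0, 0.6)]
print(successive_rate(belief, rates=[0.5, 1.0, 2.0], snr_db=0.0))
```
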

B. Particle Filter

Performing online rate optimization using (11) requires knowledge of $p(h_k \mid r^{k-1}, a^{k-1})$. This probability density function (PDF) gives a probabilistic description of the channel given some knowledge of the past channels. In general, this PDF cannot be easily determined analytically. To obtain a tractable solution, we propose considering a class of channel models known as the FSMC. In this model, the channel is assumed to be discrete and to follow a first-order Markov process. The partial channel information $A_k$ can be viewed as an observation of the channel $h_k$. By defining $p(h_1 \mid h_0) = p(h_1)$, the joint PDF can then be factored as
$$p(h^K, a^K \mid r^K) = \prod_{k=1}^{K} p(h_k \mid h_{k-1})\, p(A_k \mid R_k, h_k). \tag{12}$$
This is commonly known as the hidden Markov model [10].
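For a small FSMC, the belief needed in (11) can in principle be propagated exactly through the forward recursion implied by (12); a minimal sketch, with a hypothetical two-state chain and ACK likelihoods of our own choosing:

```python
import numpy as np

def predict_belief(prior, P, obs_lik):
    """One step of the HMM forward recursion implied by (12):
    condition the FSMC-state belief on the latest ACK/NACK, then
    propagate it one step through the Markov transition matrix.

    prior   : p(h_{k-1} | past observations), shape (S,)
    P       : P[i, j] = p(h_k = j | h_{k-1} = i), shape (S, S)
    obs_lik : p(a_{k-1} | R_{k-1}, h_{k-1}) per state, shape (S,)
    returns : p(h_k | a^{k-1}, ...), shape (S,)
    """
    posterior = prior * obs_lik      # condition on the ACK/NACK
    posterior /= posterior.sum()     # normalize
    return posterior @ P             # one-step Markov prediction

# Toy 2-state FSMC (hypothetical numbers): "bad" and "good" states.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
ack_lik = np.array([0.1, 0.9])       # an ACK is far likelier in the good state
print(predict_belief(prior, P, ack_lik))
```
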

To expose the capability of the sub-optimal approach, we use a particle filter to compute the PDF. The particle filter [11], made popular as the sampling importance resampling (SIR) filter in [12], approximates a PDF by recursive importance sampling of random samples known as particles. By exploiting the structure of (12), the particle filter estimates the a posteriori probability $p(h_k \mid a^{k-1}, r^{k-1})$ as required for computing the expectation in (11). The computational cost is on the order of the number of particles, rather than of the length of the observations. Hence, we can let $K \to \infty$ in the successive optimization and take all past partial CSI into account.
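A minimal SIR sketch of one filtering step, assuming a Gauss-Markov (AR(1)) realization of the first-order Markov channel and a hypothetical logistic decoding model for $p(\mathrm{ACK} \mid R, h)$; neither assumption comes from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, ack, success_prob, rho=0.9, sigma=1.0 / np.sqrt(2)):
    """One SIR update of the particle cloud approximating
    p(h_k | a^{k-1}, r^{k-1}).

    particles    : complex channel samples h_{k-1}, shape (N,)
    ack          : observed a_{k-1} (True = ACK)
    success_prob : h -> p(ACK | R_{k-1}, h), the assumed observation model
    """
    # Importance weights from the ACK/NACK likelihood p(a_{k-1} | R_{k-1}, h).
    w = success_prob(particles)
    if not ack:
        w = 1.0 - w
    w /= w.sum()
    # Resample (the "R" in SIR) to avoid weight degeneracy.
    idx = rng.choice(particles.size, size=particles.size, p=w)
    resampled = particles[idx]
    # Propagate through the first-order (AR(1)) Markov channel model.
    noise = sigma * (rng.standard_normal(particles.size)
                     + 1j * rng.standard_normal(particles.size))
    return rho * resampled + np.sqrt(1.0 - rho**2) * noise

# Hypothetical decoding model: larger |h| makes an ACK more likely.
success = lambda h: 1.0 / (1.0 + np.exp(-4.0 * (np.abs(h) - 1.0)))
h = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) / np.sqrt(2)
h = sir_step(h, ack=True, success_prob=success)  # an ACK shifts mass to large |h|
```
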

VI. SIMULATION RESULTS

A. Scenario

We consider the following scenario for our simulations.
1) Channel: The degree of channel variation over time depends on the transition probability $p(h_k \mid h_{k-1})$, assumed to be stationary and known. We assume that $(h_k, h_{k-1})$ follows a bivariate circularly symmetric complex Gaussian distribution with correlation coefficient $\rho = \mathbb{E}[h_k h_{k-1}^*]/\sigma_h^2$.
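Such correlated samples can be generated with a first-order Gauss-Markov recursion; a sketch (with $\rho$ taken real and $\sigma_h = 1$ for simplicity) that checks the lag-one correlation empirically:

```python
import numpy as np

def correlated_channel(N, rho, sigma_h=1.0, seed=0):
    """Generate h_k via a first-order Gauss-Markov recursion so that
    consecutive samples satisfy E[h_k h_{k-1}^*] = rho * sigma_h^2."""
    rng = np.random.default_rng(seed)
    w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * (sigma_h / np.sqrt(2))
    h = np.empty(N, dtype=complex)
    h[0] = w[0]
    for k in range(1, N):
        h[k] = rho * h[k - 1] + np.sqrt(1.0 - rho**2) * w[k]
    return h

h = correlated_channel(200_000, rho=0.9)
est = np.mean(h[1:] * np.conj(h[:-1])).real / np.mean(np.abs(h) ** 2)
print(round(est, 2))  # close to the target rho = 0.9
```
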

For most applications, including the one described next, we are only interested in the channel amplitudes (which are Rayleigh distributed). Hence, we let $r_k = |h_k|$ and work only with the distribution of $r_k$ using the FSMC [9]. Furthermore, we approximate the channel $r_k$ using discrete values from 0 to 5 in steps of 0.01, which is experimentally found to have sufficient accuracy in representing the channel when $\bar{\gamma} = 0$ dB. At other SNRs, the instantaneous SNR is scaled accordingly so that we can still use the same Rayleigh PDF

with $\bar{\gamma} = 0$ dB. The bivariate Rayleigh PDF and the cumulative distribution function (CDF) of $(r_k, r_{k-1})$ can be specified by the power correlation coefficient
$$\rho_{r^2} = \frac{\mathbb{E}[r_k^2 r_{k-1}^2]}{\sqrt{\mathbb{E}[r_k^4]\,\mathbb{E}[r_{k-1}^4]}}.$$
The CDF is represented as a power series in [13] for the Rayleigh
