
Step-Size Optimization of the BNDR-LMS Algorithm - LPS/UFRJ


Table 1: The Binormalized Data-Reusing LMS Algorithm [4].

    BNDR-LMS
    δ = small value
    for each k
    {
        x1 = x(k)
        x2 = x(k-1)
        d1 = d(k)
        d2 = d(k-1)
        a = x1^T x2
        b = x1^T x1
        c = x2^T x2
        d = x1^T w(k)
        if a² == bc
        {
            w(k+1) = w(k) + μ(k) (d1 - d) x1 / (b + δ)
        }
        else
        {
            e   = x2^T w(k)
            den = bc - a²
            A   = (d1 c + e a - d c - d2 a) / den
            B   = (d2 b + d a - e b - d1 a) / den
            w(k+1) = w(k) + μ(k) (A x1 + B x2)
        }
    }
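For concreteness, the update of Table 1 can be sketched in a few lines of NumPy. This is our illustration rather than code from [4]; the function name, the collinearity test via np.isclose, and the regularization constant delta are our own choices.

    import numpy as np

    def bndr_lms_update(w, x1, x2, d1, d2, mu=1.0, delta=1e-8):
        """One BNDR-LMS coefficient update, following Table 1."""
        a = x1 @ x2                  # cross- and auto-correlations of the two data pairs
        b = x1 @ x1
        c = x2 @ x2
        d = x1 @ w
        den = b * c - a * a
        if np.isclose(den, 0.0):
            # x(k) and x(k-1) are collinear: single normalized step on x(k)
            return w + mu * (d1 - d) * x1 / (b + delta)
        e = x2 @ w
        A = (d1 * c + e * a - d * c - d2 * a) / den
        B = (d2 * b + d * a - e * b - d1 * a) / den
        return w + mu * (A * x1 + B * x2)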
… to be equal to unity. This expression for the excess in the MSE [1] was obtained with the help of a simple model [6] for the input-signal vector, given by

    x(k) = s_k \, r_k \, V_k        (2)

where s_k is ±1 with probability 1/2, r_k has the same distribution as ‖x(k)‖, so that E[r_k²] = (N+1)σ_x², and V_k is one of the N+1 orthonormal eigenvectors of the autocorrelation matrix R of the input signal. It was assumed in the analysis that each vector V_i occurs with equal probability 1/(N+1), which corresponds to a white-noise-type input.

3 THE OPTIMAL STEP-SIZE SEQUENCE

In this section the optimal step-size sequence for the given problem is derived. From the expression of ξ(k+1), we follow an approach similar to that used in [6] and rewrite (1), assuming that up to time k the optimal sequence μ*(0) to μ*(k−1) is already available, together with the optimal quantities ξ*(k) and ξ*(k−1):

    \xi(k+1) = \left[ 1 + \frac{\mu(k)(\mu(k)-2)}{N+1} \right] \xi(k)
             + \frac{N \mu(k) (1-\mu(k))^2 (\mu(k)-2)}{(N+1)^2} \, \xi(k-1)
             + \frac{\left( 1 + N(\mu(k)-2)^2 \right) \mu(k)^2}{(N+1)^2} \, \sigma_n^2        (3)

If we now compute the derivative of ξ(k+1) with respect to μ(k) and set it equal to zero, we obtain, after some algebraic manipulation,

    \mu^*(k) = 1 - \sqrt{ 1 - \frac{\xi^*(k) + \xi^*(k-1) - 2\sigma_n^2}{2\,\xi^*(k-1)} }        (4)

It is worth mentioning that (4) is in accordance with the situation when convergence is reached: in that case we have ξ(k) = ξ(k−1) = σ_n², and therefore μ(k) = 0, as expected. Moreover, if σ_n² = 0, the value of μ(k) will be close to one (admitting that ξ(k) ≈ ξ(k−1)) even after convergence, which means that we should have maximum speed of convergence with minimum misadjustment if the noise is zero.

For the normalized LMS (NLMS) algorithm, a recursive formula for μ(k) in terms of μ(k−1) and the order N was obtained in [6]. Unfortunately, a similar expression could not be obtained for the BNDR-LMS algorithm. Instead, a simple algorithm was used to produce the optimal step-size sequence. This algorithm is presented in Table 2¹ and has one important initialization parameter with a strong influence on the behavior of μ(k). This parameter is the ratio σ_d²/σ_n², where the numerator is the variance of the reference signal.

Table 2: Algorithm for computing the optimal step-size sequence μ(k) of the BNDR-LMS algorithm.

    ξ(0) = ξ(-1) = σ_d²
    σ_n² = noise variance
    N = adaptive filter order
    μ(0) = 1
    for each k
    {
        μ(k) = 1 - sqrt(1 - (ξ(k) + ξ(k-1) - 2σ_n²) / (2 ξ(k-1)))
        aa = 1 + μ(k)(μ(k)-2)/(N+1)
        bb = N μ(k)(1-μ(k))² (μ(k)-2)/(N+1)²
        cc = (1 + N(μ(k)-2)²) μ(k)² σ_n² /(N+1)²
        ξ(k+1) = aa ξ(k) + bb ξ(k-1) + cc
    }

¹ Note that the asterisk (*) was dropped from the optimal values for simplicity only.

We next present in Fig. 1 the curves of μ(k) for different values of what should be called, in this case, the (desired-)signal-to-noise ratio, SNR = 10 log(σ_d²/σ_n²), from 0 to 40 dB. Note that for σ_n² = 0 (the noiseless case) the SNR goes to infinity and the step-size would remain fixed at unity.
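Read literally, the Table 2 recursion is straightforward to run. The following is a minimal sketch under our own conventions (the function name, argument names, and the clipping of the radicand against rounding are ours); it evaluates μ(k) from the closed form at every iteration rather than pinning μ(0) = 1 as a separate initialization.

    import numpy as np

    def optimal_step_sizes(sigma_d2, sigma_n2, N, num_iter):
        """Generate the optimal step-size sequence mu(k) via the Table 2 recursion."""
        xi_prev = sigma_d2                 # xi(-1) = sigma_d^2
        xi_curr = sigma_d2                 # xi(0)  = sigma_d^2
        mu = np.empty(num_iter)
        for k in range(num_iter):
            x = (xi_curr + xi_prev - 2.0 * sigma_n2) / (2.0 * xi_prev)
            mu[k] = 1.0 - np.sqrt(max(1.0 - x, 0.0))   # eq. (4); clip guards rounding
            m = mu[k]
            aa = 1.0 + m * (m - 2.0) / (N + 1)
            bb = N * m * (1.0 - m) ** 2 * (m - 2.0) / (N + 1) ** 2
            cc = (1.0 + N * (m - 2.0) ** 2) * m ** 2 * sigma_n2 / (N + 1) ** 2
            xi_prev, xi_curr = xi_curr, aa * xi_curr + bb * xi_prev + cc
        return mu

    # Example: SNR = 20 dB (sigma_d2/sigma_n2 = 100), adaptive filter order N = 10
    mu_seq = optimal_step_sizes(sigma_d2=1.0, sigma_n2=0.01, N=10, num_iter=250)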


[Figure 3: Learning curves for the fixed step-size, the optimal step-size, and its two approximations (NLMS approximation and 1/k approximation); MSE in dB versus k.]

[Figure 4: Comparing the learning curves for the case of colored input signal (fixed step-size sequence versus optimal sequence); MSE in dB versus k.]

… approach is the possibility of fast tracking of sudden and strong changes in the environment. In this case the instantaneous error becomes high and the estimated ξ(k+1) is increased, such that the value of μ approaches unity again and a fast re-adaptation starts.

When using this approach it is worth remembering that, since equation (4) is of the type 1 − √(1 − x), the step-size μ(k) can be written in the form x/(1 + √(1 − x)), which is a numerically less sensitive expression. Equation (8) shows this expression:

    \mu(k) = \frac{ \dfrac{\xi(k)+\xi(k-1)-2\sigma_n^2}{2\,\xi(k-1)} }{ 1 + \sqrt{ 1 - \dfrac{\xi(k)+\xi(k-1)-2\sigma_n^2}{2\,\xi(k-1)} } }        (8)
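The cancellation that motivates (8) is easy to exhibit numerically. In the sketch below (our illustration), x stands for the ratio inside the radical of (4); near convergence x is tiny, and the naive form 1 − √(1 − x) retains only a few significant digits, while x/(1 + √(1 − x)) stays accurate.

    import numpy as np

    x = 1e-14                                  # near convergence the ratio in (4) is tiny
    naive  = 1.0 - np.sqrt(1.0 - x)            # form of eq. (4): suffers cancellation
    stable = x / (1.0 + np.sqrt(1.0 - x))      # form of eq. (8): numerically robust
    print(naive, stable)                       # stable is ~x/2 = 5e-15; naive keeps only a few correct digits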
5 CONCLUSIONS

This paper addressed the optimization of the step-size of the BNDR-LMS algorithm when the input signal is uncorrelated. An optimal sequence was proposed, and a simple algorithm to find this sequence was introduced. Alternative approximation sequences were also presented and their initialization parameters compared. Simulations carried out in a system-identification problem showed the good performance of the optimal step-size sequence, as well as the possibility of using alternative sequences obtained with less effort and with similar efficiency. In another simulation it was possible to observe that the same step-size sequence, optimal for the white-noise input, can also be used in applications where a highly correlated input signal is present.

References

[1] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation. Kluwer Academic Press, Boston, MA, USA, 1997.

[2] S. Haykin, Adaptive Filter Theory. Prentice Hall, Englewood Cliffs, NJ, USA, 3rd edition, 1996.

[3] J. A. Apolinario Jr., M. L. R. de Campos, and P. S. R. Diniz, "The Binormalized Data-Reusing LMS Algorithm," Proceedings of the XV Simpósio Brasileiro de Telecomunicações, Recife, Brazil, 1997.

[4] J. A. Apolinario Jr., M. L. R. de Campos, and P. S. R. Diniz, "Convergence analysis of the Binormalized Data-Reusing LMS Algorithm," Proceedings of the European Conference on Circuit Theory and Design, Budapest, Hungary, 1997.

[5] M. L. R. de Campos, J. A. Apolinario Jr., and P. S. R. Diniz, "Mean-squared error analysis of the Binormalized Data-Reusing LMS algorithm," Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Seattle, USA, 1998.

[6] D. Slock, "On the convergence behavior of the LMS and the normalized LMS algorithms," IEEE Transactions on Signal Processing, vol. 41, pp. 2811-2825, Sept. 1993.
