
B. P. Lathi and Zhi Ding, Modern Digital and Analog Communication Systems, Oxford University Press (2009)


8.5 Linear Mean Square Estimation

Since by definition $\overline{xy} = R_{xy}$ and $\overline{x \cdot x} = \overline{x^2} = R_{xx}$, we have

$$\overline{x\epsilon} = R_{xy} - aR_{xx} = 0 \qquad (8.84)$$

The condition of Eq. (8.84) is known as the principle of orthogonality. The physical interpretation is that the data ($x$) used in the estimation and the (minimum) error ($\epsilon$) are orthogonal (implying uncorrelatedness in this case) when the mean square error is minimum.

Given the principle of orthogonality, the minimum mean square error is given by

$$\begin{aligned}
\overline{\epsilon^2} &= \overline{(y - ax)^2} \\
&= \overline{(y - ax)y} - a\,\overline{(y - ax)x} \\
&= \overline{(y - ax)y} && \text{since } \overline{(y - ax)x} = \overline{x\epsilon} = 0 \\
&= \overline{y^2} - a\,\overline{yx} \\
&= R_{yy} - aR_{xy}
\end{aligned} \qquad (8.85)$$
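Solving Eq. (8.84) for the coefficient gives $a = R_{xy}/R_{xx}$. As a quick numerical check, the sketch below (illustrative only; the data model, sample size, and variable names are assumptions, not from the text) estimates $y$ from $x$ with this coefficient and verifies Eqs. (8.84) and (8.85) on sample averages:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Zero-mean, correlated samples of x and y (assumed illustrative model).
x = rng.standard_normal(N)
y = 0.7 * x + 0.3 * rng.standard_normal(N)

# Sample correlations; all variables have zero mean, as in the text.
R_xx = np.mean(x * x)
R_xy = np.mean(x * y)
R_yy = np.mean(y * y)

a = R_xy / R_xx       # optimal coefficient, from Eq. (8.84)
eps = y - a * x       # estimation error

print(np.mean(x * eps))    # ~0: principle of orthogonality, Eq. (8.84)
print(np.mean(eps**2))     # minimum mean square error
print(R_yy - a * R_xy)     # same value, per Eq. (8.85)
```

The first printed value is zero to floating-point precision because $a$ is computed from the same sample averages, which mirrors how the orthogonality condition defines the optimum.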

Using n Random Variables to Estimate a Random Variable

If a random variable $x_0$ is related to $n$ RVs $x_1, x_2, \ldots, x_n$, then we can estimate $x_0$ using a linear combination* of $x_1, x_2, \ldots, x_n$:

$$\hat{x}_0 = a_1x_1 + a_2x_2 + \cdots + a_nx_n = \sum_{i=1}^{n} a_ix_i \qquad (8.86)$$

The mean square error is given by

$$\overline{\epsilon^2} = \overline{[x_0 - (a_1x_1 + a_2x_2 + \cdots + a_nx_n)]^2}$$

To minimize $\overline{\epsilon^2}$, we must set each of its partial derivatives with respect to $a_1, a_2, \ldots, a_n$ to zero; that is,

$$\frac{\partial \overline{\epsilon^2}}{\partial a_1} = \frac{\partial \overline{\epsilon^2}}{\partial a_2} = \cdots = \frac{\partial \overline{\epsilon^2}}{\partial a_n} = 0$$

Interchanging the order of differentiation and averaging, we have

$$\frac{\partial \overline{\epsilon^2}}{\partial a_i} = -2\,\overline{[x_0 - (a_1x_1 + a_2x_2 + \cdots + a_nx_n)]x_i} = 0 \qquad (8.87a)$$

Equation (8.87a) can be written as

$$\overline{x_0x_i} = a_1\,\overline{x_1x_i} + a_2\,\overline{x_2x_i} + \cdots + a_n\,\overline{x_nx_i} \qquad i = 1, 2, \ldots, n \qquad (8.87b)$$
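Equation (8.87b) is a set of $n$ simultaneous linear equations (the normal equations) in the coefficients $a_1, a_2, \ldots, a_n$. A minimal sketch of solving them numerically follows; the data-generating model, array names, and dimensions are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 100_000

# Zero-mean samples: row k of X holds one sample of (x_1, ..., x_n);
# x0 depends linearly on them plus noise (assumed illustrative model).
X = rng.standard_normal((N, n))
x0 = X @ np.array([0.5, -1.0, 2.0]) + 0.5 * rng.standard_normal(N)

# Sample correlations: R[j, i] approximates mean(x_j x_i),
# and r[i] approximates mean(x0 x_i).
R = (X.T @ X) / N
r = (X.T @ x0) / N

# Eq. (8.87b) in matrix form, R a = r; solve for the coefficients.
a = np.linalg.solve(R, r)

eps = x0 - X @ a           # estimation error
print(X.T @ eps / N)       # ~0: each x_i is orthogonal to the error
print(np.mean(eps**2))     # minimum mean square error (~0.25 here)
```

As in the single-variable case, each $x_i$ is orthogonal to the minimum error, which is exactly what Eq. (8.87a) states; in practice one could equally obtain $a$ from the raw samples with a least-squares routine such as np.linalg.lstsq.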

* Throughout this section, as before, we assume that all the random variables have zero mean. This can be done without loss of generality.
