
B. P. Lathi, Zhi Ding - Modern Digital and Analog Communication Systems-Oxford University Press (2009)


2.4 Signals Versus Vectors

Figure 2.8 Component (projection) of a vector along another vector.

Figure 2.9 Approximations of a vector in terms of another vector (panels a and b).

As shown in Fig. 2.8, the vector g can be expressed in terms of the vector x as

$$ g = cx + e \qquad (2.16) $$

However, this does not describe a unique way to decompose g in terms of x and e. Figure 2.9

shows two of the infinite other possibilities. From Fig. 2.9a and b, we have

$$ g = c_1 x + e_1 \qquad \text{and} \qquad g = c_2 x + e_2 \qquad (2.17) $$

The question is: Which is the "best" decomposition? The concept of optimality depends on

what we wish to accomplish by decomposing g into two components.

In each of these three representations, g is given in terms of x plus another vector called
the error vector. If our goal is to approximate g by cx (Fig. 2.8),

$$ g \simeq \hat{g} = cx \qquad (2.18) $$

then the error in this approximation is the (difference) vector $e = g - cx$. Similarly, the errors
in the approximations of Fig. 2.9a and b are $e_1$ and $e_2$, respectively. The approximation in Fig. 2.8
is unique because its error vector is the shortest (that is, it has the smallest magnitude or norm). We
can now define mathematically the component (or projection) of a vector g along a vector x to
be cx, where c is chosen to minimize the magnitude of the error vector $e = g - cx$.
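
The text obtains c geometrically in what follows. As a complementary sketch (an added aside, not part of the original derivation, and assuming a real inner product), the same value of c also falls out of minimizing the squared error norm directly:

$$ \|e\|^2 = \langle g - cx,\; g - cx \rangle = \|g\|^2 - 2c\,\langle g, x \rangle + c^2 \|x\|^2 $$

$$ \frac{d}{dc}\|e\|^2 = -2\,\langle g, x \rangle + 2c\,\|x\|^2 = 0 \quad\Longrightarrow\quad c = \frac{\langle g, x \rangle}{\|x\|^2} $$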

Geometrically, the magnitude of the component of g along x is $\|g\|\cos\theta$, which is also
equal to $c\|x\|$. Therefore,

$$ c\|x\| = \|g\|\cos\theta $$

Based on the definition of the inner product between two vectors, multiplying both sides by $\|x\|$
yields

$$ c\|x\|^2 = \|g\|\,\|x\|\cos\theta = \langle g, x \rangle $$

and

$$ c = \frac{\langle g, x \rangle}{\langle x, x \rangle} = \frac{1}{\|x\|^2}\,\langle g, x \rangle \qquad (2.19) $$
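
As a quick numerical illustration of Eq. (2.19), the short NumPy sketch below computes c for two arbitrary sample vectors (chosen here for illustration; they do not appear in the text) and checks that the resulting error vector is orthogonal to x:

import numpy as np

# Arbitrary sample vectors, for illustration only (not from the text).
g = np.array([3.0, 4.0])
x = np.array([2.0, 0.0])

# Projection coefficient from Eq. (2.19): c = <g, x> / ||x||^2
c = np.dot(g, x) / np.dot(x, x)

# Error vector e = g - c*x for this optimal choice of c
e = g - c * x

print("c =", c)                      # 1.5
print("component c*x =", c * x)      # [3. 0.]
print("error e =", e)                # [0. 4.]
print("<e, x> =", np.dot(e, x))      # 0.0: the error is orthogonal to x
print("||e|| =", np.linalg.norm(e))  # 4.0: the smallest achievable error norm

Any other choice of c gives a strictly larger error norm, consistent with the geometric argument above.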
