

For the $i = 2$ case, expand the definition of $\vec{\kappa}_2$.
\[
\vec{\kappa}_2
  = \vec{\beta}_2 - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{\beta}_2)
  = \vec{\beta}_2 - \frac{\vec{\beta}_2 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\cdot\vec{\kappa}_1
  = \vec{\beta}_2 - \frac{\vec{\beta}_2 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\cdot\vec{\beta}_1
\]

This expansion shows that $\vec{\kappa}_2$ is nonzero, or else this would be a nontrivial linear dependence among the $\vec{\beta}$'s (it is nontrivial because the coefficient of $\vec{\beta}_2$ is $1$), and it also shows that $\vec{\kappa}_2$ is in the desired span. Finally, $\vec{\kappa}_2$ is orthogonal to the only preceding vector
\[
\vec{\kappa}_1 \cdot \vec{\kappa}_2
  = \vec{\kappa}_1 \cdot \bigl(\vec{\beta}_2 - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{\beta}_2)\bigr) = 0
\]
because this projection is orthogonal.
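To make that final equality explicit, here is the dot product written out using the projection formula above (a short worked step added for clarity).
\[
\vec{\kappa}_1 \cdot \vec{\kappa}_2
  = \vec{\kappa}_1 \cdot \vec{\beta}_2
    - \frac{\vec{\beta}_2 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\,(\vec{\kappa}_1 \cdot \vec{\kappa}_1)
  = \vec{\kappa}_1 \cdot \vec{\beta}_2 - \vec{\beta}_2 \cdot \vec{\kappa}_1
  = 0
\]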

The $i = 3$ case is the same as the $i = 2$ case except for one detail. As in the $i = 2$ case, expanding the definition
\[
\begin{aligned}
\vec{\kappa}_3
  &= \vec{\beta}_3
     - \frac{\vec{\beta}_3 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\cdot\vec{\kappa}_1
     - \frac{\vec{\beta}_3 \cdot \vec{\kappa}_2}{\vec{\kappa}_2 \cdot \vec{\kappa}_2}\cdot\vec{\kappa}_2 \\
  &= \vec{\beta}_3
     - \frac{\vec{\beta}_3 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\cdot\vec{\beta}_1
     - \frac{\vec{\beta}_3 \cdot \vec{\kappa}_2}{\vec{\kappa}_2 \cdot \vec{\kappa}_2}\cdot
       \Bigl(\vec{\beta}_2 - \frac{\vec{\beta}_2 \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\cdot\vec{\beta}_1\Bigr)
\end{aligned}
\]
shows that $\vec{\kappa}_3$ is nonzero and is in the span. A calculation shows that $\vec{\kappa}_3$ is
orthogonal to the preceding vector $\vec{\kappa}_1$.

\[
\begin{aligned}
\vec{\kappa}_1 \cdot \vec{\kappa}_3
  &= \vec{\kappa}_1 \cdot \bigl(\vec{\beta}_3 - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{\beta}_3) - \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{\beta}_3)\bigr) \\
  &= \vec{\kappa}_1 \cdot \bigl(\vec{\beta}_3 - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{\beta}_3)\bigr)
     - \vec{\kappa}_1 \cdot \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{\beta}_3)
  = 0
\end{aligned}
\]
(Here's the difference from the $i = 2$ case: the second line has two kinds of terms. The first term is zero because this projection is orthogonal, as in the $i = 2$ case. The second term is zero because $\vec{\kappa}_1$ is orthogonal to $\vec{\kappa}_2$ and so is orthogonal to any vector in the line spanned by $\vec{\kappa}_2$.) The check that $\vec{\kappa}_3$ is also orthogonal to the other preceding vector $\vec{\kappa}_2$ is similar. QED
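The proof builds the new basis one vector at a time. The following is a minimal computational sketch of that process, assuming NumPy is available; the function name `gram_schmidt` and the sample basis at the end are illustrative choices, not taken from the text.

```python
import numpy as np

def gram_schmidt(betas):
    """Return an orthogonal basis built from a linearly independent list.

    Mirrors the proof: kappa_i is beta_i minus the projection of beta_i
    onto each previously produced kappa_1, ..., kappa_{i-1}.
    """
    kappas = []
    for beta in betas:
        b = np.asarray(beta, dtype=float)
        kappa = b.copy()
        for prev in kappas:
            # proj_[prev](beta) = (beta . prev) / (prev . prev) * prev
            kappa -= (np.dot(b, prev) / np.dot(prev, prev)) * prev
        kappas.append(kappa)
    return kappas

# Example use on an arbitrary basis of R^3 (chosen here only for illustration).
for k in gram_schmidt([(1, 1, 1), (0, 2, 0), (1, 0, 3)]):
    print(k)
```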

Beyond having the vectors in the basis be orthogonal, we can do more; we can arrange for each vector to have length one by dividing each by its own length (we can normalize the lengths).
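Written out, the normalization step replaces each vector of the orthogonal basis as follows (a one-line restatement added for clarity).
\[
\vec{\kappa}_i \;\longmapsto\; \frac{\vec{\kappa}_i}{\lvert \vec{\kappa}_i \rvert},
\qquad
\Bigl\lvert \frac{\vec{\kappa}_i}{\lvert \vec{\kappa}_i \rvert} \Bigr\rvert = 1
\]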

2.8 Example Normalizing the length of each vector in the orthogonal basis of Example 2.6 produces this orthonormal basis.
\[
\Bigl\langle
\begin{pmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix},\;
\begin{pmatrix} -1/\sqrt{6} \\ 2/\sqrt{6} \\ -1/\sqrt{6} \end{pmatrix},\;
\begin{pmatrix} -1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{pmatrix}
\Bigr\rangle
\]

Besides its intuitive appeal, and its analogy with the standard basis $\mathcal{E}_n$ for $\mathbb{R}^n$, an orthonormal basis also simplifies some computations. See Exercise 17, for example.
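As a quick numerical check of Example 2.8, the script below verifies that the three vectors have unit length and are pairwise orthogonal (a small sanity-check sketch, assuming NumPy; not part of the text).

```python
import numpy as np

# The orthonormal basis from Example 2.8.
vectors = [
    np.array([1, 1, 1]) / np.sqrt(3),
    np.array([-1, 2, -1]) / np.sqrt(6),
    np.array([-1, 0, 1]) / np.sqrt(2),
]

for i, v in enumerate(vectors):
    assert np.isclose(np.dot(v, v), 1.0)       # each vector has length one
    for w in vectors[i + 1:]:
        assert np.isclose(np.dot(v, w), 0.0)   # distinct vectors are orthogonal
print("orthonormal")
```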

Exercises

2.9 Perform the Gram-Schmidt process on each of these bases for $\mathbb{R}^2$.
