
Linear Algebra


254 Chapter Three. Maps Between Spaces

(b) Find the coefficients c_1 and c_2 for the projection of ⃗v into the spans of the two vectors in B_2. Check that ‖⃗v‖² ≥ |c_1|² + |c_2|².
(c) Find c_1, c_2, and c_3 associated with the vectors in B_3, and c_1, c_2, c_3, and c_4 for the vectors in B_4. Check that ‖⃗v‖² ≥ |c_1|² + · · · + |c_3|² and that ‖⃗v‖² ≥ |c_1|² + · · · + |c_4|².
Show that this holds in general: where {⃗κ_1, . . . , ⃗κ_k} is an orthonormal set and c_i is the coefficient of the projection of a vector ⃗v from the space, then ‖⃗v‖² ≥ |c_1|² + · · · + |c_k|². Hint. One way is to look at the inequality 0 ≤ ‖⃗v − (c_1⃗κ_1 + · · · + c_k⃗κ_k)‖² and expand the c's.

2.19 Prove or disprove: every vector in R^n is in some orthogonal basis.

2.20 Show that the columns of an n×n matrix form an orthonormal set if and only if the inverse of the matrix is its transpose. Produce such a matrix.

2.21 Does the proof of Theorem 2.2 fail to consider the possibility that the set of vectors is empty (i.e., that k = 0)?

2.22 Theorem 2.7 describes a change of basis from any basis B = ⟨⃗β_1, . . . , ⃗β_k⟩ to one that is orthogonal, K = ⟨⃗κ_1, . . . , ⃗κ_k⟩. Consider the change of basis matrix Rep_{B,K}(id).
  (a) Prove that the matrix Rep_{K,B}(id) changing bases in the direction opposite to that of the theorem has an upper triangular shape — all of its entries below the main diagonal are zeros.
  (b) Prove that the inverse of an upper triangular matrix is also upper triangular (if the matrix is invertible, that is). This shows that the matrix Rep_{B,K}(id) changing bases in the direction described in the theorem is upper triangular.

2.23 Complete the induction argument in the proof of Theorem 2.7.

VI.3 Projection Into a Subspace

This subsection, like the others in this section, is optional.
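The general inequality in the exercise above is Bessel's inequality, and it pairs naturally with Exercise 2.20's characterization of orthonormal columns; both are easy to check numerically. A minimal sketch in Python — the particular vectors and the rotation matrix are illustrative choices of mine, not from the text:

```python
import math

# Bessel's inequality: for an orthonormal set {k1, ..., kk} and the
# projection coefficients ci = v . ki, we have ||v||^2 >= |c1|^2 + ... + |ck|^2.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An orthonormal set in R^3 (illustrative choice).
k1 = (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)
k2 = (-1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)

v = (3.0, 1.0, 4.0)
coeffs = [dot(v, k) for k in (k1, k2)]
assert dot(v, v) >= sum(c * c for c in coeffs)  # Bessel's inequality holds

# Exercise 2.20's matrix condition: columns orthonormal iff inverse = transpose,
# i.e. A^T A = I.  A 2x2 rotation matrix is such a matrix.
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
AtA = [[dot([A[k][i] for k in range(2)], [A[k][j] for k in range(2)])
        for j in range(2)] for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(AtA[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```

Here ‖⃗v‖² = 26 while the coefficient sum is 10, so the inequality is strict; it becomes an equality exactly when the orthonormal set spans a subspace containing ⃗v.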
It also requires material from the optional earlier subsection on Combining Subspaces.

The prior subsections project a vector into a line by decomposing it into two parts: the part in the line, proj_[⃗s](⃗v), and the rest, ⃗v − proj_[⃗s](⃗v). To generalize projection to arbitrary subspaces, we follow this idea.

3.1 Definition For any direct sum V = M ⊕ N and any ⃗v ∈ V, the projection of ⃗v into M along N is

    proj_{M,N}(⃗v) = ⃗m    where ⃗v = ⃗m + ⃗n with ⃗m ∈ M, ⃗n ∈ N

This definition doesn't involve a sense of 'orthogonal' so we can apply it to spaces other than subspaces of an R^n. (Definitions of orthogonality for other spaces are perfectly possible, but we haven't seen any in this book.)

3.2 Example The space M_{2×2} of 2×2 matrices is the direct sum of these two.

    M = { [ a  b ]                  N = { [ 0  0 ]
          [ 0  0 ]  |  a, b ∈ R }         [ c  d ]  |  c, d ∈ R }
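Definition 3.1 can be made concrete with a small computational sketch. Here I take V = R² and let M and N be one-dimensional spans; these particular spanning vectors are my own illustrative choice, not from the text:

```python
# Projection into M along N, per Definition 3.1: given a direct sum
# R^2 = M (+) N, write v = m + n with m in M, n in N, and return m.
# M and N are the lines spanned by m_basis and n_basis (illustrative choices).

def proj_M_along_N(v, m_basis, n_basis):
    # Solve a*m_basis + b*n_basis = v for a by Cramer's rule (2x2 system);
    # the direct-sum condition guarantees the determinant is nonzero.
    det = m_basis[0] * n_basis[1] - n_basis[0] * m_basis[1]
    a = (v[0] * n_basis[1] - n_basis[0] * v[1]) / det
    return (a * m_basis[0], a * m_basis[1])

m_basis = (1.0, 1.0)   # M = span{(1,1)}
n_basis = (0.0, 1.0)   # N = span{(0,1)}; note N is not orthogonal to M
v = (3.0, 5.0)
print(proj_M_along_N(v, m_basis, n_basis))  # -> (3.0, 3.0)
```

Since N here is not the orthogonal complement of M, the answer (3, 3) differs from the orthogonal projection of ⃗v into the line M, which is (4, 4): the projection depends on N as well as on M, which is what "along N" records.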
