

$$\Big( \Big( A - \sum_{k,l} (Av_k, w_l)\, w_l \otimes v_k \Big) v_p, w_r \Big) = (Av_p, w_r) - \sum_{k,l} (Av_k, w_l)(v_p, v_k)(w_l, w_r)$$
$$= (Av_p, w_r) - \sum_{k,l} (Av_k, w_l)\, \delta_{pk}\, \delta_{rl} = (Av_p, w_r) - (Av_p, w_r) = 0.$$

Letting $A - \sum_{k,l} (Av_k, w_l)\, w_l \otimes v_k = B$, this shows that $Bv_p = 0$ since $w_r$ is an arbitrary element of the basis for $Y$. Since $v_p$ is an arbitrary element of the basis for $X$, it follows $B = 0$ as hoped. This has shown $\{w_j \otimes v_i : i = 1, \cdots, n,\ j = 1, \cdots, m\}$ spans $\mathcal{L}(X, Y)$.

It only remains to verify the $w_j \otimes v_i$ are linearly independent. Suppose then that
$$\sum_{i,j} c_{ij}\, w_j \otimes v_i = 0.$$
Then apply both sides to $v_s$. By definition this gives
$$0 = \sum_{i,j} c_{ij}\, w_j (v_s, v_i) = \sum_{i,j} c_{ij}\, w_j\, \delta_{si} = \sum_j c_{sj}\, w_j.$$

Now the vectors $\{w_1, \cdots, w_m\}$ are linearly independent because they form an orthonormal set, and so the above requires $c_{sj} = 0$ for each $j$. Since $s$ was arbitrary, this shows the linear transformations $\{w_j \otimes v_i\}$ form a linearly independent set. $\blacksquare$

Note this shows the dimension of $\mathcal{L}(X, Y) = nm$. The theorem is also of enormous importance because it shows you can always consider an arbitrary linear transformation as a sum of rank one transformations whose properties are easily understood. The following theorem is also of great interest.

Theorem 12.4.5 Let $A = \sum_{i,j} c_{ij}\, w_i \otimes v_j \in \mathcal{L}(X, Y)$ where as before, the vectors $\{w_i\}$ are an orthonormal basis for $Y$ and the vectors $\{v_j\}$ are an orthonormal basis for $X$. Then if the matrix of $A$ has entries $M_{ij}$, it follows that $M_{ij} = c_{ij}$.

Proof: Recall
$$Av_i \equiv \sum_k M_{ki}\, w_k.$$
Also
$$Av_i = \sum_{k,j} c_{kj}\, w_k \otimes v_j (v_i) = \sum_{k,j} c_{kj}\, w_k (v_i, v_j) = \sum_{k,j} c_{kj}\, w_k\, \delta_{ij} = \sum_k c_{ki}\, w_k.$$

Therefore,
$$\sum_k M_{ki}\, w_k = \sum_k c_{ki}\, w_k$$
and so $M_{ki} = c_{ki}$ for all $k$. This happens for each $i$. $\blacksquare$
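For a concrete sense of these two results, here is a minimal numerical sketch (NumPy is used purely for illustration, and the sizes, seed, and bases below are made up), assuming $X = \mathbb{R}^n$ and $Y = \mathbb{R}^m$ with the usual inner products, so that $w \otimes v$ acts as the outer product $wv^T$. It checks that an arbitrary $A$ is recovered as the sum of the rank one maps $(Av_k, w_l)\, w_l \otimes v_k$, and that these coefficients are exactly the entries of the matrix of $A$ taken with respect to the bases $\{v_k\}$ and $\{w_l\}$.

```python
import numpy as np

# Sketch with X = R^n, Y = R^m and the usual inner products,
# so w ⊗ v acts as the outer product w v^T. (Sizes and seed are arbitrary.)
m, n = 4, 3
rng = np.random.default_rng(0)

A = rng.standard_normal((m, n))            # an arbitrary linear map X -> Y

# Orthonormal bases {v_k} for X and {w_l} for Y, stored as columns of V, W.
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
W, _ = np.linalg.qr(rng.standard_normal((m, m)))

# c_{lk} = (A v_k, w_l); by the theorem above these are the entries of the
# matrix of A with respect to the bases {v_k} and {w_l}.
C = W.T @ A @ V

# Rebuild A as the sum of rank one maps  sum_{k,l} c_{lk} (w_l ⊗ v_k).
B = sum(C[l, k] * np.outer(W[:, l], V[:, k])
        for k in range(n) for l in range(m))

assert np.allclose(A, B)                   # the rank one pieces recover A
```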

12.5 Least Squares<br />

A common problem in experimental work is to find a straight line which approximates as well as possible a collection of points in the plane $\{(x_i, y_i)\}_{i=1}^p$. The usual way of dealing with such problems is the method of least squares, and it turns out that all these sorts of approximation problems can be reduced to the equation $Ax = b$, where the problem is to find the best $x$ for solving this equation even when there is no solution.
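As a rough sketch of this reduction (the data points below are invented purely for illustration), fitting a line $y = ax + c$ to the points leads to the usually unsolvable system $Az = y$ whose rows are $(x_i, 1)$, and the least squares solution is the $z$ minimizing $|Az - y|^2$:

```python
import numpy as np

# Hypothetical data points (x_i, y_i), i = 1, ..., p  (made-up values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Fitting y = a*x + c gives the (generally unsolvable) system A z = y,
# where each row of A is (x_i, 1) and z = (a, c).
A = np.column_stack([x, np.ones_like(x)])

# np.linalg.lstsq returns the z minimizing |A z - y|^2: the least squares line.
z, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
a, c = z
print(f"best fitting line: y = {a:.3f} x + {c:.3f}")
```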
