Linear Algebra, Theory And Applications, 2012a

NUMERICAL METHODS FOR FINDING EIGENVALUES

A case of all this is of great interest. Suppose $A$ has a largest eigenvalue $\lambda$ which is real. Then $A^n$ is of the form
$$\left( A^{n-1}a_1, \cdots, A^{n-1}a_n \right)$$
and so likely each of these columns will point roughly in the direction of an eigenvector of $A$ which corresponds to this eigenvalue. Then when you do the QR factorization of this, it follows from the fact that $R$ is upper triangular that the first column of $Q$ will be a multiple of $A^{n-1}a_1$ and so will end up being roughly parallel to the desired eigenvector. Also, the entries below the top in the first column of $A_n = Q^T A Q$ will all be small because they are of the form $q_i^T A q_1 \approx \lambda q_i^T q_1 = 0$. Therefore, $A_n$ will be of the form
$$\left( \begin{array}{cc} \lambda' & a \\ e & B \end{array} \right)$$
where $e$ is small. It follows that $\lambda'$ will be close to $\lambda$ and $q_1$ will be close to an eigenvector for $\lambda$. Then if you like, you could do the same thing with the matrix $B$ to obtain approximations for the other eigenvalues. Finally, you could use the shifted inverse power method to get more exact solutions.
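The behavior just described can be checked numerically. The following is a minimal sketch (not the author's code), using a hypothetical $3 \times 3$ matrix whose largest eigenvalue is real and well separated:

```python
import numpy as np

# Hypothetical example: a matrix with well separated real eigenvalues
# near 5, 2, 1.  Under the QR iteration A_n = Q^T A_{n-1} Q, the entries
# below the (1,1) position in the first column (the vector e in the text)
# shrink, and the (1,1) entry approaches the largest eigenvalue.
rng = np.random.default_rng(0)
A = np.diag([5.0, 2.0, 1.0]) + 0.1 * rng.standard_normal((3, 3))

An = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(An)
    An = R @ Q  # equals Q^T An Q, a similarity transform

print(An[0, 0])   # approximates the largest eigenvalue of A
print(An[1:, 0])  # the small vector e
```

Note that $A_n = RQ = Q^T A_{n-1} Q$ is an orthogonal similarity, so every iterate has the same eigenvalues as $A$.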

15.2.2 The Case Of Real Eigenvalues

With these lemmas, it is possible to prove that for the QR algorithm and certain conditions, the sequence $A_k$ converges pointwise to an upper triangular matrix having the eigenvalues of $A$ down the diagonal. I will assume all the matrices are real here.

This convergence won't always happen. Consider for example the matrix
$$\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right).$$
You can verify quickly that the algorithm will return this matrix for each $k$. The problem here is that, although the matrix has the two eigenvalues $-1, 1$, they have the same absolute value. The QR algorithm works in somewhat the same way as the power method, exploiting differences in the size of the eigenvalues.
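This failure is easy to check directly; here is a quick sketch of one iteration (assuming NumPy's `qr` for the factorization):

```python
import numpy as np

# One QR step applied to the permutation matrix from the text: the
# subdiagonal entry keeps magnitude 1, so the iteration never approaches
# an upper triangular form.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Q, R = np.linalg.qr(A)
A_next = R @ Q  # one step of the unshifted QR algorithm

print(A_next)  # the (2,1) entry still has magnitude 1
```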

If $A$ has all real eigenvalues and you are interested in finding these eigenvalues along with the corresponding eigenvectors, you could always consider $A + \lambda I$ instead, where $\lambda$ is sufficiently large and positive that $A + \lambda I$ has all positive eigenvalues. (Recall Gerschgorin's theorem.) Then if $\mu$ is an eigenvalue of $A + \lambda I$ with
$$\left( A + \lambda I \right) x = \mu x$$
then
$$Ax = \left( \mu - \lambda \right) x$$
so to find the eigenvalues of $A$ you just subtract $\lambda$ from the eigenvalues of $A + \lambda I$. Thus there is no loss of generality in assuming at the outset that the eigenvalues of $A$ are all positive. Here is the theorem. It involves a technical condition which will often hold. The proof presented here follows [26] and is a special case of that presented in this reference.
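Before that, the shift-and-subtract trick just described can be sketched as follows. This is a minimal illustration (not from the text), using a hypothetical symmetric matrix and NumPy's symmetric eigensolver only to verify the relationship:

```python
import numpy as np

# Hypothetical example: A is symmetric, so its eigenvalues are real,
# but they are not all positive.
A = np.array([[1.0, 2.0],
              [2.0, -3.0]])

# By Gerschgorin's theorem every eigenvalue of A is at least
# -max_i sum_j |a_ij|, so this shift makes every eigenvalue of
# A + lam*I at least 1 > 0.
lam = np.abs(A).sum(axis=1).max() + 1.0
mu = np.linalg.eigvalsh(A + lam * np.eye(2))  # eigenvalues of A + lam*I

print(mu)        # all positive
print(mu - lam)  # subtracting lam recovers the eigenvalues of A
```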

Before giving the proof, note that the product of upper triangular matrices is upper triangular. If they both have positive entries on the main diagonal, so will the product. Furthermore, the inverse of an upper triangular matrix is upper triangular. I will use these simple facts without much comment whenever convenient.

Theorem 15.2.4 Let $A$ be a real matrix having eigenvalues
$$\lambda_1 > \lambda_2 > \cdots > \lambda_n > 0$$
and let
$$A = SDS^{-1} \quad (15.11)$$
