
104 CHAPTER 6. ITERATIVE EIGENVALUE SOLVERS

(λ − θ)v by the orthogonal projector (I − vv∗), which also fixes r, gives rise to the Jacobi–Davidson correction equation:

(I − vv∗)(A_{p_λ} − θI)(I − vv∗) t = −r, where t ⊥ v. (6.63)

In the Jacobi–Davidson method, the vast majority of the computational work is spent in solving the correction equation. Therefore, the simple but crucial idea is to make use of the sparser matrices A_{x_i}, i = 1, ..., n, that have the same eigenvectors as A_{p_λ}. Since v is an approximate eigenvector for A_{p_λ}, it is likewise an approximate eigenvector for the matrix A_{x_i}. Let η be the Rayleigh quotient of A_{x_i} and v: η = v∗ A_{x_i} v.

Instead of Equation (6.61) we now want to update the vector v such that we get an eigenvector for A_{x_i}:

A_{x_i}(v + t) = µ(v + t) (6.64)

for a certain eigenvalue µ of A_{x_i}. Using the approximate value η for A_{x_i}, this leads, similarly to Equation (6.62), to:

(A_{x_i} − ηI)t = −r + (µ − η)v + (µ − η)t, (6.65)

where the residual r is defined here as:

r = A_{x_i}v − ηv. (6.66)

By again neglecting the higher-order term (µ − η)t and projecting out the unknown term (µ − η)v by the orthogonal projector (I − vv∗), we get the Jacobi–Davidson correction equation involving the matrix A_{x_i}:

(I − vv∗)(A_{x_i} − ηI)(I − vv∗) t = −r, where t ⊥ v. (6.67)
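To make the structure of (6.67) concrete, the following sketch (Python with NumPy/SciPy, not code from the thesis) solves one projected correction equation inexactly with GMRES. The diagonal A_xi with spectrum 1, ..., n and the perturbed eigenvector v are hypothetical stand-ins chosen only for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Hypothetical stand-in for A_xi: a sparse matrix with known spectrum 1..n.
n = 100
A_xi = diags(np.arange(1.0, n + 1.0))

# v: an approximate eigenvector for the eigenvalue 1 (perturbation is made up).
rng = np.random.default_rng(1)
v = np.zeros(n)
v[0] = 1.0
v += 1e-3 * rng.standard_normal(n)
v /= np.linalg.norm(v)

eta = v @ (A_xi @ v)          # Rayleigh quotient: eta = v* A_xi v
r = A_xi @ v - eta * v        # residual as in (6.66); note r is orthogonal to v

def P(x):
    """Apply the orthogonal projector (I - v v*)."""
    return x - v * (v @ x)

def matvec(x):
    """Apply (I - vv*)(A_xi - eta I)(I - vv*) from (6.67)."""
    y = P(x)
    return P(A_xi @ y - eta * y)

op = LinearOperator((n, n), matvec=matvec, dtype=float)
t, info = gmres(op, -r)       # inexact (iterative) solve of the correction equation
t = P(t)                      # enforce t ⊥ v explicitly
```

Normalizing v + t yields a much smaller eigenvalue residual than r, which is precisely the effect the correction equation is designed to achieve.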

The advantage of this correction equation compared to the correction equation in (6.63) is that the matrix-vector products with A_{x_i} spent in approximately solving this equation are much cheaper than matrix-vector multiplications with A_{p_λ}. Typical practical examples indicate that the number of non-zeros of A_{x_i} is often 10% or less of the number of non-zeros of A_{p_λ}.

6.5.3 Pseudocode of the JDCOMM method<br />

In this subsection we give in Algorithm 3 the pseudocode of the JDCOMM method, the Jacobi–Davidson type method for commuting matrices. Note that in the subspace extraction phase of every (outer) iteration (Step 5 of the algorithm) we need to work with A_{p_λ}, since we should head for the leftmost real eigenvalue of A_{p_λ}.

We note that in Step 11, we may choose between two procedures. First, we may just compute η by computing A_{x_i}v and subsequently left-multiplying by v∗. This implies an extra cost of one matrix-vector product per iteration. Alternatively, we can store A_{x_i}V.
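The two options for Step 11 can be sketched as follows (Python/NumPy; the names V, W, c and the dense stand-in for A_{x_i} are illustrative, not from the thesis). With an orthonormal search-space basis V and Ritz vector v = Vc, one can either pay one extra matrix-vector product per outer iteration, or keep the product W = A_{x_i}V stored so that η follows from small dense operations only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 5
A_xi = rng.standard_normal((n, n))                # dense stand-in for the sparse A_xi
V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal search-space basis
c = rng.standard_normal(m)
c /= np.linalg.norm(c)
v = V @ c                                          # Ritz vector v = V c (unit norm)

# Option 1: compute A_xi v, then left-multiply by v*.
# Costs one extra matrix-vector product with A_xi per outer iteration.
eta_direct = v @ (A_xi @ v)

# Option 2: store W = A_xi V (extended by one new column per iteration);
# then eta = c* (V* W) c needs only small dense products each iteration.
W = A_xi @ V                                       # in practice built incrementally
eta_stored = c @ (V.T @ W @ c)
```

Both routes give the same Rayleigh quotient; the trade-off is an extra sparse matvec per iteration versus the memory for the stored block W.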
