
6.5 A Jacobi–Davidson method for commuting matrices

the $N$ eigenvalues. Therefore it is a plausible choice to compute eigenvalues of the matrix $A_{p_\lambda}$ by means of a Jacobi–Davidson eigenvalue solver.

This section describes a Jacobi–Davidson eigenvalue solver called the JDCOMM method. The first important feature of this solver is that it targets just one eigenvalue of the matrix $A_{p_\lambda}$: it tries to compute the smallest real eigenvalue of $A_{p_\lambda}$, since this eigenvalue corresponds to the value of the global minimum of the polynomial $p_\lambda$. Iterative eigenvalue solvers commonly offer options to target the eigenvalues of smallest or largest magnitude, or of smallest or largest real part; targeting the smallest real eigenvalue is an option we developed ourselves. The second important feature of the JDCOMM eigenvalue solver is that it takes advantage of the fact that the matrix $A_{p_\lambda}$ commutes with the matrices $A_{x_1}, \ldots, A_{x_n}$, and of the fact that the matrices $A_{x_1}, \ldots, A_{x_n}$ are much sparser than the matrix $A_{p_\lambda}$. This Jacobi–Davidson type method for commuting matrices, JDCOMM, is able to outperform a standard Jacobi–Davidson method by iterating with one of the very sparse matrices $A_{x_i}$ in the inner loop and iterating with the less sparse matrix $A_{p_\lambda}$ only in the outer loop. In general, most of the work in Jacobi–Davidson type methods is done in the inner loops, and therefore a speed-up is expected.
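The two structural facts that JDCOMM exploits, commutation of $A_{p_\lambda}$ with the $A_{x_i}$ and the much higher sparsity of the $A_{x_i}$, can be verified numerically before the iteration starts. The sketch below is only a minimal illustration using SciPy sparse matrices; the names `check_commutation_and_sparsity`, `A_p` and `A_x` are hypothetical stand-ins for $A_{p_\lambda}$ and one of the $A_{x_i}$, and the routine is not part of the JDCOMM implementation itself.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def check_commutation_and_sparsity(A_p, A_x, tol=1e-12):
    """Check that A_p and A_x commute and compare their densities.

    A_p stands in for A_{p_lambda}, A_x for one of the commuting matrices
    A_{x_i}; both are assumed to be square scipy.sparse matrices of equal size.
    """
    # The commutator A_p A_x - A_x A_p should vanish up to rounding errors.
    commutator = A_p @ A_x - A_x @ A_p
    scale = max(1.0, spla.norm(A_p) * spla.norm(A_x))
    commutes = spla.norm(commutator) <= tol * scale

    # Density (fraction of nonzeros): A_x is expected to be much sparser.
    density_p = A_p.nnz / (A_p.shape[0] * A_p.shape[1])
    density_x = A_x.nnz / (A_x.shape[0] * A_x.shape[1])
    return commutes, density_p, density_x

# Tiny example with two trivially commuting diagonal matrices.
A_p = sp.diags(np.arange(1.0, 11.0)).tocsr()
A_x = sp.eye(10, format="csr")
print(check_commutation_and_sparsity(A_p, A_x))
```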

6.5.2 The JDCOMM expansion phase

As the basis for the JDCOMM method we use the algorithm of the JD method described in [58]. The pseudocode of this JD method is given in Algorithm 2 of this thesis. To make it possible for the JDCOMM method to iterate with the sparse matrices $A_{x_i}$ in the inner loop and with the less sparse matrix $A_{p_\lambda}$ in the outer loop, the expansion phase of the JD method, as described in Section 6.2.1, is extended and modified.

The Jacobi–Davidson expansion phase of the JDCOMM method works as follows. Suppose we have an approximate eigenpair $(\theta, v)$ for the matrix $A_{p_\lambda}$, where $v$ has unit norm and where $\theta$ is the Rayleigh quotient of $A_{p_\lambda}$ and $v$, defined as $\theta = v^* A_{p_\lambda} v$. The deviation of an approximate eigenpair $(\theta, v)$ from a true eigenpair of the matrix $A_{p_\lambda}$ is measured, as in Equation (6.2), by the norm of the residual $r$, defined here as:

$$ r = A_{p_\lambda} v - \theta v. \qquad (6.60) $$
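As a small illustration of these definitions, the Rayleigh quotient and the residual of (6.60) require only a single matrix–vector product with $A_{p_\lambda}$. The snippet below is an illustrative sketch; `A_p` is again a hypothetical stand-in for $A_{p_\lambda}$.

```python
import numpy as np

def rayleigh_quotient_and_residual(A_p, v):
    """Return theta = v^* A_p v and the residual r = A_p v - theta v of (6.60).

    v is assumed to have unit norm; A_p stands in for A_{p_lambda}.
    """
    Av = A_p @ v                 # single matrix-vector product with A_{p_lambda}
    theta = np.vdot(v, Av)       # Rayleigh quotient; np.vdot conjugates v
    r = Av - theta * v           # residual as defined in (6.60)
    return theta, r, np.linalg.norm(r)
```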

We now look for an update $t \perp v$ such that the updated vector $v + t$ is a true eigenvector of the matrix $A_{p_\lambda}$:

$$ A_{p_\lambda}(v + t) = \lambda (v + t). \qquad (6.61) $$

Rearranging the terms gives:

$$ (A_{p_\lambda} - \theta I)\, t = -r + (\lambda - \theta) v + (\lambda - \theta) t. \qquad (6.62) $$

Analogously to Section 6.2.1, we derive the Jacobi–Davidson correction equation for the JDCOMM method: discarding the $(\lambda - \theta)t$ term, which is asymptotically second order (or third order if $A_{p_\lambda}$ is symmetric), and projecting out the unknown quantity $(\lambda - \theta)v$ with the orthogonal projector $I - vv^*$ yields the correction equation

$$ (I - vv^*)(A_{p_\lambda} - \theta I)(I - vv^*)\, t = -r, \qquad t \perp v. $$
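To make the role of the projected operator concrete, the following sketch solves the correction equation approximately with a few GMRES steps applied to a matrix-free `LinearOperator`. The function and parameter names, and the choice of GMRES, are assumptions for illustration and not the thesis's own implementation. In the spirit of the JDCOMM idea described above, `A` could be one of the sparser commuting matrices $A_{x_i}$ together with its own Ritz value $v^* A_{x_i} v$ instead of $A_{p_\lambda}$ and $\theta$.

```python
import numpy as np
import scipy.sparse.linalg as spla

def approx_solve_correction_equation(A, v, theta, r, inner_steps=10):
    """Approximately solve (I - v v^*)(A - theta I)(I - v v^*) t = -r with t ⊥ v.

    A may be A_{p_lambda} with its Rayleigh quotient theta or, following the
    JDCOMM idea, one of the sparser commuting matrices A_{x_i} with its own
    Ritz value v^* A_{x_i} v.  The vector v is assumed to have unit norm.
    """
    n = v.shape[0]

    def project(w):
        # Orthogonal projection (I - v v^*) w.
        return w - v * np.vdot(v, w)

    def apply_projected_operator(t):
        # Apply (I - v v^*)(A - theta I)(I - v v^*) to t.
        t = project(t)
        return project(A @ t - theta * t)

    op = spla.LinearOperator((n, n), matvec=apply_projected_operator, dtype=v.dtype)
    t, _ = spla.gmres(op, -r, maxiter=inner_steps)
    return project(t)   # enforce t ⊥ v before expanding the search space with t
```

Solving the correction equation with only a small, fixed number of inner steps reflects the usual Jacobi–Davidson practice of solving it approximately; the accuracy of the inner solve trades against the cost of each outer iteration, which is where the sparsity of the inner-loop matrix pays off.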
