
CHAPTER 6. ITERATIVE EIGENVALUE SOLVERS

The main issue of this phase is how to expand the current search space U with a new direction to get a better approximation in the next iteration. The key idea here is to look for an orthogonal direction s ⊥ u such that (u + s) satisfies A(u + s) = λ(u + s), or in other words such that (u + s) is proportional to the normalized eigenvector x.

This implies that:

As − θs = (−Au + θu) + (λu − θu) + (λs − θs) (6.5)

which can be rewritten as:

(A − θI)s = −r + (λ − θ)u + (λ − θ)s. (6.6)

Because ||s|| and (λ − θ) are both expected to be small, and much smaller than the other terms in this equation, the last term on the right-hand side of (6.6) is neglected. Furthermore, the term (λ − θ)u is projected out with the orthogonal projection matrix (I − uu*). Note that (I − uu*)u = 0 (for ||u|| = 1), (I − uu*)r = r (since r ⊥ u) and (I − uu*)s = s (upon requiring s ⊥ u). This gives rise to the Jacobi–Davidson correction equation:

(I − uu*)(A − θI)(I − uu*)s = −r, where s ⊥ u. (6.7)
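The projection identities used in this derivation are easy to check numerically. The following sketch (all names and the random test problem are illustrative, not taken from the thesis) builds a symmetric A, a Ritz-like pair (θ, u), and verifies (I − uu*)u = 0, (I − uu*)r = r and (I − uu*)s = s:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Illustrative symmetric test matrix
A = rng.standard_normal((N, N))
A = (A + A.T) / 2

u = rng.standard_normal(N)
u /= np.linalg.norm(u)              # ||u|| = 1
theta = u @ A @ u                   # Rayleigh quotient θ = u*Au
r = A @ u - theta * u               # residual; r ⊥ u by construction

P = np.eye(N) - np.outer(u, u)      # orthogonal projector I - uu*

assert np.allclose(P @ u, 0)        # (I - uu*)u = 0
assert np.allclose(P @ r, r)        # (I - uu*)r = r  (since r ⊥ u)

s = rng.standard_normal(N)
s -= (u @ s) * u                    # enforce s ⊥ u
assert np.allclose(P @ s, s)        # (I - uu*)s = s
```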

In a Jacobi–Davidson method the vector s is solved approximately (with modest accuracy) from (6.7) in a matrix-free fashion using, for example, methods such as GMRES or BiCGStab. A preconditioner can be used in this case to mitigate (possible) ill-conditioning and to (possibly) speed up the convergence. The vector s is then used to expand the search space U: the vector s is orthogonalized against all the vectors in U and added to it.
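A minimal matrix-free sketch of this solve, assuming a dense symmetric test matrix and SciPy's GMRES (the operator and problem data are illustrative, not the thesis implementation):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
N = 100

# Illustrative symmetric test matrix and Ritz pair
A = rng.standard_normal((N, N))
A = (A + A.T) / 2
u = rng.standard_normal(N)
u /= np.linalg.norm(u)
theta = u @ A @ u                   # Rayleigh quotient θ = u*Au
r = A @ u - theta * u               # residual r = Au - θu

def correction_matvec(v):
    """Apply (I - uu*)(A - θI)(I - uu*) without forming the matrix."""
    v = v - (u @ v) * u             # (I - uu*) v
    v = A @ v - theta * v           # (A - θI) ...
    return v - (u @ v) * u          # (I - uu*) ...

op = LinearOperator((N, N), matvec=correction_matvec, dtype=float)

# Solve (6.7) approximately; in practice only modest accuracy is needed
s, info = gmres(op, -r)
s = s - (u @ s) * u                 # re-enforce s ⊥ u against rounding
```

In a full Jacobi–Davidson iteration, s would now be orthogonalized against the existing basis of U and appended to it; a preconditioner could be supplied through the `M` argument of `gmres`.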

6.2.2 Subspace extraction<br />

In the subspace extraction phase, the main goal is to compute an approximate eigenpair (θ, u) for the matrix A with u ∈ U. The extraction approach described here is the Rayleigh–Ritz approach (see [58]).

Let U be an N × k matrix whose columns constitute an orthonormal basis of U and where k ≪ N. Any vector u ∈ U can be written as u = Uc, a linear combination of the columns of U with c a small k-dimensional vector. To compute an approximate eigenvector u, the Rayleigh–Ritz approach proceeds by computing c by imposing the Ritz–Galerkin condition:

r = Au − θu = AUc − θUc ⊥ U (6.8)

which holds if and only if (θ, c) is an eigenpair of the k-dimensional projected eigenproblem:

U*AU c = θc (6.9)

(the orthonormality of U is used here: U*U = I_k). Once this low-dimensional eigenproblem

is solved and an eigenpair (θ, c) is available, the approximate eigenvector is retrieved as u = Uc.
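The extraction step can be sketched as follows. Here the search space is generated at random purely for illustration; in a Jacobi–Davidson run, U would come from the expansion phase described above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 200, 10

# Illustrative symmetric test matrix
A = rng.standard_normal((N, N))
A = (A + A.T) / 2

# Orthonormal basis U of a k-dimensional search space (random, for illustration)
U, _ = np.linalg.qr(rng.standard_normal((N, k)))

# Projected k x k eigenproblem (6.9): U*AU c = θc
H = U.T @ A @ U
thetas, C = np.linalg.eigh(H)       # Ritz values and coefficient vectors

# Pick, say, the Ritz pair with the largest Ritz value
theta, c = thetas[-1], C[:, -1]
u = U @ c                           # approximate eigenvector u = Uc

# Galerkin condition (6.8): the residual r = Au - θu is orthogonal to U
r = A @ u - theta * u
assert np.allclose(U.T @ r, 0)
```

Note that ||u|| = 1 automatically, since c is normalized and the columns of U are orthonormal.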
