
An approximate eigenvector for the original problem is computed as u = Uc. The projected low-dimensional eigenproblem is usually solved by a direct method. The vector u is called a Ritz vector. Moreover, because of (6.8), r ⊥ u and θ is the Rayleigh quotient of u. Thus, (θ, Uc) = (θ, u) is the back-transformed eigenpair which acts as an approximation to (λ, x) of the original problem (6.1).
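As a small illustration, this extraction step could be sketched in NumPy as follows; the name rayleigh_ritz is ours, not from the text, and U is assumed to have orthonormal columns:

```python
import numpy as np

def rayleigh_ritz(A, U):
    """Rayleigh-Ritz extraction: solve the projected eigenproblem (6.9)
    and back-transform. U is assumed to have orthonormal columns."""
    H = U.conj().T @ (A @ U)        # small k-by-k projected eigenproblem
    thetas, C = np.linalg.eig(H)    # solved by a direct (dense) method
    return thetas, U @ C            # Ritz values and Ritz vectors u = Uc
```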

An important property of the Jacobi–Davidson method is that it allows the user to focus on a subset of the eigenvalues via the specification of a target, as further discussed in Section 6.3 of this chapter.

In some cases the Rayleigh–Ritz extraction approach can give poor approximations. This issue is addressed by the refined Rayleigh–Ritz extraction method. Given an approximate eigenpair (θ, c) of the small projected eigenproblem (6.9), this refined approach solves:

ĉ = argmin_{c ∈ C^k, ||c|| = 1} ||(AU − θU)c||₂².   (6.10)

The back-transformed eigenvector û = Uĉ, called the refined Ritz vector, usually gives a much better approximation than the ordinary Ritz vector u (see [94]). A new approximate eigenvalue θ̂ can be computed using the Rayleigh quotient (6.3) of û.
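Since the minimizer of (6.10) is the right singular vector of (A − θI)U associated with its smallest singular value, the refined extraction can be sketched as follows (same assumptions and hypothetical names as above):

```python
import numpy as np

def refined_ritz(A, U, theta):
    """Refined Ritz extraction (6.10): c-hat is the right singular vector
    of (A - theta*I)U belonging to the smallest singular value."""
    _, _, Vh = np.linalg.svd(A @ U - theta * U, full_matrices=False)
    c_hat = Vh[-1].conj()           # singular values are sorted descending
    u_hat = U @ c_hat               # refined Ritz vector u-hat = U c-hat
    # new eigenvalue estimate: Rayleigh quotient (6.3) of u-hat
    theta_hat = (u_hat.conj() @ (A @ u_hat)) / (u_hat.conj() @ u_hat)
    return theta_hat, u_hat
```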

Note that both the subspace expansion and the subspace extraction phase of the Jacobi–Davidson method can be performed in a matrix-free fashion, as all the operations in which the matrix A is involved are matrix-vector products, which can be computed without having the matrix A explicitly at hand. This enables the use of the matrix-free nD-systems approach discussed in Chapter 5.
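To make this concrete: the sketches above only need the action v ↦ Av, so A can be any object that implements a matrix-vector product. A minimal illustration with a generic stand-in operator (not the nD-systems operator of Chapter 5) using SciPy:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

n = 1000
d = np.arange(1, n + 1, dtype=float)   # stand-in: a diagonal operator

# A is never formed explicitly; only its matrix-vector product is defined.
A = LinearOperator((n, n), matvec=lambda v: d * v)

v = np.ones(n)
w = A.matvec(v)   # the product Av, computed without the matrix A at hand
```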

6.2.3 Remaining important issues<br />

In the two previous subsections the main phases of the Jacobi–Davidson method, the subspace expansion phase and the subspace extraction phase, were presented. In this subsection we discuss the remaining important issues of the Jacobi–Davidson method.

First, during the iterations the search space U grows. When this space reaches the maximum dimension j_max, the method is restarted with a search space of dimension j_min containing the j_min most promising vectors. A ‘thick’ restart indicates that the method is restarted with the approximate eigenvectors from the previous j_min iterations. The quantities j_min and j_max are user-specified parameters. Second, before the first iteration the search space U is empty. One can decide to start with a Krylov search space of the matrix A of dimension j_min to reach the wanted eigenvalue in fewer iteration steps. Finally, when one wants to compute more than one eigenvalue, the so-called ‘deflation step’ becomes important. In this step the matrix A is deflated to a matrix Â such that it no longer contains the already computed eigenpair (λ, x):

Â = (I − xx∗) A (I − xx∗).

All the parameter options mentioned above, together with the assumed tolerance and the maximum allowed number of iterations, have a considerable influence on the performance of the method.
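As a final illustration, the thick restart described above amounts to keeping only the j_min most promising Ritz vectors as the new search space; a minimal sketch (hypothetical name; vectors assumed sorted best first):

```python
import numpy as np

def thick_restart(ritz_vectors, j_min):
    """Shrink the search space to the j_min most promising Ritz vectors
    and re-orthonormalize them to obtain the new basis U."""
    Q, _ = np.linalg.qr(ritz_vectors[:, :j_min])
    return Q
```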
