
CHAPTER 2. TEMPORAL INTEGRATION

Then the Krylov subspace for this problem at iteration j is

\[
\mathcal{K}_j = \operatorname{span}\left(r_0,\, Dr_0,\, D^2 r_0,\, \ldots,\, D^{j-1} r_0\right).
\]
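As an illustration, a basis for this Krylov subspace is generated by repeated application of D to r_0. The sketch below (with a hypothetical random D and r_0 as stand-ins) orthonormalizes each new direction, as GMRES does in practice, since the raw powers D^k r_0 quickly become numerically dependent:

```python
import numpy as np

# Hypothetical small example: D and r0 are stand-ins for the operator
# and initial residual in the text.
rng = np.random.default_rng(0)
M = 6
D = rng.standard_normal((M, M))
r0 = rng.standard_normal(M)

def krylov_basis(matvec, r0, j):
    """Orthonormal basis for K_j = span(r0, D r0, ..., D^{j-1} r0).

    Each new vector is orthogonalized against the previous ones
    (modified Gram-Schmidt) before normalizing, which is what makes
    the basis usable in floating point.
    """
    V = np.zeros((len(r0), j))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for k in range(1, j):
        w = matvec(V[:, k - 1])
        for i in range(k):                  # orthogonalize against V[:, :k]
            w -= (V[:, i] @ w) * V[:, i]
        V[:, k] = w / np.linalg.norm(w)
    return V

V = krylov_basis(lambda v: D @ v, r0, 4)
print(np.allclose(V.T @ V, np.eye(4)))      # columns are orthonormal
```

Note that only the action v → Dv is needed to grow the basis, a point the text returns to below.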

So the jth Krylov iterate a_j is the solution of the least squares problem of minimizing ‖b − Da‖_2 over a ∈ a_0 + K_j. GMRES terminates after reducing the relative residual below a specified tolerance η ∈ (0, 1], that is, once

\[
\frac{\| b - D a_j \|_2}{\| r_0 \|_2} \le \eta .
\]
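A minimal sketch of this least squares characterization (not the thesis code; D, b, and η below are hypothetical stand-ins): with a_0 = 0 the initial residual is r_0 = b, and the jth iterate minimizes ‖b − Da‖_2 over K_j, here via a direct dense solve.

```python
import numpy as np

rng = np.random.default_rng(1)
M, j, eta = 8, 4, 1e-2
D = rng.standard_normal((M, M)) + 5 * np.eye(M)   # well-conditioned stand-in
b = rng.standard_normal(M)

r0 = b.copy()                                     # since a_0 = 0
# Columns span K_j = span(r0, D r0, ..., D^{j-1} r0)
K = np.column_stack([np.linalg.matrix_power(D, k) @ r0 for k in range(j)])
# a_j = K y, where y solves  min_y ||b - (D K) y||_2
y, *_ = np.linalg.lstsq(D @ K, b, rcond=None)
a_j = K @ y

rel_res = np.linalg.norm(b - D @ a_j) / np.linalg.norm(r0)
print(rel_res)      # GMRES would stop once rel_res <= eta
```

Because a = 0 is always a candidate, the relative residual never exceeds 1, and it is nonincreasing in j as the subspace grows.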

For the case of using GMRES to solve for the Newton step s_m, where D is the Jacobian matrix F'(z_m), b = F(z_m), and the initial iterate is a_0 = 0, this termination criterion reduces to the inexact-Newton condition. The standard convergence theorem for GMRES [19] is given by

Theorem 2.3. Let D be a nonsingular matrix in R^{M×M}. Then GMRES will converge to the solution of the linear equation Da = b for a, b ∈ R^M in at most M iterations.
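The theorem can be checked numerically: over the full Krylov space K_M the minimizer of ‖b − Da‖_2 is the exact solution, so M iterations suffice. The sketch below uses a hypothetical random D (nonsingular with probability one), and only ever touches D through the action v → Dv.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5
D = rng.standard_normal((M, M)) + 3 * np.eye(M)   # nonsingular stand-in
matvec = lambda v: D @ v                          # the only access to D
b = rng.standard_normal(M)

# Orthonormal basis for K_M = span(b, D b, ..., D^{M-1} b)  (a_0 = 0)
V = np.zeros((M, M))
V[:, 0] = b / np.linalg.norm(b)
for k in range(1, M):
    w = matvec(V[:, k - 1])
    w -= V[:, :k] @ (V[:, :k].T @ w)              # Gram-Schmidt step
    V[:, k] = w / np.linalg.norm(w)

# Least squares solve over the full Krylov space recovers the exact solution
DV = np.column_stack([matvec(V[:, k]) for k in range(M)])
y, *_ = np.linalg.lstsq(DV, b, rcond=None)
a = V @ y
print(np.allclose(D @ a, b))                      # True: exact in M iterations
```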

At each iteration j, a j-dimensional least squares problem is solved to find a_j, and we must compute and store a basis for the Krylov subspace K_j. Appendix D explains how this j-dimensional least squares problem is formed. The storage cost is j vectors, and the new basis vector is computed through a matrix-vector product of D with a vector. The advantage of GMRES is that it does not require the coefficient matrix itself, but only a way of evaluating matrix-vector products with it. For this reason the method is called matrix-free. In particular, we are

