
Chapter 6

Iterative eigenvalue solvers in polynomial optimization

Because only the smallest real eigenvalue of the large and sparse matrix A_pλ is needed for the polynomial optimization problem of a Minkowski-dominated polynomial, it is a plausible choice to compute this eigenvalue with an iterative eigenvalue solver. Iterative eigenvalue solvers have the useful feature that they can focus on a certain subset of the eigenvalue spectrum. Moreover, they can be implemented to operate in a matrix-free fashion: iterative solvers accept as input a device which computes the matrix-vector product with a given vector. This avoids the explicit construction of the matrix under consideration and enables the use of the nD-systems approach described in Chapter 5. Iterative methods are based, for example, on an Arnoldi method or on a Jacobi–Davidson method. The Jacobi–Davidson (JD) method as presented by Sleijpen and van der Vorst (see [94]) is among the best iterative eigenvalue solvers. In the literature its implementation is called the JDQR method for standard eigenvalue problems, and the JDQZ method for generalized eigenvalue problems.
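The matrix-free interface can be illustrated with a small sketch: an iterative (Arnoldi-based) eigenvalue routine is handed only a matrix-vector product routine and asked for the eigenvalue of smallest real part. The SciPy-based example below is purely illustrative and is not the nD-systems implementation of Chapter 5; the diagonal placeholder operator and all names in it are assumptions made for the sketch.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

# Placeholder operator: in the setting of this chapter the product
# y = A_pλ x would be delivered by the nD-systems routine of Chapter 5,
# so the matrix itself is never formed. Here a diagonal operator with one
# well-separated small eigenvalue (0.5) stands in for it.
n = 200
rng = np.random.default_rng(0)
diag = np.concatenate(([0.5], rng.uniform(5.0, 10.0, n - 1)))

def matvec(x):
    # The only interface the iterative solver needs: a matrix-vector product.
    return diag * np.ravel(x)

A_op = LinearOperator((n, n), matvec=matvec, dtype=float)

# Arnoldi-based solver (ARPACK) targeting the eigenvalue of smallest real
# part, using only calls to matvec; the matrix is never assembled.
vals, vecs = eigs(A_op, k=1, which='SR')
print(vals.real)  # approximately [0.5]
```

The same pattern carries over to any solver that accepts a matrix-vector product callback, which is what makes the combination with the nD-systems approach attractive.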

In the first two sections of this chapter a general introduction to iterative eigenvalue solvers is given, followed by a brief explanation of the techniques used in Jacobi–Davidson solvers (based on [58]). In Sections 6.3 and 6.4 some of our own modifications of the Jacobi–Davidson method, which aim to improve its efficiency for polynomial optimization, are introduced. Among these improvements are the ability to focus on real eigenvalues only, which is non-standard among iterative solvers, and a procedure to project an approximate eigenvector onto a close-by vector with Stetter structure. The development of a Jacobi–Davidson eigenvalue solver for a set of commuting matrices is described in Section 6.5 of this chapter. This Jacobi–Davidson method is called the JDCOMM method and has two distinctive properties: (i) it is able to target the smallest real eigenvalue of a large sparse matrix and, moreover, (ii) it computes the eigenvalues of the sparse matrix A_pλ while iterating with one of the

