

The Lagrange function is constructed as:

$$h(\tilde{c}, \lambda) = f(\tilde{c}) + \lambda\, g(\tilde{c}). \tag{6.57}$$

If $\tilde{c}$ is a minimum of the constrained problem (6.56), then there exists a real $\lambda$ such that $(\tilde{c}, \lambda)$ is a stationary point of the Lagrange function (6.57). To compute the stationary points, the first-order conditions for optimality are considered: the partial derivatives $\frac{\partial h}{\partial \lambda}$ and $\frac{\partial h}{\partial \tilde{c}_1}, \ldots, \frac{\partial h}{\partial \tilde{c}_k}$ are computed. The partial derivatives with respect to $\tilde{c}_i$ yield the equations:

$$\tilde{c}_i = \frac{d_i\, \tilde{v}_i}{d_i^2 + \lambda} \quad \text{for } i = 1, \ldots, k. \tag{6.58}$$

These $\tilde{c}_i$, for $i = 1, \ldots, k$, can be substituted into the partial derivative of $h(\tilde{c}, \lambda)$ with respect to $\lambda$, which is:

$$\frac{\partial h}{\partial \lambda} = g(\tilde{c}) = \tilde{c}_1^2 + \ldots + \tilde{c}_k^2 - 1 = 0. \tag{6.59}$$
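Substituting (6.58) into (6.59) and clearing the denominators makes the degree of the resulting equation explicit (assuming $\lambda \neq -d_j^2$ for all $j$):

$$\sum_{i=1}^{k} \frac{d_i^2\, \tilde{v}_i^2}{(d_i^2 + \lambda)^2} = 1
\quad\Longleftrightarrow\quad
\prod_{j=1}^{k} (d_j^2 + \lambda)^2 \;-\; \sum_{i=1}^{k} d_i^2\, \tilde{v}_i^2 \prod_{j \neq i} (d_j^2 + \lambda)^2 = 0.$$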

This yields a univariate polynomial equation of degree $2k$ in the single unknown $\lambda$. From this univariate polynomial the real values of the Lagrange multiplier $\lambda$ can be solved for. Once $\lambda$ is known, the values $\tilde{c}_i$ are computed by (6.58) and are back-transformed to $c_i$ using the matrix $V$. In this way the solution $c = (c_1, \ldots, c_k)$ of problem (6.52) is computed. However, at this point this promising idea has not yet been implemented in combination with a Jacobi–Davidson method and remains a subject for further research.
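For concreteness, the computation prescribed by (6.58) and (6.59) can be sketched in a few lines. The following is a minimal, hypothetical sketch (the function name and interface are illustrative, not the thesis implementation), assuming the values $d_i$, the transformed values $\tilde{v}_i$, and the back-transformation matrix $V$ are available as NumPy arrays:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def stationary_points(d, v_tilde, V):
    """Sketch of solving the stationary equations (6.58)-(6.59).

    d, v_tilde: length-k NumPy arrays holding d_i and v~_i;
    V: back-transformation matrix (all assumed given).
    """
    k = len(d)
    # Substituting (6.58) into (6.59) gives
    #   sum_i d_i^2 v~_i^2 / (d_i^2 + lam)^2 = 1.
    # Clearing denominators yields a degree-2k polynomial in lam:
    #   prod_j (d_j^2 + lam)^2
    #     - sum_i d_i^2 v~_i^2 prod_{j != i} (d_j^2 + lam)^2 = 0.
    sq = [P.polymul([dj**2, 1.0], [dj**2, 1.0]) for dj in d]
    poly = np.array([1.0])
    for s in sq:
        poly = P.polymul(poly, s)
    for i in range(k):
        term = np.array([d[i]**2 * v_tilde[i]**2])
        for j in range(k):
            if j != i:
                term = P.polymul(term, sq[j])
        poly = P.polysub(poly, term)
    # Keep the real roots: each is a candidate Lagrange multiplier.
    roots = P.polyroots(poly)
    lams = roots[np.abs(roots.imag) < 1e-10].real
    # Recover c~ by (6.58) and back-transform to c with the matrix V.
    return [(lam, V @ (d * v_tilde / (d**2 + lam))) for lam in lams]
```

Among the returned $(\lambda, c)$ candidates, the constrained minimizer of (6.52) would then be selected by evaluating the objective.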

6.5 JDCOMM: A Jacobi–Davidson type method for commuting matrices

This section introduces a Jacobi–Davidson eigenvalue solver for commuting matrices to use in combination with the optimization method described in the previous sections and chapters. For the specific application of polynomial optimization, this solver is more efficient than other Jacobi–Davidson implementations. The optimization approach described in the previous chapter has some promising properties which are used in designing this efficient Jacobi–Davidson method: (i) because of the common eigenvectors, all the matrices $A_{p_\lambda}, A_{x_1}, \ldots, A_{x_n}$ commute, (ii) the matrices $A_{x_1}, \ldots, A_{x_n}$ are much sparser than the matrix $A_{p_\lambda}$, and finally, (iii) only the smallest real eigenvalue and its corresponding eigenvector of the matrix $A_{p_\lambda}$ are required to locate the (guaranteed) global optimum of $p_\lambda$, without addressing any (possible) local optimum $p_\lambda$ contains.
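The JDCOMM method developed below exploits these properties inside the Jacobi–Davidson iteration itself. As a rough, self-contained illustration of why (i)–(iii) help, the following sketch (hypothetical names, and a simplification rather than the actual JDCOMM algorithm, since the eigenvectors of a sparse $A_{x_i}$ are not ordered by the eigenvalues of $A_{p_\lambda}$) runs the expensive iterations on a sparse $A_x$ only and touches $A_{p_\lambda}$ once per candidate eigenvector:

```python
import numpy as np
import scipy.sparse.linalg as spla

def smallest_real_eig_via_commuting(A_x, A_p, k=10):
    """Illustration only: a sparse A_x and the denser A_p commute,
    hence share eigenvectors, so the cheap iterations can run on
    A_x while the eigenvalues of A_p are read off afterwards."""
    # Compute a few eigenvectors using only the sparser matrix A_x.
    _, vecs = spla.eigs(A_x, k=k, which='SR')
    # Recover the corresponding eigenvalues of A_p through the
    # Rayleigh quotient, valid because the eigenvectors are shared.
    lams = np.array([(v.conj() @ (A_p @ v)) / (v.conj() @ v)
                     for v in vecs.T])
    # Keep the real eigenvalues and return the smallest one.
    real = np.abs(lams.imag) < 1e-8
    i = np.argmin(lams.real[real])
    return lams.real[real][i], vecs[:, real][:, i]
```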

6.5.1 The method JDCOMM<br />

Because the size of the commuting matrices mentioned in the previous section is usually very large, $N \times N$ with $N = m^n$ and $m = 2d - 1$, it is not advisable to manipulate the full matrix in every iteration step of an algorithm and to address all
