

Table 7.10: Sparsity of the matrices A_{p_1} and A_{x_1}, A_{x_2}, A_{x_3}, A_{x_4}

    Matrix of dimension 2401 × 2401    Number of non-zeros    % filled
    A_{p_1}                                        1326556       23.01
    A_{x_1}                                          63186        1.10
    A_{x_2}                                          52558        0.91
    A_{x_3}                                          52663        0.91
    A_{x_4}                                          54238        0.94

To put the performance of this method into perspective, it is compared with the performance of the JD method. The JD method iterates with the relatively ‘expensive’ operator A_{p_λ}, whereas the JDCOMM method tries to outperform the JD method by targeting the smallest real eigenvalue while iterating with a ‘cheap’ operator A_{x_i} in the inner loop and with the operator A_{p_λ} in the outer loop. Finally, the outcomes of both the JD and JDCOMM methods are compared to the outcomes of the SOSTOOLS and GloptiPoly software.
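Neither JD nor JDCOMM is reproduced here. As a rough stand-in that only illustrates the underlying idea, the Matlab sketch below uses the built-in eigs: since A_{p_1} and A_{x_1} commute and therefore share eigenvectors, one can iterate with the much sparser A_{x_1} and recover the corresponding eigenvalues of A_{p_1} afterwards from Rayleigh quotients. The variable names Ap1 and Ax1 and the choice k = 20 are assumptions, not part of the thesis code.

    k = 20;                               % number of Ritz pairs (illustrative choice)
    opts.disp = 0;                        % suppress diagnostic output of eigs
    [V, D] = eigs(Ax1, k, 'SR', opts);    % iterate with the cheap operator A_{x_1}
    lam = zeros(k, 1);
    for j = 1:k
        v = V(:, j);                      % (approximate) shared eigenvector
        lam(j) = real((v' * Ap1 * v) / (v' * v));   % matching eigenvalue of A_{p_1}
    end
    min(lam)                              % candidate for the smallest real eigenvalue

Steering the iteration towards the smallest real eigenvalue of A_{p_1} while working only with A_{x_i} is precisely the part that JDCOMM handles internally; the stand-in above only makes the cost argument visible.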

All the results throughout this section are obtained with Matlab version R2007B running on an Intel Pentium IV 2.8 GHz platform with 1024 MB of internal memory.

7.5.1 Experiment 1

Here we demonstrate the computation of the global minimum of a multivariate polynomial by using the JD and the JDCOMM methods. For the first experiment a polynomial p_λ with n = 4, d = 4, m = 7, and λ = 1 is considered:

    p_1(x_1, x_2, x_3, x_4) = (x_1^8 + x_2^8 + x_3^8 + x_4^8)
        + 14 x_1^4 x_2 x_3 x_4 + 6 x_1 x_2^4 x_3 x_4 - 11 x_2^3 x_3^2 x_4^2
        + x_1 x_2 x_3^2 x_4^2 + 8 x_2 x_3 x_4^3
        + x_3 x_4^2 + 3 x_1 x_2 + x_2 x_3 + 2 x_3 x_4 + x_4 + 8.          (7.6)
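Equation (7.6) transcribes directly into a Matlab anonymous function, which is convenient for evaluating p_1 at candidate stationary points later on; the name p1 is ours:

    p1 = @(x) (x(1)^8 + x(2)^8 + x(3)^8 + x(4)^8) ...
        + 14*x(1)^4*x(2)*x(3)*x(4) + 6*x(1)*x(2)^4*x(3)*x(4) ...
        - 11*x(2)^3*x(3)^2*x(4)^2 + x(1)*x(2)*x(3)^2*x(4)^2 ...
        + 8*x(2)*x(3)*x(4)^3 + x(3)*x(4)^2 ...
        + 3*x(1)*x(2) + x(2)*x(3) + 2*x(3)*x(4) + x(4) + 8;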

Considering the quotient space R[x_1, x_2, x_3, x_4]/I of dimension N = (2d − 1)^n = 7^4 = 2401, the matrices A_{p_1}, A_{x_1}, A_{x_2}, A_{x_3}, and A_{x_4} are constructed explicitly. Of the 2401 eigenvalues of these matrices, we are interested in the smallest real eigenvalue and corresponding eigenvector of the matrix A_{p_1}. Table 7.10 shows the differences in the number of non-zero elements of all the involved matrices: the matrices A_{x_i}, i = 1, ..., 4, are much sparser than the matrix A_{p_1}. See also Figure 7.3 for a representation of the sparsity structure of these matrices.
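The figures of Table 7.10 can be checked with nnz once the multiplication matrices are available. The sketch below assumes that sparse matrices Ap1, Ax1, Ax2, Ax3, and Ax4 (hypothetical variable names; their construction is not shown) already exist in the workspace:

    N = (2*4 - 1)^4;                      % N = (2d - 1)^n = 7^4 = 2401
    mats = {Ap1, Ax1, Ax2, Ax3, Ax4};     % assumed sparse 2401 x 2401 matrices
    for j = 1:numel(mats)
        fprintf('%8d non-zeros, %5.2f%% filled\n', ...
                nnz(mats{j}), 100 * nnz(mats{j}) / N^2);
    end
    spy(Ap1)                              % sparsity plot, cf. Figure 7.3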

One approach would be to compute all the eigenvalues of the matrix A_{p_1} using a direct solver and to select the smallest real one as the global optimum of the polynomial p_1(x_1, x_2, x_3, x_4). A similar approach would be to compute all the eigenvalues of one of the matrices A_{x_1}, A_{x_2}, A_{x_3}, or A_{x_4} and to read off the coordinates of the stationary points from the corresponding eigenvectors. The global optimum of p_1(x_1, x_2, x_3, x_4) can then be selected from all the stationary points by computing the associated function values and picking out the smallest real one.
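A minimal Matlab sketch of this brute-force route, again assuming the matrix Ap1 is available; the tolerance deciding which eigenvalues count as real is our own assumption:

    E = eig(full(Ap1));                   % all 2401 eigenvalues via a dense direct solver
    tol = 1e-8;                           % assumed tolerance for 'numerically real'
    Ereal = real(E(abs(imag(E)) < tol));  % discard clearly complex eigenvalues
    min(Ereal)                            % smallest real eigenvalue = global minimum of p_1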

Computing all the eigenvalues of the matrices A_{p_1}, A_{x_1}, A_{x_2}, A_{x_3}, and A_{x_4} using a direct solver takes 57.9, 56.4, 54.5, 52.8, and 48.8 seconds, respectively. The global minimizer we are looking for has the value
