
CHAPTER 7. NUMERICAL EXPERIMENTS

solutions of this system can be substituted into the polynomial (7.1) to find the global minimum.
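As a toy illustration of this substitution step (a minimal sketch: the polynomial and candidate points below are stand-ins, not the polynomial (7.1) or its actual stationary points):

```python
import numpy as np

def global_min_from_candidates(p, candidates):
    """Evaluate p at every candidate stationary point and return the
    smallest value together with the point attaining it."""
    values = [p(x) for x in candidates]
    i = int(np.argmin(values))
    return values[i], candidates[i]

# Stand-in polynomial, NOT the polynomial (7.1) from the text.
p = lambda x: (x[0]**2 - 1)**2 + (x[1] - x[0])**2

# Hypothetical solutions of the first-order conditions grad p = 0.
candidates = [(1.0, 1.0), (-1.0, -1.0), (0.0, 0.0)]

value, minimizer = global_min_from_candidates(p, candidates)
print(value, minimizer)   # 0.0 (1.0, 1.0)
```

Here (0, 0) is a genuine stationary point of the stand-in polynomial but not its minimizer, which is why evaluating all solutions of the first-order system, rather than just one, is needed.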

The results of the computations using these software packages (with their default parameters) are collected in Table 7.3. Note that SOSTOOLS does not return the coordinates of the global minimum.

Table 7.3: Results of SOSTOOLS, GloptiPoly, and SYNAPS

Method      (x_1, x_2, x_3, x_4)                         Global Minimum    Error          Time (s)
SOSTOOLS    (−, −, −, −)                                 4.09516477401837  2.97 × 10^−8   10
GloptiPoly  (0.876535, −0.903963, 0.862021, −0.835180)   4.09516476247764  1.81 × 10^−8   11
SYNAPS      (0.876536, −0.903965, 0.862026, −0.835184)   4.09516474461324  2.50 × 10^−10   2

When comparing the results in Table 7.2 and Table 7.3, we see that SOSTOOLS, GloptiPoly, and SYNAPS are faster than the methods based on our nD-systems approach. Moreover, the method Eigs performs poorly: it requires a very large number of operator actions and a lot of running time. But although the methods in Table 7.3 are faster, they are not as accurate as the iterative eigenvalue solvers in Table 7.2. This may be due to the default tolerance settings used in these software packages: a trade-off is involved between computation time and accuracy. Although the methods in Table 7.3 outperform the nD-systems approach in running time, the nD-systems approach leaves us the possibility to increase its performance and accuracy by exploiting the internal structure and the sparsity of the problem. The results of these attempts are described in the next sections.
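The iterative eigenvalue solvers referred to above select the smallest real eigenvalue of a large, in general nonsymmetric, matrix, starting from a random initial vector. A minimal sketch of that pattern, using SciPy's ARPACK-based `eigs` (an Arnoldi method, not the Jacobi–Davidson solvers used in the experiments) on a small stand-in matrix:

```python
import numpy as np
from scipy.sparse.linalg import eigs

def smallest_real_eigenvalue(A, k=4, seed=None):
    """One solver run: start from a random vector, compute the k
    eigenvalues of smallest real part, discard numerically complex
    ones, and return the smallest remaining (real) eigenvalue."""
    rng = np.random.default_rng(seed)
    v0 = rng.standard_normal(A.shape[0])  # random initial vector
    vals = eigs(A, k=k, which='SR', v0=v0, return_eigenvectors=False)
    real_vals = vals[np.abs(vals.imag) < 1e-6].real
    return real_vals.min()

# Stand-in matrix with known spectrum {1, 2, ..., 100}; the actual
# matrices in the experiments are much larger and nonsymmetric.
A = np.diag(np.arange(1.0, 101.0))

# Repeat with different random starts, as the experiments do.
runs = [smallest_real_eigenvalue(A, seed=s) for s in range(5)]
```

Because each run starts from a different random vector, repeating the computation (here 5 times, in the experiments 20 times) and checking that the runs agree gives a measure of the solver's reliability.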

7.2 Computing the global minimum using target selection<br />

In [53] a set of 22 Minkowski dominated polynomials is generated. The number of variables in these polynomials ranges from 2 to 4 and the total degree ranges from 4 to 24. The explicit polynomials are given in Appendix B of this thesis, and some of their characteristics are displayed in Table 7.4.

Table 7.4 shows the number n of variables of each polynomial, the total degree 2d, the size of the involved matrices A_p^T, which is computed as (2d − 1)^n, the value of the global minimum (which is also the smallest real eigenvalue λ_min), and the number of terms of each of these polynomials.
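The matrix size grows quickly with n and d. A small helper illustrating the formula (2d − 1)^n (the (n, 2d) pairs below are illustrative values within the ranges quoted above, not rows copied from Table 7.4):

```python
def matrix_size(n, total_degree):
    """Size (2d - 1)^n of the matrices A_p^T for a polynomial in n
    variables of total degree 2d (here total_degree = 2d)."""
    d = total_degree // 2
    return (2 * d - 1) ** n

# Hypothetical values with n from 2 to 4 and total degree from 4 to 24:
print(matrix_size(2, 4))    # (4 - 1)^2  = 9
print(matrix_size(4, 8))    # (8 - 1)^4  = 2401
print(matrix_size(3, 24))   # (24 - 1)^3 = 12167
```

Even for moderate n and 2d the matrices become large, which is why iterative eigenvalue solvers, which only require matrix-vector products, are used in what follows.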

The global minimum of all the polynomials in Table 7.4 is computed using the nD-systems approach and an iterative solver. Each computation is repeated 20 times because these solvers start from a randomly chosen initial approximate eigenvector. The JD method, the MATLAB implementation of the Jacobi–Davidson method by M. Hochstenbach (see [58]), is used for the research presented here. In addition, the original Jacobi–Davidson method using a QZ-decomposition, JDQZ [39], is used to compare the influence of different implementations of the iterative eigenvalue solver. Both implementations have various options that control their behavior. A preliminary analysis showed that some of these options have to be adjusted to make sure that the algorithms converge correctly to the smallest real eigenvalue. The set of parameters
