
CHAPTER 7. NUMERICAL EXPERIMENTS

are exactly the same suggests that the NSolve function uses a matrix method too to solve the system of equations). To gain insight into the accuracy of the various methods, all the locations of the global minimum found with the methods described above have been used as starting points for a local search method. Using a thirty-digit working precision, a user option in the Mathematica/Maple software, the local search method obtains in all these cases the following coordinates for the global minimizer:

x_1 = +0.876539213106233894587289929758
x_2 = −0.903966282304642050057296045914
x_3 = +0.862027936174326572650513966373
x_4 = −0.835187476756286528192781820247          (7.4)

The corresponding criterion value at this point is computed as 4.095164744359157279770316678156. These values have been used as the 'true' minimizer and global minimum of the polynomial p_1(x_1, x_2, x_3, x_4) for the purposes of the accuracy analysis of the numerical outcomes of the various computational approaches.
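As an aside, a local refinement at such a high working precision can be imitated in any environment with extended-precision arithmetic. The sketch below uses Python's standard decimal module at thirty-digit precision to refine a stationary point of a hypothetical one-dimensional quartic q(x) = x^4 − 3x^2 + x by Newton's method on q'; the quartic is an invented stand-in for illustration, not the thesis polynomial p_1.

```python
from decimal import Decimal, getcontext

# Thirty-digit working precision, mirroring the Mathematica/Maple user
# option mentioned in the text.
getcontext().prec = 30

def dq(x):
    # Derivative of the hypothetical quartic q(x) = x^4 - 3x^2 + x.
    return 4 * x**3 - 6 * x + 1

def ddq(x):
    # Second derivative of q, used as the Newton slope.
    return 12 * x**2 - 6

# Newton iteration from a rough double-precision-quality starting guess;
# quadratic convergence reaches the working precision in a few steps.
x = Decimal("-1.3")
for _ in range(50):
    x = x - dq(x) / ddq(x)
```

After the loop, x is a stationary point of q accurate to roughly the full thirty digits, which is the role the Mathematica/Maple local search plays for the four-variable polynomial in the text.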

The norm of the difference between this 'true' minimizer and the minimizer computed by the NSolve/Eigensystem and the Eig methods is 7.10543 × 10⁻¹⁵ and 8.26006 × 10⁻¹⁴, respectively.
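Error norms of this kind are computed mechanically as the Euclidean norm of the coordinate difference. The snippet below reproduces the calculation using the double-precision truncation of (7.4) and a hypothetical computed minimizer perturbed at the 10⁻¹⁵ level; the perturbation is invented purely to mimic the order of magnitude reported for NSolve/Eigensystem, it is not taken from the thesis.

```python
import numpy as np

# Double-precision truncation of the 'true' minimizer from (7.4).
x_true = np.array([ 0.876539213106233894587289929758,
                   -0.903966282304642050057296045914,
                    0.862027936174326572650513966373,
                   -0.835187476756286528192781820247])

# Hypothetical minimizer from a matrix method, perturbed at the 1e-15
# level (invented perturbation, for illustration only).
x_num = x_true + 1e-15 * np.array([1.0, -2.0, 3.0, -1.0])

# Euclidean norm of the difference, the accuracy measure used in the text.
err = np.linalg.norm(x_true - x_num)
```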

Following the approach of the previous chapters, we then proceed to determine the global minimum of the polynomial (7.1) using the nD-system implementation of the linear operator A_p1 to compute only its smallest real eigenvalue with an iterative eigenvalue solver (instead of working with the explicit matrix A_p1). The heuristic method used here is the least-increments method. The coordinates of the global minimizer are computed from the corresponding eigenvector, employing the Stetter vector structure. For this purpose the iterative eigenvalue solvers JDQR, JDQZ, and Eigs have been used. JDQR is a standard and JDQZ a generalized iterative eigenvalue solver; both employ Jacobi–Davidson methods coded in Matlab. The method Eigs is the standard iterative Matlab eigenproblem solver, which uses (restarted) Arnoldi methods through ARPACK. The selection criterion of the iterative solvers used here is to focus on the eigenvalues of smallest magnitude first, in the hope of finding the smallest real eigenvalue first.
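The call pattern can be sketched with SciPy's ARPACK interface, the Python counterpart of Matlab's Eigs. The operator below is a diagonal stand-in with an invented spectrum (the real A_p1 is applied matrix-free through the nD-system and is not reproduced here), and the eigenvector used for the Stetter extraction is a hypothetical one built from the coordinates in (7.4), scaled by an arbitrary factor.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

# Stand-in for the matrix-free operator A_p1: a diagonal action with a
# known, invented spectrum whose smallest-magnitude entry is real.
d = np.array([4.095, -5.2, 6.7, 8.1, -9.3, 10.4,
              11.8, 13.0, 14.5, -15.2, 16.9, 18.3])
A = LinearOperator((12, 12), matvec=lambda v: d * v, dtype=float)

# ARPACK call targeting eigenvalues of smallest magnitude first,
# mirroring the selection criterion described for JDQR/JDQZ/Eigs.
vals, vecs = eigs(A, k=3, which='SM')
lam = vals[np.argmin(np.abs(vals))].real  # smallest real eigenvalue found

# Stetter extraction: if an eigenvector is ordered against the monomial
# basis (1, x1, x2, x3, x4, ...), normalizing its first entry to one
# exposes the minimizer coordinates. Hypothetical eigenvector built from
# the (truncated) coordinates of (7.4), scaled by 2:
v = 2.0 * np.array([1.0, 0.876539213, -0.903966282,
                    0.862027936, -0.835187477])
x = v[1:5] / v[0]
```

The same normalize-by-the-first-entry step is what "employing the Stetter vector structure" amounts to in the text, whatever solver produced the eigenvector.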

For fast convergence of these iterative eigenvalue methods, a balancing technique is used to balance the linear operator; see [27]. Balancing a linear operator or a matrix M means finding a similarity transform D^{-1}MD, with D a diagonal matrix, such that, for each i, row i and column i have (approximately) the same norm. The algorithms described in [27] help to reduce the norm of the matrix using Perron–Frobenius methods and a direct iterative method. Such a balancing algorithm is expected to be a good preconditioner for iterative eigenvalue solvers, especially when the matrix is not available explicitly.
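A minimal sketch of such a balancing sweep, in the classical Osborne style, is given below. It assumes the matrix is available explicitly (unlike the matrix-free setting of the text, where [27] adapts the idea) and equalizes row and column norms by a diagonal similarity, which leaves the spectrum unchanged.

```python
import numpy as np

def balance(M, sweeps=10):
    """Osborne-style balancing sketch: find diagonal D so that in
    D^{-1} M D each row i and column i have (roughly) equal 2-norms."""
    n = M.shape[0]
    d = np.ones(n)
    B = M.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):
            c = np.linalg.norm(B[:, i])   # current column norm
            r = np.linalg.norm(B[i, :])   # current row norm
            if c == 0.0 or r == 0.0:
                continue
            f = np.sqrt(r / c)
            B[:, i] *= f    # column i of D^{-1} M D scales with d_i
            B[i, :] /= f    # row i scales with 1/d_i
            d[i] *= f
    return B, d

# Badly scaled example: off-diagonal entries differ by eight orders
# of magnitude before balancing.
M = np.array([[1.0, 1e4],
              [1e-4, 2.0]])
B, d = balance(M)
```

Because each step is an exact diagonal similarity, the eigenvalues of B match those of M; only the conditioning seen by an iterative solver improves, which is why balancing acts as a preconditioner here.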

In Table 7.2 the results of these computations are displayed. The columns denote the methods used, the minimal real eigenvalue computed, the error as the difference between this eigenvalue and the 'true' eigenvalue computed above, the number of
