
PARALLEL ALGORITHMS PROJECT

7.16 An active-set trust-region method for derivative-free nonlinear bound-constrained optimization.

S. Gratton: INPT-IRIT, UNIVERSITY OF TOULOUSE AND ENSEEIHT, France; Ph. L. Toint: FUNDP, UNIVERSITY OF NAMUR, Belgium; A. Tröltzsch: CERFACS, France

In [ALG59], we consider an implementation of a recursive model-based active-set trust-region method for solving bound-constrained nonlinear non-convex optimization problems without derivatives, using the technique of self-correcting geometry proposed by K. Scheinberg and Ph. L. Toint [Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization, SIAM Journal on Optimization, 20(6):3512-3532, 2010]. Considering an active-set method in bound-constrained model-based optimization creates the opportunity to save a substantial number of function evaluations, as it allows us to maintain much smaller interpolation sets while the optimization proceeds in lower-dimensional subspaces. The resulting algorithm is shown to be numerically competitive.

7.17 A hybrid optimization algorithm for gradient-based and derivative-free optimization.

S. Gratton: INPT-IRIT, UNIVERSITY OF TOULOUSE AND ENSEEIHT, France; Ph. L. Toint: FUNDP, UNIVERSITY OF NAMUR, Belgium; A. Tröltzsch: CERFACS, France

A known drawback of derivative-free optimization (DFO) methods is the difficulty of coping with higher-dimensional problems: when the problem dimension exceeds a few tens of variables, a pure DFO method becomes rather expensive in terms of the number of function evaluations. For this reason, using gradient information, if accessible, is highly useful for efficient optimization in practice, even if this information is expected to be noisy. This applies especially when working with real-life applications such as aerodynamic shape optimization. It is well known that, when the gradient is available, the L-BFGS method is a very efficient method for solving bound-constrained optimization problems. Returning to the derivation of the BFGS method, its update can be seen as a way of correcting the Hessian approximation using the information contained in the so-called secant equation. We would like to generate a set of Hessian updates that generalizes the L-BFGS approach to situations where the function or the gradient is only approximated. We propose a family of algorithms that contains both the derivative-free approach and the L-BFGS method as special cases, and that is therefore able to take the error occurring in the cost function or in the gradient of the problem optimally into account.
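
For reference, the secant equation mentioned above requires the updated Hessian approximation $B_{k+1}$ to satisfy
\[
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
\]
and the classical BFGS update is the rank-two correction
\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}.
\]
When only approximate function or gradient values are available, $y_k$ is itself inexact; this is precisely the situation that the proposed family of updates is meant to handle.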

7.18 How much gradient noise does a gradient-based linesearch method tolerate?

S. Gratton: INPT-IRIT, UNIVERSITY OF TOULOUSE AND ENSEEIHT, France; Ph. L. Toint: FUNDP, UNIVERSITY OF NAMUR, Belgium; A. Tröltzsch: CERFACS, France

Among numerical methods for smooth unconstrained optimization, gradient-based linesearch methods, such as quasi-Newton methods, may work quite well even in the presence of relatively high-amplitude noise in the gradient of the objective function. We derive a bound on the amplitude of this noise which ensures a descent direction for such a method. Exploiting this bound, we also discuss conditions under which global convergence can be guaranteed. More details can be found in [ALG59].
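
As a simple illustration of the kind of condition involved (a special case worked out here, not the bound of [ALG59] itself): if the exact gradient $g = \nabla f(x)$ is contaminated by noise $e$ and the noisy steepest-descent direction $d = -(g + e)$ is used, then the Cauchy-Schwarz inequality gives
\[
g^{T} d = -\|g\|^{2} - g^{T} e \;\le\; -\|g\|\bigl(\|g\| - \|e\|\bigr) \;<\; 0 \qquad \text{whenever } \|e\| < \|g\|,
\]
so $d$ remains a descent direction as long as the noise is smaller in norm than the true gradient.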
