
H.G. Bock and V. Schulz

The approximation $B$ of the Hessian of $f$ is often obtained by Quasi-Newton update formulas, as discussed in [29]. Step (1) of this algorithm involves two costly operations which, however, can be performed in a highly modular way. First, $y(p^k)$ has to be computed, which essentially means a full nonlinear solution of the flow problem abbreviated by equation (3.19). Furthermore, the corresponding gradient $\nabla_p f(y(p^k), p^k)$ has to be determined. One of the most efficient methods for this purpose is the adjoint method, which requires only the solution of a linear adjoint problem, since for $y^k = y(p^k)$ one obtains
\[
\nabla_p f(y(p^k), p^k) = f_p(y^k, p^k)^\top + c_p(y^k, p^k)^\top \lambda,
\]
where $\lambda$ solves the adjoint problem
\[
c_y(y^k, p^k)^\top \lambda = -f_y(y^k, p^k)^\top .
\]
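To make the two building blocks concrete, the following sketch applies this recipe to a hypothetical low-dimensional model problem: the nonlinear system $c(y,p) = Ay + y^3 - p = 0$ stands in for (3.19), with a tracking-type objective. The model, all names, and all parameter values are illustrative, not from the text.

```python
import numpy as np

# Hypothetical stand-in for the flow problem (3.19): c(y, p) = A y + y**3 - p = 0
# with tracking objective f(y, p) = 0.5*||y - y_t||^2 + 0.5*alpha*||p||^2.
rng = np.random.default_rng(0)
n = 5
A = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
y_t = rng.standard_normal(n)
alpha = 1e-2

def solve_state(p, tol=1e-12):
    """Full nonlinear solution of c(y, p) = 0 by Newton's method."""
    y = np.zeros_like(p)
    for _ in range(50):
        c = A @ y + y**3 - p
        if np.linalg.norm(c) < tol:
            break
        c_y = A + np.diag(3.0 * y**2)        # state Jacobian c_y(y, p)
        y -= np.linalg.solve(c_y, c)
    return y

def gradient_adjoint(p):
    """grad_p f = f_p^T + c_p^T lam, where c_y^T lam = -f_y^T (one linear solve)."""
    y = solve_state(p)
    c_y = A + np.diag(3.0 * y**2)
    c_p = -np.eye(len(p))                    # here c is linear in p
    lam = np.linalg.solve(c_y.T, -(y - y_t)) # adjoint problem
    return alpha * p + c_p.T @ lam           # f_p^T + c_p^T lam

# Validate against central finite differences of the reduced objective
def f_red(p):
    y = solve_state(p)
    return 0.5 * np.sum((y - y_t)**2) + 0.5 * alpha * np.sum(p**2)

p = rng.standard_normal(n)
g = gradient_adjoint(p)
eps = 1e-6
g_fd = np.array([(f_red(p + eps * e) - f_red(p - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
print(np.max(np.abs(g - g_fd)))              # small: adjoint gradient matches FD
```

The point of the adjoint route is that the gradient costs one linear solve with $c_y^\top$, independent of the number of parameters $p$.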

Assuming that the CFD model (3.19) is solved by Newton's method, one might wonder whether it suffices to perform only one Newton step per optimization iteration in the algorithm above. This leads to the algorithmic steps

\[
\text{(1) solve }
\begin{bmatrix} 0 & 0 & c_y^\top \\ 0 & B & c_p^\top \\ c_y & c_p & 0 \end{bmatrix}
\begin{pmatrix} \Delta y \\ \Delta p \\ \lambda \end{pmatrix}
=
\begin{pmatrix} -f_y^\top \\ -f_p^\top \\ -c \end{pmatrix}
\]
\[
\text{(2) update } (y^{k+1}, p^{k+1}) = (y^k, p^k) + \tau \cdot (\Delta y^{k+1}, \Delta p^{k+1})
\]

This algorithm is called a reduced SQP algorithm. Its local convergence can again be of quadratic, superlinear or linear type [26, 34], depending on how well $B$ approximates the so-called reduced Hessian of the Lagrangian

\[
B \approx \mathcal{L}_{pp} - \mathcal{L}_{py} c_y^{-1} c_p - (\mathcal{L}_{py} c_y^{-1} c_p)^\top + c_p^\top c_y^{-\top} \mathcal{L}_{yy} c_y^{-1} c_p .
\]
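This expression is exactly the projection $Z^\top \nabla^2 \mathcal{L}\, Z$ of the full Hessian of the Lagrangian onto the null space of the linearized constraint, with the basis $Z = \begin{pmatrix} -c_y^{-1} c_p \\ I \end{pmatrix}$. A quick numerical check on random data (illustrative only, relying on the symmetry $\mathcal{L}_{yp} = \mathcal{L}_{py}^\top$):

```python
import numpy as np

# Verify numerically that the reduced-Hessian formula equals Z^T H Z for
# Z = [-c_y^{-1} c_p; I], a null-space basis of the linearized constraint.
rng = np.random.default_rng(2)
n, m = 5, 3
c_y = rng.standard_normal((n, n)) + 5 * np.eye(n)       # nonsingular state Jacobian
c_p = rng.standard_normal((n, m))
H = rng.standard_normal((n + m, n + m)); H = 0.5 * (H + H.T)  # symmetric Hessian of L
L_yy, L_py, L_pp = H[:n, :n], H[n:, :n], H[n:, n:]

S = np.linalg.solve(c_y, c_p)                           # c_y^{-1} c_p
red = L_pp - L_py @ S - (L_py @ S).T + S.T @ L_yy @ S   # formula from the text

Z = np.vstack([-S, np.eye(m)])                          # ker([c_y, c_p]) basis
print(np.max(np.abs(red - Z.T @ H @ Z)))                # agreement to machine precision
```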

The vector $\lambda$ produced in each step of the reduced SQP algorithm converges to the adjoint variable vector of problem (3.18, 3.19) at the solution. This iteration can again be written in the form of a Newton-type method

\[
\begin{bmatrix} 0 & 0 & c_y^\top \\ 0 & B & c_p^\top \\ c_y & c_p & 0 \end{bmatrix}
\begin{pmatrix} \Delta y \\ \Delta p \\ \Delta\lambda \end{pmatrix}
=
\begin{pmatrix} -\mathcal{L}_y^\top \\ -\mathcal{L}_p^\top \\ -c \end{pmatrix},
\qquad
\begin{pmatrix} y^{k+1} \\ p^{k+1} \\ \lambda^{k+1} \end{pmatrix}
=
\begin{pmatrix} y^k \\ p^k \\ \lambda^k \end{pmatrix}
+ \tau \cdot
\begin{pmatrix} \Delta y \\ \Delta p \\ \Delta\lambda \end{pmatrix}.
\tag{3.21}
\]

This iteration can be generalized to inexact linear solves with an approximate matrix $A \approx c_y$, so that the approximate reduced SQP iteration reads

\[
\begin{bmatrix} 0 & 0 & A^\top \\ 0 & B & c_p^\top \\ A & c_p & 0 \end{bmatrix}
\begin{pmatrix} \Delta y \\ \Delta p \\ \Delta\lambda \end{pmatrix}
=
\begin{pmatrix} -\mathcal{L}_y^\top \\ -\mathcal{L}_p^\top \\ -c \end{pmatrix},
\qquad
\begin{pmatrix} y^{k+1} \\ p^{k+1} \\ \lambda^{k+1} \end{pmatrix}
=
\begin{pmatrix} y^k \\ p^k \\ \lambda^k \end{pmatrix}
+ \tau \cdot
\begin{pmatrix} \Delta y \\ \Delta p \\ \Delta\lambda \end{pmatrix}.
\tag{3.22}
\]
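A minimal sketch of iteration (3.22) on a hypothetical linear-quadratic model problem (same notational conventions as above; all names and values are illustrative), with a deliberately perturbed Jacobian approximation $A \approx c_y$. Since the right-hand side uses the exact residuals, the fixed point of the iteration is still the exact KKT point; the perturbation in $A$ only degrades the convergence rate.

```python
import numpy as np

# Hypothetical linear-quadratic test of iteration (3.22):
# c(y, p) = y - M p - b = 0 (c_y = I, c_p = -M),
# f = 0.5*||y - y_t||^2 + 0.5*alpha*||p||^2, inexact Jacobian A ~ c_y.
rng = np.random.default_rng(3)
n, m = 6, 3
M = rng.standard_normal((n, m)); b = rng.standard_normal(n)
y_t = rng.standard_normal(n); alpha = 0.1

c_y, c_p = np.eye(n), -M
A = c_y + 0.01 * rng.standard_normal((n, n))   # perturbed approximation A ~ c_y
B = alpha * np.eye(m) + M.T @ M                # reduced Hessian of this model

K = np.block([[np.zeros((n, n)), np.zeros((n, m)), A.T],
              [np.zeros((m, n)), B,                c_p.T],
              [A,                c_p,              np.zeros((n, n))]])

y, p, lam = np.zeros(n), np.zeros(m), np.zeros(n)
tau = 1.0
for k in range(100):
    L_y = (y - y_t) + c_y.T @ lam              # exact Lagrangian gradients on the RHS
    L_p = alpha * p + c_p.T @ lam
    c   = y - M @ p - b
    d = np.linalg.solve(K, np.concatenate([-L_y, -L_p, -c]))
    y   += tau * d[:n]
    p   += tau * d[n:n+m]
    lam += tau * d[n+m:]

print(np.linalg.norm(y - M @ p - b))                      # feasibility at the limit
print(np.linalg.norm(alpha*p + M.T @ (M @ p + b - y_t)))  # reduced gradient at the limit
```

With $A = c_y$ this reduces to (3.21); the size of $\|A - c_y\|$ governs the linear contraction factor.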

It is shown in [23] that in this case, the use of an approximation of the consistently reduced Hessian, i.e.,
