
Chapter 7 / Missile Trajectory Optimisation


This condition ensures that the step length chosen results in a new gradient that is > 0 but < (ε2 := 0.5) with respect to the initial gradient.

WOLFE CONDITION 3

H(X_k) − H(X_{k+1}) ≥ −ε_3 ⋅ α ⋅ (d_k ⋅ g_k)

Equation 7.8-5

This condition ensures that the function value is less than its initial value, using (ε3 := 0.0001) to prevent cycling. With this value the function must be reduced by 0.01% of the linear prediction. This condition is automatically satisfied in Steepest Descent if (ε1 := 1 and ε2 := 0).
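By way of illustration, the acceptance test of Equation 7.8-5 reduces to a one-line comparison. The sketch below is not taken from the thesis software; the function name sufficient_decrease, the quadratic test cost and the step values are assumptions made purely for the example.

```python
import numpy as np

# Hypothetical helper illustrating the acceptance test of Equation 7.8-5.
# H is the cost function, g_k the gradient at X_k, d_k the search direction,
# alpha the trial step length; eps3 follows the thesis value of 0.0001.
def sufficient_decrease(H, X_k, d_k, g_k, alpha, eps3=1.0e-4):
    X_next = X_k + alpha * d_k
    # The function must fall by at least eps3 (0.01%) of the linear
    # prediction -alpha * (d_k . g_k) made from the local gradient.
    return H(X_k) - H(X_next) >= -eps3 * alpha * np.dot(d_k, g_k)

# Purely illustrative use on a simple quadratic cost with a
# steepest-descent direction and a trial step of 0.1.
if __name__ == "__main__":
    H = lambda x: float(np.dot(x, x))
    X_k = np.array([1.0, -2.0])
    g_k = 2.0 * X_k                 # gradient of x.x
    d_k = -g_k                      # steepest-descent direction
    print(sufficient_decrease(H, X_k, d_k, g_k, alpha=0.1))
```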

7.8.1 Newton-Raphson Method

For Steepest Descent, [B] is replaced by the identity matrix [I]. This is the simplest of the gradient-based algorithms and is used to initialise many other, more complex algorithms. Minimisation in function space using an inexact line search is conceptually simple, computationally efficient and robust away from an optimum solution. Improvement at each step is generally assured, and accommodating control constraints is simple. Detrimentally, convergence close to an optimal solution is slow and prone to “zigzagging”, although this should not be a problem given continually changing boundary conditions.
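A minimal sketch of such a steepest-descent iteration with an inexact (backtracking) line search is given below, assuming a generic cost H and gradient grad_H; the routine name, tolerances and test problem are illustrative and do not reproduce the thesis implementation.

```python
import numpy as np

def steepest_descent(H, grad_H, X, iters=50, alpha0=1.0, eps3=1.0e-4):
    """Illustrative steepest-descent loop with a backtracking (inexact)
    line search; [B] is effectively the identity, so d_k = -g_k."""
    for _ in range(iters):
        g = grad_H(X)
        d = -g                                   # steepest-descent direction
        alpha = alpha0
        # Back the step off until the decrease test of Equation 7.8-5 holds.
        while H(X) - H(X + alpha * d) < -eps3 * alpha * np.dot(d, g):
            alpha *= 0.5
            if alpha < 1.0e-12:
                return X                         # no acceptable step found
        X = X + alpha * d
    return X

# Purely illustrative use on a quadratic bowl.
X_opt = steepest_descent(lambda x: float(np.dot(x, x)),
                         lambda x: 2.0 * x,
                         np.array([3.0, -1.0]))
```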

7.8.2 Conjugate Gradient Method<br />

Conjugate Gradient methods are often selected as an alternative to Newton-<br />

Raphson methods if Hessians are unavailable. The method searches along<br />

“n-1” conjugate directions following a steepest-decent step,<br />

(k ≠ j) ∧ (k, j) ∈ [1(1)n] ⇒ d_k^T ⋅ [G] ⋅ d_j := 0

Equation 7.8-6

k ∈ [1(1)n−1] ⇒ d_k := g_k + β_k ⋅ d_{k−1}

Equation 7.8-7

The use of inexact line searches can lead to slow convergence. For non-quadratic functions convergence takes more than (n) iterations and the process must be restarted when (k := n), generally using a steepest-descent step. CONJ_GRADIENT uses an implementation of Shanno’s Conjugate Gradient method recommended by Vorley [V.4], a memoryless quasi-Newton algorithm modified for constraints in conjunction with a Fibonacci search. This algorithm is used to examine alternative directions when the cost reduction using the steepest-descent method begins to slow.
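The sketch below illustrates the conjugate-direction update of Equation 7.8-7 and the restart at (k := n), with an assumed Fletcher-Reeves form for β_k since the coefficient is not defined in this extract. It is not Shanno’s memoryless quasi-Newton algorithm nor the CONJ_GRADIENT routine; the constraint handling and Fibonacci search are omitted, and the sign convention takes the search direction along the negative gradient.

```python
import numpy as np

def conjugate_gradient(H, grad_H, X, n, cycles=10, eps3=1.0e-4):
    """Illustrative non-linear conjugate gradient with a restart every n
    steps. beta_k is the assumed Fletcher-Reeves coefficient; the line
    search is a simple backtracking test against Equation 7.8-5."""
    g_prev = None
    d = None
    for i in range(cycles * n):
        g = grad_H(X)
        if np.dot(g, g) < 1.0e-16:
            return X                            # gradient negligible: stop
        if d is None or i % n == 0:
            d = -g                              # restart: steepest-descent step
        else:
            beta = np.dot(g, g) / np.dot(g_prev, g_prev)  # assumed Fletcher-Reeves
            d = -g + beta * d                   # conjugate direction (Eq. 7.8-7 form)
        alpha = 1.0
        # Inexact (backtracking) line search using the decrease test of Eq. 7.8-5.
        while H(X) - H(X + alpha * d) < -eps3 * alpha * np.dot(d, g):
            alpha *= 0.5
            if alpha < 1.0e-12:
                return X
        X = X + alpha * d
        g_prev = g
    return X
```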
