
Chapter 7 / Missile Trajectory Optimisation

7.8 Search Direction

Rapid initial convergence from the PN trajectory, and in response to changing boundary conditions, is required. Slow convergence to an optimal trajectory thereafter is less important than the ability to respond rapidly to target manoeuvres. The selection of the search direction is a crucial factor in the stability and efficiency of optimisation techniques for minimising the Hamiltonian function. A review of the development of optimisation algorithms, including an extensive bibliography, is provided by Bazaraa [B.4].

Techniques for selecting the search direction using derivative information are better suited to the current application, although direct searches are more robust in an off-line capacity. Gradient-based techniques update the system states and controls by taking a step of length (α) in the direction (d) defined by the inverse Hessian [B]⁻¹ and gradient vector (g),

X_{k+1} := X_k − d                                Equation 7.8-1

X_{k+1} := X_k − α · [B]⁻¹ · g                    Equation 7.8-2
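As a concrete illustration of the update in Equation 7.8-2, the following sketch applies a single quasi-Newton step to a small quadratic objective standing in for the Hamiltonian. The matrix A, the vector b, and the use of the exact Hessian as [B] are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical quadratic objective f(X) = 0.5 X^T A X - b^T X, standing in
# for the Hamiltonian; its gradient is g(X) = A X - b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(X):
    return A @ X - b

X_k = np.zeros(2)          # current estimate of the states and controls
B = A                      # Hessian approximation [B]; exact for a quadratic
alpha = 1.0                # step length along the search direction

# Equation 7.8-2: X_{k+1} := X_k - alpha * [B]^{-1} * g
g = grad(X_k)
d = np.linalg.solve(B, g)  # direction d = [B]^{-1} g, without forming the inverse
X_k1 = X_k - alpha * d

print(X_k1)                        # the minimiser A^{-1} b
print(np.linalg.norm(grad(X_k1)))  # zero, since [B] here is the exact Hessian
```

Solving [B] · d = g rather than forming [B]⁻¹ explicitly is the usual numerical choice; explicit inversion aggravates the ill-conditioning noted in the next paragraph for penalty and barrier functions.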

Techniques requiring Hessians, such as the class of Variable Metric Methods, converge more rapidly than gradient-based alternatives. However, they involve numerical differencing of non-linear aerodynamic functions, and possible ill-conditioning if penalty or barrier functions are used. Early optimisation algorithms were prone to premature termination, divergence and cycling. Wolfe introduced three rules associated with the length travelled along the search direction to prevent these effects, conditions that ensure that the magnitude of the gradient (g) reduces for twice-differentiable convex functions.

WOLFE CONDITION 1

g • d = −gᵀ · [B]⁻¹ · g > −ε₁ · |g| · |d|         Equation 7.8-3

This condition ensures that the direction chosen (d) is close to the initial gradient (g) by selecting a small value, typically (ε₁ := 0.001).

WOLFE CONDITION 2

g_{k+1} • d_k ≤ ε₂ · (g_k • d_k)                  Equation 7.8-4
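As a sketch of how these rules gate the step length, the routine below checks the angle rule of Equation 7.8-3 once and then grows (α) until the curvature rule of Equation 7.8-4 holds, reusing the hypothetical quadratic from above. The angle test is applied in the form g • d > ε₁ · |g| · |d|, consistent with the X_{k+1} := X_k − α · d convention of Equation 7.8-2; ε₂ := 0.9 is an assumed conventional value, since the thesis quotes only ε₁ := 0.001, and the third of Wolfe's rules lies outside this excerpt.

```python
import numpy as np

# Same hypothetical quadratic stand-in for the Hamiltonian as above.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(X):
    return A @ X - b

def wolfe_step(X_k, d, eps1=0.001, eps2=0.9, alpha=1.0, grow=2.0, max_iter=50):
    """Return a step length satisfying the two rules shown in this excerpt."""
    g_k = grad(X_k)
    # Equation 7.8-3 analogue: reject directions near-orthogonal to g.
    if g_k @ d <= eps1 * np.linalg.norm(g_k) * np.linalg.norm(d):
        raise ValueError("search direction fails the angle condition")
    for _ in range(max_iter):
        g_k1 = grad(X_k - alpha * d)
        # Equation 7.8-4: g_{k+1} . d_k <= eps2 * (g_k . d_k).
        if g_k1 @ d <= eps2 * (g_k @ d):
            return alpha
        alpha *= grow  # step too short: directional derivative barely reduced
    raise RuntimeError("no acceptable step length found")

X_k = np.zeros(2)
d = np.linalg.solve(A, grad(X_k))  # d = [B]^{-1} g with the exact Hessian
print(wolfe_step(X_k, d))          # alpha = 1.0 is accepted immediately
```

For the exact-Hessian direction the unit step already satisfies the curvature rule, which is precisely why Variable Metric Methods converge rapidly once [B] approximates the true Hessian well.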
