Solution Procedures<br />
initialize<br />
    evaluate h<br />
    test convergence: error = |hj − htargetj| ≤ tolerance × weightj<br />
    initialize derivative matrix D to input matrix<br />
    calculate gain matrix: C = λD⁻¹<br />
iteration<br />
    identify derivative matrix<br />
        optional perturbation identification<br />
            perturb each element of x: δxi = Δ × weighti<br />
            evaluate h<br />
        calculate D<br />
        calculate gain matrix: C = λD⁻¹<br />
    increment solution: δx = −C(h − htarget)<br />
    evaluate h<br />
    test convergence: error = |hj − htargetj| ≤ tolerance × weightj<br />
Figure 5-4. Outline of Newton–Raphson method.<br />
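As an illustrative sketch (not code from this report), the Figure 5-4 procedure could be implemented as follows; the function name `newton_raphson`, the separate weight vectors for h and x, and all default values are assumptions:

```python
import numpy as np

def newton_raphson(h, x0, h_target, weight_h, weight_x,
                   tol=1e-8, lam=1.0, delta=1e-6, max_iter=50):
    """Relaxed Newton-Raphson iteration following the Figure 5-4 outline.

    h maps x to the response vector; lam is the relaxation factor λ; the
    derivative matrix D is identified by perturbing each element of x.
    """
    x = np.asarray(x0, dtype=float)
    h_target = np.asarray(h_target, dtype=float)
    hx = h(x)  # evaluate h
    for _ in range(max_iter):
        # test convergence: error = |h_j - h_target_j| <= tolerance * weight_j
        if np.all(np.abs(hx - h_target) <= tol * weight_h):
            return x
        # perturbation identification of the derivative matrix D
        D = np.empty((hx.size, x.size))
        for i in range(x.size):
            dx = delta * weight_x[i]       # perturb: dx_i = Delta * weight_i
            xp = x.copy()
            xp[i] += dx
            D[:, i] = (h(xp) - hx) / dx    # calculate D, column by column
        # gain matrix C = lam * D^-1; increment dx = -C (h - h_target),
        # applied via a linear solve rather than an explicit inverse
        x = x - lam * np.linalg.solve(D, hx - h_target)
        hx = h(x)                          # evaluate h
    return x
```

For example, solving h(x) = (x0² + x1, x0 − x1) for the target (2, 0) from the starting point (2, 0) converges to the solution (1, 1) in a few iterations.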
5-2.3 Secant Method<br />
The secant method (with relaxation) is developed from the Newton–Raphson method. The modified<br />
Newton–Raphson iteration is:<br />
xn+1 = xn − C f(xn) = xn − λD⁻¹ f(xn)<br />
where the derivative matrix D is an estimate of f ′ . In the secant method, the derivative of f is evaluated<br />
numerically at each step:<br />
f′(xn) ≅ [f(xn) − f(xn−1)] / (xn − xn−1)<br />
It can be shown that the error then decreases during the iteration according to:<br />
|ɛn+1| ≅ |f′′/2f′| |ɛn| |ɛn−1| ≅ |f′′/2f′|^0.62 |ɛn|^1.62<br />
which is slower than the quadratic convergence of the Newton–Raphson method (ɛn^2), but still better than<br />
linear convergence. In practical problems, whether the iteration converges at all is often more important<br />
than the rate of convergence. Limiting the maximum amplitude of the derivative estimate may also be<br />
appropriate. Note that with f = x − G(x), the derivative f′ is dimensionless, so a universal limit (say<br />
maximum |f′| = 0.3) can be specified. A limit on the maximum increment of x (as a fraction of the x<br />
value) can also be imposed. The process for the secant method is shown in Figure 5-5.<br />
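As a sketch of the scalar case (the function name, the optional increment limit `step_frac`, and the defaults are assumptions, not from this report):

```python
def secant(f, x0, x1, lam=1.0, step_frac=None, tol=1e-12, max_iter=100):
    """Relaxed secant iteration: x_{n+1} = x_n - lam * f(x_n) / f'(x_n),
    with f' estimated numerically from the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) <= tol:
            break
        # numerical derivative: f'(xn) ~= [f(xn) - f(xn-1)] / (xn - xn-1)
        d = (f1 - f0) / (x1 - x0)
        dx = -lam * f1 / d
        # optional limit on the increment as a fraction of the x value
        if step_frac is not None and x1 != 0.0:
            cap = step_frac * abs(x1)
            dx = max(-cap, min(cap, dx))
        x0, f0 = x1, f1
        x1, f1 = x1 + dx, f(x1 + dx)
    return x1
```

For instance, applied to the fixed-point problem G(x) = cos(x), i.e. f(x) = x − cos(x), the iteration converges rapidly to the root near 0.739.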
5-2.4 Method of False Position<br />
The method of false position is a variant of the secant method, based on calculating the derivative<br />
with values that bracket the solution. The iteration starts with values of x0 and x1 such that f(x0) and<br />
f(x1) have opposite signs. Then the derivative f′ and the new estimate xn+1 are<br />
f′(xn) ≅ [f(xn) − f(xk)] / (xn − xk)<br />
xn+1 = xn − λD⁻¹ f(xn)
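A minimal scalar sketch follows; the rule used here for choosing xk (retain whichever previous point keeps the root bracketed) is the standard false-position update, and the names and defaults are assumptions:

```python
def false_position(f, x0, x1, lam=1.0, tol=1e-12, max_iter=200):
    """Method of false position: f(x0) and f(x1) must bracket the root,
    and the derivative is always formed from a bracketing pair."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0.0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        # derivative from the bracketing pair: f'(xn) ~= [f(xn)-f(xk)]/(xn-xk)
        d = (f1 - f0) / (x1 - x0)
        x2 = x1 - lam * f1 / d            # xn+1 = xn - lam * f(xn)/f'
        f2 = f(x2)
        if abs(f2) <= tol:
            return x2
        # keep the solution bracketed: x2 replaces the endpoint whose
        # f value has the same sign as f2
        if f2 * f1 < 0.0:
            x0, f0 = x1, f1
        x1, f1 = x2, f2
    return x1
```

Starting from the bracket (0, 1) for f(x) = x − cos(x), where f(0) < 0 < f(1), the iteration stays bracketed and converges to the root near 0.739.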