nr. 477 - 2011 - Institut for Natur, Systemer og Modeller (NSM)


A.5 Newton-Raphson Method

system, ˙x = 0. The method uses a simple and fast-converging algorithm, provided that the initial guess is close to the root (Eldén et al., 2004). As we walk through the derivation and the applications, it will become evident that the method is based on differential calculus and the simple idea of linear approximation.

To derive the method we start from the Taylor series expansion, and since we applied the method to a system of differential equations, the derivation is based on that setting. Now let

˙x = f(x)    (A.19)

Then let f(x) be an n-dimensional vector, f = (f1, f2, ..., fn)ᵀ, with x = (x1, x2, ..., xn).

Then, starting with the Taylor series expansion of the function fi(x) about the point x,

fi(x + ∆x) = fi(x) + Σ_{j=1}^{n} (∂fi/∂xj) ∆xj + O(∆xj²)    (A.20)

where the “big O” notation is an abbreviated way of writing the next terms in the series, i.e. the quadratic and higher-order terms, and denotes that they are small, since ∆x is expected to be small. So by ignoring them we have the linearized version of equation A.20, which we write using vector notation

f(x + ∆x) ≈ f(x) + J(f(x))∆x    (A.21)

where J(f(x)) is the Jacobian matrix. Since MATLAB is built to process calculations numerically, the elements of the Jacobian cannot be derived analytically. Instead we rewrite equation A.20 (again without the quadratic terms) into the form of a finite difference, which can then be used as an approximation of the elements of the Jacobian

∂fi/∂xj ≈ (fi(x + hx̂j) − fi(x)) / h    (A.22)

where x̂j is a unit vector pointing in the direction of xj and h is a small increment of x. Now, the purpose remains to find the roots of the function f(x), and we do so through iterative steps. Let us assume that x is the initial approximation of the root x∗, and hence of the solution to f(x) = 0. Then let x + ∆x be a better approximation.
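As an aside, the finite-difference approximation of equation A.22 is easy to sketch numerically. The following is a minimal Python sketch (the thesis worked in MATLAB; the function name `fd_jacobian` and the example system are our own, chosen because its Jacobian is known analytically):

```python
import numpy as np

def fd_jacobian(f, x, h=1e-6):
    """Approximate the Jacobian of f at x by forward differences, equation A.22."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xh = x.copy()
        xh[j] += h                    # step h along the unit direction x̂_j
        J[:, j] = (f(xh) - fx) / h    # (f_i(x + h x̂_j) − f_i(x)) / h
    return J

# Example system f(x) = (x1² − x2, x1 + x2²), whose analytical Jacobian
# is [[2 x1, −1], [1, 2 x2]], used here to check the approximation.
f = lambda x: np.array([x[0]**2 - x[1], x[0] + x[1]**2])
J_fd = fd_jacobian(f, np.array([1.0, 2.0]))
```

The increment h trades truncation error (which shrinks with h) against round-off error (which grows as h becomes tiny); a value near the square root of machine precision is a common compromise.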

To find the increment ∆x, and thereby the improved approximation for the root, we set f(x + ∆x) = 0 in equation A.21

0 = f(x) + J(f(x))∆x    (A.23)

Since ∆x = x^{k+1} − x^k, where k is the number of iterations, we can rewrite equation A.23 in the following way

x^{k+1} = x^k − J(f(x^k))^{−1} f(x^k)    (A.24)

The procedure now repeats itself, with x^{k+1} taking the place of x^k, until the increment satisfies |∆x| < ε, where ε is a user-defined error tolerance on the estimated value of x∗. This is the essence of the Newton-Raphson method.
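Putting the pieces together, the whole iteration of equation A.24 can be sketched as follows. This is a Python sketch, not the thesis's MATLAB code; the example system and tolerances are our own, and the linear system J ∆x = −f(x) is solved directly rather than by inverting the Jacobian:

```python
import numpy as np

def newton_raphson(f, x0, h=1e-6, eps=1e-10, max_iter=50):
    """Newton-Raphson iteration (equation A.24) with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        # Forward-difference Jacobian, equation A.22.
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            xh = x.copy()
            xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        dx = np.linalg.solve(J, -fx)   # solve J ∆x = −f(x) instead of forming J⁻¹
        x = x + dx                     # x^{k+1} = x^k + ∆x
        if np.linalg.norm(dx) < eps:   # stop once |∆x| < ε
            break
    return x

# Example: a root of f(x) = (x1² + x2² − 4, x1 − x2) is x∗ = (√2, √2).
root = newton_raphson(lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]),
                      x0=[1.0, 1.0])
```

Solving J ∆x = −f(x) at each step avoids the cost and numerical fragility of computing the inverse explicitly, while giving the same update as equation A.24.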
