Numerical Methods in Quantum Mechanics - Dipartimento di Fisica
• Relative error: |a − b| < εa. This may run into trouble close to x = 0.
• If the slope of f(x) close to zero is very small, there might be a finite interval in which f(x) is indistinguishable from zero in machine representation.

E.1.2 Newton-Raphson method
The function is linearly approximated at each iteration in order to obtain a better estimate of the zero. Let us assume that we know both f(x) and f′(x). Then, close to x,

    f(x + δ) ≃ f(x) + f′(x)δ                    (E.1)

and thus, to first order,

    δ = −f(x)/f′(x)                             (E.2)

would yield f(x + δ) = 0. We proceed by iterating in this way. It is possible to show that the rate of convergence is quadratic, i.e. the number of significant figures approximately doubles at each iteration (while with the bisection method it grows linearly).
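The iteration defined by Eq. (E.2) can be sketched as follows (a minimal illustration, not code from the text; function names, tolerances, and the stopping criterion are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    """Newton-Raphson iteration: x -> x - f(x)/f'(x).

    Stops when the correction |delta| falls below tol."""
    x = x0
    for _ in range(maxiter):
        fp = fprime(x)
        if fp == 0.0:
            raise ZeroDivisionError("f'(x) = 0: Newton step undefined")
        delta = -f(x) / fp          # first-order correction, Eq. (E.2)
        x += delta
        if abs(delta) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Example: the square root of 2 as the zero of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Note how the quadratic convergence shows up in practice: starting from x = 1, the successive corrections shrink roughly as 0.5, 0.08, 0.002, 2·10⁻⁶, so only a handful of iterations are needed.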
The problem with this method is that convergence is not guaranteed, notably when f′(x) varies a lot close to the zero. Moreover, the method assumes that f′(x) can be directly calculated for any given x. In cases in which this is not true and the derivative must be calculated via finite differences, it is preferable to use the secant method described below.
E.1.3 Secant method
This is based on a linear expansion of f(x) between two successive points, x_{n−1} and x_n, of an iteration:

    f(x) = f(x_{n−1}) + [(x − x_{n−1}) / (x_n − x_{n−1})] [f(x_n) − f(x_{n−1})]    (E.3)

which provides as an estimate for the zero

    x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))                 (E.4)
The procedure consists in iterating the above step. It is not necessary for the zero to be contained inside the considered interval. This, however, may lead to non-convergence in some pathological cases. In regular cases, the speed of convergence is much better than for the bisection method, although slightly slower than for the Newton-Raphson method (which, however, requires knowledge of the derivative).
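A sketch of the iteration of Eq. (E.4), again with illustrative names and tolerances (not from the text); note that no derivative is needed, only two previous function values:

```python
def secant(f, x0, x1, tol=1e-12, maxiter=100):
    """Secant iteration, Eq. (E.4): each new point is the zero of the
    straight line through (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant: f(x_n) = f(x_{n-1})")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Eq. (E.4)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant method did not converge")

# Example: the square root of 2, starting from the bracket [1, 2]
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```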
In the most difficult cases, it is convenient to split the search into two steps: first bisection, with safe and guaranteed bracketing of the zero; then the secant method, which quickly refines the estimated value to the required precision.
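The two-step strategy above can be sketched as follows (a possible combination, not the text's own code; the number of bisection steps and the tolerance are illustrative choices):

```python
import math

def hybrid_root(f, a, b, nbisect=10, tol=1e-12, maxiter=100):
    """Safe bracketing by bisection, then fast secant refinement."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("zero not bracketed: f(a), f(b) have the same sign")
    # Phase 1: bisection -- guaranteed to keep the zero inside [a, b]
    for _ in range(nbisect):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    # Phase 2: secant iteration, Eq. (E.4), from the narrowed bracket
    x0, f0, x1, f1 = a, fa, b, fb
    for _ in range(maxiter):
        if f1 == f0:
            break                      # flat secant: keep current estimate
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example: the zero of cos(x) in [1, 2], i.e. pi/2
root = hybrid_root(math.cos, 1.0, 2.0)
```

The bisection phase shrinks the bracket by a factor 2^nbisect, which is usually enough to put the secant phase safely in its fast-convergence regime.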