COPYRIGHT 2008, PRINCETON UNIVERSITY PRESS

Let us assume that an algorithm takes N steps to find a good answer. As a rule of thumb, the approximation (algorithmic) error decreases rapidly, often as an inverse power of the number of terms used:

    ε_approx ≃ α / N^β.                                    (2.27)

Here α and β are empirical constants that change for different algorithms and may be only approximately constant, and even then only as N → ∞. The fact that the error must fall off for large N is just a statement that the algorithm converges.

In contrast to this algorithmic error, round-off error tends to grow slowly and somewhat randomly with N. If the round-off errors in each step of the algorithm are not correlated, then we know from previous discussion that we can model the accumulation of error as a random walk with step size equal to the machine precision ε_m:

    ε_ro ≃ √N ε_m.                                         (2.28)

This is the slow growth with N that we expect from round-off error. The total error in a computation is the sum of the two types of errors:

    ε_tot = ε_approx + ε_ro                                (2.29)
    ε_tot ≃ α / N^β + √N ε_m.                              (2.30)

For small N we expect the first term to be the larger of the two but ultimately to be overcome by the slowly growing round-off error.

As an example, in Figure 2.2 we present a log-log plot of the relative error in numerical integration using the Simpson integration rule (Chapter 6, "Integration"). We use the log10 of the relative error because its negative tells us the number of decimal places of precision obtained.[1] As a case in point, let us assume A is the exact answer and A(N) the computed answer. If

    |(A − A(N))/A| ≃ 10^−9,   then   log10 |(A − A(N))/A| ≃ −9.    (2.31)

We see in Figure 2.2 that the error does show a rapid decrease for small N, consistent with an inverse power law (2.27). In this region the algorithm is converging. As N is increased, the error starts to look somewhat erratic, with a slow increase on the average. In accordance with (2.29), in this region round-off error has grown larger than the approximation error and will continue to grow for increasing N.
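The experiment behind a plot like Figure 2.2 can be sketched in a few lines. The following is our own illustration, not the book's program: it applies the composite Simpson rule to an integral whose answer is known exactly (here we pick ∫₀¹ eˣ dx = e − 1) and prints log10 of the relative error as N grows, so the rapid algorithmic decrease and the eventual round-off floor can be seen.

```python
# Illustrative sketch (our own choices of integrand and N values, not
# the text's code): watch the two error regimes for the Simpson rule.
import math

def simpson(f, a, b, N):
    """Composite Simpson rule with an even number N of subintervals."""
    h = (b - a) / N
    total = f(a) + f(b)
    for i in range(1, N):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

A = math.e - 1.0                      # exact answer for f(x) = e^x on [0, 1]
for N in (4, 16, 64, 256, 1024, 4096, 16384):
    rel = abs((A - simpson(math.exp, 0.0, 1.0, N)) / A)
    rel = max(rel, 1e-18)             # guard against an (unlikely) exact zero
    # -log10(rel) is roughly the number of decimal places obtained: it
    # climbs quickly while the algorithm converges, then stalls once
    # round-off error ~ sqrt(N)*eps_m takes over.
    print(f"N = {N:6d}   log10(rel. error) = {math.log10(rel):7.2f}")
```

The precise N at which round-off overtakes the approximation error depends on the integrand and the machine precision, but the qualitative shape of the output matches the description above: a steep drop followed by an erratic plateau near machine precision.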
Clearly

[1] Most computer languages use ln x = log_e x. Yet since x = a^(log_a x), we have log10 x = ln x / ln 10.
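The footnote's identity is easy to verify numerically. In Python, as in most languages, math.log is the natural logarithm; the sample values below are our own:

```python
# Check log10 x = ln x / ln 10, and the decimal-places reading of (2.31).
import math

x = 2.5
print(math.log10(x), math.log(x) / math.log(10))  # agree to rounding

rel_err = 1e-9                # a relative error of 10^-9 ...
print(math.log10(rel_err))    # ... gives log10 near -9: nine decimal places
```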
