
Hybrid Methods for Initial Value Problems in Ordinary Differential Equations
C. W. Gear
Journal of the Society for Industrial and Applied Mathematics: Series B, Numerical Analysis, Vol. 2, No. 1 (1965), pp. 69-86.
Stable URL: http://links.jstor.org/sici?sici=0887-459X%281965%292%3A1%3C69%3AHMFIVP%3E2.0.CO%3B2-X



J. SIAM NUMER. ANAL.
Ser. B, Vol. 2, No. 1
Printed in U.S.A., 1965

HYBRID METHODS FOR INITIAL VALUE PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS*

C. W. GEAR†

* Received by the editors July 15, 1964.
† Digital Computer Laboratory, University of Illinois, Urbana, Illinois.

Abstract. Methods for the integration of initial value problems for the ordinary differential equation dy/dx = f(x, y) which are a combination of one step procedures (e.g., Runge-Kutta) and multistep procedures (e.g., Adams' method) are discussed. A generalization of a theorem from Henrici [3] proves that these methods converge under suitable conditions of stability and consistency. This, incidentally, is also a proof that predictor-corrector methods using a finite number of iterations converge. Four specific methods of order 4, 6, 8 and 10 have been found. Numerical comparisons of the first three of these have been made with Adams', Nordsieck's and the Runge-Kutta methods.

1. Introduction. The original motivation for this work was a desire to achieve some of the flexibility of Runge-Kutta type methods, which allow easy starting and step size changing procedures, and at the same time achieve the increased speed due to the fewer function evaluations and the higher order possible with multistep methods such as Adams' method. Multistep predictor-corrector methods are also characterized by readily yielding a number (the difference between the predictor and the corrector) that is a reasonable estimate of the local truncation error. This number can be used to automatically control the step size. Runge-Kutta methods typically require a further function evaluation to get such an estimate (e.g., see the Kutta-Merson process [1, p. 24]).
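
As an aside, a predictor-corrector error estimate of this kind is typically turned into a step-size control rule along the following lines; this is a generic, textbook-style sketch rather than the mechanism of this paper, and the function name, tolerance, safety factor and clamping limits are all assumptions.

```python
def adjust_step(h, predicted, corrected, tol, order, safety=0.9):
    """Generic step control driven by the predictor-corrector difference.

    |corrected - predicted| is used as an estimate of the local truncation
    error; the step is accepted when the estimate is below the tolerance,
    and the next step is scaled by (tol/estimate)**(1/(order + 1)).
    """
    est = abs(corrected - predicted)
    accept = est <= tol
    if est == 0.0:
        return accept, 2.0 * h                        # error negligible: allow growth
    h_new = safety * h * (tol / est) ** (1.0 / (order + 1))
    return accept, min(max(h_new, 0.1 * h), 2.0 * h)  # keep the change moderate

# Example: adjust_step(0.1, 1.2345, 1.2349, tol=1e-3, order=4) -> (True, ~0.108)
```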

One approach to this problem has been made by Nordsieck [4] who recasts Adams' method in such a way that the derivatives of the approximating polynomial, rather than information at several function points, are retained. These are independent of step size so that it can be changed at will. The original approach made by this author was the following:

(a) Reduce the amount of information (the number of points) being retained to minimize the start-up and step change problems.

(b) Calculate information at the midpoints as part of the process. Doubling the step size requires no interpolation. If the midpoints are available, halving would also require no interpolation (although it is true that interpolation requires very little arithmetic in comparison to the rest of the job).

Therefore, the investigation started with the class of methods described by (1.1), applied to the differential equation dy/dx = f(x, y). The derivative at x_{n+1} is not recalculated from the corrected value of y_{n+1}, in order to save a function evaluation.

These methods require knowledge about k points in addition to x_n. They have been investigated in detail for k = 1 and 2 and partially for k up to 9. The main numerical result is that it is possible to achieve a high order accuracy of (2k + 2) for k up to 4, but not for k from 5 to 9. Since this order accuracy compares favorably with the k + 2 (k + 3 if k odd) accuracy obtainable from multistep methods, the emphasis was shifted from the problem of making life easier for changing the step size to that of achieving high order methods using few points.

Four specific methods of the class (1.1) are given. One has a fourth order corrector formula with k = 1. It has one free parameter in the corrector which must be chosen to make the method stable. In §3, the extraneous roots of the linear difference equations for the error are discussed. There are 2k + 1 nonprincipal roots, of which k + 1 are zero if h(∂f/∂y) = 0. For k = 1 the other nonprincipal root can also be made equal to zero by a suitable choice of the parameter.

When k = 2, a 6th order method can be found. It also has one free parameter in the corrector which can be chosen to minimize the extraneous roots of the difference equation. Zero roots can only be achieved by returning to a 5th order method. The predictor for y_{n+1} also has one free parameter if it is 5th order (which will only cause a 7th order error in the corrector). This parameter can be chosen to get 6th order in the predictor, but it then has undesirable stability effects on the method. In §4, various choices of these parameters are discussed, and the method is compared with the Adams 6th order method, Nordsieck's method and the Runge-Kutta method on the integration of J_16(x) for x = 6(h)6138, with h = 1/32, 1/16 and 1/8.

§5 discusses higher order methods that can be achieved. Coefficients of 8th and 10th order methods which are stable are given. The 8th order method was also used to integrate J_16(x), x = 6(1/16)6138, with good accuracy. The 10th order method was not used as it almost certainly has undesirable stability properties when h(∂f/∂y) is nonzero.

A recent paper by Gragg and Stetter [2] deals with a generalization of this method where the non-mesh-point may be any point fixed in relation to x_n. The generalization taken in §2 of this paper allows a number of non-mesh-points to be used. A theorem is proved giving sufficient conditions for the convergence of these methods. These conditions are that each of the predictors (the estimates of the solution at a set of points) is at least of order zero, that the corrector (the final estimate of y at the next mesh point) is at least of order 1 and that the corrector is stable. This general formulation contains Runge-Kutta and multistep predictor-corrector methods as subsets. The formulation is explicit since, in practice, only a finite number of iterations of the corrector can be used. All but the last can be viewed as a set of predictors.

2. A general formulation and proof of convergence. Consider the sequence of operations

(2.1)
$$p_{0,n+1} = \sum_{i=0}^{k} \alpha_{0,i}\,y_{n-i} + h\sum_{i=0}^{k} \beta_{0,i}\,f_{n-i},$$
$$p_{j,n+1} = \sum_{i=0}^{k} \alpha_{j,i}\,y_{n-i} + h\sum_{i=0}^{k} \beta_{j,i}\,f_{n-i} + h\sum_{i=0}^{j-1} \gamma_{j,i}\,f_{p_i}, \qquad \text{for } j = 1, 2, \ldots, J,$$

where h is the step size (x_n = a + nh) and f_{p_i} = f(p_{i,n+1}). (The quantities y, f and p are assumed to be vectors representing both the independent variable x and the one or more components of the dependent variable.) p_{J-1,n+1} will be taken as the predicted value of y_{n+1}, and p_{J,n+1} will be taken as the corrected value. To avoid a final evaluation of the derivative f, we define f_{n+1} by f_{n+1} = f(p_{J-1,n+1}).
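
To make the sequence of operations concrete, the sketch below carries out one step of a formulation of this shape in Python. It is only an illustration: the function name, the argument layout, and the convention that gamma[j] holds the j coefficients γ_{j,0}, ..., γ_{j,j-1} are assumptions, and no coefficients from the paper are built in.

```python
def hybrid_step(f, y_hist, f_hist, h, alpha, beta, gamma):
    """One step of a general predictor-corrector formulation of the shape of (2.1).

    Following the paper's convention, each y-vector carries the independent
    variable x as one of its components, so f takes a single vector argument.
    y_hist[i] and f_hist[i] hold y_{n-i} and f_{n-i} for i = 0, ..., k;
    alpha and beta are (J+1) x (k+1) coefficient tables, gamma[j] has length j.
    Returns p_{J,n+1} (the corrected value) and f_{n+1} = f(p_{J-1,n+1}), so no
    derivative evaluation is spent on the corrected value itself.
    """
    k = len(y_hist) - 1
    p, fp = [], []
    for j in range(len(alpha)):                      # stages j = 0, 1, ..., J
        pj = sum(alpha[j][i] * y_hist[i] for i in range(k + 1))
        pj = pj + h * sum(beta[j][i] * f_hist[i] for i in range(k + 1))
        pj = pj + h * sum(gamma[j][i] * fp[i] for i in range(j))
        p.append(pj)
        if j < len(alpha) - 1:                       # the last stage is not re-evaluated
            fp.append(f(pj))
    return p[-1], fp[-1]
```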

If k = 0, this method is the general explicit Runge-Kutta method. On the other hand, if the coefficients are such that

α_{j,i} = α_{j-1,i},   β_{j,i} = β_{j-1,i},   γ_{j,j-1} = γ_{j-1,j-2},   for i = 0, 1, ⋯, k and j = 2, 3, ⋯, J,

and all other γ_{j,i} are zero, then this method represents the use of a multistep predictor followed by J applications of a corrector formula.
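
For instance, the standard second order Adams-Bashforth predictor followed by one application of the third order Adams-Moulton corrector (a PECE pair) fits this pattern with k = 1 and J = 1; the mapping of those well-known coefficients onto the arrays of the sketch above is, of course, my own illustrative arrangement.

```python
import numpy as np

# Adams-Bashforth (order 2) predictor, one Adams-Moulton (order 3) correction,
# written as a special case of the general formulation: the corrector reuses
# the predictor's derivative through gamma[1].  Test equation dy/dx = -y, with
# x carried as the first component of the state vector.
f = lambda u: np.array([1.0, -u[1]])

alpha = [[1.0, 0.0], [1.0, 0.0]]                 # both stages start from y_n
beta  = [[1.5, -0.5], [2.0 / 3.0, -1.0 / 12.0]]  # AB2 weights; AM3 weights for f_n, f_{n-1}
gamma = [[], [5.0 / 12.0]]                       # AM3 weight for f at the predicted point

h = 0.1
y_hist = [np.array([h, np.exp(-h)]), np.array([0.0, 1.0])]   # exact values y_1, y_0
f_hist = [f(y_hist[0]), f(y_hist[1])]
y_next, f_next = hybrid_step(f, y_hist, f_hist, h, alpha, beta, gamma)
print(y_next[1], np.exp(-2 * h))                 # computed vs. exact y(0.2)
```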

To discuss the error and convergence of this method we introduce the usual notation e_n = y_n - y(x_n), where y(x) is the solution of the differential equation, and y_n is the value at x = x_n calculated by (2.1) from a suitable set of starting values y_0, y_1, ⋯, y_k. We use the definition of convergence from Henrici [3]; that is, the method is convergent if, for every problem dy/dx = f(x, y), x ∈ [a, b], y(a) = A, satisfying Lipschitz and continuity conditions for x ∈ [a, b] and y ∈ (-∞, +∞), e_n → 0 as x_n → x ∈ [a, b] and y_i → A for i = 0, ⋯, k as h → 0.

The method is said to be stable if each of the roots of the polynomial equation

$$\zeta^{k+1} - \sum_{i=0}^{k} \alpha_{J,i}\,\zeta^{k-i} = 0$$

is either inside the unit circle or on the unit circle and is simple.
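
This root condition is easy to test numerically for a given corrector. The sketch below writes the stability polynomial with the corrector coefficients α_{J,i}, as above; the function name and the tolerance used to decide when a root sits on the unit circle are illustrative choices.

```python
import numpy as np

def satisfies_root_condition(alpha_J, tol=1e-9):
    """Root condition for rho(z) = z**(k+1) - sum_i alpha_J[i] * z**(k-i).

    alpha_J[i] is the corrector coefficient alpha_{J,i}, i = 0, ..., k.  Every
    root must lie inside the unit circle, or on it and be simple.
    """
    rho = np.concatenate(([1.0], -np.asarray(alpha_J, dtype=float)))
    roots = np.roots(rho)
    for r in roots:
        if abs(r) > 1.0 + tol:
            return False                               # root outside the unit circle
        if abs(abs(r) - 1.0) <= tol:                   # root on the unit circle ...
            if np.sum(np.abs(roots - r) <= tol) > 1:   # ... must be simple
                return False
    return True

# alpha_J = [1, 0, 0] gives rho(z) = z**3 - z**2, roots 1, 0, 0: stable.
print(satisfies_root_condition([1.0, 0.0, 0.0]))       # True
```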

The method is consistent if the corrector is of order 1 or greater and if all of the predictors, p_{i,n+1}, i = 0, ⋯, J - 1, are of order zero or greater. Formally this means that

(2.2a)
$$\sum_{i=0}^{k} \alpha_{j,i} = 1$$

for j = J - 1, J and for those j such that γ_{J,j} ≠ 0 (those p_{j,n+1} that are not used in the corrector do not even have to be of order 0, and if the β_{J,i}, i = 0, ⋯, k, and γ_{J,J-1} are 0, (2.2a) need not hold for j = J - 1), and

(2.2b)
$$\sum_{i=0}^{k} \beta_{J,i} + \sum_{i=0}^{J-1} \gamma_{J,i} - \sum_{i=0}^{k} i\,\alpha_{J,i} = 1.$$

The theorem to be proved is: The method converges if it is stable and consistent. The proof closely parallels the proof of a similar theorem in Henrici [3, Theorem 5.10].

Proof. Define p̄_{j,n+1}, for j = 0, 1, ⋯, J, to be the quantities obtained from (2.1) when the exact solution values y(x_{n-i}) and y'(x_{n-i}) are used in place of y_{n-i} and f_{n-i}. Let

$$L_h^p(x_n, y) = \bar p_{J-1,n+1} - y(x_{n+1}), \qquad L_h(x_n, y) = \bar p_{J,n+1} - y(x_{n+1}).$$

Thus L_h^p and L_h are the truncation errors in the final predictor and corrector respectively. We will first show that consistency implies that L_h^p and L_h/h → 0 as h → 0 uniformly for x ∈ [a, b], and then show that these conditions and stability imply convergence.


Define

$$\chi(\delta) = \max_{\substack{|x - x^*| \le \delta \\ x,\,x^* \in [a,b]}} \lvert y'(x^*) - y'(x) \rvert .$$

χ(δ) exists and → 0 as δ → 0, since dy/dx = f(x, y(x)) exists and is continuous in the closed interval [a, b]. Hence

$$y'(x_{n-i}) = y'(x_n) + \theta_i\,\chi(ih)$$

and

$$y(x_{n-i}) = y(x_n) - ih\,[\,y'(x_n) + \bar\theta_i\,\chi(ih)\,],$$

where |θ_i| ≤ 1, |θ̄_i| ≤ 1, and x_n, x_{n-i} ∈ [a, b].

Now L_h^p(x_n, y) = p̄_{J-1,n+1} - y(x_{n+1}). Expanding p̄_{J-1,n+1} by (2.1), writing y(x_{n+1}) = y(x_n) + h[y'(x_n) + θ̄_0 χ(h)], and collecting terms with the relations above leaves a remainder of the form h χ(kh) B, where B is bounded as h → 0. Therefore

$$L_h^p(x_n, y) = O(h) \to 0 \quad \text{as } h \to 0.$$

Similarly, consider the corrector truncation error L_h(x_n, y) = p̄_{J,n+1} - y(x_{n+1}).


If γ_{J,i} ≠ 0, then Σ_{i=0}^{k} α_{J,i} = 1, and f satisfies a Lipschitz condition, so that the last term can be replaced by h Σ_i γ_{J,i} y'(x_n), up to quantities bounded by multiples of hχ(kh) and h². Now using Σ_{i=0}^{k} α_{J,i} = 1 and the consistency condition (2.2b) in the form

$$1 + \sum_{i} i\,\alpha_{J,i} - \sum_{i} \beta_{J,i} - \sum_{i} \gamma_{J,i} = k + 1 - \sum_{i}(k - i)\,\alpha_{J,i} - \sum_{i} \beta_{J,i} - \sum_{i} \gamma_{J,i} - k\Bigl(1 - \sum_{i} \alpha_{J,i}\Bigr) = 0,$$

we get

$$L_h(x_n, y) = hB\chi(kh) + O(h^2).$$

Define the error in the predictor as e_n*, that is, e_n* = p_{J-1,n} - y(x_n). Then the errors e_n and e_n* satisfy relations (2.5) and (2.6). When we substitute for the p - p̄ terms in (2.5) and (2.6), terms of the form hγ(f(p_{j,n+1}) - f(p̄_{j,n+1})) will be included, which, by the Lipschitz condition, can be replaced by hγL(p_{j,n+1} - p̄_{j,n+1}) where L is bounded. Since the formula is explicit, the p_{j,n+1} - p̄_{j,n+1} terms can be substituted for repeatedly. The process stops after a maximum of J steps, and results in a polynomial in h with bounded coefficients. Thus (2.5) and (2.6) can be rewritten as (2.7) and (2.8),


where the P_{1i}, P_{2i}, P_{3i}, and P_{4i} are polynomials in h with bounded coefficients.

Henrici [3, Lemma 5.5] shows that if ρ(ζ) is the polynomial of a stable method, and if the coefficients δ_μ are defined by the expansion (2.9), then there exists a constant Γ < ∞ such that |δ_μ| ≤ Γ, μ = 0, 1, ⋯. Multiply (2.7) by δ_{N-n-1} and sum from n = k to N - 1; the right hand side of the result, (2.10), is

$$\sum_{n=k}^{N-1} \delta_{N-n-1}\, L_h(x_n, h) + \sum_{i=0}^{k} (\text{bounded multiples of } e_i \text{ and } e_i^*).$$

From (2.9), δ_0 = 1; therefore the left hand side of (2.10) equals e_N. The last term of the right hand side involves each e_i and e_i* no more than k + 1 times. Therefore (2.10) gives a bound in which M and C are bounded for all h less than or equal to some h_0. Now N - k ≤ (b - a)/h, and


therefore we obtain a bound, (2.11), where S(h) is a function of the error in the initial conditions which approaches 0 as h → 0, and C is bounded for h ≤ h_0. Substitute (2.11) in the term Σ_i α_{J-1,i} e_{n-i} of (2.8) to get (2.12), where C* is bounded for h ≤ h_0. Let E_N = max[|e_N|, |e_N*|] and compare (2.11) and (2.12) (noting that L_h^p(x_{n-1}, h) → 0 as h → 0) to get (2.13), where B is bounded and ν(h) → 0 as h → 0. Initially E_i = E_i* for i = 0, 1, ⋯, k. Let these be bounded by κ → 0 as h → 0; then E_N → 0 as h → 0. Therefore stability and consistency imply convergence.

As in most discussions of convergence, the error bound given by (2.13) is of little practical use. Techniques similar to those in [3, §5.3] can be used to get better asymptotic expressions for the error in particular cases.

3. A fourth order method. If k = 1 is used in (1.1) and the coefficients are chosen to give the maximum stable order possible, the resulting equations contain one free parameter α, where 0 < α ≤ 1/3 for stability.

If it is assumed that ∂f/∂y = λ is constant (if f and y are vectors, λ is a matrix, in which case this analysis must be viewed formally), the equations for the error growth can be written down for the case of general k.


These are a pair of simultaneous difference equations with constant coefficients, each of degree k + 1. Looking for a solution of the form e_n = ζ^n, e_n* = Aζ^n, a (2k + 2)th degree polynomial equation for ζ is obtained. At λ = 0, k + 1 of these roots are 0 and the remainder are the roots of the stability polynomial. In the fourth order method given above, these roots are 1 (the principal root) and 1 - 6α.
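
Since the two nonzero roots at λ = 0 are quoted explicitly, the admissible range of the free parameter can be checked in a couple of lines; the scan below is only an illustration of that check.

```python
import numpy as np

# At lambda = 0 the nonzero roots of the error equation of the k = 1 method are
# 1 (principal) and 1 - 6*alpha.  The root condition then requires the second
# root to lie in [-1, 1), i.e. 0 < alpha <= 1/3; alpha = 1/6 sends it to zero.
for alpha in np.linspace(-0.1, 0.5, 13):
    root = 1.0 - 6.0 * alpha
    stable = -1.0 <= root < 1.0
    print(f"alpha = {alpha:5.2f}   nonprincipal root = {root:5.2f}   stable: {stable}")
```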

This method does not have sufficient accuracy to justify its use in most circumstances, but a simple comparison was made between it (with α = 1/3), the 4th order Runge-Kutta and the 4th order Adams-Bashforth-Adams-Moulton predictor-corrector method. The equations (3.1) were integrated for x = 0(.1)50, and it was found that the method lies between Runge-Kutta and Adams' method in accuracy. This is to be expected since Adams uses the largest spread of points and Runge-Kutta the smallest, and since the coefficient in the error terms is proportional to the distance of the various points used from the interpolated point. The 5 digit results for y(50) are shown in Table 1.

TABLE 1
Value at x = 50 using points x = 0(.1)50 in (3.1), calculated on the IBM 7094

Method            sin x      Error    cos x     Error    10^-22 e^x   Error    10^21 e^-x   Error
FORTRAN Library   -.26237             .96497
Runge-Kutta       -.26241    -4       .96495    -2
Hybrid method     -.26245    -8       .96494    -3
Adams             -.26228    +9       .96507    +10

(Errors are in units of 10^-5, relative to the FORTRAN library values.)


4. A sixth order method. Using (1.1) with k = 2, and choosing coefficients to get a 5th order method, we arrive at a predictor-corrector pair in which α_1, α_2 and β are constants that must satisfy additional conditions for stability. Each of these equations is 5th order, and the truncation error in y_{n+1}, assuming that y_n, y_{n-1} and y_{n-2} are exact, depends on α_2 and β.

Thus along the line 504β - 23α_2 = 58 in the α_2-β plane, the corrector is 6th order.

If we assume that λ = ∂f/∂y is constant, as in §2, then the error satisfies a difference equation of order 2k + 2 = 6. At λ = 0, three of these roots are 0, the principal root is 1, and the remaining two can be seen to satisfy a quadratic, (4.3), whose coefficients depend on α_2 and β. For the method to be stable, these roots must either lie in the unit circle, or lie on the unit circle and be mutually unequal and unequal to 1. Fig. 1 shows the region of stability in the α_2-β plane. Inside a triangle with vertices A (at α_2 = -2), B and C (both at α_2 = +2), the method is stable. The lines AC and BC, except for the points A and B, correspond to simple roots on the unit circle. The point D (at α_2 = 1) is the point at which both roots of (4.3) are 0, the point of "maximum stability." Also shown in Fig. 1 is the line of 6th order accuracy. This crosses the interior of the triangle ABC, meaning that points on the line segment PQ correspond to 6th order stable methods.

In choosing a method from the line PQ, one could minimize the largest nonprincipal root (which then has the value .04). Because many of the preliminary calculations were being done by hand, the point E (at α_2 = 1), which is only a short distance from the minimum, was chosen. This leads to simpler numbers for the human computer. At this point the largest nonprincipal root is .19.

FIG. 1. Region of stability and 6th order line

The choice of α_1 does not affect the results when λ = 0. However, when λ ≠ 0, α_1 does modify both the error and the stability properties. In order to decide on a suitable α_1, the 6 roots of the difference equations for e_n and e_n* were calculated for various values of α_1 and λh, with α_2 = 1 and the value of β corresponding to E. The difference between the principal root and exp(λh) is a crude measure of the truncation error in one step. hλ_min, the smallest value of hλ for which the largest nonprincipal root was smaller than the principal root, and hλ_max, the largest value of hλ for which the largest nonprincipal root was less than one, were calculated for various α_1. Also the differences between the principal root and exp(λh) for λh = +.1 and -.1, Δ_+ and Δ_- respectively, were calculated for values of α_1.
FIG. 2. Δ_+ and hλ_max against α_1

FIG. 3. Δ_- and -hλ_min against α_1

(Note that at one particular negative value of α_1 the predictor is also of 6th order, so one expects better accuracy, but not stability, in this neighborhood.)

The results are shown in Figs. 2 and 3. As was expected, the error decreases as α_1 moves down towards the value at which the predictor becomes 6th order, but the range of λh for which the equation is stable decreases. Any α_1 between -.5 and +.4 would appear to be a reasonable choice, the former being preferred if the equation is unstable, the latter for stable equations.


TABLE 2
Comparison of k = 2 methods, x = 0(.1)200

                                      x = 100                              x = 200
Method                       sin x      Error    cos x     Error    sin x      Error    cos x     Error
"TRUE" (IBM 7094 FORTRAN)    -.5063656           .8623186           -.8732973           .4871871
6th order (E), α_1 = 1       -.5063506  150      .8623051  135      -.8732575  398      .4871795  76
6th order (E), α_1 = 0       -.5063557  99       .8623059  127      -.8732671  302      .4871757  114
5th order (D), α_1 = 1       -.5063489  167      .8623042  144      -.8732528  445      .4871794  77
5th order (D), α_1 = 0       -.5063543  113      .8623052  134      -.8732641  332      .4871760  111

(Errors are given in units of 10^-7, relative to the "TRUE" values.)

In order to compare different parameter combinations for k = 2, a pair of equations with solutions sin x and cos x (see Table 2) was integrated for x = 0(.1)200 on an IBM 7094. Four parameter combinations were used, corresponding to the points D (5th order, "maximally stable") and E (a 6th order method) with α_1 = 0 and 1 in each case. The results are shown in Table 2. As expected, the 6th order method using α_1 = 0 is slightly superior.

Nordsieck [4] gives the results of integrating J_16(x) from x = 6 to x = 6138 by his method, which is a variant of the Adams 6th order process. Therefore, this equation has been integrated from the same starting values that Nordsieck used. (J_16(6) = .000 001 201 950; dJ_16(6)/dx = .000 002 986 480.) The integration was carried out by Runge-Kutta, by Adams-Bashforth-Adams-Moulton 6th order [with single correction where it was stable (h = 1/32) and double correction where single correction was unstable (h = 1/16 and h = 1/8)], by the 6th order hybrid with k = 2 corresponding to the point E in Fig. 1 with α_1 = 0, and by an 8th order method discussed in the next section. The step sizes 1/8, 1/16 and 1/32 were used in order to make meaningful comparisons with Nordsieck's method. The results that he quotes, although for variable step size, took average step sizes of 1/16 and 1/8. Since this method involves two evaluations of the derivative, it seemed pertinent to use Adams single corrector, with one derivative evaluation, for h = 1/32. All of the methods (except for R-K) were started by an R-K integration at a fraction of the step size in order not to introduce starting errors into the comparison. The results are summarized in Tables 3 and 4.


TABLE 3
10^9 J_16(6136) by integration of y'' + y'/x + (1 - 256/x^2)y = 0 for x = 6(h)6136

Step size h   Adams*         6th order hybrid   Nordsieck    Runge-Kutta    8th order hybrid
1/32          -974 5831.7    -974 5831.8                     -974 5671.7
   (Error)    -1             -1                              159
1/16          -974 5809.0    -974 5839.0        -974 5792    -974 3089.3    -974 5831.0
   (Error)    22             -8                 39           2742           0
1/8           -974 3394.7    -974 6398.0        -974 1657    -969 5556.4    Unstable
   (Error)    2436           -567               4174         50275

* Single correction for h = 1/32. Double correction for h = 1/16 and 1/8.

TABLE 4
10^9 J_16(6138) by integration of y'' + y'/x + (1 - 256/x^2)y = 0 for x = 6(h)6138

Step size h   Adams*   6th order hybrid   Nordsieck   Runge-Kutta   8th order hybrid
1/32
1/16                                                                136 2485.4
   (Error)                                                          0
1/8                                                                 Unstable

* Adams was single corrector for h = 1/32, double corrector for h = 1/16 and 1/8.

The values of J_16(6138) and J_16(6136) quoted by Nordsieck are 10^-9 × 1362485 and -10^-9 × 9745831, respectively. The errors shown in the tables are based on these values and the calculated values rounded to 7 significant digits and thus can be in error by ±1. The results were obtained by single precision floating point arithmetic on ILLIAC II, which represents about 13 decimal digits (22 base 4 digits). Therefore the roundoff in the integration should not be significant in the comparisons.
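
For a modern cross-check of this test problem, the sketch below feeds what the table captions indicate is the order-16 Bessel equation, with the starting values quoted above, to a library solver. The reconstruction of the equation, the choice of solver and the tolerances are all assumptions made for illustration, not part of the original computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bessel16(x, u):
    """Order-16 Bessel equation y'' + y'/x + (1 - 256/x**2) y = 0 as a first order system."""
    y, yp = u
    return [yp, -yp / x - (1.0 - 256.0 / x**2) * y]

sol = solve_ivp(bessel16, (6.0, 6138.0),
                [1.201950e-6, 2.986480e-6],   # J_16(6) and dJ_16(6)/dx from the text
                t_eval=[6136.0, 6138.0],
                rtol=1e-10, atol=1e-12, method="DOP853")

for x, y in zip(sol.t, sol.y[0]):
    print(f"x = {x:6.0f}   10^9 * y = {1e9 * y:12.1f}")
# Reference values quoted from Nordsieck: -974 5831 at x = 6136, 136 2485 at x = 6138.
```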

5. Higher order methods. For k = 1 and k = 2 it was possible to achieve (2k + 2)th order accuracy with (1.1). This relation can also be seen to hold for k = 0 when the equations represent the second order Runge-Kutta method. The question of whether this is true for all k naturally arises. Unfortunately the answer is no, but it is true for k = 3 and 4, giving rise to 8th and 10th order methods respectively. Parameters for both cases are given below, but the 10th order method has undesirable stability properties due to large roots of the stability polynomial and hence it is not recommended. For k = 5, 6, 7, 8, and 9, methods of order 2k + 2 are not stable.

These results were obtained by a numerical investigation of methods for k = 3, 4, ⋯, 9 on ILLIAC II. For each k, the coefficients of the method were calculated from the 2k + 3 linear simultaneous equations obtained by a Taylor series expansion to 2k + 3 terms. Since there are 2k + 4 parameters in the corrector, these coefficients are linear functions of one parameter, which was taken as γ_{2,1}, the coefficient of f_{n+1}. The roots of the stability polynomial were then studied.

TABLE 5
Stable methods for k = 3 and 4

(The table lists the coefficients α_{j,i}, β_{j,i} and γ_{j,i} of the half-point formula (j = 0), the predictor (j = 1) and the corrector (j = 2), for the 8th order method with k = 3 and for the 10th order method with k = 4; the k = 4 entries are rounded to 9 digits.)

If α_{2,0}(γ_{2,1}) goes to ∞ as |γ_{2,1}| goes to ∞, the largest root is asymptotic to α_{2,0}(γ_{2,1}). The remaining roots are O(1) as γ_{2,1} becomes infinite. Therefore, it is only necessary to examine values of γ_{2,1} in a region near the origin to try to find stable methods. The searching method was crude, consisting of a series of runs, each calculating the roots of the stability polynomial for a regular mesh of points on the γ_{2,1} axis. The first run took a fairly large interval between mesh points in order to locate the region where the dominant root would be small; successive mesh refinements narrowed in on this area until it was possible to plot the roots of the equation.
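
The mesh-refinement search described here is straightforward to reproduce in outline. In the sketch below, stability_poly(gamma21) is a hypothetical stand-in for the routine that would return the stability-polynomial coefficients as linear functions of the free parameter; the grid sizes and the use of the dominant root as the figure of merit are likewise assumptions.

```python
import numpy as np

def dominant_root(coeffs):
    """Modulus of the largest root of the polynomial with the given coefficients
    (highest degree first)."""
    return max(abs(r) for r in np.roots(coeffs))

def search_free_parameter(stability_poly, lo=-10.0, hi=10.0, passes=4, mesh=41):
    """Crude search of the kind described in the text: scan a regular mesh of
    values of the free parameter, keep the point with the smallest dominant
    root, and refine the mesh around it on each pass."""
    best = None
    for _ in range(passes):
        grid = np.linspace(lo, hi, mesh)
        best = min((dominant_root(stability_poly(g)), g) for g in grid)
        width = (hi - lo) / (mesh - 1)
        lo, hi = best[1] - width, best[1] + width
    return best                                   # (dominant root modulus, parameter)

# Purely illustrative family z**2 - g*z + 0.1:
print(search_free_parameter(lambda g: [1.0, -g, 0.1]))
```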

When k is larger than 4, the largest root of the equation fails to get below +1 as γ_{2,1} is increased from -∞ before a second root of the equation exceeds +1. These two roots coalesce and become complex conjugates, remaining outside of the unit circle, then coalesce again on the negative real axis and become real; one of them then goes off towards -∞.

Particular cases of the method for k = 3 and 4 are given in Table 5. The largest nonprincipal roots of these methods are .274 and .695, respectively. The latter is so large as to make that method unstable for values of λh much different from 0, so it has not been used in any further work. The 8th order method was used to integrate J_16(x). The results at x = 6136 and x = 6138 are shown in Tables 3 and 4 for a step size of 1/16. The method is unstable for h = 1/8, and h = 1/32 was not tried because of the accuracy at h = 1/16. As in the case of k = 2, there is a degree of freedom in the predictor which is not shown in Table 5. It is possible that the 8th and 10th order methods could be made more stable for some values of λh by a better choice of the predictor. For example, a multiple, α_1, of a suitable difference expression may be added to the k = 3 predictor without changing the order from 7; the error term of the predictor then depends on α_1.


Positive α_1 would improve the stability slightly since it tends to increase α_{1,0} and α_{1,1} from negative values, and to decrease the other α_{1,i} from positive values. However, positive α_1 will increase the predictor error, and thus a large error will result if hλ is not zero.

6. Conclusion. The most obvious conclusion that can be drawn from Tables 3 and 4 is that one should not use Runge-Kutta if high order derivatives are well behaved! For the general problem, it is difficult to say much more. If the evaluation of the derivative is time consuming, then it seems reasonable to compare Adams' single corrector at h = 1/32 and the other methods (including Adams double corrector) at h = 1/16. Each of these methods takes 32 evaluations per unit step in x. Nordsieck's method differs from Adams' method only in the predictor and in the effect when the step size is changed, but this apparently introduces a considerable amount of error. Automatic step size changing probably would pay off in cases where there is a sudden change of behavior in the function; for those cases one expects Nordsieck's method to be superior. A step changing mechanism can be programmed on the basis of the difference between the predictor and corrector of the hybrid methods presented. Although this is a term of order h^{2k+2} y^{(2k+2)}(x) + O(h^{2k+3}), it is indicative of the behavior of the "average" value of the error term, which includes terms in h^{2k+3} y^{(2k+3)}(x) and h^{2k+3} λ y^{(2k+2)}(x).

The hybrid 6th order method has about one-third the error of Adams' double corrector in one case, and about one-quarter in the other. It has the advantage of using two additional points instead of 4 (5 if Adams-Bashforth is used as the predictor). If storage space, and hence the number of points, is a criterion in large vector problems, then the 8th order hybrid method is more accurate but still requires fewer additional points. If one is prepared to use more function evaluations and/or use other points besides the half point, probably higher order stable methods exist and would pay off in special cases; but it seems unlikely that one would want to go beyond order 8 or 10, as it appears to become harder to control the larger number of extraneous roots, or to have much knowledge of the high order derivatives.

Gragg and Stetter [2] consider correctors similar to the corrector used in this paper with J = 2. They choose the point of evaluation of the first predictor to optimize the (stable) order of the corrector. This introduces an additional degree of freedom not existing in the correctors used here, allowing a corrector of order 2k + 4 rather than 2k + 2 to be chosen. Their surprising result is that for k ≤ 3 (the k of Gragg and Stetter is one larger than this one), these optimal order methods are stable. The result for k ≥ 4 is not yet known. Because their corrector order is greater, the predictors must be of a correspondingly higher order, meaning that additional mesh points must be used. Since these additional points are needed, it would seem desirable to look for methods which make use of them to reduce the error term or to put the additional non-mesh-point, or points, at a desirable location.

REFERENCES

[1] L. Fox, Numerical Solution of Ordinary and Partial Differential Equations, Pergamon Press, London, 1962.
[2] W. B. Gragg and H. J. Stetter, Generalized multistep predictor-corrector methods, J. Assoc. Comput. Mach., 11 (1964), pp. 188-209.
[3] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962.
[4] A. Nordsieck, On numerical integration of ordinary differential equations, Math. Comp., 16 (1962), pp. 22-49.

