
HYBRID METHODS FOR INITIAL VALUE PROBLEMS

accuracy. The 10th order method was not used as it almost certainly has undesirable stability properties when is nonzero.

A recent paper by Gragg and Stetter [2] deals with a generalization of this method where the non-mesh-point may be any point fixed in relation to $x_n$. The generalization taken in §2 of this paper allows a number of non-mesh-points to be used. A theorem is proved giving sufficient conditions for the convergence of these methods. These conditions are that each of the predictors (the estimates of the solution at a set of points) is at least of order zero, that the corrector (the final estimate of $y$ at the next mesh point) is at least of order 1, and that the corrector is stable. This general formulation contains Runge-Kutta and multistep predictor-corrector methods as subsets. The formulation is explicit since, in practice, only a finite number of iterations of the corrector can be used. All but the last can be viewed as a set of predictors.
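The order conditions just stated can be checked empirically: if a method's global error behaves like $Ch^p$, halving $h$ should cut the error by roughly $2^p$. A minimal sketch of this check (the Euler step and the test problem $y' = -y$ are illustrative choices, not taken from the paper):

```python
import math

def observed_order(step, f, a, b, y0, exact, h):
    """Estimate the order p of a one-step method empirically:
    if the global error is ~ C*h**p, then
    p ~ log2(error(h) / error(h/2))."""
    def global_error(hh):
        x, y = a, y0
        for _ in range(round((b - a) / hh)):
            y = step(f, x, y, hh)
            x += hh
        return abs(y - exact)
    return math.log2(global_error(h) / global_error(h / 2))

# Euler's method (order 1) on y' = -y, y(0) = 1, over [0, 1]
euler = lambda f, x, y, h: y + h * f(x, y)
p = observed_order(euler, lambda x, y: -y, 0.0, 1.0, 1.0, math.exp(-1), 0.05)
```

The same driver applied to any of the hybrid methods discussed here would reveal whether the predicted order is actually attained.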

2. A general formulation and proof of convergence. Consider the sequence of operations

$$p_{j,n+1} = \sum_{i=0}^{k} \alpha_{ji}\, y_{n-i} + h \sum_{i=0}^{k} \beta_{ji}\, f_{n-i} + h \sum_{i=0}^{j-1} \gamma_{ji}\, f_{p_i}, \qquad j = 1, 2, \cdots, J,$$

where $h$ is the step size ($x_n = a + nh$) and $f_{p_i} = f(p_{i,n+1})$.

(The quantities $y$, $f$ and $p$ are assumed to be vectors representing both the independent variable $x$ and the one or more components of the dependent variable.)

$p_{J-1,n+1}$ will be taken as the predicted value of $y_{n+1}$, and $p_{J,n+1}$ will be taken as the corrected value. To avoid a final evaluation of the derivative $f$, we define $f_{n+1}$ by $f_{n+1} = f(p_{J-1,n+1})$.
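As a concrete illustration (not taken from the paper), the staged formula with $k = 0$ can be coded directly; the coefficient set below happens to reproduce Heun's method, i.e. an Euler predictor ($\alpha_{10} = \beta_{10} = 1$) followed by one trapezoidal corrector stage ($\alpha_{20} = 1$, $\beta_{20} = \gamma_{21} = 1/2$):

```python
import math

def hybrid_step(f, x_n, y_n, h, stages):
    """One step of p_{j,n+1} = alpha_j*y_n + h*beta_j*f_n
                              + h * sum_{i<j} gamma_{ji} * f(p_{i,n+1}),
    the k = 0 case of the general formulation (scalar y for brevity).
    Each stage is (alpha, beta, gammas, c) with abscissa x_n + c*h."""
    f_n = f(x_n, y_n)
    p, fp = y_n, []
    for alpha, beta, gammas, c in stages:
        p = alpha * y_n + h * beta * f_n
        p += h * sum(g * fi for g, fi in zip(gammas, fp))
        fp.append(f(x_n + c * h, p))
    return p  # p_{J,n+1}, the corrected value

# Heun's method as a two-stage instance of the scheme
heun = [(1.0, 1.0, [], 1.0),      # stage 1, predictor: Euler
        (1.0, 0.5, [0.5], 1.0)]   # stage 2, corrector: trapezoidal rule

x, y, h = 0.0, 1.0, 0.01
for _ in range(100):              # integrate y' = -y from x = 0 to x = 1
    y = hybrid_step(lambda t, u: -u, x, y, h, heun)
    x += h
```

Other explicit Runge-Kutta methods are obtained simply by supplying a different coefficient table.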

If $k = 0$, this method is the general explicit Runge-Kutta method. On the other hand, if the coefficients are such that

$$\alpha_{ji} = \alpha_{j-1,i}, \quad \beta_{ji} = \beta_{j-1,i}, \quad \gamma_{j,j-1} = \gamma_{j-1,j-2}, \qquad \text{for } i = 0, 1, \cdots, k \text{ and } j = 2, 3, \cdots, J,$$

and all other $\gamma_{ji}$ are zero, then this method represents the use of a multistep predictor followed by $J$ applications of a corrector formula.
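This multistep special case can be sketched as a hypothetical P(EC)$^J$ loop; the 2-step Adams-Bashforth predictor and trapezoidal corrector below are illustrative choices, not coefficients from the paper. Note how the derivative from the last corrector sweep, $f(p_{J-1,n+1})$, is reused as $f_{n+1}$, the device described above for avoiding a final evaluation of $f$:

```python
import math

def pec_j_step(f, x_n, y_n, f_prev, f_n, h, J):
    """2-step Adams-Bashforth predictor followed by J sweeps of the
    trapezoidal corrector; every sweep repeats the same coefficients,
    matching gamma_{j,j-1} = gamma_{j-1,j-2} above. Returns the
    corrected value and f(p_{J-1,n+1}), which is reused as f_{n+1}."""
    p = y_n + h * (1.5 * f_n - 0.5 * f_prev)   # predictor p_{1,n+1}
    for _ in range(J):
        f_last = f(x_n + h, p)                 # derivative at latest iterate
        p = y_n + 0.5 * h * (f_n + f_last)     # corrector sweep
    return p, f_last

# y' = -y on [0, 1]; the second starting value y_1 is taken exactly here
h, J = 0.01, 2
f = lambda t, u: -u
x, y_n = h, math.exp(-h)
f_prev, f_n = f(0.0, 1.0), f(h, y_n)
for _ in range(99):                            # 99 more steps reach x = 1
    y_next, f_next = pec_j_step(f, x, y_n, f_prev, f_n, h, J)
    x += h
    y_n, f_prev, f_n = y_next, f_n, f_next
```

Because $f_{n+1}$ is taken at the penultimate iterate rather than the corrected value, each step costs exactly $J$ derivative evaluations after the starting values are in hand.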

To discuss the error and convergence of this method we introduce the usual notation $e_n = y_n - y(x_n)$, where $y(x)$ is the solution of the differential equation.
