
170   Polynomial Approximation of Differential Equations

an exact expression is given for the preconditioned eigenvalues. Namely, we have

(8.4.3)   Λn,m = [m(m + 1)/2] · [π sin(π/2n) cos(π/2n)] / [n sin(mπ/2n) sin((m + 1)π/2n)],   1 ≤ m ≤ n − 1.

Thus, we obtain 1 ≤ Λn,m < π²/4, 0 ≤ m ≤ n.

We apply the Richardson method to the system in (8.3.7) with θ := 2/(π²/4 + 1). This choice minimizes the spectral radius of M = I − θR⁻¹D, which takes the value ρ = (π²/4 − 1)/(π²/4 + 1) ≈ 0.42. Starting from the initial guess p̄(0) = R⁻¹q̄ (see (7.6.2)), one achieves the exact solution to the system (to machine accuracy in double precision) in about twenty iterations. Although we do not have an explicit expression for the preconditioned eigenvalues for other values of α and β, the behavior is qualitatively the same.
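The iteration just described can be sketched in a few lines. The example below is only a toy illustration under assumptions of ours: the function name `richardson` is hypothetical, and the matrices D and R are stand-ins constructed so that the preconditioned matrix R⁻¹D has its spectrum in [1, π²/4), mimicking the bound on Λn,m; they are not the spectral collocation matrices of the text.

```python
import numpy as np

def richardson(D, R, q, theta, iterations):
    """Preconditioned Richardson: p <- p + theta * R^{-1} (q - D p)."""
    p = np.linalg.solve(R, q)                  # initial guess p(0) = R^{-1} q
    for _ in range(iterations):
        p = p + theta * np.linalg.solve(R, q - D @ p)
    return p

n = 16
rng = np.random.default_rng(0)
# Toy stand-ins: R tridiagonal and D := R S, so that R^{-1} D = S has
# eigenvalues in [1, pi^2/4), as in the Chebyshev bound 1 <= Lambda_{n,m} < pi^2/4.
R = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
S = np.diag(rng.uniform(1.0, np.pi ** 2 / 4, n))
D = R @ S

theta = 2.0 / (np.pi ** 2 / 4 + 1.0)           # minimizes the spectral radius of I - theta R^{-1} D
p_exact = rng.standard_normal(n)
q = D @ p_exact                                # right-hand side with known solution
p = richardson(D, R, q, theta, iterations=45)
error = np.max(np.abs(p - p_exact))
print(error)
```

Since every eigenvalue of M = I − θR⁻¹D lies within ρ ≈ 0.42 of zero, each sweep shrinks the error by at least that factor, and a few dozen iterations suffice to reach machine accuracy.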

We note that, in practice, the matrix R⁻¹D is never needed explicitly. Each iteration consists of two steps. In the first step, we evaluate the matrix-vector product p̄ → Dp̄. In the Chebyshev case, this can be carried out with the help of the FFT (see section 4.3). In the second step, we compute Dp̄ → R⁻¹Dp̄ by solving a tridiagonal linear system. In this way, we avoid storing the whole matrix R⁻¹. Moreover, the cost of this computation is proportional to n (see Isaacson and Keller (1966), p. 55).
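The O(n) tridiagonal solve is the classical Thomas algorithm, i.e. Gaussian elimination specialized to tridiagonal matrices. A minimal sketch with a hypothetical helper name, assuming a matrix that needs no pivoting (e.g. diagonally dominant, as is typical for such preconditioners):

```python
def solve_tridiagonal(lower, diag, upper, b):
    """Solve R x = b, where R has sub-, main and super-diagonals lower, diag, upper.

    Forward elimination followed by back substitution: O(n) operations,
    so R^{-1} is never formed or stored.
    """
    n = len(diag)
    c = [0.0] * n                      # modified super-diagonal
    d = [0.0] * n                      # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (b[i] - lower[i - 1] * d[i - 1]) / denom
    x = [0.0] * n                      # back substitution
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Example: R = tridiag(-1, 2, -1) of order 4, right-hand side chosen so that
# the exact solution is [1, 2, 3, 4].
x = solve_tridiagonal([-1.0] * 3, [2.0] * 4, [-1.0] * 3, [0.0, 0.0, 0.0, 5.0])
print(x)
```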

Symmetric preconditioners, based on finite element discretizations, have been proposed in Deville and Mund (1985) and in Canuto and Quarteroni (1985). They allow us to apply a variation of the conjugate gradient iterative method for solving (8.3.7).

A standard trick is to accelerate the convergence by updating the parameter θ at each iteration. This can be done in several ways. For brevity, we do not investigate this here. A survey of the most widely used preconditioned iterative techniques in spectral methods is given in Canuto, Hussaini, Quarteroni and Zang (1988), p. 148. Many practical suggestions are provided in Boyd (1989).
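As one concrete instance of such an update, chosen by us purely for illustration (the text does not commit to a particular scheme), θ can be re-chosen at every step to minimize the Euclidean norm of the new residual, giving a preconditioned minimal-residual iteration. The function name and the toy matrices below are ours; the demo takes R as the diagonal of D only for simplicity.

```python
import numpy as np

def richardson_adaptive(D, R, q, iterations):
    """Richardson iteration with theta re-chosen at every step (minimal residual)."""
    p = np.linalg.solve(R, q)              # initial guess R^{-1} q, as before
    for _ in range(iterations):
        r = q - D @ p                      # current residual
        if np.linalg.norm(r) < 1e-14 * np.linalg.norm(q):
            break                          # already converged to roundoff level
        z = np.linalg.solve(R, r)          # preconditioned residual
        Dz = D @ z
        theta = (Dz @ r) / (Dz @ Dz)       # minimizes ||q - D (p + theta z)||_2
        p = p + theta * z
    return p

n = 5
# Toy SPD system: D = tridiag(-1, 2, -1); for the demo, R is just diag(D).
D = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
R = np.diag(np.diag(D))
p_exact = np.arange(1.0, n + 1.0)
q = D @ p_exact
p = richardson_adaptive(D, R, q, iterations=200)
error = np.max(np.abs(p - p_exact))
print(error)
```

The formula for θ is the exact one-dimensional minimizer of the residual norm along the search direction z, so no spectral information about R⁻¹D is needed, at the price of one extra inner product per iteration.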

We note that R is ill-conditioned, which introduces small rounding errors in the evaluation of R⁻¹. Hence, after the preconditioned Richardson method has reached a steady solution, we suggest refining it by performing a few iterations of the unpreconditioned Richardson method to remove these errors.
