
1108 G.L.G. Sleijpen et al. / Applied Numerical Mathematics 60 (2010) 1100–1114

Select an $x_0$
Select an $\widetilde{R}_0$
Compute $r_0 = b - Ax_0$
$k = -1$, $\sigma_k = I$,
Set $U_k = 0$, $C_k = 0$, $\widetilde{R}_k = 0$.
repeat
    $s = r_{k+1}$
    for $j = 1, \ldots, s$
        $u = s$, Compute $s = Au$
        $\beta_k = \sigma_k^{-1}(\widetilde{R}_k^* s)$
        $u = u - U_k \beta_k$, $U_{k+1} e_j = u$
        $s = s - C_k \beta_k$, $C_{k+1} e_j = s$
    end for
    $k \leftarrow k + 1$
    $\sigma_k = \widetilde{R}_k^* C_k$, $\alpha_k = \sigma_k^{-1}(\widetilde{R}_k^* r_k)$
    $x_{k+1} = x_k + U_k \alpha_k$, $r_{k+1} = r_k - C_k \alpha_k$
end repeat

Algorithm 3. Bi-CG. $0$ is the $n \times s$ zero matrix, $I$ is the $s \times s$ identity matrix. $\widetilde{R}_k$ is an $n \times s$ matrix such that $\mathrm{Span}(\widetilde{R}_k) + \mathcal{K}_k(A^*, \widetilde{R}_0)$ equals $\mathcal{K}_{k+1}(A^*, \widetilde{R}_0)$.

Example 16. $\widetilde{R}_k = \bar{p}_k(A^*)\,\widetilde{R}_0$ with $p_k(\lambda) = (1 - \omega_k \lambda) \cdots (1 - \omega_1 \lambda)$.
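As a concrete illustration, Algorithm 3 can be sketched in NumPy as below. The function name is ours, and the shadow blocks follow Example 16 with every $\omega_j = 1$, i.e. $\widetilde{R}_k = (I - A^*)^k \widetilde{R}_0$; this is only one admissible choice, not one prescribed by the algorithm.

```python
import numpy as np

def generalized_bicg(A, b, x0, Rt0, n_sweeps):
    """Sketch of Algorithm 3 with n-by-s shadow blocks.

    Illustrative assumption: Rt_k = (I - A^*)^k Rt0 (Example 16 with
    all omega_j = 1); sigma_k is assumed non-singular throughout.
    """
    n, s = Rt0.shape
    x = x0.astype(float).copy()
    r = b - A @ x
    U = np.zeros((n, s))           # U_{-1} = 0
    C = np.zeros((n, s))           # C_{-1} = 0
    Rt = np.zeros((n, s))          # Rt_{-1} = 0
    sigma = np.eye(s)              # sigma_{-1} = I
    for k in range(n_sweeps):
        v = r.copy()               # "s = r_{k+1}" in the paper's notation
        Unew, Cnew = np.empty((n, s)), np.empty((n, s))
        for j in range(s):
            u = v
            v = A @ u                                        # s = A u
            beta = np.linalg.solve(sigma, Rt.conj().T @ v)   # beta_k
            u = u - U @ beta                                 # U_{k+1} e_j
            v = v - C @ beta                                 # C_{k+1} e_j
            Unew[:, j], Cnew[:, j] = u, v
        U, C = Unew, Cnew
        # k <- k + 1: advance the shadow block to Rt_k
        Rt = Rt0.copy() if k == 0 else Rt - A.conj().T @ Rt
        sigma = Rt.conj().T @ C                              # sigma_k
        alpha = np.linalg.solve(sigma, Rt.conj().T @ r)      # alpha_k
        x = x + U @ alpha
        r = r - C @ alpha
    return x, r
```

After one sweep the returned residual satisfies $r_1 = b - Ax_1$ and $r_1 \perp \widetilde{R}_0$, matching the derivation in the text.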

We will now explain how to construct a residual vector $r_k$ in $\mathcal{K}_K(A, r_0)$ with $K = ks + 1$ that is orthogonal to $\mathcal{K}_k(A^*, \widetilde{R}_0)$.

Let $C_0$ be the $n \times s$ matrix with columns $Ar_0, \ldots, A^s r_0$. The matrix $C_0$ is of the form $AU_0$ with $U_0$ explicitly available: $U_0 = [r_0, \ldots, A^{s-1} r_0]$.

Now, find a vector $\vec{\alpha}_0 \in \mathbb{C}^s$ such that $r_1 \equiv r_0 - C_0 \vec{\alpha}_0 \perp \widetilde{R}_0$, $r_1 \in \mathcal{K}_{s+1}(A, r_0)$, and update $x_0$ as $x_1 = x_0 + U_0 \vec{\alpha}_0$.

Note that, with $\sigma_0 \equiv \widetilde{R}_0^* C_0$, any vector of the form $w - C_0 \sigma_0^{-1} \widetilde{R}_0^* w$ is orthogonal to $\widetilde{R}_0$: $I - C_0 \sigma_0^{-1} \widetilde{R}_0^*$ is a skew projection onto the orthogonal complement of $\widetilde{R}_0$. Here, for ease of explanation, we assume that $\sigma_0$ is $s \times s$ non-singular. If $\sigma_0$ is singular (or ill conditioned), then $s$ can be reduced to overcome breakdown or loss of accuracy (see Note 4 and [10] for more details).
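The skew-projection property can be checked numerically. The following NumPy snippet (variable names ours, with random data standing in for an actual linear system) verifies that $(I - C_0 \sigma_0^{-1} \widetilde{R}_0^*)w$ is orthogonal to $\widetilde{R}_0$ for an arbitrary $w$, and that the projector is idempotent.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 12, 3
A = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)
Rt0 = rng.standard_normal((n, s))        # shadow block Rt_0

# U0 = [r0, A r0, ..., A^{s-1} r0], so C0 = A U0 has columns A r0, ..., A^s r0
U0 = np.column_stack([np.linalg.matrix_power(A, i) @ r0 for i in range(s)])
C0 = A @ U0

sigma0 = Rt0.conj().T @ C0               # s-by-s, assumed non-singular here
# Skew projection P = I - C0 sigma0^{-1} Rt0^*
P = np.eye(n) - C0 @ np.linalg.solve(sigma0, Rt0.conj().T)

w = rng.standard_normal(n)
ortho_err = np.linalg.norm(Rt0.conj().T @ (P @ w))   # zero up to rounding
idem_err = np.linalg.norm(P @ P - P)                 # zero up to rounding: P^2 = P
```

The projection is "skew" (oblique) because its range, the orthogonal complement of $\widetilde{R}_0$, differs from the orthogonal complement of its null space, the column span of $C_0$.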

Let $v = r_1$. We construct an $n \times s$ matrix $C_1$ orthogonal to $\widetilde{R}_0$ as follows:

    $s = Av$,  $s = s - C_0(\sigma_0^{-1} \widetilde{R}_0^* s)$,  $C_1 e_j = s$,  $v = s$    for $j = 1, \ldots, s$.

Here, $C_1 e_j = s$ indicates that the $j$-th column of $C_1$ is set to the vector $s$: $e_j$ is the $j$th ($s$-dimensional) standard basis vector. Then $C_1$ is orthogonal to $\widetilde{R}_0$ and its columns form a basis for the Krylov subspace of order $s$ generated by $A_1 \equiv (I - C_0 \sigma_0^{-1} \widetilde{R}_0^*) A$ and $A_1 r_1$. Note that there is a matrix $U_1$ such that $C_1 = AU_1$. The columns of $U_1$ can be computed simultaneously with the columns of $C_1$: $U_1 e_j = v - U_0(\sigma_0^{-1} \widetilde{R}_0^* s)$. Note that more stable approaches such as GCR (or Arnoldi) for computing a basis of this Krylov subspace could have been used as well. Now, with $\sigma_1 \equiv \widetilde{R}_1^* C_1$, the vector $r_2 \equiv r_1 - C_1(\sigma_1^{-1} \widetilde{R}_1^* r_1)$ is orthogonal to $\widetilde{R}_0$ as well as to $\widetilde{R}_1$, it belongs to $\mathcal{K}_{2s+1}(A, r_0)$, and $x_2 = x_1 + U_1(\sigma_1^{-1} \widetilde{R}_1^* r_1)$ is the associated approximate solution. Repeating the procedure

    $r_{k+1} = r_k - C_k \vec{\alpha}_k \perp \widetilde{R}_k$
    $v = r_{k+1}$
    for $j = 1, \ldots, s$
        $s = Av$
        $C_{k+1} e_j = s - C_k \vec{\beta}_j \perp \widetilde{R}_k$
        $v = C_{k+1} e_j$
    end for    (11)

leads to the residuals as announced: $r_k \in \mathcal{K}_{ks+1}(A, r_0)$, $r_k \perp \mathcal{K}_k(A^*, \widetilde{R}_0)$. With $\sigma_k \equiv \widetilde{R}_k^* C_k$, the $\vec{\alpha}_k$ and $\vec{\beta}_j = \vec{\beta}_j^{(k)}$ can be computed as $\vec{\alpha}_k = \sigma_k^{-1}(\widetilde{R}_k^* r_k)$, $\vec{\beta}_j = \sigma_k^{-1}(\widetilde{R}_k^* s)$. Simultaneously with the computation of the columns of $C_{k+1}$, the columns of a matrix $U_{k+1}$ can be computed. A similar remark applies to the update of $x_k$. The resulting algorithm can be found in Algorithm 3.
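The $k = 0$ step of this construction can be checked numerically. The sketch below (names ours; $x_0 = 0$ assumed, so $r_0 = b$) builds $C_0$, $U_0$, and $r_1$, runs one sweep of recursion (11) computing $U_1$ alongside $C_1$, and verifies the claimed properties: $\widetilde{R}_0^* C_1 = 0$, $C_1 = AU_1$, and the columns of $C_1$ are the Krylov vectors $A_1^j r_1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 12, 3
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
r0 = b.copy()                                    # x0 = 0, so r0 = b
Rt0 = rng.standard_normal((n, s))

# U0 = [r0, ..., A^{s-1} r0], C0 = A U0
U0 = np.column_stack([np.linalg.matrix_power(A, i) @ r0 for i in range(s)])
C0 = A @ U0
sigma0 = Rt0.conj().T @ C0                       # assumed non-singular

alpha0 = np.linalg.solve(sigma0, Rt0.conj().T @ r0)
r1 = r0 - C0 @ alpha0                            # r1 is orthogonal to Rt0

# One sweep of recursion (11) for k = 0, with U1 computed alongside C1
v = r1.copy()
U1, C1 = np.empty((n, s)), np.empty((n, s))
for j in range(s):
    t = A @ v                                    # "s = A v"
    beta = np.linalg.solve(sigma0, Rt0.conj().T @ t)
    U1[:, j] = v - U0 @ beta                     # U1 e_j
    C1[:, j] = t - C0 @ beta                     # C1 e_j, orthogonal to Rt0
    v = C1[:, j]

# A1 = (I - C0 sigma0^{-1} Rt0^*) A generates the new Krylov subspace
A1 = (np.eye(n) - C0 @ np.linalg.solve(sigma0, Rt0.conj().T)) @ A
```

The checks $C_1 \perp \widetilde{R}_0$, $AU_1 = C_1$, $C_1 e_1 = A_1 r_1$, and $C_1 e_2 = A_1 C_1 e_1$ all hold up to rounding, consistent with the text.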

The columns of $C_{k+1}$ form a Krylov basis of the Krylov subspace generated by $A_{k+1} \equiv (I - C_k \sigma_k^{-1} \widetilde{R}_k^*) A$ and $A_{k+1} r_k$.

The generalization of Bi-CG that is presented above is related to the method ML(k)BiCG [13]. The orthogonalization conditions on the ML(k)BiCG residuals, however, are more strict.

Note 4. If $\sigma_k = \widetilde{R}_k^* C_k$ is (close to) singular, then reduce $s$. We can take the following approach (which has not been included in Algorithm 3).
