
To utilize the above theorem, let
$$A = \sum_{i=1}^{n} a_i e_i^T, \qquad (2.4)$$
where $a_i$ is the $i$th column of $A$. Further let $A_i = [a_1, \ldots, a_i, 0, \ldots, 0] \in \mathbb{R}^{m \times n}$. Hence we have
$$A_i = \sum_{k=1}^{i} a_k e_k^T, \quad i = 1, \ldots, n, \qquad (2.5)$$
and if we denote $A_0 = 0_{m \times n}$,
$$A_i = A_{i-1} + a_i e_i^T, \quad i = 1, \ldots, n. \qquad (2.6)$$
Thus every $A_i$, $i = 1, \ldots, n$, is a rank-one update of $A_{i-1}$. Noticing that $A_0^\dagger = 0_{n \times m}$, we can utilize Theorem 1 to compute the Moore-Penrose inverse of $A$ step by step and have $A^\dagger = A_n^\dagger$ in the end.
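To make the splitting concrete, here is a minimal NumPy sketch (the matrix, its size, and the variable names are illustrative, not from the text) showing that each $A_i$ arises from $A_{i-1}$ by the rank-one update (2.6) and that $A_n = A$:

```python
import numpy as np

m, n = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))

A_prev = np.zeros((m, n))                  # A_0 = 0_{m x n}
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    A_i = A_prev + np.outer(A[:, i], e_i)  # rank-one update (2.6)
    A_prev = A_i

print(np.allclose(A_prev, A))              # True: A_n = A
```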

In Theorem 1, substituting $a_i$ for $c$ and $e_i$ for $d$, we can rewrite Equation (2.2) as follows:
$$a_i \in \mathcal{R}(A_{i-1}) \iff u = (I - A_{i-1} A_{i-1}^\dagger) a_i = 0. \qquad (2.7)$$
Whether or not $a_i$ is linearly dependent on the previous columns, we have for any $i = 1, 2, \ldots, n$
$$\beta = 1 + e_i^T A_{i-1}^\dagger a_i = 1, \qquad (2.8)$$
since the $i$th column of $A_{i-1}$ is zero, so the $i$th row of $A_{i-1}^\dagger$ is zero and $e_i^T A_{i-1}^\dagger a_i = 0$. The vector $v$ in Equation (2.3) can be written as
$$v = e_i^T (I - A_{i-1}^\dagger A_{i-1}), \qquad (2.9)$$
which is nonzero for any $i = 1, \ldots, n$. Hence, we can use Case 1 and Case 3 for the case $a_i \notin \mathcal{R}(A_{i-1})$ and the case $a_i \in \mathcal{R}(A_{i-1})$, respectively.
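In exact arithmetic (2.7) is a test against exactly zero; in floating point one would instead compare $\|u\|$ with a small tolerance. A hypothetical helper along these lines (the function name and threshold are assumptions, not from the paper) could look like:

```python
import numpy as np

def in_range_of_previous(A_prev, A_prev_pinv, a_i, tol=1e-12):
    """Test a_i ∈ R(A_{i-1}) via u = (I - A_{i-1} A_{i-1}^†) a_i, cf. (2.7)."""
    u = a_i - A_prev @ (A_prev_pinv @ a_i)
    # The threshold is an illustrative choice, scaled by the size of a_i.
    return np.linalg.norm(u) <= tol * max(1.0, np.linalg.norm(a_i))
```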

Then from Theorem 1, denoting $A_0 = 0_{m \times n}$, we obtain a method to compute $A_i^\dagger$ based on $A_{i-1}^\dagger$ as
$$A_i^\dagger = \begin{cases} A_{i-1}^\dagger + (e_i - A_{i-1}^\dagger a_i)\big((I - A_{i-1} A_{i-1}^\dagger) a_i\big)^\dagger & \text{if } a_i \notin \mathcal{R}(A_{i-1}) \\[4pt] A_{i-1}^\dagger + \dfrac{1}{\sigma_i} (e_i - A_{i-1}^\dagger a_i)(A_{i-1}^\dagger a_i)^T A_{i-1}^\dagger & \text{if } a_i \in \mathcal{R}(A_{i-1}) \end{cases} \qquad (2.10)$$
where $\sigma_i = 1 + \|A_{i-1}^\dagger a_i\|_2^2$. This method was proposed by Greville in the 1960s [13].
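The recursion (2.10) maps almost line by line onto code. Below is a minimal NumPy sketch of the column-by-column sweep (the variable names, the rank-decision tolerance, and the comparison against numpy.linalg.pinv are mine, not the paper's), using the fact that for a nonzero vector $u$ one has $u^\dagger = u^T / \|u\|_2^2$:

```python
import numpy as np

def greville_pinv(A, tol=1e-12):
    """Compute A^† column by column via the recursion (2.10)."""
    m, n = A.shape
    A_prev = np.zeros((m, n))            # A_0
    P = np.zeros((n, m))                 # A_0^† = 0_{n x m}
    for i in range(n):
        a_i = A[:, i]
        e_i = np.zeros(n)
        e_i[i] = 1.0
        k = P @ a_i                      # A_{i-1}^† a_i
        u = a_i - A_prev @ k             # (I - A_{i-1} A_{i-1}^†) a_i
        if np.linalg.norm(u) > tol * max(1.0, np.linalg.norm(a_i)):
            # a_i ∉ R(A_{i-1}); for a nonzero column u, u^† = u^T / ||u||_2^2
            P = P + np.outer(e_i - k, u) / (u @ u)
        else:
            # a_i ∈ R(A_{i-1}); σ_i = 1 + ||A_{i-1}^† a_i||_2^2
            sigma = 1.0 + k @ k
            P = P + np.outer(e_i - k, P.T @ k) / sigma
        A_prev = A_prev + np.outer(a_i, e_i)   # A_i = A_{i-1} + a_i e_i^T
    return P

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]              # make one column linearly dependent
print(np.max(np.abs(greville_pinv(A) - np.linalg.pinv(A))))  # close to machine precision
```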

3 Preconditioning Algorithm

In this section, we construct our preconditioning algorithm according to Greville's method in Section 2. First of all, we notice that in Equation (2.10) the difference between the case $a_i \notin \mathcal{R}(A_{i-1})$ and the case $a_i \in \mathcal{R}(A_{i-1})$ lies in the second term. If we define vectors $k_i$, $u_i$, $v_i$ and a scalar $f_i$ for $i = 1, \ldots, n$ as
$$k_i = A_{i-1}^\dagger a_i, \qquad (3.1)$$
$$u_i = a_i - A_{i-1} k_i = (I - A_{i-1} A_{i-1}^\dagger) a_i, \qquad (3.2)$$
$$f_i = \begin{cases} \|u_i\|_2^2 & \text{if } a_i \notin \mathcal{R}(A_{i-1}) \\ 1 + \|k_i\|_2^2 & \text{if } a_i \in \mathcal{R}(A_{i-1}) \end{cases}, \qquad (3.3)$$
$$v_i = \begin{cases} u_i & \text{if } a_i \notin \mathcal{R}(A_{i-1}) \\ (A_{i-1}^\dagger)^T k_i & \text{if } a_i \in \mathcal{R}(A_{i-1}) \end{cases}, \qquad (3.4)$$
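These quantities are exactly the intermediates of the Greville sweep sketched above; a hypothetical helper that computes them for one column (the naming and the tolerance are assumptions, not the paper's pseudocode) might read:

```python
import numpy as np

def greville_step_quantities(A_prev, P_prev, a_i, tol=1e-12):
    """Return k_i, u_i, f_i, v_i of (3.1)-(3.4) for one column a_i.

    A_prev is A_{i-1}, P_prev is A_{i-1}^†; tol is an illustrative
    threshold for deciding a_i ∈ R(A_{i-1}), not taken from the paper.
    """
    k = P_prev @ a_i                           # (3.1)
    u = a_i - A_prev @ k                       # (3.2)
    dependent = np.linalg.norm(u) <= tol * max(1.0, np.linalg.norm(a_i))
    f = 1.0 + k @ k if dependent else u @ u    # (3.3)
    v = P_prev.T @ k if dependent else u       # (3.4)
    return k, u, f, v
```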
