Greville's Method for Preconditioning Least Squares ... - Projects
Proof. If Assumption 1 holds, we have

R(A) = R(M^T).   (5.10)

Then there exists a nonsingular matrix C ∈ R^{n×n} such that A = M^T C. Hence,

R(M^T M A) = R(M^T M M^T C)   (5.12)
           = R(M^T M M^T)     (5.13)
           = R(M^T M)         (5.14)
           = R(M^T)           (5.15)
           = R(A).            (5.16)

In the above equalities we used the nonsingularity of C and the relationship R(M M^T) = R(M).

By Theorem 1 we complete the proof. ✷
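The range identity proved above can be sanity-checked numerically on a small instance. The matrices below are hypothetical and chosen only so that Assumption 1, R(A) = R(M^T), holds by construction; this is a sketch, not the paper's construction of M.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4

# Hypothetical data: build M, pick a nonsingular C, and set A = M^T C,
# which enforces Assumption 1, R(A) = R(M^T), by construction.
M = rng.standard_normal((n, m))   # random M has full rank n almost surely
C = rng.standard_normal((n, n))   # nonsingular with probability 1
A = M.T @ C                       # A in R^{m x n}

def same_range(X, Y):
    """Two matrices have the same range iff stacking their columns adds no rank."""
    rx, ry = np.linalg.matrix_rank(X), np.linalg.matrix_rank(Y)
    return rx == ry == np.linalg.matrix_rank(np.hstack([X, Y]))

# The conclusion of the equality chain: R(M^T M A) = R(A).
print(same_range(M.T @ M @ A, A))   # → True
```

The rank-of-concatenation test is a standard finite-precision proxy for comparing column spaces; it avoids computing explicit bases.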
6 Breakdown-free Condition

In this section we assume, without loss of generality, that the first r columns of A are linearly independent. Hence,

R(A) = span{a_1, ..., a_r},   (6.1)

where rank(A) = r and a_i (i = 1, ..., r) is the i-th column of A. This assumption is harmless because column pivoting can easily be incorporated into Algorithm 1. Then we have

a_i ∈ span{a_1, ..., a_r},   i = r + 1, ..., n.   (6.2)
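To illustrate why the assumption costs no generality, the sketch below uses a greedy rank-based column selection as a stand-in for the column pivoting mentioned above; it is not Algorithm 1, and the test matrix is made up.

```python
import numpy as np

def leading_independent_columns(A, tol=1e-10):
    """Greedily pick column indices forming a basis of R(A); permuting these
    to the front realizes the ordering assumed in the text (a pivoting stand-in)."""
    picked = []
    for j in range(A.shape[1]):
        if np.linalg.matrix_rank(A[:, picked + [j]], tol=tol) == len(picked) + 1:
            picked.append(j)
    return picked

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3))
# Hypothetical A: its 2nd column is a multiple of the 1st, so rank(A) = 3 < 4.
A = np.hstack([B[:, :1], 2.0 * B[:, :1], B[:, 1:]])

piv = leading_independent_columns(A)
rest = [j for j in range(A.shape[1]) if j not in piv]
P = A[:, piv + rest]   # permuted A: first r columns are linearly independent
print(piv, np.linalg.matrix_rank(P[:, :len(piv)]))   # → [0, 2, 3] 3
```

After the permutation, the first r = rank(A) columns span R(A), exactly the situation (6.1)-(6.2) assume.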
In this case, after performing Algorithm 1 with numerical dropping, the matrix V can be written in the form

V = [v_1, ..., v_r, v_{r+1}, ..., v_n].   (6.3)

If we denote [v_1, ..., v_r] by U_r, then

U_r = A(I - K) I_r,   I_r = [ I_{r×r} ]
                            [    0    ].   (6.4)

There exists a matrix H ∈ R^{r×(n-r)} such that

[v_{r+1}, ..., v_n] = U_r H = A(I - K) I_r H,   (6.5)

where H can be rank deficient. Then V is given by

V = [v_1, ..., v_r, v_{r+1}, ..., v_n]   (6.6)
  = [U_r, U_r H]   (6.7)
  = U_r [ I_{r×r}  H ]   (6.8)
  = A(I - K) [ I_{r×r} ] [ I_{r×r}  H ]   (6.9)
             [    0    ]
  = A(I - K) [ I_{r×r}  H ]
             [    0     0 ].   (6.10)
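The factorization (6.6)-(6.10) can be checked numerically. The sketch below does not run Algorithm 1; instead it synthesizes a V with the assumed column structure (last n - r columns dependent on the first r) and recovers H by least squares, so all matrices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 6, 5, 3

# Hypothetical U_r with full column rank; H is rank-1, i.e. rank deficient,
# which the text explicitly allows.
Ur = rng.standard_normal((m, r))
H = np.outer(rng.standard_normal(r), rng.standard_normal(n - r))
V = np.hstack([Ur, Ur @ H])   # (6.6)-(6.7): V = [U_r, U_r H]

# Recover H from V alone via least squares and confirm V = U_r [I  H], as in (6.8).
H_hat = np.linalg.lstsq(V[:, :r], V[:, r:], rcond=None)[0]
print(np.allclose(V, V[:, :r] @ np.hstack([np.eye(r), H_hat])))   # → True
```

Because U_r has full column rank, the least-squares solution recovers H exactly (up to rounding), even though H itself is rank deficient.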