
An appropriate switching tolerance τ_s is important to the efficiency of the preconditioner. When we choose a small τ_s, e.g. 10^{-8}, we have less chance of detecting the linearly dependent columns. On the other hand, when a relatively large τ_s, e.g. 10^{-3}, is chosen, the preconditioning algorithm tends to find more linearly dependent columns. However, with a large τ_s it is also possible that Assumption 1 is violated, so that the original least squares problem cannot be solved; with a relatively small τ_s, few or no linearly dependent columns may be found, and the preconditioner can lose some of its efficiency. When τ_s is fixed, a larger dropping tolerance τ_d makes it less likely that linearly dependent columns are detected, while a smaller τ_d makes it more likely. Choosing τ_s and the dropping tolerance τ_d so as to balance these two effects remains an open problem.
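
To make the role of τ_s concrete, the following is a minimal sketch, not the paper's Algorithm 1, of one common way such a switching test can be implemented: a new column is flagged as numerically linearly dependent when the norm of its residual against the current column space falls below τ_s times its own norm. The function name and the specific test are illustrative assumptions.

```python
import numpy as np

def is_dependent_column(V, a_k, tau_s):
    """Flag column a_k as numerically linearly dependent on span(V).

    Hypothetical test: project a_k onto the current column space and
    compare the residual norm against tau_s * ||a_k||.  A larger tau_s
    flags more columns as dependent; a smaller tau_s flags fewer.
    """
    if V.shape[1] == 0:
        return np.linalg.norm(a_k) == 0.0
    # Residual of a_k after removing its component in span(V).
    coef, *_ = np.linalg.lstsq(V, a_k, rcond=None)
    r = a_k - V @ coef
    return np.linalg.norm(r) <= tau_s * np.linalg.norm(a_k)
```

With tau_s = 1e-3 this test accepts more columns as dependent than with tau_s = 1e-8, which mirrors the trade-off described above.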

Another issue related to the dropping tolerance τ_d is how much memory is needed to store the preconditioner. From the previous discussion, we know that R(M^T) = span{V}. When numerical dropping is performed on V, this relation may no longer hold. Hence, in Algorithm 2 we only perform numerical dropping on the columns of K. In this way, as long as Assumption 1 holds, we have R(M^T) = span{V}. On the other hand, we cannot control the sparsity of V directly. We will discuss this issue again at the end of this section.
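
As an illustration of numerical dropping controlled by τ_d, the sketch below assumes a standard entrywise rule (drop entries smaller than τ_d times the column norm); the paper's Algorithm 2 may use a different criterion.

```python
import numpy as np

def drop_small_entries(K, tau_d):
    """Zero out entries of K that are small relative to their column norm.

    Assumed dropping rule: keep K[i, j] only if
    |K[i, j]| > tau_d * ||K[:, j]||_2.  A larger tau_d gives a sparser
    (cheaper to store) preconditioner but perturbs K more.
    """
    K = K.copy()
    col_norms = np.linalg.norm(K, axis=0)
    mask = np.abs(K) <= tau_d * col_norms  # broadcasts over the rows
    K[mask] = 0.0
    return K
```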

7.2 Right-Preconditioning

So far we have assumed A ∈ R^{m×n} with m ≥ n, and only discussed left-preconditioning. When m ≥ n, it is better to perform left-preconditioning, since the size of the preconditioned problem is smaller. When m ≤ n, right-preconditioning is preferable, i.e., solving the preconditioned problem

min_{y∈R^m} ‖b − ABy‖_2.    (7.17)
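
A minimal sketch of how (7.17) is used, assuming NumPy's dense least squares solver as a stand-in for an iterative method; here B = A^T is only a placeholder preconditioner satisfying R(AB) = R(A), not the preconditioner from Algorithm 1.

```python
import numpy as np

# Right-preconditioning for an underdetermined problem (m <= n):
# solve min_y ||b - A B y||_2 over y in R^m, then recover x = B y.
rng = np.random.default_rng(0)
m, n = 50, 200
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

B = A.T                                   # placeholder with R(AB) = R(A)
y, *_ = np.linalg.lstsq(A @ B, b, rcond=None)
x = B @ y

# The recovered x attains the same residual as the original problem.
print(np.linalg.norm(b - A @ x))
```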

When m ≤ n, Theorem 3 and Theorem 5 still hold, since in the proofs of these two theorems we did not use the fact that m ≥ n. Then, according to the following lemma from [14], it is easy to obtain similar conclusions for right-preconditioning.

Lemma 2  min_{x∈R^n} ‖b − Ax‖_2 = min_{y∈R^m} ‖b − ABy‖_2 holds for any b ∈ R^m if and only if R(A) = R(AB).

Theorem 9  Let A ∈ R^{m×n}. If Assumption 1 holds, then for any b ∈ R^m we have min_{x∈R^n} ‖b − Ax‖_2 = min_{y∈R^m} ‖b − AMy‖_2, where M is a preconditioner constructed by Algorithm 1.

Proof  From Theorem 5, we know that R(M) = R(A^T), which implies that there exists a nonsingular matrix C ∈ R^{m×m} such that M = A^T C. Hence,

R(AM) = R(AA^T C)    (7.18)
      = R(AA^T)      (7.19)
      = R(A).        (7.20)

Using Theorem 2 we complete the proof.
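
As a sanity check of the argument in Theorem 9, the sketch below builds M = A^T C for a random nonsingular C and verifies numerically that the two minimal residuals coincide; the matrices are illustrative random data, not from the paper.

```python
import numpy as np

# Check: if M = A^T C with C nonsingular, then R(AM) = R(A), so the
# minimal residuals of min ||b - A x|| and min ||b - A M y|| agree.
rng = np.random.default_rng(1)
m, n = 60, 40
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((m, m)) + m * np.eye(m)   # well away from singular
M = A.T @ C

x, *_ = np.linalg.lstsq(A, b, rcond=None)
y, *_ = np.linalg.lstsq(A @ M, b, rcond=None)
print(np.linalg.norm(b - A @ x), np.linalg.norm(b - A @ M @ y))  # nearly equal
```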
