Greville's Method for Preconditioning Least Squares ...


1 Introduction

Consider the least squares problem
\[
\min_{x \in \mathbb{R}^n} \|b - Ax\|_2, \tag{1.1}
\]
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.

To solve least squares problems, there are two main kinds of methods: direct methods and iterative methods [4]. When $A$ is large and sparse, iterative methods, especially Krylov subspace methods, are preferred for solving (1.1), since they require less memory and computational work than direct methods. In [14], Hayami et al. proposed using GMRES [16] to solve least squares problems with suitable preconditioners. Applying a preconditioner $B \in \mathbb{R}^{n \times m}$ to (1.1) from the left transforms the problem into
\[
\min_{x \in \mathbb{R}^n} \|Bb - BAx\|_2. \tag{1.2}
\]
Alternatively, we can precondition problem (1.1) from the right and transform it into
\[
\min_{y \in \mathbb{R}^m} \|b - ABy\|_2. \tag{1.3}
\]
In [14], necessary and sufficient conditions on $B$ for (1.2) and (1.3), respectively, to be equivalent to (1.1) were given.
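As a concrete illustration of left preconditioning in the sense of (1.2), the sketch below applies GMRES to the preconditioned square system $BAx = Bb$, using the simplest choice $B = A^T$ mentioned in the next paragraph. It is a minimal example on a small dense matrix, assuming NumPy/SciPy; it is not the preconditioner constructed in this paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
m, n = 20, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

B = A.T                              # simplest left preconditioner, B = A^T

# Left-preconditioned problem (1.2): GMRES is applied to the n x n system BAx = Bb.
BA = LinearOperator((n, n), matvec=lambda x: B @ (A @ x))
x, info = gmres(BA, B @ b, atol=1e-12)

# Compare with a direct least squares solve of (1.1).
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(info, np.linalg.norm(x - x_ref))
```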

There are different methods to construct the preconditioner $B$. The simplest choice is $B = A^T$. If $\operatorname{rank}(A) = n$, one may use $B = CA^T$, where $C = \operatorname{diag}(A^TA)^{-1}$, or, better, use the Robust Incomplete Factorization (RIF) [2,3] to construct $C$ [14]. Another possibility is to use the incomplete Givens orthogonalization method to construct $B$ [19]. However, so far no efficient preconditioner has been proposed for the rank-deficient case. In this paper, we construct a preconditioner which also works when $A$ is rank deficient. The idea is to use an approximate inverse of the coefficient matrix of a linear system
\[
Ax = b \tag{1.4}
\]
with $A \in \mathbb{R}^{n \times n}$ as a preconditioner. This kind of preconditioner was originally developed for solving nonsingular linear systems [10,15]. Since $A$ is now a general matrix, we construct a matrix $M \in \mathbb{R}^{n \times m}$ which is an approximation to the Moore-Penrose inverse [9,17] of $A$, and use $M$ to precondition the least squares problem (1.1). When $A$ is rank deficient, the preconditioner $M$ can also be rank deficient. Note, in passing, that in [20] it was shown that for singular linear systems, singular preconditioners are sometimes better than nonsingular preconditioners.
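The following sketch only illustrates this last point, using numpy.linalg.pinv as an idealized stand-in for the approximate Moore-Penrose inverse $M$ (it is not the construction proposed in this paper): for a rank-deficient $A$, the exact pseudoinverse has the same rank as $A$, and $x = A^{\dagger}b$ is nevertheless a least squares solution of (1.1).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 12, 6, 4                       # rank r < n: A is rank deficient
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

M = np.linalg.pinv(A)                    # idealized stand-in for M (n x m)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(M))    # both equal r

x = M @ b                                # minimum-norm least squares solution
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(np.linalg.norm(b - A @ x), np.linalg.norm(b - A @ x_ref)))
```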

The main contribution of this paper is to give a new way to precondition general least squares problems from the perspective of the approximate Moore-Penrose inverse. Similar to the RIF preconditioner [2,3], our method also includes an $A^TA$-orthogonalization process when the coefficient matrix has full column rank. When $A$ is rank deficient, our method tries to orthogonalize the linearly independent part of $A$. We also give a theoretical analysis of the equivalence between the preconditioned problem and the original problem, and discuss the possibility of breakdown when GMRES is used to solve the preconditioned system.
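To make the term concrete, here is a minimal dense sketch of what an $A^TA$-orthogonalization does when $A$ has full column rank: a Gram-Schmidt process on the columns of the identity in the inner product $\langle u, v\rangle = (Au)^T(Av)$, yielding $Z$ with $Z^TA^TAZ = D$ diagonal, so that $(A^TA)^{-1} = ZD^{-1}Z^T$. Incomplete variants such as RIF approximate this by dropping small entries; the function name and the dense implementation here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def ata_orthogonalize(A):
    """Gram-Schmidt on the identity columns in the A^T A inner product."""
    m, n = A.shape
    Z = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        for i in range(j):
            # remove the component of column j along column i
            # with respect to <u, v> = (Au)^T (Av)
            coef = (A @ Z[:, i]) @ (A @ Z[:, j]) / d[i]
            Z[:, j] -= coef * Z[:, i]
        d[j] = np.linalg.norm(A @ Z[:, j]) ** 2
    return Z, d

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))         # full column rank with probability 1
Z, d = ata_orthogonalize(A)
# Z^T A^T A Z should be (numerically) diagonal with entries d
print(np.allclose(Z.T @ A.T @ A @ Z, np.diag(d)))
```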

The rest of the paper is organized as follows. In Section 2, we first introduce some properties of the Moore-Penrose inverse of a rank-one updated matrix and the original
