
is not reduced to $\{0\}$, i.e., there is a vector $x$ in $S \cap S_k$. For this $x = \sum_{i=1}^{k} \xi_i q_i$, we have
\[
\frac{(Ax, x)}{(x, x)} = \frac{\sum_{i=1}^{k} \lambda_i |\xi_i|^2}{\sum_{i=1}^{k} |\xi_i|^2} \ge \lambda_k ,
\]
so that $\mu(S) \ge \lambda_k$.

Consider, on the other hand, the particular subspace $S_0$ of dimension $n-k+1$ which is spanned by $q_k, \ldots, q_n$. For each vector $x$ in this subspace, we have
\[
\frac{(Ax, x)}{(x, x)} = \frac{\sum_{i=k}^{n} \lambda_i |\xi_i|^2}{\sum_{i=k}^{n} |\xi_i|^2} \le \lambda_k ,
\]
so that $\mu(S_0) \le \lambda_k$. In other words, as $S$ runs over all the $(n-k+1)$-dimensional subspaces, $\mu(S)$ is always $\ge \lambda_k$, and there is at least one subspace $S_0$ for which $\mu(S_0) \le \lambda_k$. This shows the desired result.
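Both inequalities in this argument can be checked numerically. Below is a minimal sketch (assuming NumPy; the test matrix, the index $k$, and the helper `mu` are illustrative choices, not part of the text), using the fact that the maximum of the Rayleigh quotient over a subspace with orthonormal basis $Q_S$ equals the largest eigenvalue of the compressed matrix $Q_S^H A Q_S$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Arbitrary real symmetric (hence Hermitian) test matrix.
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
lam, Q = np.linalg.eigh(A)      # eigh returns ascending order
lam, Q = lam[::-1], Q[:, ::-1]  # relabel decreasingly, as in the text

def mu(S):
    # max of (Ax, x)/(x, x) over the span of S's columns:
    # the largest eigenvalue of the compressed matrix Qs^T A Qs.
    Qs, _ = np.linalg.qr(S)
    return np.linalg.eigvalsh(Qs.T @ A @ Qs).max()

# mu(S) >= lambda_k for every (n - k + 1)-dimensional subspace ...
assert all(mu(rng.standard_normal((n, n - k + 1))) >= lam[k - 1] - 1e-12
           for _ in range(100))

# ... and S0 = span{q_k, ..., q_n} satisfies mu(S0) = lambda_k.
S0 = Q[:, k - 1:]
print(mu(S0), lam[k - 1])       # the two values agree
```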

The above result is often called the Courant-Fischer min-max principle or theorem. As a particular case, the largest eigenvalue of $A$ satisfies
\[
\lambda_1 = \max_{x \ne 0} \frac{(Ax, x)}{(x, x)} . \tag{1.38}
\]
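As a quick sanity check of (1.38), consider the following sketch (assuming NumPy; the random test matrix is an arbitrary choice): the Rayleigh quotient of sampled nonzero vectors never exceeds $\lambda_1$, and the eigenvector $q_1$ attains the maximum.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2               # arbitrary symmetric test matrix

lam, Q = np.linalg.eigh(A)
lam1, q1 = lam[-1], Q[:, -1]    # largest eigenvalue and its eigenvector

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

samples = [rayleigh(rng.standard_normal(n)) for _ in range(10_000)]
print(max(samples) <= lam1)             # True: no sample exceeds lambda_1
print(np.isclose(rayleigh(q1), lam1))   # True: the maximum is attained at q_1
```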

Actually, there are four different ways of rewriting the min-max characterization (1.37). The second formulation is
\[
\lambda_k = \max_{S,\ \dim(S) = k}\ \min_{x \in S,\, x \ne 0} \frac{(Ax, x)}{(x, x)} , \tag{1.39}
\]
and the two others can be obtained from (1.37) and (1.39) by simply relabeling the eigenvalues increasingly instead of decreasingly. Thus, with our labeling of the eigenvalues in descending order, (1.39) tells us that the smallest eigenvalue satisfies

\[
\lambda_n = \min_{x \ne 0} \frac{(Ax, x)}{(x, x)} , \tag{1.40}
\]
with $\lambda_n$ replaced by $\lambda_1$ if the eigenvalues are relabeled increasingly.
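Formulation (1.39) and the identity (1.40) can likewise be verified numerically. A small sketch follows (assuming NumPy; the matrix and the index $k$ are illustrative): restricting the Rayleigh quotient to $S = \mathrm{span}\{q_1, \ldots, q_k\}$ and minimizing yields exactly $\lambda_k$, while the global minimum (1.40) is attained at $q_n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

lam, Q = np.linalg.eigh(A)      # ascending order
lam, Q = lam[::-1], Q[:, ::-1]  # descending, matching the text's labeling

# (1.39): on S = span{q_1, ..., q_k}, the minimal Rayleigh quotient is
# lambda_k, i.e. the smallest eigenvalue of the compressed matrix Qk^T A Qk.
Qk = Q[:, :k]
print(np.isclose(np.linalg.eigvalsh(Qk.T @ A @ Qk).min(), lam[k - 1]))  # True

# (1.40): the smallest eigenvalue is the global minimum, attained at x = q_n.
qn = Q[:, -1]
print(np.isclose((qn @ A @ qn) / (qn @ qn), lam[-1]))                   # True
```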

In order for all the eigenvalues of a Hermitian matrix to be positive, it is necessary and sufficient that
\[
(Ax, x) > 0, \quad \forall\, x \in \mathbb{C}^n,\ x \ne 0 .
\]
Such a matrix is called positive definite. A matrix which satisfies $(Ax, x) \ge 0$ for any $x$ is said to be positive semidefinite. In particular, the matrix $A^H A$ is positive semidefinite for any rectangular matrix $A$, since
\[
(A^H A x, x) = (Ax, Ax) \ge 0, \quad \forall\, x .
\]
Similarly, $AA^H$ is also a Hermitian positive semidefinite matrix.
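A brief numerical illustration of this fact (a sketch, assuming NumPy; the rectangular matrix is an arbitrary complex example): the eigenvalues of $A^H A$ are all nonnegative, and the quadratic form $(A^H A x, x)$ equals $\|Ax\|^2 \ge 0$ for any $x$.

```python
import numpy as np

rng = np.random.default_rng(3)
# Arbitrary rectangular complex matrix A (5 x 3).
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

G = A.conj().T @ A                               # A^H A: Hermitian, 3 x 3
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))   # True: eigenvalues >= 0

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
quad = np.vdot(x, G @ x)                         # (A^H A x, x) = x^H (A^H A) x
print(np.isclose(quad.real, np.linalg.norm(A @ x) ** 2))  # equals ||Ax||^2
```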

The square roots of the eigenvalues of $A^H A$ for a general rectangular matrix $A$ are called the singular values of $A$ and are denoted by $\sigma_i$. In Section 1.5, we have stated without proof that
