Linear Algebra II


1. Review of Eigenvalues, Eigenvectors and Characteristic Polynomial

Recall the topics we finished Linear Algebra I with. We were discussing eigenvalues and eigenvectors of endomorphisms and square matrices, and the question of when they are diagonalizable. For your convenience, I will repeat here the most relevant definitions and results.

Let V be a finite-dimensional F-vector space, dim V = n, and let f : V → V be an endomorphism. Then for λ ∈ F, the λ-eigenspace of f was defined to be

E_λ(f) = {v ∈ V : f(v) = λv} = ker(f − λ id_V) .

λ is an eigenvalue of f if E_λ(f) ≠ {0}, i.e., if there is 0 ≠ v ∈ V such that f(v) = λv. Such a vector v is called an eigenvector of f for the eigenvalue λ.

The eigenvalues are exactly the roots (in F) of the characteristic polynomial of f,

P_f(x) = det(x id_V − f) ,

which is a monic polynomial of degree n with coefficients in F.
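
If you want a quick numerical sanity check, here is a minimal sketch using NumPy over F = ℝ (the 2 × 2 matrix is an arbitrary illustrative choice): np.poly returns the coefficients of the characteristic polynomial of a square matrix, and its roots agree with the eigenvalues.

    import numpy as np

    # Arbitrary example matrix over the reals (illustrative only).
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # Coefficients of the monic characteristic polynomial det(x*I - A),
    # listed from the highest power of x down to the constant term.
    coeffs = np.poly(A)            # here: [1., -5., 6.], i.e. x^2 - 5x + 6

    # The eigenvalues are exactly the roots of that polynomial.
    print(np.roots(coeffs))        # [3. 2.]
    print(np.linalg.eigvals(A))    # [2. 3.], the same values in some order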

The geometric multiplicity of λ as an eigenvalue of f is defined to be the dimension of the λ-eigenspace, whereas the algebraic multiplicity of λ as an eigenvalue of f is defined to be its multiplicity as a root of the characteristic polynomial.
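
For example (with a matrix chosen only for illustration), the eigenvalue 1 of the 2 × 2 matrix below has algebraic multiplicity 2 but geometric multiplicity only 1. A small NumPy sketch, using dim E_λ = n − rank(A − λI):

    import numpy as np

    # A 2x2 "Jordan block": characteristic polynomial (x - 1)^2,
    # so the algebraic multiplicity of the eigenvalue 1 is 2.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    lam = 1.0

    # Geometric multiplicity: dim E_lam = n - rank(A - lam*I).
    geom = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
    print(geom)   # 1, strictly smaller than the algebraic multiplicity 2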

The endomorphism f is said to be diagonalizable if there exists a basis of V consisting of eigenvectors of f. The matrix representing f relative to this basis is then a diagonal matrix, with the various eigenvalues appearing on the diagonal.

Since n × n matrices can be identified with endomorphisms Fⁿ → Fⁿ, all notions and results make sense for square matrices, too. A matrix A ∈ Mat(n, F) is diagonalizable if and only if it is similar to a diagonal matrix, i.e., if there is an invertible matrix P ∈ Mat(n, F) such that P⁻¹AP is diagonal.
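
The following NumPy sketch carries out exactly this change of basis for one arbitrarily chosen matrix: the columns of P are eigenvectors of A, and P⁻¹AP comes out diagonal.

    import numpy as np

    # A real symmetric example matrix; such matrices are always diagonalizable.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    w, P = np.linalg.eig(A)        # w: eigenvalues, P: eigenvectors as columns
    D = np.linalg.inv(P) @ A @ P   # represent A in the basis of eigenvectors

    print(np.round(D, 10))         # diagonal matrix whose entries are the eigenvalues 1 and 3
    print(w)                       # the same eigenvalues, in the order used for D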

It is an important fact that the geometric multiplicity of an eigenvalue cannot exceed its algebraic multiplicity. An endomorphism or square matrix is diagonalizable if and only if the sum of the geometric multiplicities of all eigenvalues equals the dimension of the space. This in turn is equivalent to the two conditions: (a) the characteristic polynomial is a product of linear factors, and (b) for each eigenvalue, algebraic and geometric multiplicities agree. For example, both conditions are satisfied if P_f is the product of n distinct monic linear factors.
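
To make the criterion concrete, here is a rough NumPy sketch with a helper of my own (it assumes the eigenvalues are real and well separated, so that rounding identifies the distinct ones): it sums the geometric multiplicities and compares the sum with n.

    import numpy as np

    def sum_geometric_multiplicities(A, tol=1e-9):
        # Sum of dim E_lambda over the distinct (numerically rounded) eigenvalues of A.
        n = A.shape[0]
        distinct = np.unique(np.round(np.linalg.eigvals(A), 8))
        return sum(n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
                   for lam in distinct)

    B = np.diag([2.0, 2.0, 5.0])             # already diagonal, hence diagonalizable
    C = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 5.0]])          # contains a 2x2 Jordan block

    print(sum_geometric_multiplicities(B))   # 3 == n, so B is diagonalizable
    print(sum_geometric_multiplicities(C))   # 2 <  n, so C is not diagonalizable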

2. The Cayley-Hamilton Theorem and the Minimal Polynomial<br />

Let A ∈ Mat(n, F). We know that Mat(n, F) is an F-vector space of dimension n². Therefore, the n² + 1 elements I, A, A², . . . , A^(n²) cannot be linearly independent (because their number exceeds the dimension). If we define p(A) in the obvious way for p a polynomial with coefficients in F, then we can deduce that there is a (non-zero) polynomial p of degree at most n² such that p(A) = 0 (0 here is the zero matrix). In fact, much more is true.
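
Here is a short NumPy sketch of this dimension argument for n = 2 (the matrix is again an arbitrary example): flattening each power of A into a row vector turns linear dependence in Mat(2, ℝ) into a rank computation.

    import numpy as np

    # For n = 2 the space Mat(2, R) has dimension n^2 = 4, so the n^2 + 1 = 5
    # matrices I, A, A^2, A^3, A^4 cannot be linearly independent.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    powers = [np.linalg.matrix_power(A, k).flatten() for k in range(5)]
    M = np.vstack(powers)                  # 5 rows, each of length 4

    print(np.linalg.matrix_rank(M))        # at most 4, so the powers are dependent
                                           # (for this A the rank is in fact 2)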

Consider a diagonal matrix D = diag(λ_1, λ_2, . . . , λ_n). (This notation is supposed to mean that λ_j is the (j, j) entry of D; the off-diagonal entries are zero, of course.) Its characteristic polynomial is

P_D(x) = (x − λ_1)(x − λ_2) · · · (x − λ_n) .

Since the diagonal entries are roots of P_D, and evaluating a polynomial at D simply applies it to each diagonal entry, we also have P_D(D) = 0. More generally, consider a diagonalizable matrix A. Then there is an invertible matrix Q such
