
5.1. Lemma. If p(x) is a polynomial (over F) and λ ∈ F such that p(λ) ≠ 0,
then (x − λ)^m and p(x) are coprime for all m ≥ 1.

Proof. First, consider m = 1. Let

q(x) = p(x)/p(λ) − 1 ;

this is a polynomial such that q(λ) = 0. Therefore, we can write q(x) = (x − λ)r(x)
with some polynomial r(x). This gives us

−r(x)(x − λ) + (1/p(λ)) p(x) = 1 .

Now, taking the mth power on both sides, we obtain an equation

(−r(x))^m (x − λ)^m + a(x) p(x) = 1

with some polynomial a(x). Now we can feed this into Prop. 4.7. □
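The following check is not part of the notes; it is a quick SymPy sketch (the polynomial p(x) = x^2 + 1, the point λ = 1 and the exponent m = 3 are arbitrary choices) that exhibits the Bézout identity produced in the proof.

    from sympy import symbols, gcdex, expand

    x = symbols('x')
    lam = 1                    # p(1) = 2 != 0
    p = x**2 + 1
    m = 3

    # extended Euclidean algorithm: s*(x - lam)**m + t*p = h with h = gcd
    s, t, h = gcdex((x - lam)**m, p, x)
    print(h)                                   # 1, so (x - 1)**3 and p are coprime
    print(expand(s*(x - lam)**m + t*p))        # 1, the Bezout identity from the proof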

5.2. Theorem. Let V be a finite-dimensional vector space, and let f : V → V be
an endomorphism whose characteristic polynomial splits into linear factors:

Pf(x) = (x − λ1)^{m1} · · · (x − λk)^{mk} ,

where the λi are distinct. Then V = U1 ⊕ · · · ⊕ Uk, where Uj = ker((f − λj idV)^{mj})
is the generalized λj-eigenspace of f.

Proof. Write Pf(x) = p1(x) · · · pk(x) with pj(x) = (x − λj)^{mj}. By Lemma 5.1
(applied with p = pi and λ = λj for i ≠ j, noting that pi(λj) ≠ 0 since the λi are
distinct), we know that the pj(x) are coprime in pairs. By the Cayley-Hamilton
Theorem 2.1, we know that Pf(f) = 0. The result then follows from Prop. 4.7. □
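The following is not part of the notes: a short SymPy sketch with an arbitrarily chosen 3×3 matrix that computes the generalized eigenspaces Uj = ker((A − λj I)^{mj}) and checks that their dimensions add up to dim V, as the theorem asserts.

    from sympy import Matrix, eye

    # arbitrary example; characteristic polynomial (x - 2)^2 (x - 3)
    A = Matrix([[2, 1, 0],
                [0, 2, 0],
                [0, 0, 3]])

    dims = []
    for lam, mult in A.eigenvals().items():        # {2: 2, 3: 1}: roots with multiplicities
        U = ((A - lam*eye(3))**mult).nullspace()   # basis of the generalized eigenspace
        dims.append(len(U))

    print(dims, sum(dims))                         # dimensions add up to 3 = dim V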

5.3. Theorem (Jordan Normal Form). Let V be a finite-dimensional vector
space, and let f : V → V be an endomorphism whose characteristic polynomial
splits into linear factors:

Pf(x) = (x − λ1)^{m1} · · · (x − λk)^{mk} ,

where the λi are distinct. Then there is a basis of V such that the matrix representing
f with respect to that basis is a block diagonal matrix with blocks of the
form

B(λ, m) =
⎛ λ  1  0  ···  0  0 ⎞
⎜ 0  λ  1  ···  0  0 ⎟
⎜ 0  0  λ  ···  0  0 ⎟
⎜ ⋮  ⋮  ⋮   ⋱   ⋮  ⋮ ⎟
⎜ 0  0  0  ···  λ  1 ⎟
⎝ 0  0  0  ···  0  λ ⎠
∈ Mat(F, m),

where λ ∈ {λ1, . . . , λk}.
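As an aside (not in the notes), here is a small SymPy helper with the hypothetical name B that builds the block B(λ, m) exactly as displayed above: λ on the diagonal, 1 directly above it, 0 elsewhere.

    from sympy import Matrix, symbols

    lam = symbols('lambda')

    def B(lam, m):
        # m x m block: lam on the diagonal, 1 on the superdiagonal, 0 elsewhere
        return Matrix(m, m, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))

    print(B(lam, 3))   # Matrix([[lambda, 1, 0], [0, lambda, 1], [0, 0, lambda]])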

Proof. We keep the notations of Thm. 5.2. We know that on Uj, (f − λj id)^{mj} = 0,
so f|Uj = λj idUj + gj, where gj^{mj} = 0, i.e., gj is nilpotent. By Thm. 3.3, there
is a basis of Uj such that gj is represented by a block diagonal matrix Bj with
blocks of the form B(0, m) (such that the sum of the m's is mj). Therefore, f|Uj is
represented by Bj + λj I_{dim Uj}, which is a block diagonal matrix composed of blocks
B(λj, m) (with the same m's as before). The basis of V that is given by the union
of the various bases of the Uj then does what we want; compare Remark 4.4. □
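Finally, also not part of the notes: SymPy's jordan_form applied to the example matrix used earlier returns a change-of-basis matrix P and the block diagonal matrix J built from blocks B(λ, m), matching the statement of the theorem.

    from sympy import Matrix

    A = Matrix([[2, 1, 0],
                [0, 2, 0],
                [0, 0, 3]])

    P, J = A.jordan_form()        # A = P * J * P**(-1)
    print(J)                      # block diagonal with blocks B(2, 2) and B(3, 1)
    print(P * J * P.inv() == A)   # True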
