Linear Algebra, 2020a


Chapter Five. Similarity

IV Jordan Form

This section uses material from three optional subsections: Combining Subspaces, Determinants Exist, and Laplace's Expansion.

We began this chapter by recalling that every linear map h: V → W can be represented with respect to some bases B ⊂ V and D ⊂ W by a partial identity matrix. Restated, the partial identity form is a canonical form for matrix equivalence. This chapter considers the case where the codomain equals the domain, so we naturally ask what is possible when the two bases are equal, when we have Rep_{B,B}(t). In short, we want a canonical form for matrix similarity. We noted that in the B, B case a partial identity matrix is not always possible. We therefore extended the matrix forms of interest to the natural generalization, diagonal matrices, and showed that a transformation or square matrix can be diagonalized if its eigenvalues are distinct. But we also gave an example of a square matrix that cannot be diagonalized because it is nilpotent, and thus diagonal form won't suffice as the canonical form for matrix similarity.
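The obstruction mentioned above can be checked numerically. This is a sketch, not from the text, using a hypothetical nonzero nilpotent matrix and NumPy:

```python
import numpy as np

# Hypothetical instance of the obstruction: a nonzero nilpotent matrix.
N = np.array([[0., 0.],
              [1., 0.]])

print(np.linalg.eigvals(N))                          # both eigenvalues are 0
print(np.allclose(np.linalg.matrix_power(N, 2), 0))  # N^2 = 0, so N is nilpotent
# If N were similar to a diagonal matrix D, then D would share N's
# eigenvalues, forcing D = 0 and hence N = 0, a contradiction.
```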

The prior section developed that example to get a canonical form for nilpotent matrices: all zeros except for blocks of subdiagonal ones.

This section finishes our program by showing that for any linear transformation there is a basis B such that the matrix representation Rep_{B,B}(t) is the sum of a diagonal matrix and a nilpotent matrix. This is Jordan canonical form.
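As a concrete illustration (the specific matrix here is an assumption, not from the text), a Jordan block splits into a diagonal part and a nilpotent part, which a quick NumPy check confirms:

```python
import numpy as np

# Hypothetical example: a 3x3 Jordan block with eigenvalue 5,
# ones on the subdiagonal as in the prior section's canonical form.
J = np.array([[5., 0., 0.],
              [1., 5., 0.],
              [0., 1., 5.]])

D = np.diag(np.diag(J))   # diagonal part
N = J - D                 # nilpotent part: the subdiagonal ones

assert np.allclose(J, D + N)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)  # N^3 = 0
```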

IV.1 Polynomials of Maps and Matrices

Recall that the set of square matrices M_{n×n} is a vector space under entry-by-entry addition and scalar multiplication, and that this space has dimension n². Thus for any n×n matrix T the (n² + 1)-member set {I, T, T², ..., T^{n²}} is linearly dependent and so there are scalars c_0, ..., c_{n²}, not all zero, such that

    c_{n²}T^{n²} + ··· + c_1T + c_0I

is the zero matrix. Therefore every transformation has a sort of generalized nilpotency: the powers of a square matrix cannot climb forever without a kind of repeat.
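The dependence argument above can be carried out numerically. A sketch, assuming NumPy (the matrix T and the use of the SVD to find a null-space vector are illustrative choices, not the text's method): flatten the powers I, T, ..., T^{n²} into columns and find a combination that vanishes.

```python
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])      # any n x n matrix; here n = 2
n = T.shape[0]

# Flatten I, T, T^2, ..., T^{n^2} into columns of an n^2 x (n^2 + 1) matrix.
powers = [np.linalg.matrix_power(T, k) for k in range(n**2 + 1)]
A = np.column_stack([P.flatten() for P in powers])

# n^2 + 1 vectors in an n^2-dimensional space must be dependent; a
# null-space vector of A gives the coefficients c_0, ..., c_{n^2}.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]                    # direction with smallest singular value: A @ c ≈ 0

combo = sum(ck * P for ck, P in zip(c, powers))
print(np.allclose(combo, 0))  # True: the combination is the zero matrix
```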

1.1 Definition Let t be a linear transformation of a vector space V. Where f(x) = c_nx^n + ··· + c_1x + c_0 is a polynomial, f(t) is the transformation c_nt^n + ··· + c_1t + c_0(id) on V. In the same way, if T is a square matrix then f(T) is the matrix c_nT^n + ··· + c_1T + c_0I.
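Definition 1.1 translates directly into code. A minimal sketch with NumPy; the function name, the coefficient convention (constant term first), and the example matrix are assumptions for illustration:

```python
import numpy as np

def poly_of_matrix(coeffs, T):
    """Evaluate f(T) = c_n T^n + ... + c_1 T + c_0 I,
    where coeffs = [c_0, c_1, ..., c_n]."""
    result = np.zeros_like(T, dtype=float)
    power = np.eye(T.shape[0])        # T^0 = I
    for c in coeffs:
        result += c * power
        power = power @ T             # next power of T
    return result

# Example: f(x) = x^2 - 5x + 6 = (x - 2)(x - 3) applied to a matrix
# with eigenvalues 2 and 3, so f(T) comes out as the zero matrix.
T = np.array([[2., 1.],
              [0., 3.]])
print(poly_of_matrix([6., -5., 1.], T))
```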
