
Chapter 3. Maps Between Spaces

Except for the lack of commutativity, matrix multiplication is algebraically well-behaved. Below are some nice properties, and more are in Exercise 23 and Exercise 24.

2.12 Theorem If $F$, $G$, and $H$ are matrices, and the matrix products are defined, then the product is associative $(FG)H = F(GH)$ and distributes over matrix addition $F(G + H) = FG + FH$ and $(G + H)F = GF + HF$.

Proof. Associativity holds because matrix multiplication represents function composition, which is associative: the maps $(f \circ g) \circ h$ and $f \circ (g \circ h)$ are equal as both send $\vec{v}$ to $f(g(h(\vec{v})))$.

Distributivity is similar. For instance, the first one goes $f \circ (g + h)\,(\vec{v}) = f\bigl((g + h)(\vec{v})\bigr) = f\bigl(g(\vec{v}) + h(\vec{v})\bigr) = f(g(\vec{v})) + f(h(\vec{v})) = f \circ g\,(\vec{v}) + f \circ h\,(\vec{v})$ (the third equality uses the linearity of $f$). QED
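Though not part of the text, a quick numerical check can make Theorem 2.12 concrete. Below is a minimal Python sketch, assuming nothing beyond the row-column rule for matrix multiplication; the helper names matmul and matadd and the example matrices are ours, chosen only for illustration.

```python
# Numerical illustration of Theorem 2.12 (not a proof): multiply matrices
# by the row-column rule and check associativity and distributivity on
# concrete examples.  The helper names matmul/matadd are ours.

def matmul(A, B):
    # entry i,j is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

F = [[1, 2, 0],
     [3, -1, 4]]            # 2x3
G = [[2, 1],
     [0, 5],
     [-3, 2]]               # 3x2
H = [[1, 4],
     [2, 0]]                # 2x2
G2 = [[1, 1],
      [1, 0],
      [2, -2]]              # 3x2, same size as G

assert matmul(matmul(F, G), H) == matmul(F, matmul(G, H))               # (FG)H = F(GH)
assert matmul(F, matadd(G, G2)) == matadd(matmul(F, G), matmul(F, G2))  # F(G+G2) = FG + FG2
print("associativity and distributivity hold on this example")
```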

2.13 Remark We could alternatively prove that result by slogging through the indices. For example, associativity goes: the $i,j$-th entry of $(FG)H$ is
\[
\begin{aligned}
 &(f_{i,1}g_{1,1} + f_{i,2}g_{2,1} + \cdots + f_{i,r}g_{r,1})\,h_{1,j} \\
+\,&(f_{i,1}g_{1,2} + f_{i,2}g_{2,2} + \cdots + f_{i,r}g_{r,2})\,h_{2,j} \\
 &\qquad\vdots \\
+\,&(f_{i,1}g_{1,s} + f_{i,2}g_{2,s} + \cdots + f_{i,r}g_{r,s})\,h_{s,j}
\end{aligned}
\]
(where $F$, $G$, and $H$ are $m{\times}r$, $r{\times}s$, and $s{\times}n$ matrices); distribute
\[
\begin{aligned}
 &f_{i,1}g_{1,1}h_{1,j} + f_{i,2}g_{2,1}h_{1,j} + \cdots + f_{i,r}g_{r,1}h_{1,j} \\
+\,&f_{i,1}g_{1,2}h_{2,j} + f_{i,2}g_{2,2}h_{2,j} + \cdots + f_{i,r}g_{r,2}h_{2,j} \\
 &\qquad\vdots \\
+\,&f_{i,1}g_{1,s}h_{s,j} + f_{i,2}g_{2,s}h_{s,j} + \cdots + f_{i,r}g_{r,s}h_{s,j}
\end{aligned}
\]
and regroup around the $f$'s
\[
\begin{aligned}
 &f_{i,1}(g_{1,1}h_{1,j} + g_{1,2}h_{2,j} + \cdots + g_{1,s}h_{s,j}) \\
+\,&f_{i,2}(g_{2,1}h_{1,j} + g_{2,2}h_{2,j} + \cdots + g_{2,s}h_{s,j}) \\
 &\qquad\vdots \\
+\,&f_{i,r}(g_{r,1}h_{1,j} + g_{r,2}h_{2,j} + \cdots + g_{r,s}h_{s,j})
\end{aligned}
\]
to get the $i,j$ entry of $F(GH)$.
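For readers who want to experiment with this index bookkeeping, here is a small Python sketch (our own illustration; the function names entry_FG_H and entry_F_GH and the example matrices are hypothetical) that computes the $i,j$ entry with each grouping and checks that they agree.

```python
# Illustration of the index computation in Remark 2.13 (our own sketch).
# entry_FG_H sums as in the first display, with the h's on the outside;
# entry_F_GH regroups around the f's as in the last display.
# Both give the same i,j entry of the triple product.

def entry_FG_H(F, G, H, i, j):
    r, s = len(G), len(H)          # F is m x r, G is r x s, H is s x n
    return sum(sum(F[i][l] * G[l][k] for l in range(r)) * H[k][j]
               for k in range(s))

def entry_F_GH(F, G, H, i, j):
    r, s = len(G), len(H)
    return sum(F[i][l] * sum(G[l][k] * H[k][j] for k in range(s))
               for l in range(r))

F = [[1, 2, 0], [3, -1, 4]]        # 2x3
G = [[2, 1], [0, 5], [-3, 2]]      # 3x2
H = [[1, 4], [2, 0]]               # 2x2
assert entry_FG_H(F, G, H, 0, 1) == entry_F_GH(F, G, H, 0, 1)
```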

Contrast these two ways of verifying associativity, the one in the proof and the one just above. The argument just above is hard to understand in the sense that, while the calculations are easy to check, the arithmetic seems unconnected to any idea (it also essentially repeats the proof of Theorem 2.6 and so is inefficient). The argument in the proof is shorter, clearer, and says why this property "really" holds. This illustrates the comments made in the preamble to the chapter on vector spaces: at least some of the time an argument from higher-level constructs is clearer.
