
CHAPTER II. DIMENSION

In the present chapter we investigate ...


Applying T to this identity, we have T(au + bv) = 0, or aTu + bTv = 0. From Tu = 2u and Tv = 3v we get

2au + 3bv = 0.    (1.2.4)

Multiplying (1.2.3) by 2 and then subtracting (1.2.4), we get −bv = 0, that is, bv = 0. Since v is nonzero, we must have b = 0. Thus (1.2.3) becomes au = 0. Since u is nonzero, we must have a = 0. We have shown a = b = 0. Hence u, v are linearly independent.
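The argument above can be illustrated numerically. The concrete operator below is our own choice, not the text's: take T to be the diagonal operator with eigenvalues 2 and 3, so that u and v can be taken as the standard basis vectors. Linear independence of u and v then shows up as the matrix with columns u, v having full rank.

```python
import numpy as np

# Hypothetical concrete instance (our choice, not the text's): T is the
# diagonal operator with eigenvalues 2 and 3, so u = e1 and v = e2
# satisfy Tu = 2u and Tv = 3v.
T = np.diag([2.0, 3.0])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

assert np.allclose(T @ u, 2 * u) and np.allclose(T @ v, 3 * v)

# u, v are linearly independent iff the matrix with columns u, v has rank 2.
rank = np.linalg.matrix_rank(np.column_stack([u, v]))
print(rank)  # 2
```

Of course, the proof in the text is stronger: it works for any operator T and any eigenvectors for the distinct eigenvalues 2 and 3, not just this diagonal example.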

Example 1.2.4. Let c_1, c_2, ..., c_n, c_{n+1} be n + 1 distinct complex numbers and consider the vectors in C^{n+1} given by v_k = (c_1^k, c_2^k, ..., c_n^k, c_{n+1}^k) for k = 0, 1, 2, ..., n:

v_0 = (1, 1, 1, ..., 1, 1)
v_1 = (c_1, c_2, ..., c_n, c_{n+1})
v_2 = (c_1^2, c_2^2, ..., c_n^2, c_{n+1}^2)
...
v_n = (c_1^n, c_2^n, ..., c_n^n, c_{n+1}^n)

Prove that v_0, v_1, v_2, ..., v_n are linearly independent.

Proof: Assume a_0 v_0 + a_1 v_1 + · · · + a_n v_n = 0. For each j with 1 ≤ j ≤ n + 1, write out the jth component of this vector identity:

a_0 + a_1 c_j + a_2 c_j^2 + · · · + a_n c_j^n = 0.

This shows that the polynomial a_0 + a_1 x + a_2 x^2 + · · · + a_n x^n has the n + 1 distinct roots c_1, c_2, ..., c_{n+1}. But a polynomial of degree at most n cannot have n + 1 roots, unless that polynomial is the zero polynomial. Therefore a_0, a_1, ..., a_n must all vanish. This shows v_0, v_1, v_2, ..., v_n are linearly independent.
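A small numerical check of this example (the particular numbers c_1, ..., c_{n+1} below are our own choice): stack v_0, ..., v_n as the rows of a matrix. This is a Vandermonde-type matrix, and linear independence of the rows is equivalent to the matrix having full rank n + 1.

```python
import numpy as np

# Illustrative values (ours, not the text's): n + 1 = 4 distinct numbers.
c = np.array([0.0, 1.0, 2.0, 3.0])
n = len(c) - 1

# Row k of V is v_k = (c_1^k, c_2^k, ..., c_{n+1}^k), for k = 0, 1, ..., n.
V = np.vstack([c**k for k in range(n + 1)])

# The rows v_0, ..., v_n are linearly independent iff V has rank n + 1.
rank = np.linalg.matrix_rank(V)
print(rank)  # 4
```

Trying this with repeated values in c (say c = [0, 1, 1, 3]) drops the rank below n + 1, which matches the hypothesis in the example that the c_j must be distinct.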

1.3. Given a finite set of vectors v1, v2, ..., vr in V, we can construct a subspace containing these vectors by collecting all linear combinations of these vectors. Here, by a linear combination of v1, v2, ..., vr we mean a vector of the form

α1v1 + α2v2 + · · · + αrvr    (1.3.1)

for certain scalars α1, α2, ..., αr. A quick example: 4 + 2x − x^2 is a linear combination of 2 + 3x and 4x + x^2 because 4 + 2x − x^2 = 2(2 + 3x) + (−1)(4x + x^2), which is easily checked. (For the moment let us not worry about how to figure out this identity.) One can think of a linear combination of the form (1.3.1) as a "combo soup" with vk as its kth ingredient and αk as the amount of this ingredient (1 ≤ k ≤ r).
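The "how to figure out this identity" question set aside in the text can be sketched as solving a small linear system. Identifying a polynomial p0 + p1 x + p2 x^2 with its coefficient vector (p0, p1, p2), finding the scalars in 4 + 2x − x^2 = α1(2 + 3x) + α2(4x + x^2) amounts to solving for α1, α2; the setup below is our own illustration, not a method from the text.

```python
import numpy as np

# Coefficient vectors (constant, x, x^2):
target = np.array([4.0, 2.0, -1.0])   # 4 + 2x - x^2
u = np.array([2.0, 3.0, 0.0])         # 2 + 3x
v = np.array([0.0, 4.0, 1.0])         # 4x + x^2

# Solve alpha1*u + alpha2*v = target. The coefficient matrix is 3x2,
# so we use least squares; an exact solution exists here.
A = np.column_stack([u, v])
alphas, _residuals, *_ = np.linalg.lstsq(A, target, rcond=None)
print(alphas)  # [ 2. -1.]

# Verify the identity from the text: 2*(2 + 3x) + (-1)*(4x + x^2).
assert np.allclose(A @ alphas, target)
```

This recovers exactly the scalars 2 and −1 appearing in the identity above.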

