
$A_2 = 2A_1$, $A_5 = A_1 + 3A_3 + 2A_4$. Hence, among the given "vectors" in $P_3$, the leading vectors are $p_1(x)$, $p_3(x)$, $p_4(x)$, and we have $p_2(x) = 2p_1(x)$, $p_5(x) = p_1(x) + 3p_3(x) + 2p_4(x)$.
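A minimal computational sketch of this conclusion (the coordinate vectors $A_1, \ldots, A_5$ below are hypothetical stand-ins chosen only to satisfy the two relations above, not the actual coordinates from the example): the pivot columns reported by SymPy's `rref` land exactly at the leading positions 1, 3, 4.

```python
from sympy import Matrix

# Hypothetical coordinate vectors in R^4 (stand-ins for the coordinates of
# p1(x), ..., p5(x) relative to a basis of P3), chosen so that A2 = 2*A1 and
# A5 = A1 + 3*A3 + 2*A4, as in the relations above.
A1 = Matrix([1, 0, 0, 0])
A3 = Matrix([0, 1, 0, 0])
A4 = Matrix([0, 0, 1, 0])
A2 = 2 * A1
A5 = A1 + 3 * A3 + 2 * A4

A = Matrix.hstack(A1, A2, A3, A4, A5)   # columns are the "vectors" in order
_, pivots = A.rref()
print(pivots)   # (0, 2, 3): the leading vectors sit at positions 1, 3, 4
```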

Remark. As we know, any matrix $A$ can always be row reduced to a reduced row echelon matrix, say $B$. However, row reduction can be performed in many different ways, so a natural question arises: is $B$ uniquely determined by $A$? The answer is yes. The reason is, first, that the columns of $B$ containing leading ones correspond to the leading column vectors of $A$; second, that the entries of any other column of $B$ are determined by the way it is written as a linear combination of the columns containing leading ones, in the same way as the corresponding column of $A$ is expressed as a linear combination of the leading vectors, and such a linear combination is unique because the leading vectors are linearly independent. The uniqueness of the reduced row echelon form is considered by some people to be a "hard theorem". In fact, from our point of view it is easy to understand.
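To make the Remark concrete, here is a small sketch (the $3 \times 5$ matrix $A$ below is hypothetical, not one from the text). It checks two things with SymPy: first, that performing extra row operations, encoded as left multiplication by an invertible matrix $M$, does not change the reduced row echelon form; second, that each non-pivot column of $B$ records the coefficients expressing the corresponding column of $A$ as a linear combination of the leading columns of $A$.

```python
from sympy import Matrix, zeros

# A hypothetical 3x5 matrix, used only to illustrate the Remark.
A = Matrix([[1, 2, 0, 3,  1],
            [2, 4, 1, 7,  0],
            [1, 2, 1, 4, -1]])

B, pivots = A.rref()      # B = reduced row echelon form, pivots = leading-column indices

# Uniqueness: doing extra row operations first (left multiplication by an
# invertible matrix M) leads to the same reduced row echelon form.
M = Matrix([[1, 1, 0],
            [0, 1, 2],
            [0, 0, 1]])   # invertible, so M*A is "A after some row operations"
assert (M * A).rref()[0] == B

# Each non-pivot column of B lists the coefficients expressing the corresponding
# column of A as a linear combination of the leading columns of A.
for j in range(A.cols):
    if j not in pivots:
        combo = zeros(A.rows, 1)
        for i, p in enumerate(pivots):
            combo += B[i, j] * A[:, p]
        assert combo == A[:, j]
```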

1.6. Now we return to the proof of Theorem 1.4.1. Let the leading vectors be $v_{k_1}, v_{k_2}, \ldots, v_{k_p}$, where $1 \le k_1 < k_2 < \cdots < k_p \le r$. Suppose we have
$$a_1 v_{k_1} + a_2 v_{k_2} + \cdots + a_p v_{k_p} = 0. \tag{1.6.1}$$
If all the coefficients $a_1, a_2, \ldots, a_p$ in (1.6.1) are zero, then there is nothing to prove. So we assume that one of them is nonzero. Let $q$ be the largest index for which $a_q$ is nonzero.

Then (1.6.1) becomes
$$a_1 v_{k_1} + a_2 v_{k_2} + \cdots + a_q v_{k_q} = 0 \tag{1.6.2}$$
with $a_q \ne 0$. We can rewrite (1.6.2) as
$$v_{k_q} = \left(-\frac{a_1}{a_q}\right) v_{k_1} + \left(-\frac{a_2}{a_q}\right) v_{k_2} + \cdots + \left(-\frac{a_{q-1}}{a_q}\right) v_{k_{q-1}}.$$

Since the vectors $v_{k_1}, v_{k_2}, \ldots, v_{k_{q-1}}$ are in $V_{k_q-1}$ (indeed, since $k_{q-1} < k_q$, we have $k_{q-1} \le k_q - 1$ and hence all vectors in $V_{k_{q-1}}$ are also in $V_{k_q-1}$), this identity shows that $v_{k_q}$ is in $V_{k_q-1}$, contradicting the fact that $v_{k_q}$ is a leading vector. This shows that all the coefficients $a_1, a_2, \ldots, a_p$ must be zero.

To prove part (b) of the theorem, take any non-leading vector $v_j$ from the list and let $v_{k_1}, v_{k_2}, \ldots, v_{k_s}$ be all the leading vectors preceding $v_j$. Then we have $k_s < j < k_{s+1}$. Now, for all $i$ with $k_s < i < k_{s+1}$, we have $V_i = V_{k_s}$. This is because each such $v_i$ is not a leading vector (so that $V_i = V_{i-1}$) and $v_{k_{s+1}}$ is the first leading vector after $v_{k_s}$. In particular, $V_j = V_{k_s}$ and hence $v_j$ is in $V_{k_s}$. We can repeat the argument used to prove Corollary 1.4.2 to show that $v_{k_1}, v_{k_2}, \ldots, v_{k_s}$ form a basis of $V_{k_s}$. Now part (b) of Theorem 1.4.1 is clear.
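As a sanity check of both parts of Theorem 1.4.1, the following sketch uses a hypothetical list of vectors in $\mathbb{R}^3$ (not taken from the text). It detects the leading vectors by testing whether each $v_i$ increases the rank of the span built so far, then verifies that the leading vectors are linearly independent (part (a)) and that every non-leading vector lies in the span of the leading vectors preceding it (part (b)).

```python
from sympy import Matrix

# A hypothetical list of vectors in R^3, used only for illustration.
vs = [Matrix([1, 1, 0]),
      Matrix([2, 2, 0]),   # = 2*v1, not leading
      Matrix([0, 1, 1]),   # leading
      Matrix([1, 2, 1]),   # = v1 + v3, not leading
      Matrix([0, 0, 1])]   # leading

# v_i is a leading vector exactly when it is not in V_{i-1} = span(v_1, ..., v_{i-1}),
# i.e. when appending it increases the rank.
leading, prev_rank = [], 0
for i in range(len(vs)):
    r = Matrix.hstack(*vs[:i + 1]).rank()
    if r > prev_rank:
        leading.append(i)
    prev_rank = r

# Part (a): the leading vectors are linearly independent.
L = Matrix.hstack(*[vs[i] for i in leading])
assert L.rank() == len(leading)

# Part (b): each non-leading v_j lies in the span of the leading vectors preceding it.
for j in range(len(vs)):
    if j not in leading:
        M = Matrix.hstack(*[vs[i] for i in leading if i < j])
        assert Matrix.hstack(M, vs[j]).rank() == M.rank()
```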

