
Section II. Linear Independence

the set $S = \{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is linearly dependent because this is a relationship
$$0 \cdot \vec{v}_1 + 2 \cdot \vec{v}_2 - 1 \cdot \vec{v}_3 = \vec{0}$$
where not all of the scalars are zero (the fact that some scalars are zero is irrelevant).

1.9 Remark That example shows why, although Definition 1.3 is a clearer statement of what independence is, Lemma 1.4 is more useful for computations. Working straight from the definition, someone trying to compute whether $S$ is linearly independent would start by setting $\vec{v}_1 = c_2\vec{v}_2 + c_3\vec{v}_3$ and concluding that there are no such $c_2$ and $c_3$. But knowing that the first vector is not dependent on the other two is not enough. Working straight from the definition, this person would have to go on to try $\vec{v}_2 = c_3\vec{v}_3$ in order to find the dependence $c_3 = 1/2$. Similarly, working straight from the definition, a set with four vectors would require checking three vector equations. Lemma 1.4 makes the job easier because it allows us to get the conclusion with only one computation.
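To make the computational point concrete, here is a minimal sketch in Python/NumPy. The vectors below are hypothetical stand-ins (the book's $\vec{v}_1, \vec{v}_2, \vec{v}_3$ come from an earlier example not repeated here), chosen only so that they satisfy the relationship quoted above; the rank test is one way to carry out a single-computation check in the spirit of Lemma 1.4, not the book's own procedure.

```python
import numpy as np

# Hypothetical stand-ins for v1, v2, v3, chosen only to satisfy the quoted
# relationship 0*v1 + 2*v2 - 1*v3 = 0 (equivalently v2 = (1/2)*v3).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 2.0, 0.0])

# One computation: put the vectors in the columns of a matrix and compare its
# rank with the number of vectors.  A rank deficit means that some nontrivial
# combination of the columns equals the zero vector.
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A) == A.shape[1])   # False, so this S is linearly dependent
```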

1.10 Example The empty subset of a vector space is linearly independent. There is no nontrivial linear relationship among its members as it has no members.

1.11 Example In any vector space, any subset containing the zero vector is linearly dependent. For example, in the space $\mathcal{P}_2$ of quadratic polynomials, consider the subset $\{1+x,\ x+x^2,\ 0\}$.

One way to see that this subset is linearly dependent is to use Lemma 1.4: we have $0 \cdot \vec{v}_1 + 0 \cdot \vec{v}_2 + 1 \cdot \vec{0} = \vec{0}$, and this is a nontrivial relationship as not all of the coefficients are zero. Another way to see that this subset is linearly dependent is to go straight to Definition 1.3: we can express the third member of the subset as a linear combination of the first two, namely, $c_1\vec{v}_1 + c_2\vec{v}_2 = \vec{0}$ is satisfied by taking $c_1 = 0$ and $c_2 = 0$ (in contrast to the lemma, the definition allows all of the coefficients to be zero).

(There is still another way to see this that is somewhat trickier. The zero vector is equal to the trivial sum, that is, it is the sum of no vectors. So in a set containing the zero vector, there is an element that can be written as a combination of a collection of other vectors from the set; specifically, the zero vector can be written as a combination of the empty collection.)
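The same kind of check can be run for Example 1.11 by working with coordinate vectors relative to the basis $\langle 1, x, x^2 \rangle$ of $\mathcal{P}_2$; representing the polynomials by coefficient vectors is an assumption of this sketch, not part of the example.

```python
import numpy as np

# Coordinate vectors of 1+x, x+x^2, and 0 relative to the basis <1, x, x^2>.
A = np.column_stack([[1, 1, 0],    # 1 + x
                     [0, 1, 1],    # x + x^2
                     [0, 0, 0]])   # the zero polynomial

c = np.array([0, 0, 1])            # the coefficients 0, 0, 1 used in the example
print(A @ c)                       # [0 0 0]: a nontrivial relationship, so the subset is dependent
```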

Lemma 1.1 suggests how to turn a spanning set into a spanning set that is minimal. Given a finite spanning set, we can repeatedly pull out vectors that are a linear combination of the others, until there aren’t any more such vectors left.
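Since that paragraph describes a procedure, here is one possible sketch of it in Python/NumPy. The function name and the least-squares membership test are choices of this sketch, not the book's; it simply drops a vector whenever that vector lies in the span of the remaining ones.

```python
import numpy as np

def prune_spanning_set(vectors):
    """Repeatedly drop a vector that is a linear combination of the remaining
    ones, until no vector in the list is such a combination."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            others = vecs[:i] + vecs[i + 1:]
            if not others:
                break
            A = np.column_stack(others)
            # vecs[i] is a combination of the others exactly when it lies in
            # their column space; test that with a least-squares solve.
            coeffs, *_ = np.linalg.lstsq(A, vecs[i], rcond=None)
            if np.allclose(A @ coeffs, vecs[i]):
                del vecs[i]          # pull out a dependent vector and restart
                changed = True
                break
    return vecs
```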

1.12 Example This set spans $\mathbb{R}^3$.
$$S_0 = \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},\ \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix},\ \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\}$$
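Purely as an illustration, the prune_spanning_set sketch given before this example can be applied to a list standing in for $S_0$. Which three vectors survive depends on the order in which that sketch happens to examine them, but every run ends with a linearly independent set that still spans $\mathbb{R}^3$.

```python
S0 = [(1, 0, 0), (0, 2, 0), (1, 2, 0), (0, -1, 1), (3, 3, 0)]
print(prune_spanning_set(S0))   # three vectors remain; exactly which three
                                # depends on the examination order in the sketch
```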
