Definition 3.3. Let r ∈ N and V be a vector space. Then Λ^r(V) is the quotient of V ⊗ · · · ⊗ V (with r factors) by the subspace spanned by all tensors v1 ⊗ · · · ⊗ vr for which two of the vi are equal. The exterior algebra Λ(V) is the direct sum ⊕_r Λ^r(V). We think of an element of Λ^r(V) as some sort of "r-dimensional volume vector". Note that Λ^0(V) = k and Λ^1(V) = V, the former because the "empty" tensor product is k (since V ⊗ k = V for any V). The same argument as for r = 2 shows the following.

Theorem 3.4. Suppose {ei} is a basis for V. Then {e_{i1} ∧ e_{i2} ∧ · · · ∧ e_{ir}} with i1 < i2 < · · · < ir is a basis for Λ^r(V).
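As a quick computational sketch of Theorem 3.4 (names here are my own, not from the notes): the basis elements of Λ^r(k^n) are labeled by strictly increasing index tuples, so its dimension is the binomial coefficient C(n, r).

```python
from itertools import combinations
from math import comb

def exterior_basis(n, r):
    """Strictly increasing index tuples (i1, ..., ir) labeling the basis
    e_{i1} ^ ... ^ e_{ir} of the r-th exterior power of k^n."""
    return list(combinations(range(1, n + 1), r))

# For n = 4, r = 2 there are C(4, 2) = 6 basis wedges e_i ^ e_j with i < j.
basis = exterior_basis(4, 2)
assert len(basis) == comb(4, 2) == 6

# Edge cases match the text: Lambda^0(V) = k has the single "empty" basis
# element, and Lambda^n(V) is 1-dimensional.
assert exterior_basis(4, 0) == [()]
assert len(exterior_basis(4, 4)) == 1
```

The 1-dimensionality of Λ^n(V) for n = dim V is exactly what makes the determinant (defined in the next section) a single scalar.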
4 More on Determinants; Duality

Let's see that this definition of determinant agrees with other definitions you may have seen. If V = k^2 is 2-dimensional, with basis {e1, e2}, and we represent T as the matrix

( a b )
( c d ),

then det(T) is supposed to be ad − bc. Now T(e1) = ae1 + ce2 and T(e2) = be1 + de2, so we have

Λ^2 T(e1 ∧ e2) = T(e1) ∧ T(e2) = (ae1 + ce2) ∧ (be1 + de2) = ab(e1 ∧ e1) + ad(e1 ∧ e2) + cb(e2 ∧ e1) + cd(e2 ∧ e2) = (ad − bc)(e1 ∧ e2).

In general, a similar calculation shows that the determinant of a matrix (aij) is given by a sum of products ± Π_i a_{iσ(i)}, where σ ranges over the permutations of the numbers 1, . . . , n and the sign is given by the sign of the permutation σ.

Many important properties of determinants are easy to understand from this definition. First of all, it is manifestly independent of any choice of basis. Second, it is clear that det(ST) = det(S) det(T), since Λ^n(ST) is just the composition Λ^n(S) ∘ Λ^n(T), which is multiplication by det(T) followed by multiplication by det(S). We can then use this to prove the other extremely useful property of determinants:

Theorem 4.1. Let V be finite-dimensional and T : V → V be linear. Then T is invertible iff det(T) ≠ 0.

Proof. First, if T T^{−1} = I, then det(T) det(T^{−1}) = det(I) = 1, so det(T) ≠ 0. Conversely, suppose T is not invertible. Then T(v) = 0 for some nonzero v. Pick a basis {ei} for V such that e1 = v. Then

det(T) e1 ∧ · · · ∧ en = Λ^n T(e1 ∧ · · · ∧ en) = T(e1) ∧ · · · ∧ T(en) = 0 ∧ · · · ∧ T(en) = 0,

so det(T) = 0.

Let's now change gears and look at another construction, closely related to tensor products. We will assume all vector spaces are finite-dimensional from now on.

Definition 4.2. Let V and W be vector spaces. Then we write Hom(V, W) for the set of linear maps from V to W. This is a vector space, with addition and scalar multiplication defined pointwise.
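The permutation-sum formula and the multiplicativity det(ST) = det(S) det(T) can be checked directly; here is a small sketch (the function names are mine, not from the notes):

```python
from itertools import permutations

def sign(sigma):
    """Sign of a permutation, given as a tuple of 0-based indices:
    (-1) raised to the number of inversions."""
    n = len(sigma)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant via the permutation-sum formula:
    sum over sigma of sgn(sigma) * a_{1,sigma(1)} * ... * a_{n,sigma(n)}."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

# The 2x2 case recovers ad - bc.
A = [[1, 2], [3, 4]]
assert det(A) == 1 * 4 - 2 * 3 == -2

# Multiplicativity: det(ST) = det(S) det(T).
B = [[0, 1], [1, 1]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert det(AB) == det(A) * det(B)
```

Note that this formula has n! terms, so it is useful conceptually rather than computationally for large n.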
More concretely, if we pick bases for V and W, we can think of Hom(V, W) as a vector space of matrices: if V is n-dimensional and W is m-dimensional, a linear map V → W can be represented as an m × n matrix (assuming we pick bases). It follows that the dimension of Hom(V, W) is nm. We also know another way to form a vector space from V and W that is nm-dimensional, namely the tensor product V ⊗ W! It follows that Hom(V, W) is isomorphic to V ⊗ W. However, we could ask whether it is natural to think of them as being isomorphic, that is, whether we can write down an isomorphism between them without choosing bases.

To understand this, we will specialize to the case W = k. In that case, V ⊗ W = V ⊗ k can naturally be identified with V, and Hom(V, W) = Hom(V, k) is called the dual of V and written V*. An element of V* is a linear function α that eats a vector in V and gives you a scalar. Given a basis {ei} for V, we get a basis {α^i} (the dual basis) for V* as follows: let α^i be the linear map such that α^i(ei) = 1 and α^i(ej) = 0 if j ≠ i. This completely defines α^i since {ei} is a basis, and we can easily see that the α^i are linearly independent (because (Σ_i ci α^i)(ej) = cj, so if Σ_i ci α^i = 0 then ci = 0 for all i). Finally, the α^i span all of V*, since given any α ∈ V*, we can show that α = Σ_i α(ei) α^i.

Thus given a basis {ei} for V, we obtain an isomorphism T : V → V* by defining T(ei) = α^i. However, for this to be a "natural" isomorphism, it should not depend on which basis we chose. Unfortunately, it does!

Example 4.3. Let V = k^2, and let {e1, e2} be the standard basis, with dual basis {α^1, α^2}. A different basis for V is given by f1 = e1 and f2 = e1 + e2; call the dual basis to this {β^1, β^2}. Now β^1(e1) = β^1(f1) = 1 and β^1(e2) = β^1(f2 − f1) = −1, so β^1 = α^1 − α^2. Similarly, β^2 = α^2.
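The computation in Example 4.3 can be packaged as a matrix inversion, as a sketch (the setup and names here are my own): if the columns of P express the new basis {f1, f2} in e-coordinates, then the rows of P^{-1} express {β^1, β^2} in α-coordinates, since β^i(f_j) = (P^{-1} P)_{ij} = δ_{ij}.

```python
def inverse_2x2(P):
    """Inverse of a 2x2 matrix [[a, b], [c, d]], using determinant ad - bc."""
    (a, b), (c, d) = P
    d_et = a * d - b * c
    return [[d / d_et, -b / d_et],
            [-c / d_et, a / d_et]]

# Example 4.3: f1 = e1 and f2 = e1 + e2, stored as the *columns* of P.
P = [[1, 1],
     [0, 1]]

# Rows of P^{-1} give the dual basis in alpha-coordinates.
beta1, beta2 = inverse_2x2(P)
assert beta1 == [1, -1]   # beta^1 = alpha^1 - alpha^2
assert beta2 == [0, 1]    # beta^2 = alpha^2
```

Changing the basis changes P, and hence changes the rows of P^{-1}: this is exactly the basis-dependence that prevents T : V → V* from being natural.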