
5 Traces and TQFT

Another way that Hom(V, W) is related to tensor products is the following. Given f ∈ Hom(V, W) and v ∈ V, we can evaluate f on v to get an element f(v) ∈ W. This is a bilinear map Hom(V, W) × V → W, so it gives a linear map ev : Hom(V, W) ⊗ V → W. In particular, for W = k, we get a map tr : V ⊗ V* → k. Furthermore, if we identify Hom(V, W) with V* ⊗ W = W ⊗ V* as in Theorem 4.5, ev : Hom(V, W) ⊗ V = W ⊗ V* ⊗ V → W is just given by pairing up the V* and V using tr. That is, ev(w ⊗ v* ⊗ v) = v*(v)w (for v* ∈ V*). This can be seen from the map used in Theorem 4.5: an element of W ⊗ V* maps V to W by pairing the element v ∈ V with the V* part of the tensor product.

More generally, there is a composition map Hom(U, V) ⊗ Hom(V, W) → Hom(U, W) which composes two linear maps. Writing these Hom's in terms of tensor products and duals, this is a map U* ⊗ V ⊗ V* ⊗ W → U* ⊗ W. An analysis similar to the one above shows that this map is just given by pairing up the V and V* in the middle: we compose linear maps by sending u* ⊗ v ⊗ v* ⊗ w to v*(v) u* ⊗ w.
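In coordinates, this "pair up the middle" description of composition is an index contraction, which is precisely matrix multiplication. A small NumPy sketch (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((3, 2))  # S : U -> V as an element of V ⊗ U* (a 3x2 matrix)
T = rng.random((4, 3))  # T : V -> W as an element of W ⊗ V* (a 4x3 matrix)

# Composition pairs the V-index of S with the V*-index of T:
# u* ⊗ v ⊗ v* ⊗ w  ↦  v*(v) u* ⊗ w.
TS = np.einsum('wv,vu->wu', T, S)  # contract over the middle V index

# This contraction is exactly ordinary matrix multiplication.
assert np.allclose(TS, T @ S)
```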

Let's now see what happens when we let W = V. We then have Hom(V, V) = V ⊗ V*, and there is a map tr : Hom(V, V) = V ⊗ V* → k, a map which takes a linear map T : V → V and gives a scalar. This is called the trace tr(T) of T.

You may have seen the trace of a matrix defined as the sum of its diagonal entries. This definition is really quite mystifying: why on earth would you take the diagonal entries (as opposed to some other collection of entries) and add them up? Why on earth would that be independent of a choice of basis?

On the other hand, our definition is manifestly independent of a choice of basis and is very naturally defined: it's just the natural evaluation map on V ⊗ V*. We can furthermore check that this agrees with the "sum of the diagonal entries" definition.

Example 5.1. Let V have basis {e_1, e_2} and let {α^1, α^2} be the dual basis of V*. Let T : V → V have matrix

    ( a  b )
    ( c  d )

We want to write down T as an element of V ⊗ V*. Recall that e_i ⊗ α^j corresponds to the matrix with ij entry 1 and all other entries 0. Thus T is given by

    a e_1 ⊗ α^1 + b e_1 ⊗ α^2 + c e_2 ⊗ α^1 + d e_2 ⊗ α^2.

Now the trace just takes these tensors and evaluates them together, so e_i ⊗ α^j goes to 1 if i = j and 0 otherwise. Thus tr(T) ends up being exactly a + d, the sum of the diagonal entries. The same argument generalizes to n × n matrices for any n.
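The example can be checked numerically: contracting the two indices of a matrix together (pairing each e_i with α^j) is the same as summing the diagonal. A quick NumPy sketch with a = 1, b = 2, c = 3, d = 4:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # a = 1, b = 2, c = 3, d = 4

# tr pairs the V-index with the V*-index; e_i ⊗ α^j evaluates to 1 if
# i = j and 0 otherwise, so only the diagonal entries survive.
trace = np.einsum('ii->', T)

assert trace == np.trace(T) == 1.0 + 4.0  # a + d
```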

Besides basis-invariance, one of the most important properties of traces is the following.

Theorem 5.2. Let S, T : V → V be linear maps. Then tr(ST) = tr(TS).

Proof. Write S and T as elements of V* ⊗ V, say S = v* ⊗ v and T = w* ⊗ w (actually, S and T will be linear combinations of things of this form, but by linearity of everything we can treat each term separately). Then recall that we compose linear maps by "pairing up the vectors in the middle", so ST will be v*(w) w* ⊗ v, so tr(ST) = v*(w) w*(v). On the other hand, TS = w*(v) v* ⊗ w and tr(TS) = w*(v) v*(w). But multiplication of scalars is commutative, so these are the same!
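Theorem 5.2 is easy to sanity-check numerically with random matrices. Note that the theorem equates only the traces; the products ST and TS are in general different maps:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.random((4, 4))
T = rng.random((4, 4))

# tr(ST) and tr(TS) contract the same pairs of indices, just grouped
# differently, so the two scalars agree.
assert np.isclose(np.trace(S @ T), np.trace(T @ S))

# The matrices ST and TS themselves generally differ.
assert not np.allclose(S @ T, T @ S)
```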

Let's now look more closely at how duals and tensor products interact with linear maps. First, note that an element u of a vector space U is just the same thing as a linear map T : k → U, namely the linear map such that T(1) = u. Thus in showing that elements of V* ⊗ W = Hom(V, W) are the same as maps V → W, we've shown that maps V → W are the same as maps k → V* ⊗ W. We can think of this as "moving V to the other side" by dualizing it. In fact, this works more generally.

Theorem 5.3. Let U, V, and W be (finite-dimensional) vector spaces. Then linear maps U ⊗ V → W are naturally in bijection with linear maps U → V* ⊗ W = Hom(V, W).

Proof. Given T : U ⊗ V → W, we can define T̃ : U → Hom(V, W) by T̃(u)(v) = T(u ⊗ v). Conversely, given T̃ : U → Hom(V, W), we can define T : U ⊗ V → W by T(u ⊗ v) = T̃(u)(v). Clearly these two operations are inverse to each other.
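In coordinates, the bijection of Theorem 5.3 is just a reindexing: a map U ⊗ V → W is a 3-index array, and plugging in u ∈ U leaves a matrix, i.e. an element of Hom(V, W). A NumPy sketch (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
dim_u, dim_v, dim_w = 2, 3, 4

# A linear map T : U ⊗ V -> W stored as an array T[w, u, v].
T = rng.random((dim_w, dim_u, dim_v))

u = rng.random(dim_u)
v = rng.random(dim_v)

# The curried map T~ : U -> Hom(V, W) fills the U-slot with u,
# leaving a (dim_w x dim_v) matrix, i.e. an element of Hom(V, W).
T_tilde_u = np.einsum('wuv,u->wv', T, u)

# The defining property of the bijection: T~(u)(v) = T(u ⊗ v).
assert np.allclose(T_tilde_u @ v, np.einsum('wuv,u,v->w', T, u, v))
```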

