
<strong>Master</strong> <strong>Dissertation</strong><br />

The Method of Epstein and Glaser<br />

Author: Asger Jacobsen Supervisor: J.P. Solovej<br />

21st August 2005 Temp. version


Contents<br />

1 Introductory Theory 6<br />

1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6<br />

1.2 Formal Power Series . . . . . . . . . . . . . . . . . . . . . . . 8<br />

1.3 Affine Transformations of Distributions . . . . . . . . . . . . 12<br />

2 The Poincaré Group 14<br />

2.1 The Lorentz Group . . . . . . . . . . . . . . . . . . . . . . . . 14<br />

2.2 The Poincaré Group . . . . . . . . . . . . . . . . . . . . . . . 16<br />

2.3 Spinor Representations of the Lorentz Group . . . . . . . . . 17<br />

3 The Scattering Matrix 19<br />

4 The Mathematical Setting of QFT 22<br />

4.1 The Fock Space . . . . . . . . . . . . . . . . . . . . . . . . . . 22<br />

4.2 The Wightman Axioms . . . . . . . . . . . . . . . . . . . . . 25<br />

5 The Method of Epstein and Glaser 27<br />

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27<br />

5.2 Example - In the Hilbert Space Setting of Quantum Mechanics 36<br />

5.3 Splitting of Distributions . . . . . . . . . . . . . . . . . . . . . 38<br />

6 Regularly Varying Functions 41<br />

7 Splitting of Numerical Distributions 49<br />

7.1 The Singular Order of a Distribution . . . . . . . . . . . . . . 49<br />

7.2 Case I: Negative Singular Order . . . . . . . . . . . . . . . . . 54<br />

7.2.1 Existence . . . . . . . . . . . . . . . . . . . . . . . . . 54<br />

7.2.2 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . 59<br />

7.3 Case II: Positive Singular Order . . . . . . . . . . . . . . . . . 64<br />

8 Application to QED 66<br />

8.1 Using the Game Plan . . . . . . . . . . . . . . . . . . . . . . 66<br />

8.2 The Adiabatic Limit . . . . . . . . . . . . . . . . . . . . . . . 69<br />



9 The Microlocal Approach - A Condition on the Wave Front<br />

Set. 71<br />



Abstract<br />

The main purpose of this thesis is to prove that the causality axiom allows a well-defined perturbation theory of quantum electrodynamics. This will be done using the method of Epstein and Glaser. The thesis introduces the theory of formal power series and of regularly varying functions, both tools needed in the main proofs. Basic theory of the scattering matrix, the Poincaré group and its representations will be introduced. We will use the concept of singular order to investigate the splitting of distributions into an advanced and a retarded part and show when such splittings exist and when they are unique. Applications and the adiabatic limit will be discussed. The thesis is concluded with a consideration of the microlocal approach to the method of Epstein and Glaser and by showing that translation invariance can be substituted by a condition on the wave front set.<br />

Summary<br />

The main purpose of this thesis is to show that, by means of the causality axiom, a well-defined perturbation theory for quantum electrodynamics can be introduced. This is done using the method of Epstein and Glaser. The thesis introduces the theories of formal power series and regularly varying functions, both of which are needed in the main proofs. Basic theory of the scattering matrix, the Poincaré group and its representations will be introduced. We will use singular order to investigate the splitting of distributions into an advanced and a retarded part and show when they exist and when they are unique. Applications and the adiabatic limit will be discussed. The thesis concludes by considering the microlocal approach to the method of Epstein and Glaser and by showing that translation invariance can be replaced by a condition on the wave front set. The thesis is written in English.<br />



Preface<br />

Overview<br />

This dissertation is about the causal approach to finite quantum electrodynamics (QED). The word causal refers to the main assumption on which the method we will introduce is based. In short, it is the statement that what happens at a time t does not influence what happens at earlier times s < t. The word finite means that no ultraviolet divergences arise.<br />

The traditional formalism of quantum field theory (QFT) is known to have problems with ultraviolet divergences. These occur because of misuse of the mathematics. The problem lies in the fact that QFT deals with distributions instead of functions, and that the rules of calculus for these are not the same. One cannot simply multiply a distribution by a discontinuous step function, and other seemingly simple operations, like multiplication of distributions, can be a very complicated matter. In fact, this is the main problem of [4].<br />

In practice the ultraviolet divergences are dealt with by regularization (modifying the theory, for instance by a momentum cutoff, so that divergent expressions become finite) and renormalization (absorbing the dependence on the regularization into redefined physical parameters). But instead of using these first-aid features on an ill-defined theory, one could introduce QFT using the mathematics in the right way from the beginning. This is the basic aim of this dissertation.<br />

Structure<br />

Chapter 1<br />

In Chapter 1 I introduce the tools needed in our later work. Section 1.2 is about formal power series, an abstraction of the usual power series used in calculus. This generalization lets us use much of the well-known machinery from calculus in settings which have no natural notion of convergence. This will become useful later on when we introduce the scattering matrix as a formal power series.<br />

In Section 1.3 we will introduce affine transformations of distributions.<br />



Chapter 2<br />

The Poincaré Group will be introduced in Sections 2.1 and 2.2 in Chapter 2.<br />

Further, in Section 2.3 the spinor representation of the Lorentz Group will<br />

be presented.<br />

Chapter 3<br />

In Chapter 3 we will take a look at the key player of this dissertation: The<br />

Scattering Matrix. In this chapter we introduce it in the well-known setting<br />

of quantum mechanics.<br />

Chapter 4<br />

In Chapter 4 we consider the setting of QFT. In Section 4.1 we introduce<br />

the Fock space and in Section 4.2 we take a look at the Wightman axioms.<br />

Chapter 5<br />

In Chapter 5 we will finally introduce the method of Epstein and Glaser. Section 5.1 presents the main ideas, and a game plan is sketched. In Section 5.2 the method is applied to the setting of quantum mechanics, which is based on the well-known theory of Hilbert spaces. In Section 5.3 we will look at the splitting of distributions into an advanced and a retarded part suggested in the game plan. We focus on the support properties.<br />

Chapter 6<br />

In Chapter 6 we will leave the main theory for a while, introducing an important tool for the splitting of numerical distributions in Chapter 7.<br />

Chapter 7<br />

In Section 7.1 of this chapter we develop some tools for studying the splitting of numerical distributions. The notion of the singular order of a distribution will be a tool to categorize the type of splitting. Thus in Sections 7.2 and 7.3 we will look at the splitting of distributions of negative and positive singular order, respectively. We will find that in the case of negative order the splitting may be done by multiplication by a step function (this will be done in a technical manner). Further, in this case the splitting is unique, which will be the topic of Subsection 7.2.2. The splitting in the case of positive singular order turns out to be a more complicated matter and only works for a special class of distributions. Further, this splitting is not unique.<br />



Chapter 8<br />

In Chapter 8 we will turn our attention to applications. In Section 8.1 we find an expression for the difference distribution, which is essential in the splitting process. The expression can be considered a counterpart to the Feynman rules. We find that the first term describes photon exchange.<br />

In Section 8.2 we will give an idea of how to take the adiabatic limit.<br />

Chapter 9<br />

In this chapter we consider the microlocal approach to the method of Epstein<br />

and Glaser. The idea is to generalize the theory to manifolds. As<br />

translations are substituted by parallel transport on manifolds the property<br />

of translational invariance has to be substituted by a condition on the<br />

smoothness. We find a condition on the wave front set which makes the<br />

terms of the scattering matrix well-defined.<br />

Thanks<br />

I would like to thank my supervisor Jan Philip Solovej for his help and support during the writing of this project. Further, I would like to thank the numerous people whose work and research this dissertation is based on; especially Scharf, Fredenhagen, and of course Epstein and Glaser. Finally, I will also thank my family, who have had to bear with me during the final work under time pressure.<br />



Prerequisites<br />


This is a dissertation for the master's degree in mathematics at the University of Copenhagen. It is thus the conclusion of 5 years of study. The thesis is meant to rest on the mathematical foundation the student has obtained during those years. Clearly, I will not use all the mathematical tools I am equipped with from the wide variety of disciplines I have been acquainted with through my studies. It would be more correct to say that it is a narrow specialization of some of the knowledge I have gained. It is thus difficult to point out exactly what the prerequisites are. I will nevertheless try to narrow down the foundation on which this thesis rests.<br />

First of all I will emphasize that this thesis is concerned with quantum electrodynamics. That is, it is concerned with problems arising from both quantum mechanics and special relativity. Although I will only be concerned with a few principles of the two, the whole idea and machinery of them are lurking in the background. I have through my studies read several books on the subjects, the most important probably being the basic theory to be found in [7], [8] and [1]. Note that Stone's Theorem will be taken for granted and ℏ will be normalized to 1.<br />

Also a good deal of functional analysis will be taken for granted; at least the content of [5]. Especially the theory of distributions will be imperative to the thesis. I will, in some sense, consider the project [4], which I wrote with my fellow student Morten Bakkedal, as a part of this thesis. The project covers a good deal of the theory of distributions and wave front sets, resting on the lecture notes on distributions by Gerd Grubb [2].<br />



Chapter 1<br />

Introductory Theory<br />

1.1 Notation<br />

I will mostly adopt the notation used in [4], but for convenience there will be some differences in the notation for distributions. Whereas in [4] we wrote a distribution u acting on a test function φ in the functional manner u(φ), I will in this project use the “inner product” notation 〈u, φ〉.<br />

Also note that I will use the definition of the Fourier transform found in [4] rather than the one found in [6]. This will only result in differences in some constant factors.<br />

I will occasionally use Einstein notation along with the convention that<br />

indices from the Greek alphabet take the values 0, 1, 2, 3 and indices from<br />

the Latin alphabet take the values 1, 2, 3.<br />

By a t-dependent operator O(t) : E → F we mean an indexed family of operators {O(t)}_{t∈R}, all from E to F. Given a t-dependent operator O(t) : E → F, we write O = s-lim_{t→∞} O(t) if it converges in the strong operator topology to an operator O, that is, if Oφ = lim_{t→∞} O(t)φ for all φ ∈ E.<br />

Beware of the many meanings of the symbol δ. Usually it denotes the Dirac delta distribution, but it is also used as a small real number, as the Kronecker symbol δ_{i,j} and as the diagonal map. It should be transparent from the context which meaning the symbol has.<br />

For convenience I have listed some general notation below:<br />



General notation.<br />

Let α = (α_1, . . . , α_n) ∈ N_0^n be a multi-index and x = (x_1, . . . , x_n) ∈ R^n a point. Then<br />

$$|\alpha| = \alpha_1 + \alpha_2 + \dots + \alpha_n,$$<br />
$$\alpha! = (\alpha_1, \alpha_2, \dots, \alpha_n)! = \alpha_1!\,\alpha_2!\cdots\alpha_n!,$$<br />
$$x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n},$$<br />
$$\partial_j = \partial/\partial x_j \quad\text{for } j = 1,\dots,n,$$<br />
$$\partial^\alpha = \partial_1^{\alpha_1}\cdots\partial_n^{\alpha_n},$$<br />
$$D_j = -i\,\partial_j \quad\text{for } j = 1,\dots,n,$$<br />
$$f^{(\alpha)}(x) = \partial^\alpha f(x).$$<br />
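As a quick, informal illustration (not part of the thesis proper), the first three multi-index operations can be sketched in Python; the helper names are my own:<br />

```python
from functools import reduce
from math import factorial

# Multi-index operations for alpha in N_0^n and x in R^n, as in the list above.
def abs_alpha(alpha):     # |alpha| = alpha_1 + ... + alpha_n
    return sum(alpha)

def fact_alpha(alpha):    # alpha! = alpha_1! * alpha_2! * ... * alpha_n!
    return reduce(lambda p, q: p * q, (factorial(a) for a in alpha), 1)

def pow_alpha(x, alpha):  # x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}
    return reduce(lambda p, q: p * q, (xi ** ai for xi, ai in zip(x, alpha)), 1.0)

# Example with alpha = (1, 2) and x = (2.0, 3.0):
assert abs_alpha((1, 2)) == 3
assert fact_alpha((1, 2)) == 2
assert pow_alpha((2.0, 3.0), (1, 2)) == 18.0
```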



1.2 Formal Power Series<br />

The concept of formal power series will be of much importance to this<br />

project. They make it possible to employ much of the analytical machinery<br />

of usual power series but in settings which have no natural notion of<br />

convergence.<br />

Let R be a commutative ring with identity. We call the elements of R the scalars. Let R^N be the set of all infinite sequences in R. Now we define addition and multiplication of two such sequences in the usual way, that is,<br />

$$(a_n) + (b_n) = (a_n + b_n),$$<br />

and the Cauchy product<br />

$$(a_n) \times (b_n) = \Big(\sum_{k=0}^{n} a_k b_{n-k}\Big) = \Big(\sum_{h+k=n} a_h b_k\Big),$$<br />

which in a sense is a discrete convolution. Note that addition and multiplication are well-defined operations because each term only depends on a finite number of terms from the original sequences.<br />
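As an informal aside, the Cauchy product on truncated sequences can be sketched numerically; the helper `cauchy_product` is my own, and it uses the fact just noted, that the n-th coefficient depends only on the first n + 1 terms of each factor:<br />

```python
def cauchy_product(a, b):
    """Cauchy product of two truncated sequences: c_n = sum_{k=0}^n a_k b_{n-k}."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

a = [1, 2, 3, 0]
b = [4, 5, 6, 0]
c = [0, 1, 0, 2]
# Commutativity and associativity hold coefficient by coefficient:
assert cauchy_product(a, b) == cauchy_product(b, a)
assert cauchy_product(cauchy_product(a, b), c) == cauchy_product(a, cauchy_product(b, c))
```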

Theorem 1.1. (R^N, +, ×) is a commutative ring with the multiplicative identity 1 := (1, 0, 0, . . .) and additive identity 0 := (0, 0, . . .).<br />

Proof. The only non-trivial property is multiplicative associativity. We need to prove that<br />

$$((a \times b) \times c)_n = (a \times (b \times c))_n.$$<br />

This follows from<br />

$$((a \times b) \times c)_n = \sum_{h+k=n} (a \times b)_h\, c_k = \sum_{r+s+k=n} a_r b_s c_k$$<br />

and<br />

$$(a \times (b \times c))_n = \sum_{r+h=n} a_r\, (b \times c)_h = \sum_{r+s+k=n} a_r b_s c_k.$$<br />

It is easy to see from the definition of the Cauchy product that if we define X = (0, 1, 0, 0, . . .), then every element of R^N of the form (a_0, . . . , a_N, 0, . . .) can be written in the form<br />

$$\sum_{n=0}^{N} a_n X^n.$$<br />


Theorem 1.2. Let d : R^N × R^N → R be defined by d((a_n), (b_n)) = 2^{-k}, where k ∈ N_0 is the smallest index such that a_k ≠ b_k. If no such k exists we define d((a_n), (b_n)) = 0. Then (R^N, d) is a metric space.<br />

Proof. The only non-trivial part is the triangle inequality. Let d(a, b) = 2^{-k_1}, d(b, c) = 2^{-k_2} and d(a, c) = 2^{-k_3}, assuming k_1, k_2 and k_3 exist. Say k_3 ≥ k_1; then 2^{-k_3} ≤ 2^{-k_1} ≤ 2^{-k_1} + 2^{-k_2}, hence we may assume k_3 < k_i for i = 1, 2. Then a_{k_3} = b_{k_3} and b_{k_3} = c_{k_3}, thus a_{k_3} = c_{k_3}, contradicting the definition of k_3.<br />

If k_3 doesn't exist the inequality is obvious. Say k_1 doesn't exist; then (a_n) = (b_n), hence k_2 = k_3 if they exist. If not, then (a_n) = (c_n).<br />
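The metric d can be sketched and tested on finite truncations; an informal aside (helper name mine), which also illustrates that d in fact satisfies the stronger ultrametric inequality d(a, c) ≤ max{d(a, b), d(b, c)}:<br />

```python
def d(a, b):
    """d((a_n),(b_n)) = 2^{-k} for the first index k with a_k != b_k, else 0."""
    for k, (ak, bk) in enumerate(zip(a, b)):
        if ak != bk:
            return 2.0 ** (-k)
    return 0.0

x = [1, 2, 3, 4]
y = [1, 2, 0, 4]
z = [1, 0, 3, 4]
assert d(x, y) == 0.25   # first disagreement at index k = 2
assert d(x, x) == 0.0
# Ultrametric (hence triangle) inequality on this sample:
assert d(x, z) <= max(d(x, y), d(y, z))
```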

Theorem 1.3. The operations of addition and multiplication are continuous.<br />

Proof. Say f_n → f and g_n → g; then we need to prove that f_n + g_n → f + g, that is, d(f_n + g_n, f + g) → 0 for n → ∞. We need to prove that given ε > 0 there exists a δ such that<br />

$$\max\{d(f_n, f),\, d(g_n, g)\} < \delta \;\Rightarrow\; d(f_n + g_n, f + g) < \varepsilon.$$<br />

Given ε > 0, find k such that 2^{-k} < ε. Let δ = 2^{-k}/2 = 2^{-(k+1)}; then<br />

$$(f - f_n)_h = (g - g_n)_h = 0 \quad\text{for all } h \le k + 1. \tag{1.1}$$<br />

Hence, since 2^{-(k+1)} + 2^{-(k+1)} = 2^{-k},<br />

$$d(f_n + g_n, f + g) < 2^{-k} < \varepsilon.$$<br />

Note that equation (1.1) also implies (f_n g_n)_h = (f g)_h for h ≤ k + 1, hence multiplication is also continuous. The cases where no k exists are obvious.<br />

This leads to the following result.<br />

Corollary 1.4. (R^N, d) is a topological ring and<br />

$$(a_n) = \sum_{n \ge 0} a_n X^n.$$<br />

Note that in this ring convergence is absolute; in fact, any rearrangement of the series converges to the same limit.<br />

Definition 1.5. We denote the topological ring (R^N, d) by R[[X]] and call it the ring of formal power series.<br />



Theorem 1.6. Let ∑ a_n X^n be a formal power series. Then ∑ a_n X^n has a unique inverse if and only if a_0 has a multiplicative inverse in R.<br />

Proof. Say ∑ b_n X^n is the inverse of ∑ a_n X^n. Then by the definition of multiplication we must have<br />

(i) a_0 b_0 = 1,<br />
(ii) a_0 b_1 + a_1 b_0 = 0,<br />
(iii) a_0 b_2 + a_1 b_1 + a_2 b_0 = 0,<br />

and so on. Now it is clear from (i) that if the inverse exists then a_0 must also be invertible.<br />

If on the other hand a_0 ∈ R*, the invertible elements of R, then the inverse of ∑ a_n X^n is given by ∑ b_n X^n, where b_0 = a_0^{-1} and<br />

$$b_n = a_0^{-1}(-a_1 b_{n-1} - \dots - a_n b_0), \quad\text{for } n \ge 1.$$<br />

Say both ∑ b_n X^n and ∑ c_n X^n are inverse to ∑ a_n X^n. Then a_0 b_0 = a_0 c_0, that is, b_0 = c_0, since a_0 is invertible. Assume b_k = c_k for all k < n; then<br />

$$a_0 b_n + a_1 b_{n-1} + \dots + a_n b_0 = a_0 c_n + a_1 c_{n-1} + \dots + a_n c_0.$$<br />

By the induction assumption b_k = c_k for all k < n, hence a_0 b_n = a_0 c_n and thus b_n = c_n.<br />
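The recursion for the inverse coefficients lends itself to a small numerical sketch; an informal aside using exact rational arithmetic (the helper name is mine):<br />

```python
from fractions import Fraction

def inverse_coeffs(a, n_terms):
    """First n_terms coefficients of (sum a_n X^n)^{-1}, requiring a[0] invertible:
    b_0 = a_0^{-1},  b_n = a_0^{-1} (-a_1 b_{n-1} - ... - a_n b_0)."""
    a0_inv = Fraction(1, 1) / Fraction(a[0])
    b = [a0_inv]
    for n in range(1, n_terms):
        s = sum(Fraction(a[k]) * b[n - k] for k in range(1, n + 1) if k < len(a))
        b.append(-a0_inv * s)
    return b

# The inverse of 1 - X is the geometric series 1 + X + X^2 + ...
assert inverse_coeffs([1, -1], 5) == [1, 1, 1, 1, 1]
```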

The geometric series formula is valid in R[[X]].<br />

Theorem 1.7. Let ∑ a_n X^n be a formal power series with a_0 = 0. Then<br />

$$\sum_{m \ge 0} \Big(\sum_n a_n X^n\Big)^m = \Big(1 - \sum_n a_n X^n\Big)^{-1}.$$<br />

Proof. Let g = ∑_{m≥0} (∑ a_n X^n)^m; the series converges since a_0 = 0, so that the m-th power contains no terms of degree less than m. Then<br />

$$\Big(1 - \sum a_n X^n\Big)\, g = \lim_{M \to \infty} \Big(1 - \sum a_n X^n\Big) \sum_{m=0}^{M} \Big(\sum a_n X^n\Big)^m = \lim_{M \to \infty} \Big(1 - \Big(\sum a_n X^n\Big)^{M+1}\Big) = 1,$$<br />

as wanted, since the sum telescopes and (∑ a_n X^n)^{M+1} → 0. Setting a_n = δ_{1,n} we get the usual geometric series formula.<br />


Corollary 1.8.<br />

$$\sum_{n=0}^{\infty} X^n = (1 - X)^{-1}.$$<br />

Note the following nice properties of R[[X]].<br />

Lemma 1.9. The topology on R[[X]] is equal to the product topology on R^N, where R is equipped with the discrete topology.<br />

Proof. Let B(a, ε) be an open ball in R[[X]] and choose k with 2^{-k} < ε ≤ 2^{-k+1}. Then<br />

$$B(a, \varepsilon) = \{b \in R[[X]] \mid d(a, b) < \varepsilon\} = \{b \in R[[X]] \mid d(a, b) \le 2^{-k}\} = \{b \in R[[X]] \mid a_h = b_h \text{ for all } 0 \le h < k\} = \bigcap_{h=0}^{k-1} \pi_h^{-1}(\{a_h\}),$$<br />

where each {a_h} is open in the discrete topology.<br />

If on the other hand<br />

$$B = \pi_{\beta_1}^{-1}(U_{\beta_1}) \cap \pi_{\beta_2}^{-1}(U_{\beta_2}) \cap \dots \cap \pi_{\beta_n}^{-1}(U_{\beta_n})$$<br />

is a basis element of R^N, let a ∈ B and define m = max{β_1, . . . , β_n}. Then a ∈ B(a, 2^{-(m+1)}), which is clearly a subset of B.<br />

Proposition 1.10. The metric space (R[[X]], d) is<br />

1. complete;<br />
2. compact if and only if R is finite.<br />

Proof. Completeness: Let (a_n) be a Cauchy sequence in R[[X]]. Then we know<br />

$$\forall k\,\exists N \in \mathbb{N}\,\forall n, m \ge N : d(a_n, a_m) < 2^{-k}.$$<br />

Hence for each h, π_h(a_n) is eventually constant and converges to, say, b_h. Then (a_n) converges to the formal power series b = ∑_h b_h X^h.<br />

Compactness: If R is finite, R[[X]] is compact by Tychonoff's theorem. The topology of R is the discrete topology; hence if R is infinite, the sets π_0^{-1}({a}), a ∈ R, form an open cover of R[[X]] that contains no finite subcollection covering R[[X]].<br />


1.3 Affine Transformations of Distributions<br />

Let A = (aij) be an invertible n × n matrix and f a continuous function on<br />

R n . We define a linear continuous composition ⋆ by<br />

A ⋆ f(x) = f(Ax), x ∈ R n .<br />

Let’s look at the distribution induced by f acting on a test-function φ. Letting T(x) = A^{-1}x, where A^{-1} = (ã_{ij}), the change of variables theorem gives<br />

$$\langle A \star f, \varphi\rangle = \int_{\mathbb{R}^n} f(Ax)\,\varphi(x)\,dx = \int_{\mathbb{R}^n} f(A\,T(x))\,\varphi(T(x))\,|\det T'(x)|\,dx = \int_{\mathbb{R}^n} f(x)\,\varphi(A^{-1}x)\,|\det A^{-1}|\,dx = |\det A|^{-1}\langle f, A^{-1}\star\varphi\rangle,$$<br />

since (A^{-1}x)_i = ∑_{j=1}^n ã_{ij} x_j and<br />

$$T'(x) = \begin{pmatrix} \dfrac{\partial (A^{-1}x)_1}{\partial x_1} & \dots & \dfrac{\partial (A^{-1}x)_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial (A^{-1}x)_n}{\partial x_1} & \dots & \dfrac{\partial (A^{-1}x)_n}{\partial x_n} \end{pmatrix} = \begin{pmatrix} \tilde a_{11} & \dots & \tilde a_{1n}\\ \vdots & \ddots & \vdots\\ \tilde a_{n1} & \dots & \tilde a_{nn} \end{pmatrix} = A^{-1}.$$<br />

Definition 1.11. Let u ∈ D′(R^n) and let A be a real invertible n × n matrix. Then the distribution A ⋆ u is defined by<br />

$$\langle A \star u, \varphi\rangle = |\det A|^{-1}\langle u, A^{-1}\star\varphi\rangle, \tag{1.2}$$<br />

for φ ∈ C_0^∞(R^n).<br />

We should check that this actually defines a distribution.<br />

Linearity:<br />

$$\langle A \star u, r\varphi + s\psi\rangle = |\det A|^{-1}\langle u, A^{-1}\star(r\varphi + s\psi)\rangle = |\det A|^{-1}\langle u, r\,A^{-1}\star\varphi + s\,A^{-1}\star\psi\rangle = r|\det A|^{-1}\langle u, A^{-1}\star\varphi\rangle + s|\det A|^{-1}\langle u, A^{-1}\star\psi\rangle.$$<br />

Continuity: Assume φ_n → φ; then<br />

$$\langle A \star u, \varphi_n\rangle = |\det A|^{-1}\langle u, A^{-1}\star\varphi_n\rangle \to |\det A|^{-1}\langle u, A^{-1}\star\varphi\rangle,$$<br />

since u and ⋆ are continuous.<br />
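Equation (1.2) can be checked numerically in one dimension for the distribution induced by an integrable function; an informal sketch with simple Riemann sums, not part of the thesis:<br />

```python
import numpy as np

# One-dimensional check of (1.2) with A = (a), a != 0:
# the claim is  ∫ f(ax) φ(x) dx = |a|^{-1} ∫ f(x) φ(x/a) dx.
a = 2.0
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
f = lambda s: np.exp(-s**2)            # a rapidly decaying function as the "distribution"
phi = lambda s: np.exp(-(s - 1.0)**2)  # a test function
lhs = np.sum(f(a * x) * phi(x)) * dx
rhs = (1.0 / abs(a)) * np.sum(f(x) * phi(x / a)) * dx
assert abs(lhs - rhs) < 1e-6
```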



Proposition 1.12. We state some often occurring affine transformations. In the following, u ∈ D′(R^n) and φ ∈ C_0^∞(R^n).<br />

1. Reflection: 〈u(−x), φ(x)〉 = 〈u(x), φ(−x)〉.<br />
2. Scaling: 〈u(tx), φ(x)〉 = t^{−n}〈u(x), φ(x/t)〉 = t^{−n}〈u(x), λ_t φ(x)〉, letting λ_t φ(x) := φ(x/t) for t > 0.<br />

Proof. 1. and 2. follow directly from (1.2) by choosing A = −I and A = tI, respectively.<br />

Note that λ_t : S(R^n) → S(R^n) is continuous. This follows since<br />

$$\|\lambda_t\varphi\|_{\alpha,\beta} = \sup |x^\alpha D^\beta(\lambda_t\varphi(x))| = \sup |x^\alpha\, t^{-|\beta|}(D^\beta\varphi)(x/t)| = t^{|\alpha|-|\beta|}\sup |(x/t)^\alpha (D^\beta\varphi)(x/t)| = t^{|\alpha|-|\beta|}\,\|\varphi\|_{\alpha,\beta}.$$<br />

Further we define:<br />

Definition 1.13. Let u ∈ D′(R^n) and h ∈ R^n. We define the translation τ_h of u by<br />

$$\langle u(x - h), \varphi(x)\rangle = \langle\tau_h u, \varphi\rangle = \langle u, \tau_{-h}\varphi\rangle = \langle u(x), \varphi(x + h)\rangle.$$<br />

Clearly this defines a distribution.<br />

Recall that a function f is said to be homogeneous of degree λ ∈ C if f(tx) = t^λ f(x) for all t > 0 and x ∈ R^n. We need to define the analogous property for distributions.<br />

Definition 1.14. A distribution u is said to be homogeneous of degree λ ∈ C if<br />

$$\langle u(tx), \varphi(x)\rangle = \langle t^\lambda u(x), \varphi(x)\rangle$$<br />

for all t > 0 and φ ∈ C_0^∞.<br />
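Homogeneity can be illustrated numerically for f(x) = |x|, which induces a distribution homogeneous of degree 1 on R; an informal sketch with Riemann sums:<br />

```python
import numpy as np

# Check that <f(tx), φ(x)> equals t <f(x), φ(x)> for f(x) = |x| and t > 0.
t = 3.0
x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]
phi = np.exp(-x**2)                     # a test function
lhs = np.sum(np.abs(t * x) * phi) * dx  # <f(tx), φ>
rhs = t * np.sum(np.abs(x) * phi) * dx  # t <f, φ>
assert abs(lhs - rhs) < 1e-8
```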



Chapter 2<br />

The Poincaré Group<br />

2.1 The Lorentz Group<br />

We define the Lorentz metric as the bilinear form on the vector space R^4 given by<br />

$$\langle x, y\rangle = x^0 y^0 - x^1 y^1 - x^2 y^2 - x^3 y^3 \quad\text{for all } x, y \in \mathbb{R}^4. \tag{2.1}$$<br />

Obviously it is symmetric, and it is non-degenerate in the sense that for every x ≠ 0 there exists a y such that 〈x, y〉 ≠ 0. But it is not positive definite.<br />

Letting g be the metric tensor, i.e.<br />

$$g = (g_{\mu\nu}) = \begin{pmatrix} 1 & 0^T\\ 0 & -I_3 \end{pmatrix}, \quad\text{where } 0^T = (0, 0, 0),$$<br />

equation (2.1) becomes<br />

$$\langle x, y\rangle = x^T g y = g_{\mu\nu}\, x^\mu y^\nu.$$<br />

The Minkowski space M is the vector space R^4 endowed with the Lorentz metric.<br />

While y^µ denotes the components of the vector y in M, y_µ denotes the components of the corresponding linear form y′ given by y′(x) = 〈y, x〉 for all x. In the canonical basis of the dual space the components of the linear form y′ are (y^0, −y^1, −y^2, −y^3).<br />

Definition 2.1. A Lorentz transformation of R 4 is a linear map<br />

Λ : R 4 → R 4 satisfying<br />

〈Λx, Λy〉 = 〈x, y〉 for all x, y ∈ R 4 . (2.2)<br />

Since e T i Aej = Aij for a matrix A it is clear that (2.2) is equivalent to<br />

Λ T gΛ = g. (2.3)<br />



Clearly the composition of Lorentz transformations is itself a Lorentz transformation. And since<br />

$$1 = -\det g = -\det(\Lambda^T g \Lambda) = -\det\Lambda\,\det g\,\det\Lambda = (\det\Lambda)^2,$$<br />

we must have det Λ = ±1. Thus all Lorentz transformations are invertible, and the set of all Lorentz transformations forms a group called the Lorentz group L, also denoted O(1, 3).<br />
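As an informal numerical aside, equation (2.3) can be verified for a standard boost; the helper `boost_x` is my own:<br />

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # the metric tensor g

def boost_x(chi):
    # standard boost along the x^1 axis with rapidity chi
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(chi)
    L[0, 1] = L[1, 0] = np.sinh(chi)
    return L

L = boost_x(0.7)
assert np.allclose(L.T @ g @ L, g)        # equation (2.3)
assert np.isclose(np.linalg.det(L), 1.0)  # det Λ = +1
assert L[0, 0] >= 1.0                     # time is not reversed: Λ^0_0 ≥ 1
```

Boosts along a fixed axis compose by adding rapidities, which gives a quick consistency check of the group structure.<br />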

Further, we denote the group of Lorentz transformations Λ with det Λ = 1 by L_+ = SO(1, 3). From equation (2.3) we get 10 independent quadratic equations for the components of Λ. The first is<br />

$$(\Lambda^0{}_0)^2 - (\Lambda^1{}_0)^2 - (\Lambda^2{}_0)^2 - (\Lambda^3{}_0)^2 = 1, \tag{2.4}$$<br />

which implies that either<br />

$$\Lambda^0{}_0 \ge 1 \quad\text{or}\quad \Lambda^0{}_0 \le -1. \tag{2.5}$$<br />

When Λ^0_0 ≥ 1 time is not reversed, and we call the subgroup defined by<br />

$$L_+^\uparrow := \{\Lambda \in L_+ \mid \Lambda^0{}_0 \ge 1\}$$<br />

the proper Lorentz group.<br />

The Lorentz group is normally split into 4 disjoint classes, namely L_+^↑ and the 3 following:<br />

Name    det Λ    Λ^0_0<br />
L_−^↑    −1      ≥ +1<br />
L_−^↓    −1      ≤ −1<br />
L_+^↓    +1      ≤ −1<br />

Further we mention 3 important discrete Lorentz transformations, namely: I, the identity; P = g, space inversion or the parity transform; and T = −g, time inversion.<br />

There are 3 types of vectors in M. A vector x is said to be time-like if x² > 0, space-like if x² < 0 and light-like if x² = 0. Since x² is invariant under Lorentz transformations, these categories are preserved by the transformations.<br />

Information cannot propagate faster than the speed of light. Equivalently, two events at x and y cannot influence each other if they are separated by a space-like distance,<br />

$$(x - y)^2 < 0.$$<br />

They have no causal relation. Causality plays a key role in this thesis.<br />



Definition 2.2. Given some point x, the forward cone of x is defined by<br />

$$V^+(x) = \{y \mid (y - x)^2 \ge 0 \text{ and } y^0 \ge x^0\},$$<br />

and the backward cone of x is<br />

$$V^-(x) = \{y \mid (y - x)^2 \ge 0 \text{ and } y^0 \le x^0\}.$$<br />

The n-dimensional generalizations are<br />

$$\Gamma_n^\pm(x) = \{(x_1, \dots, x_n) \mid x_j \in V^\pm(x) \text{ for all } j = 1, \dots, n\}.$$<br />

That (y − x)² ≥ 0 just means that y and x are causally connected.<br />

Later we will need a symbol to distinguish points in time.<br />

Definition 2.3. Let A, B ⊂ M. If for all x ∈ A and y ∈ B we have<br />

x 0 < y 0 , we write A < B.<br />

2.2 The Poincaré Group<br />

Definition 2.4. Let Λ ∈ L and a ∈ R 4 . By a Poincaré transformation<br />

Π : R 4 → R 4 we mean Π(x) = Λx + a, and we write Π = (a, Λ).<br />

The set P of all Poincaré transformations forms a group under the composition law<br />

$$(a_1, \Lambda_1)(a_2, \Lambda_2) = (a_1 + \Lambda_1 a_2,\; \Lambda_1\Lambda_2).$$<br />

We see that the Poincaré group P is the semidirect product of L and the group of space-time translations (R^4, +), that is,<br />

$$P = \mathbb{R}^4 \odot L.$$<br />

Further, we define the proper Poincaré group as<br />

$$P_+^\uparrow = \mathbb{R}^4 \odot L_+^\uparrow.$$<br />



2.3 Spinor Representations of the Lorentz Group<br />

We want to extend the spinor representation to four dimensions.¹ Doing this we add the unit matrix<br />

$$\sigma_0 = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \tag{2.6}$$<br />

to the usual Pauli matrices:<br />

$$\sigma_1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\quad \sigma_2 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix},\quad \sigma_3 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}. \tag{2.7}$$<br />

These form a basis of H(2), the vector space over R of all complex Hermitian 2 × 2 matrices. Indeed, given a matrix H ∈ H(2), direct calculation shows that H can be represented by a vector x ∈ R^4 in the following way:<br />

$$H := \sigma(x) = x^\mu\sigma_\mu = \frac{1}{2}\sum_{\mu=0}^{3}\operatorname{tr}(H\sigma_\mu)\,\sigma_\mu. \tag{2.8}$$<br />

The map x ↦ σ(x) defined in this way is indeed an isomorphism of R^4 onto H(2). Further, we can define another isomorphism by<br />

$$x \mapsto \sigma'(x) = x^0\sigma_0 - x^k\sigma_k = \sum_{\mu=0}^{3} x_\mu\sigma_\mu.$$<br />

Now consider the transform of σ(x) with a complex 2 × 2 matrix A:<br />

$$\sigma(x) \mapsto A\sigma(x)A^*.$$<br />

Clearly σ(y) := Aσ(x)A* ∈ H(2), and from equation (2.8) we see that its components in the basis of Pauli matrices are given by<br />

$$y^\nu = \frac{1}{2}\operatorname{tr}(\sigma(y)\sigma_\nu) = \frac{1}{2}\operatorname{tr}(A\sigma(x)A^*\sigma_\nu) = \frac{1}{2}\sum_{\mu=0}^{3}\operatorname{tr}(A\sigma_\mu A^*\sigma_\nu)\,x^\mu.$$<br />

In this way we get a linear map Λ_A : x ↦ y of R^4 into itself. Note that<br />

$$\sigma(x) = x^\mu\sigma_\mu = \begin{pmatrix} x^0 + x^3 & x^1 - ix^2\\ x^1 + ix^2 & x^0 - x^3 \end{pmatrix}.$$<br />

Hence if det A = 1, then<br />

1 The two-component spinor formalism is treated in [8] chapter 3.2-3.3.<br />



$$\langle y, y\rangle = \det\sigma(y) = \det A\,\det\sigma(x)\,\det A^* = \det\sigma(x) = \langle x, x\rangle.$$<br />

Clearly the parallelogram identity holds for the Lorentz metric², thus<br />

$$\langle x, y\rangle = \frac{\langle x + y, x + y\rangle - \langle x, x\rangle - \langle y, y\rangle}{2} = \frac{\det\sigma(x + y) - \det\sigma(x) - \det\sigma(y)}{2}.$$<br />

From this we see that 〈Λ_A x, Λ_A y〉 = 〈x, y〉; thus Λ_A is a Lorentz transformation.<br />
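As an informal numerical aside (function names my own), one can build Λ_A from the trace formula above and confirm that it preserves the Lorentz metric and that σ(Λ_A x) = Aσ(x)A*:<br />

```python
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_of(x):
    # σ(x) = x^μ σ_μ
    return sum(x[m] * sigma[m] for m in range(4))

def Lambda_of(A):
    # (Λ_A)^ν_μ = (1/2) tr(A σ_μ A* σ_ν); real, being a trace of a product of Hermitians
    return np.array([[0.5 * np.trace(A @ sigma[m] @ A.conj().T @ sigma[n]).real
                      for m in range(4)] for n in range(4)])

A = np.array([[np.exp(0.3), 0.0], [0.0, np.exp(-0.3)]], dtype=complex)  # det A = 1
L = Lambda_of(A)
g = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.allclose(L.T @ g @ L, g)  # Λ_A preserves the Lorentz metric
x = np.array([1.0, 0.2, -0.3, 0.5])
assert np.allclose(sigma_of(L @ x), A @ sigma_of(x) @ A.conj().T)  # σ(Λ_A x) = A σ(x) A*
```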

Since<br />

$$\Lambda_B(x) = y \Leftrightarrow \sigma(y) = B\sigma(x)B^* \quad\text{and}\quad \Lambda_A(y) = z \Leftrightarrow \sigma(z) = A\sigma(y)A^*,$$<br />

we get<br />

$$\Lambda_{AB}(x) = z \Leftrightarrow AB\,\sigma(x)\,B^*A^* = A\sigma(y)A^* = \sigma(z).$$<br />

Hence<br />

$$\Lambda_{AB} = \Lambda_A\Lambda_B.$$<br />

In this way we can construct a representation Λ : A ↦ Λ_A of SL(2, C), the group of all complex 2 × 2 matrices with determinant 1, onto L_+^↑. We call SL(2, C) the spinor representation. The vectors in the representation space C² are called spinors.<br />

Note that the kernel of the representation Λ is<br />

$$\ker\Lambda = \Lambda^{-1}(\{I_4\}) = \{A \in SL(2, \mathbb{C}) \mid AHA^* = H \text{ for all } H \in H(2)\}.$$<br />

Letting H = I₂ we see that A ∈ ker Λ must be unitary, and then A ∈ ker Λ if and only if AH = HA for all H ∈ H(2). Hence A = −I₂ or A = I₂ and<br />

$$L_+^\uparrow \cong SL(2, \mathbb{C})/\{I_2, -I_2\}.$$<br />

Hence Λ(A) = Λ(−A), and SL(2, C) gives a two-valued representation of L_+^↑.<br />

2 The proof is exactly the same as the usual one for inner product spaces.<br />



Chapter 3<br />

The Scattering Matrix<br />

As the notion of the scattering matrix, or the S-matrix for short, plays the central role in scattering theory, I will give a brief introduction to its development in the case of quantum mechanics. Later our main aim will be to construct the S-matrix of quantum electrodynamics (QED) by perturbation theory. First a definition:<br />

Definition 3.1. Let H be a Hilbert space. Then a two-parameter family U(s, t), (s, t) ∈ R², of bounded operators on H is called a unitary propagator if the following are satisfied:<br />

(1) U(s, t) is unitary for all s, t ∈ R,<br />
(2) U(t, t) = 1 for all t ∈ R,<br />
(3) U(r, s)U(s, t) = U(r, t) for all r, s, t ∈ R,<br />
(4) the map (s, t) ↦ U(s, t)ψ is continuous for all ψ ∈ H.<br />
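As an informal sketch (not part of the thesis), the first three properties can be checked for the simplest example, U(s, t) = e^{−iH(s−t)} with a time-independent Hermitian H on C²; property (4) then holds trivially:<br />

```python
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]])  # a Hermitian "Hamiltonian" on C^2

def U(s, t):
    # U(s, t) = exp(-iH(s - t)), exponentiated via the eigendecomposition of H
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * (s - t))) @ V.conj().T

I = np.eye(2)
r, s, t = 0.3, 1.1, -0.7
assert np.allclose(U(t, t), I)                     # property (2)
assert np.allclose(U(s, t) @ U(s, t).conj().T, I)  # property (1): unitarity
assert np.allclose(U(r, s) @ U(s, t), U(r, t))     # property (3)
```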

Consider a quantum mechanical system described by the Hamiltonian H = H₀ + V(t), where H₀ is the free Hamiltonian and V is the time-dependent interaction. Then the time evolution is given by a unitary propagator¹ in the sense that<br />

$$\psi(t) = U(t, s)\psi(s). \tag{3.1}$$<br />

Definition 3.2. Let U(s, t) be a unitary propagator. Then, provided the strong limits exist, we define the wave operators as follows:<br />

$$W_{\mathrm{in}} = \operatorname*{s\text{-}lim}_{t\to-\infty} U(t, 0)^* e^{-iH_0 t} = \operatorname*{s\text{-}lim}_{t\to-\infty} U(0, t)e^{-iH_0 t},$$<br />
$$W_{\mathrm{out}} = \operatorname*{s\text{-}lim}_{t\to\infty} U(t, 0)^* e^{-iH_0 t} = \operatorname*{s\text{-}lim}_{t\to\infty} U(0, t)e^{-iH_0 t}.$$<br />

1 We recall that if H is time independent then the time evolution is given by the unitary transform ψ(t) = e^{−iH(t−t₀)}ψ(t₀).<br />

19


Note that for all φ, ψ ∈ H,<br />
⟨W_out φ, ψ⟩ = ⟨lim_{t→∞} U(0, t) e^{−iH_0 t} φ, ψ⟩<br />
= lim_{t→∞} ⟨U(0, t) e^{−iH_0 t} φ, ψ⟩<br />
= lim_{t→∞} ⟨φ, e^{iH_0 t} U(0, t)* ψ⟩<br />
= ⟨φ, lim_{t→∞} e^{iH_0 t} U(t, 0) ψ⟩.<br />
Hence W_out* = s-lim_{t→∞} e^{iH_0 t} U(t, 0). Further,<br />
W_out* W_in ψ = lim_{t→∞} lim_{s→−∞} e^{iH_0 t} U(t, 0) U(0, s) e^{−iH_0 s} ψ<br />
= lim_{t→∞} lim_{s→−∞} e^{iH_0 t} U(t, s) e^{−iH_0 s} ψ.<br />
This leads us to the following definition.<br />
Definition 3.3. The scattering matrix is defined as<br />
S = W_out* W_in = s-lim_{t→∞} s-lim_{s→−∞} e^{iH_0 t} U(t, s) e^{−iH_0 s}.<br />

The physical meaning of the scattering matrix is now apparent. A<br />

normalized initial asymptotic state ψ considered at time t = 0, say, is first<br />

transformed to s = −∞ by free dynamics, then it is evolved from −∞ to<br />

t = ∞ by full interacting dynamics and finally it is transformed back from<br />

∞ to t = 0 again by free dynamics. Thus Sψ is in fact the outgoing<br />

scattering state transformed to t = 0 by free dynamics. The probability for a transition from ψ to φ is given by<br />
P(ψ → φ) = |⟨φ, Sψ⟩|².<br />

Now ψ(t) as given by equation (3.1) is the solution to the Schrödinger equation<br />
i d/dt ψ(t) = (H_0 + V(t)) ψ(t).<br />
If we pass to the interaction picture² by substituting φ(t) = e^{iH_0 t} ψ(t), we get<br />
i d/dt φ(t) = i d/dt (e^{iH_0 t} ψ(t)) = −H_0 e^{iH_0 t} ψ(t) + e^{iH_0 t} (H_0 + V(t)) ψ(t)<br />
= e^{iH_0 t} V(t) e^{−iH_0 t} φ(t)<br />
=: Ṽ(t) φ(t).<br />
(The termwise differentiation is justified on the dense domain where ψ(t) is strongly differentiable.)<br />

2 See [8] page 318.<br />

20


Thus we note that the scattering matrix is in fact just the limit of the time evolution in the interaction picture. That is,<br />
S = lim_{t→∞} lim_{s→−∞} Ũ(t, s).<br />
If the interaction V(t) is a bounded operator we may write the unitary operator in terms of the Dyson series³<br />
Ũ(t, s) = 1 + Σ_{n=1}^∞ (−i)^n ∫_s^t dt_1 ∫_s^{t_1} dt_2 ⋯ ∫_s^{t_{n−1}} dt_n Ṽ(t_1) ⋯ Ṽ(t_n). (3.2)<br />
Claim.<br />
∫_s^t dt_1 ∫_s^{t_1} dt_2 ⋯ ∫_s^{t_{n−1}} dt_n = (t − s)^n / n!.<br />

Proof. It is obvious for n = 1. Assume the formula holds for n = k − 1. Then<br />
∫_s^t dt_1 ⋯ ∫_s^{t_{k−1}} dt_k = ∫_s^t 1/(k−1)! (t_1 − s)^{k−1} dt_1<br />
= 1/(k−1)! ∫_0^{t−s} τ^{k−1} dτ<br />
= (t − s)^k / k!.<br />
This proves the claim by induction.<br />
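The simplex-volume formula in the claim can be spot-checked numerically; the sketch below (the interval, grid size and orders n are arbitrary choices) evaluates the iterated integral by a midpoint rule:<br />

```python
# Iterate int_s^t dt1 int_s^{t1} dt2 ... int_s^{t_{n-1}} dtn with the
# midpoint rule; the claim says the exact value is (t - s)^n / n!.

def simplex_volume(n, s, t, steps=60):
    if n == 0:
        return 1.0
    h = (t - s) / steps
    total = 0.0
    for i in range(steps):
        t1 = s + (i + 0.5) * h        # midpoint of the i-th subinterval
        total += simplex_volume(n - 1, s, t1, steps) * h
    return total
```

For n = 1, 2, 3 on [0, 2] this reproduces 2, 2²/2! and 2³/3! to within the quadrature error.<br />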

It follows that the series (3.2) converges in operator norm, since each term is bounded by<br />
‖∫_s^t dt_1 ⋯ ∫_s^{t_{n−1}} dt_n Ṽ(t_1) ⋯ Ṽ(t_n)‖ ≤ 1/n! (t − s)^n (sup_{u∈[s,t]} ‖Ṽ(u)‖)^n.<br />

If the interaction decreases for large times such that<br />
∫_{−∞}^{+∞} ‖V(s)‖ ds < ∞,<br />
then the S-matrix is given by<br />
S = Σ_{n=0}^∞ (−i)^n ∫_{−∞}^{+∞} dt_1 ∫_{−∞}^{t_1} dt_2 ⋯ ∫_{−∞}^{t_{n−1}} dt_n Ṽ(t_1) ⋯ Ṽ(t_n),<br />
which is also norm-convergent.<br />
³ See [8] page 326.<br />
21<br />


Chapter 4<br />

The Mathematical Setting of<br />

QFT<br />

4.1 The Fock Space<br />

Let H^n = H ⊗ ⋯ ⊗ H be the n-fold tensor product of the single-particle Hilbert space H. Remembering that identical particles must obey either Bose or Fermi statistics¹, we symmetrize the states φ_n ∈ H^n,<br />
S_n^+ φ_n = 1/n! Σ_π φ_n(x_{π(1)}, …, x_{π(n)}),<br />
in the case of bosons and antisymmetrize,<br />
S_n^− φ_n = 1/n! Σ_π (−1)^π φ_n(x_{π(1)}, …, x_{π(n)}),<br />

in the case of fermions. We then form a Hilbert space from the direct sum<br />

of tensor products.<br />
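The symmetrizers S_n^± can be tried out on toy wave functions with finitely many basis indices. The dict representation below is an illustrative assumption, not anything from the text; the checks are that S_n^± are projections and that S_n^− produces antisymmetric amplitudes:<br />

```python
from itertools import permutations
from math import factorial

def parity(pi):
    """(-1)^pi via the cycle decomposition of the permutation tuple pi."""
    seen, sign = [False] * len(pi), 1
    for i in range(len(pi)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, pi[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def symmetrize(phi, n, sign=+1):
    """S_n^+ (sign=+1) or S_n^- (sign=-1) applied to phi, a dict mapping
    index n-tuples (basis labels of H^n) to amplitudes."""
    out = {}
    for idx, amp in phi.items():
        for pi in permutations(range(n)):
            new = tuple(idx[pi[k]] for k in range(n))
            w = amp / factorial(n) * (parity(pi) if sign < 0 else 1)
            out[new] = out.get(new, 0.0) + w
    return {k: v for k, v in out.items() if abs(v) > 1e-12}
```

Applying `symmetrize` twice gives the same result as applying it once (projection property), and antisymmetrizing a state with a repeated index yields zero, as it must for fermions.<br />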

Definition 4.1. The Fock space is defined as<br />
F^± = ⊕_{n=0}^∞ S_n^± H^n,<br />
with H^0 := {λ|0⟩ | λ ∈ C}, where |0⟩ is called the vacuum.<br />

It is easy to prove that F^± is indeed a Hilbert space; this follows from the continuity of the inner products. We simply state it here.<br />
Proposition 4.2. The Fock space F^± is a Hilbert space with the inner product<br />
⟨Φ, Ψ⟩ = Σ_{n=0}^∞ ⟨Φ_n, Ψ_n⟩_n,<br />
where ⟨·,·⟩_n is the inner product on H^n.<br />

1 See [8] chapter 6<br />

22


We cite Corollary 1.3 of [6] here².<br />
Corollary 4.3. Any (bounded) operator A on F can be expressed as<br />
A = A^(−) + A^(+),<br />
where A^(−) contains annihilation operators only and A^(+) creation operators only.<br />

Definition 4.4. We say that an operator A on F is normally ordered if all annihilation operators are placed to the right of all creation operators.<br />

The normal ordering of A is expressed by : A :.<br />

In the light of Corollary 4.3 we may define.<br />

Definition 4.5. A contraction between two field operators A and B is<br />

defined by<br />

|AB| := [A^(−), B^(+)]_±,<br />

where [·, ·]± is the commutator in the case of Bose fields and<br />

anti-commutator in the case of Fermi fields.<br />

The following theorem tells us how to order field operators normally. We<br />

state it without its proof. 3<br />

Theorem 4.6 (Theorem of Wick). A product of n field operators can be normally ordered as follows:<br />
A_1 A_2 ⋯ A_n = :A_1 A_2 ⋯ A_n: + :|A_1 A_2| A_3 ⋯ A_n: + permutations<br />
+ :|A_1 A_2| |A_3 A_4| A_5 ⋯ A_n: + permutations + ⋯,<br />
where the sum contains all normal products with all possible pairings of contractions.<br />


We assume the contractions are complex numbers, which can therefore be taken out of the normal products, though we have to remember to take the sign into account in the case of Fermi operators.<br />
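For a single Bose mode the content of the contraction is the c-number [a^(−), a^(+)] = [a, a†] = 1, i.e. a a† = :a a†: + 1. This can be seen in a truncated matrix representation (a sketch; the cutoff dimension N is an arbitrary assumption, and the identity necessarily fails in the last row and column of the truncation):<br />

```python
from math import sqrt

N = 8  # truncation dimension (an arbitrary cutoff)

a = [[0.0] * N for _ in range(N)]      # annihilation: a|n> = sqrt(n)|n-1>
for n in range(1, N):
    a[n - 1][n] = sqrt(n)
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # creation, a^T

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

lhs = mul(a, adag)       # a a-dagger
normal = mul(adag, a)    # the normally ordered part :a a-dagger: = a-dagger a
# Wick for two factors: a a-dagger = :a a-dagger: + contraction, contraction = 1
```

Below the cutoff the matrices satisfy lhs = normal + I exactly; in the last diagonal entry the truncation artifact appears.<br />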

In QED there exist only three contractions:<br />
|ψ_a(x) ψ̄_b(y)| := {ψ_a^(−)(x), ψ̄_b^(+)(y)} =: (1/i) S_{ab}^(+)(x − y), (4.1)<br />
|ψ̄_a(x) ψ_b(y)| := {ψ̄_a^(−)(x), ψ_b^(+)(y)} =: (1/i) S_{ba}^(−)(y − x), (4.2)<br />
|A_µ(x) A_ν(y)| := [A_µ^(−)(x), A_ν^(+)(y)] = g_{µν} i D_0^(+)(x − y), (4.3)<br />

2 I have taken the liberty of reformulating the corollary such that its meaning will be<br />

more apparent for my application. I have not altered the meaning of it, though.<br />

3 See [6]<br />

23


where ψ is the time-dependent Dirac field⁴ and ψ̄ = ψ†γ^0 is the Dirac adjoint. The gamma matrices are<br />

γ^µ = ( 0 σ^µ ; σ̄^µ 0 ),<br />
with σ^µ and σ̄^µ given by (2.6) and (2.7). We here regard (4.1), (4.2) and (4.3) as definitions of the operators S = S^(−) + S^(+) and D_0^(+), with A_µ given by [6] page 148.<br />
⁴ See [6] page 83.<br />
24<br />


4.2 The Wightman Axioms<br />

We will treat a quantum field theory as a tuple<br />

(H, U, φ, D, |0〉),<br />

where H is a separable Hilbert space, U a unitary representation of the proper Poincaré group P↑₊, φ field operators, D a dense subspace of H, and |0⟩ the vacuum state.<br />

We expect the tuple to satisfy the Wightman axioms listed on the next page.⁵<br />

We will not go into the deeper meaning of the axioms. Instead we note the properties most important for our use. We should note that the fields φ are assumed to be operator-valued distributions, which are well-defined on a dense domain D ⊂ H.<br />
In the construction of the scattering matrix, however, it suffices to regard the dense domain D_0 ⊂ D ⊂ H given by<br />
D_0 = span{φ(g_1) ⋯ φ(g_n)|0⟩ | g_1, …, g_n ∈ S, n ∈ N};<br />
that D_0 is contained in D follows from Axiom 4.<br />

5 We cite them from [9] page 103 − 104.<br />

25


The Wightman Axioms.<br />

Axiom 1 (quantum field) The operators φ1(f), . . . , φn(f) are given<br />

for each C ∞ -function f with compact support on the Minkowski space<br />

R 4 . Each φj(f) and its Hermitian conjugate operator φj(f) ∗ are defined<br />

at least on a common dense linear subset D of the Hilbert space H and<br />

D satisfies<br />

φj(f)D ⊂ D, φj(f) ∗ D ⊂ D,<br />

for any f and j = 1, …, n. For any Φ, Ψ ∈ D,<br />
f ↦ (Φ, φ_j(f)Ψ)<br />
is a complex-valued distribution.<br />

Axiom 2 (relativistic symmetry) On H there exists a unitary representation U(a, A) of P̃↑₊ (a ∈ R⁴, A ∈ SL(2,C)) satisfying U(a, A)D = D (invariance of the common domain of the fields) and<br />
U(a, A) φ_j(f) U(a, A)* = S(A^{−1})_{jk} φ_k(f_{(a,A)}),<br />
f_{(a,A)}(x) = f(Λ(A)^{−1}(x − a)),<br />
where the matrix (S(A)_{jk}) is an n-dimensional representation of A ∈ SL(2,C).<br />

Axiom 3 (local commutativity) If the support of f and g is spacelike<br />

separated, then for any vector Φ ∈ D<br />

[φj(f) (∗) , φk(g) (∗) ]±Φ = 0,<br />

where (∗) indicates that the equation holds for any choice.<br />

Axiom 4 (vacuum state) There exists a vector |0〉 in D satisfying the<br />

following conditions:<br />

(i) U(a, A)|0⟩ = |0⟩ (invariance).<br />
(ii) The set of all vectors obtained by acting with an arbitrary polynomial P of the fields on |0⟩ is dense in H (cyclicity).<br />
(iii) The spectrum of the translation group U(a, 1) on |0⟩^⊥ is contained in<br />
V_m = {p | (p, p) ≥ m², p⁰ > 0} (m > 0)<br />
(spectrum condition).<br />

26


Chapter 5<br />

The Method of Epstein and<br />

Glaser<br />

5.1 Introduction<br />

We want to express the S-matrix as a formal power series in R[[λ]], where<br />

λ is the coupling constant λ = e, which is the unit of charge and R is the<br />

ring<br />

R = {Υ : D0 → D0|Υ is linear}.<br />

We note that we are not concerned about the convergence of the series and<br />

it might not have any convergence radius at all.<br />

We begin from the expression<br />
S(g) = 1 + Σ_{n=1}^∞ λ^n/n! ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g(x_1) ⋯ g(x_n) =: 1 + T.<br />

We shall usually omit the λ in the notation. Further, we may assume that T_n(x_1, …, x_n) is symmetric in x_1, …, x_n; otherwise we can always symmetrize it. This is easy to see for n = 2, where given T_2(x_1, x_2) we can choose the symmetric map (T_2(x_1, x_2) + T_2(x_2, x_1))/2. In the same way we can choose a symmetric map for general n as<br />
(T_n(x_1, x_2, …, x_n) + T_n(x_2, x_1, x_3, …, x_n) + ⋯ + T_n(x_n, x_{n−1}, …, x_1)) / n!.<br />

Since T_n(x_1, …, x_n) is symmetric we may occasionally use the shorthand notation T_n(x), where x = {x_j ∈ M | j = 1, …, n} is an unordered set.<br />

Along with the Wightman axioms we have to assume that each term of the<br />

S-matrix is well-defined.<br />

27


Axiom 0 (Well-definedness) Let g_1, …, g_n ∈ S(R⁴). Then<br />
⟨T_n, g_1 ⊗ ⋯ ⊗ g_n⟩ : D_0 → D_0<br />
is a well-defined operator-valued distribution, with<br />
⟨T_n, g_1 ⊗ ⋯ ⊗ g_n⟩ = ∫ dx_1 ⋯ dx_n T_n(x_1, …, x_n) g_1(x_1) ⋯ g_n(x_n).<br />

Note that the S-matrix and the T_n are operator-valued distributions in the sense that if φ ∈ D_0, then<br />
S(g)φ = (1 + Σ_{n=1}^∞ 1/n! ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g(x_1) ⋯ g(x_n)) φ<br />
= φ + Σ_{n=1}^∞ 1/n! ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g(x_1) ⋯ g(x_n) φ.<br />

We can express the inverse of S(g) by a similar perturbation series,<br />
S(g)^{−1} = 1 + Σ_{n=1}^∞ 1/n! ∫ d⁴x_1 ⋯ d⁴x_n T̃_n(x_1, …, x_n) g(x_1) ⋯ g(x_n)<br />
= (1 + T)^{−1} = 1 + Σ_{r=1}^∞ (−T)^r,<br />
using Theorem 1.7. From<br />
Σ_{n=1}^∞ λ^n/n! ∫ d⁴x_1 ⋯ d⁴x_n T̃_n(x_1, …, x_n) g(x_1) ⋯ g(x_n)<br />
= Σ_{r=1}^∞ ( − Σ_{n=1}^∞ λ^n/n! ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g(x_1) ⋯ g(x_n) )^r<br />
we see that<br />
T̃_n(X) = Σ_{r=1}^n (−1)^r Σ_{P_r} T_{n_1}(X_1) ⋯ T_{n_r}(X_r), (5.1)<br />
where P_r is the set of all partitions of X into r disjoint subsets:<br />
X = X_1 ∪ … ∪ X_r, |X| = n, X_j ≠ ∅, |X_j| = n_j.<br />
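In a commuting toy model where each T_n is just a number t_n depending only on |X|, formula (5.1) and the vanishing identity (5.3) below can be brute-force checked for small n (a sketch; I read the sum in (5.1), as it arises from expanding (1 + T)^{−1} = 1 + Σ_r (−T)^r, as running over ordered tuples of disjoint non-empty blocks, and the values t_n are arbitrary):<br />

```python
from itertools import combinations
from math import prod

def ordered_partitions(X):
    """Yield all ordered tuples (X1, ..., Xr) of disjoint non-empty
    subsets whose union is the frozenset X."""
    if not X:
        yield ()
        return
    elts = sorted(X)
    for k in range(1, len(elts) + 1):
        for block in combinations(elts, k):
            X1 = frozenset(block)
            for tail in ordered_partitions(X - X1):
                yield (X1,) + tail

def T(X, t):        # scalar stand-in: T_n(X) depends only on |X|, T_0 = 1
    return t[len(X)]

def Ttilde(X, t):   # formula (5.1); gives 1 for X = {} since only () occurs
    return sum((-1) ** len(blocks) * prod(t[len(B)] for B in blocks)
               for blocks in ordered_partitions(X))

def identity_5_3(n, t):
    """Sum over all partitions {1..n} = X u X-tilde of T(X) Ttilde(X-tilde)."""
    full = frozenset(range(n))
    return sum(T(frozenset(sub), t) * Ttilde(full - frozenset(sub), t)
               for k in range(n + 1)
               for sub in combinations(sorted(full), k))
```

For every n ≥ 1 the convolution sum vanishes, as S(g)S(g)^{−1} = 1 demands.<br />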

28


Further,<br />
1 = S(g) S(g)^{−1}<br />
= (1 + Σ_{n_1=1}^∞ λ^{n_1}/n_1! ∫ d⁴x_1 ⋯ d⁴x_{n_1} T_{n_1}(x_1, …, x_{n_1}) g(x_1) ⋯ g(x_{n_1}))<br />
× (1 + Σ_{n_2=1}^∞ λ^{n_2}/n_2! ∫ d⁴x̃_1 ⋯ d⁴x̃_{n_2} T̃_{n_2}(x̃_1, …, x̃_{n_2}) g(x̃_1) ⋯ g(x̃_{n_2}))<br />
= 1 + Σ_{n=1}^∞ λ^n Σ_{n_1+n_2=n} 1/n! ∫ d⁴x_1 ⋯ d⁴x_n T_{n_1}(X) T̃_{n_2}(X̃) g(x_1) ⋯ g(x_n), (5.2)<br />
where of course |X| = n_1 and T_0(∅) = 1 = T̃_0(∅). By (5.2),<br />
0 = Σ_{n=1}^∞ λ^n Σ_{n_1+n_2=n} 1/n! ∫ d⁴x_1 ⋯ d⁴x_n T_{n_1}(X) T̃_{n_2}(X̃) g(x_1) ⋯ g(x_n).<br />
By the definition of the metric on formal power series each power of λ must vanish separately. That is,<br />
Σ_{P_2^0} T_{n_1}(X) T̃_{n−n_1}(X̃) g(x_1) ⋯ g(x_n) = 0, (5.3)<br />
where P_2^0 is the set of all partitions {x_1, …, x_n} = X ∪ X̃, the empty set included.<br />

In constructing the scattering matrix, we should be aware of the properties<br />

we think it should have. We list such expected properties on the next page.<br />

29


Expected properties of the S-matrix.<br />

1 Unitarity, i.e. S(g)^{−1} = S(g)*. In terms of the T_n's this reads<br />
T̃_n(x_1, …, x_n) = T_n(x_1, …, x_n)*.<br />
(Strictly speaking this condition is not quite sufficient, cf. [6] page 162, but that will not be important for this project.)<br />

2 Translational invariance. Let U(a, 1) be the unitary translation operator in the Fock space F, that is,<br />
(U(a, 1)Φ)_n(x_1, …, x_n) = Φ_n(x_1 + a, …, x_n + a).<br />
Then we require<br />
U(a, 1) S(g) U(a, 1)^{−1} = S(g_a), where g_a(x) = g(x − a).<br />
Hence for the T_n's (and of course the T̃_n's as well) we get<br />
U(a, 1) T_n(x_1, …, x_n) U(a, 1)^{−1} = T_n(x_1 + a, …, x_n + a).<br />

3 Lorentz covariance. Letting U(0, Λ) be the representation of L↑₊, we require<br />
U(0, Λ) S(g) U(0, Λ)^{−1} = S(g_Λ), where g_Λ(x) = g(Λ^{−1}x).<br />
Note that 2 and 3 together form a condition of Poincaré invariance.<br />

4 Causality. Suppose there exists a reference frame in which the test functions g_1 and g_2 have disjoint supports in time, with supp g_1 < supp g_2; that is, for some r ∈ R,<br />
supp g_1 ⊂ {x ∈ M | x⁰ ∈ (−∞, r)} and supp g_2 ⊂ {x ∈ M | x⁰ ∈ (r, ∞)}.<br />
We require that<br />
S(g_1 + g_2) = S(g_2) S(g_1). (5.4)<br />
This expresses the fact that what happens at times t < r is not influenced by what happens at later times t > r.<br />

30


Of these properties, causality plays a key role in the method of Epstein and Glaser. Equation (5.4) leads to the condition stated in the following theorem.<br />
Theorem 5.1. If {x_1, …, x_m} > {x_{m+1}, …, x_n}, then<br />
T_n(x_1, …, x_n) = T_m(x_1, …, x_m) T_{n−m}(x_{m+1}, …, x_n).<br />
Before proving this result we first note that if (5.4) is satisfied, then<br />
S(g_1 + g_2) = Σ_{n=0}^∞ λ^n/n! ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) (g_1(x_1) + g_2(x_1)) ⋯ (g_1(x_n) + g_2(x_n))<br />
= Σ_{n=0}^∞ Σ_{m=0}^n λ^n 1/(m!(n−m)!) ∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g_2(x_1) ⋯ g_2(x_m) g_1(x_{m+1}) ⋯ g_1(x_n),<br />
as there are 2^n terms in the product of the test functions, n!/(m!(n−m)!) ways to pick g_2 exactly m times, and T_n is symmetric. On the other hand,<br />
S(g_2) S(g_1) = (Σ_{m=0}^∞ λ^m/m! ∫ d⁴x_1 ⋯ d⁴x_m T_m(x_1, …, x_m) g_2(x_1) ⋯ g_2(x_m)) × (Σ_{k=0}^∞ λ^k/k! ∫ d⁴x̃_1 ⋯ d⁴x̃_k T_k(x̃_1, …, x̃_k) g_1(x̃_1) ⋯ g_1(x̃_k))<br />
= Σ_{m=0}^∞ Σ_{k=0}^∞ λ^{m+k} 1/(m!k!) ∫ d⁴x_1 ⋯ d⁴x_m d⁴x̃_1 ⋯ d⁴x̃_k T_m(x_1, …, x_m) T_k(x̃_1, …, x̃_k) g_2(x_1) ⋯ g_2(x_m) g_1(x̃_1) ⋯ g_1(x̃_k)<br />
= Σ_{n=0}^∞ Σ_{m=0}^n λ^n 1/(m!(n−m)!) ∫ dx_1 ⋯ dx_n T_m(x_1, …, x_m) T_{n−m}(x_{m+1}, …, x_n) g_2(x_1) ⋯ g_2(x_m) g_1(x_{m+1}) ⋯ g_1(x_n).<br />

From the definition of the metric on formal power series we know that<br />

31


equal powers must have equal coefficients. We conclude that for every m, n,<br />
∫ d⁴x_1 ⋯ d⁴x_n T_n(x_1, …, x_n) g_2(x_1) ⋯ g_2(x_m) g_1(x_{m+1}) ⋯ g_1(x_n)<br />
= ∫ d⁴x_1 ⋯ d⁴x_n T_m(x_1, …, x_m) T_{n−m}(x_{m+1}, …, x_n) g_2(x_1) ⋯ g_2(x_m) g_1(x_{m+1}) ⋯ g_1(x_n). (5.5)<br />
Further note that from the calculus of the tensor product,<br />
⟨T_2, (f_1 + f_2) ⊗ (f_1 + f_2)⟩ = ⟨T_2, f_1 ⊗ f_1⟩ + ⟨T_2, f_1 ⊗ f_2⟩ + ⟨T_2, f_2 ⊗ f_1⟩ + ⟨T_2, f_2 ⊗ f_2⟩,<br />
and since T_2 is symmetric,<br />
⟨T_2, f_1 ⊗ f_2⟩ = 1/2 (⟨T_2, (f_1 + f_2) ⊗ (f_1 + f_2)⟩ − ⟨T_2, f_1 ⊗ f_1⟩ − ⟨T_2, f_2 ⊗ f_2⟩). (5.6)<br />

Now, for transparency and to get a good idea of how to prove Theorem 5.1, let us prove it for n = 3 and m = 2. That is:<br />
Claim. ⟨T_3, f_1 ⊗ f_2 ⊗ f_3⟩ = ⟨T_2, f_1 ⊗ f_2⟩⟨T_1, f_3⟩.<br />
Proof. By (5.6),<br />
⟨T_2, f_1 ⊗ f_2⟩ = 1/2 (⟨T_2, (f_1 + f_2) ⊗ (f_1 + f_2)⟩ − ⟨T_2, f_1 ⊗ f_1⟩ − ⟨T_2, f_2 ⊗ f_2⟩).<br />
Hence by (5.5),<br />
⟨T_2, f_1 ⊗ f_2⟩⟨T_1, f_3⟩ = 1/2 (⟨T_3, (f_1 + f_2) ⊗ (f_1 + f_2) ⊗ f_3⟩ − ⟨T_3, f_1 ⊗ f_1 ⊗ f_3⟩ − ⟨T_3, f_2 ⊗ f_2 ⊗ f_3⟩)<br />
= 1/2 (⟨T_3, f_1 ⊗ f_1 ⊗ f_3⟩ + ⟨T_3, f_2 ⊗ f_2 ⊗ f_3⟩ + 2⟨T_3, f_1 ⊗ f_2 ⊗ f_3⟩ − ⟨T_3, f_1 ⊗ f_1 ⊗ f_3⟩ − ⟨T_3, f_2 ⊗ f_2 ⊗ f_3⟩)<br />
= ⟨T_3, f_1 ⊗ f_2 ⊗ f_3⟩,<br />
which proves the claim.<br />
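The polarization step (5.6) is simply the recovery of a symmetric bilinear form from its diagonal values; a quick numerical illustration (the symmetric matrix M, standing in for ⟨T_2, · ⊗ ·⟩, is an arbitrary choice):<br />

```python
M = [[2.0, -1.0, 0.5],
     [-1.0, 3.0, 1.5],
     [0.5, 1.5, -2.0]]          # an arbitrary symmetric matrix

def B(f, g):
    """A symmetric bilinear form on R^3, a stand-in for <T2, f (x) g>."""
    return sum(M[i][j] * f[i] * g[j] for i in range(3) for j in range(3))

def polarized(f, g):
    """Right-hand side of (5.6): (B(f+g, f+g) - B(f, f) - B(g, g)) / 2."""
    s = [f[i] + g[i] for i in range(3)]
    return (B(s, s) - B(f, f) - B(g, g)) / 2
```

For symmetric B the polarized expression reproduces B(f, g) exactly; for a non-symmetric form it would instead give the symmetrized value, which is why the symmetry of T_n matters here.<br />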

Now before proving this for arbitrary n and m we need the following<br />

lemma.<br />

32


Lemma 5.2. The expansion of (x_1 + x_2 + ⋯ + x_n)^m contains<br />
C(m, k_1) C(m − k_1, k_2) ⋯ C(m − Σ_{i=1}^{n−1} k_i, k_n)<br />
terms of the form x_1^{k_1} ⋯ x_n^{k_n}.<br />
Proof. We note that the number of ways to pick x_1 exactly k_1 times from the m factors is given by the binomial coefficient C(m, k_1). That is, there must be C(m, k_1) terms of the form x_1^{k_1} (x_2 + ⋯ + x_n)^{m−k_1}. Now there are m − k_1 factors left, and thus C(m − k_1, k_2) ways to pick x_2 exactly k_2 times. Hence there are C(m, k_1) C(m − k_1, k_2) terms of the form x_1^{k_1} x_2^{k_2} (x_3 + ⋯ + x_n)^{m−k_1−k_2}. Continuing this way we end up with<br />
C(m, k_1) C(m − k_1, k_2) ⋯ C(m − Σ_{i=1}^{n−1} k_i, k_n)<br />
terms of the form x_1^{k_1} ⋯ x_n^{k_n}.<br />
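Lemma 5.2 can be cross-checked by brute force for small n and m (a sketch; the exponent tuples are arbitrary examples):<br />

```python
from itertools import product
from math import comb

def count_terms(n, m, ks):
    """Count choice-sequences in the expansion of (x1 + ... + xn)^m that
    produce the monomial x1^{k1} ... xn^{kn}."""
    return sum(1 for choice in product(range(n), repeat=m)
               if all(choice.count(i) == ks[i] for i in range(n)))

def binomial_product(m, ks):
    """The lemma's count: C(m, k1) C(m - k1, k2) ..."""
    left, out = m, 1
    for k in ks:
        out *= comb(left, k)
        left -= k
    return out
```

For instance, the monomial x1²x2x3 in (x1 + x2 + x3)⁴ occurs C(4,2)C(2,1)C(1,1) = 12 times, matching the direct count.<br />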

Proof of Theorem 5.1. First note that by the lemma,<br />
⟨T_m, (f_1 + ⋯ + f_m) ⊗ ⋯ ⊗ (f_1 + ⋯ + f_m)⟩ = Σ_{k_1+⋯+k_m=m} C(m, k_1) C(m − k_1, k_2) ⋯ ⟨T_m, f_1^{⊗k_1} ⊗ ⋯ ⊗ f_m^{⊗k_m}⟩,<br />
since T_m is symmetric. Hence, isolating the term with (k_1, …, k_m) = (1, …, 1), whose coefficient is m!,<br />
m! ⟨T_m, f_1 ⊗ ⋯ ⊗ f_m⟩ = ⟨T_m, (f_1 + ⋯ + f_m) ⊗ ⋯ ⊗ (f_1 + ⋯ + f_m)⟩ − Σ_{k_1+⋯+k_m=m, (k_1,…,k_m)≠(1,…,1)} C(m, k_1) C(m − k_1, k_2) ⋯ ⟨T_m, f_1^{⊗k_1} ⊗ ⋯ ⊗ f_m^{⊗k_m}⟩.<br />
Hence by (5.5), applied term by term,<br />
m! ⟨T_m, f_1 ⊗ ⋯ ⊗ f_m⟩⟨T_{n−m}, f_{m+1} ⊗ ⋯ ⊗ f_n⟩ = ⟨T_n, (f_1 + ⋯ + f_m) ⊗ ⋯ ⊗ (f_1 + ⋯ + f_m) ⊗ f_{m+1} ⊗ ⋯ ⊗ f_n⟩ − Σ_{k_1+⋯+k_m=m, (k_1,…,k_m)≠(1,…,1)} C(m, k_1) C(m − k_1, k_2) ⋯ ⟨T_n, f_1^{⊗k_1} ⊗ ⋯ ⊗ f_m^{⊗k_m} ⊗ f_{m+1} ⊗ ⋯ ⊗ f_n⟩<br />
= m! ⟨T_n, f_1 ⊗ f_2 ⊗ ⋯ ⊗ f_m ⊗ f_{m+1} ⊗ ⋯ ⊗ f_n⟩,<br />
where the last equality uses the same multinomial expansion for T_n in its first m slots. Dividing by m! gives the theorem, as wanted.<br />
33<br />


We see that the T_n's are time-ordered products (hence the letter T). Similarly, the causality condition for S(g)^{−1} is that if supp g_1 < supp g_2, then<br />
S(g_1 + g_2)^{−1} = S(g_1)^{−1} S(g_2)^{−1}.<br />
This again implies that<br />
T̃_n(x_1, …, x_n) = T̃_m(x_1, …, x_m) T̃_{n−m}(x_{m+1}, …, x_n),<br />
if {x_1, …, x_m} < {x_{m+1}, …, x_n}.<br />

We are now ready to sketch the game plan for the inductive construction<br />

of the time ordered products Tn.<br />

34


The Inductive Construction of T_n(x_1, …, x_n).<br />
1. Assume T_m(x_1, …, x_m) for 1 ≤ m ≤ n − 1 are known.<br />
2. Construct advanced and retarded distributions as follows:<br />
A′_n(x_1, …, x_n) = Σ_{P_2} T̃_{n_1}(X) T_{n−n_1}(Y, x_n), (5.7)<br />
R′_n(x_1, …, x_n) = Σ_{P_2} T_{n−n_1}(Y, x_n) T̃_{n_1}(X), (5.8)<br />
where P_2 is the set of all partitions {x_1, …, x_{n−1}} = X ∪ Y with X ≠ ∅ and n_1 = |X| ≥ 1.<br />
3. Include the empty set ∅ as follows:<br />
A_n(x_1, …, x_n) = Σ_{P_2^0} T̃_{n_1}(X) T_{n−n_1}(Y, x_n) = A′_n(x_1, …, x_n) + T_n(x_1, …, x_n), (5.9)<br />
R_n(x_1, …, x_n) = Σ_{P_2^0} T_{n−n_1}(Y, x_n) T̃_{n_1}(X) = R′_n(x_1, …, x_n) + T_n(x_1, …, x_n). (5.10)<br />
4. The difference is<br />
D_n = R′_n − A′_n = R_n − A_n. (5.11)<br />
5. Now<br />
T_n = R_n − R′_n = A_n − A′_n. (5.12)<br />
We thus need to determine either R_n or A_n. This will be done by investigating the support properties of the distributions.<br />

35


5.2 Example - In the Hilbert Space Setting of<br />

Quantum Mechanics<br />

Before going on we illustrate the significance of the scheme in the box above by applying it in the more transparent Hilbert-space setting of quantum mechanics, where we substitute functions for distributions and replace x by t.<br />

Now, assuming T_1(t) is given, we want to construct T_2(t_1, t_2). We follow the rules given in the box on the previous page and construct the advanced and retarded functions<br />
A′_2(t_1, t_2) = T̃_1(t_1) T_1(t_2) = −T_1(t_1) T_1(t_2) (5.13)<br />
and<br />
R′_2(t_1, t_2) = T_1(t_2) T̃_1(t_1) = −T_1(t_2) T_1(t_1), (5.14)<br />
where the last equalities follow from (5.1). Then<br />
A_2(t_1, t_2) = A′_2(t_1, t_2) + T_2(t_1, t_2) (5.15)<br />
and<br />
R_2(t_1, t_2) = R′_2(t_1, t_2) + T_2(t_1, t_2).<br />
From Theorem 5.1 we know that if t_i > t_j, then<br />
T_2(t_i, t_j) = T_1(t_i) T_1(t_j).<br />
Hence A_2 vanishes for t_1 > t_2 and R_2 vanishes for t_1 < t_2.<br />

Now<br />
D_2 = R_2 − A_2 = R′_2 − A′_2<br />
is known from equations (5.13) and (5.14); the T_2 terms cancel since T_2 is symmetric.<br />

Finally, R_2 and A_2 can be determined by their support properties. Let Θ be the Heaviside function<br />
Θ(t) = 1 for t ≥ 0, Θ(t) = 0 for t < 0.<br />
Since R_2 vanishes for t_1 < t_2, equation (5.11) there gives A_2 = −D_2, so that<br />
A_2(t_1, t_2) = −Θ(t_2 − t_1) D_2(t_1, t_2)<br />
= Θ(t_2 − t_1)(T_1(t_2) T_1(t_1) − T_1(t_1) T_1(t_2)).<br />
Hence A_2 is uniquely determined up to its value at t_1 = t_2.<br />

36


Now from equation (5.15),<br />
T_2(t_1, t_2) = A_2(t_1, t_2) − A′_2(t_1, t_2)<br />
= Θ(t_2 − t_1)(T_1(t_2) T_1(t_1) − T_1(t_1) T_1(t_2)) + T_1(t_1) T_1(t_2)<br />
= Θ(t_1 − t_2) T_1(t_1) T_1(t_2) + Θ(t_2 − t_1) T_1(t_2) T_1(t_1)<br />
=: T{T_1(t_1) T_1(t_2)},<br />
the time-ordered product.<br />
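The outcome T_2 = T{T_1(t_1)T_1(t_2)} can be exercised with a noncommuting stand-in for T_1(t), e.g. t-dependent 2×2 matrices (an arbitrary choice), checking the time-ordered factorization T_2(t_1, t_2) = T_1(t_max) T_1(t_min) away from the ambiguous coincidence point t_1 = t_2:<br />

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T1(t):
    return [[1.0, t], [t * t, 1.0]]   # arbitrary noncommuting stand-in

def theta(t):                          # Heaviside function as in the text
    return 1.0 if t >= 0 else 0.0

def T2(t1, t2):
    """Time-ordered product T{T1(t1) T1(t2)} (double-counts at t1 == t2,
    reflecting the ambiguity at coinciding times noted in the text)."""
    a, b = mul(T1(t1), T1(t2)), mul(T1(t2), T1(t1))
    return [[theta(t1 - t2) * a[i][j] + theta(t2 - t1) * b[i][j]
             for j in range(2)] for i in range(2)]
```

By construction T2 is symmetric in its arguments and always puts the later-time factor on the left.<br />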

This is precisely the structure we met in the Dyson series for the S-matrix in Chapter 3, where the n-th order term is a time-ordered product of the interactions Ṽ(t_1), …, Ṽ(t_n).<br />

37


5.3 Splitting of Distributions<br />

In our previous example we saw that the splitting of D_n into advanced and retarded parts could be done by means of the Heaviside function. This was because we were dealing with operators in Hilbert space. But in our setting we are considering distributions, which cannot simply be multiplied by discontinuous step functions. In this section we consider the<br />

properties of the advanced and retarded distributions.<br />

First a technicality.<br />

Theorem 5.3. Let Y = P ∪ Q, P ≠ ∅, P ∩ Q = ∅, |Y| = n_1 ≤ n − 1, |Q| = n_2 and x ∉ Y.<br />
If {Q, x} > P, then we have<br />
R′_{n_1+1}(Y, x) = −T_{n_2+1}(Q, x) T_{n_1−n_2}(P). (5.16)<br />
If {Q, x} < P, then we have<br />
A′_{n_1+1}(Y, x) = −T_{n_1−n_2}(P) T_{n_2+1}(Q, x). (5.17)<br />

Proof. Equation (5.16) is proved in [6]; we therefore only prove equation (5.17). By definition,<br />
A′_{n_1+1}(Y, x) = Σ_{P_2} T̃_{n_3}(X) T_{n_1+1−n_3}(Y′, x),<br />
where P_2 is the set of all partitions Y = X ∪ Y′ such that X ≠ ∅. Now let<br />
Y′ = Y_1 ∪ Y_2, where Y_1 = Y′ ∩ P and Y_2 = Y′ ∩ Q,<br />
and<br />
X = X_1 ∪ X_2, where X_1 = X ∩ P and X_2 = X ∩ Q.<br />
Since<br />
Q ∪ {x} < P, Y_2 < Y_1, X_2 < X_1 and Y_1 > Y_2 ∪ {x},<br />
causality implies<br />
A′_{n_1+1}(Y, x) = Σ_{P_2} T̃_{n_3}(X) T_{n_1+1−n_3}(Y′, x) = Σ_{P_4^0} T̃(X_2) T̃(X_1) T(Y_1) T(Y_2, x), (5.18)<br />
where subscripts are omitted for simplicity. P_4^0 is the set of all partitions of the form<br />
P = X_1 ∪ Y_1, Q = X_2 ∪ Y_2 with X_1 ∪ X_2 ≠ ∅.<br />
38


However, for X_2 ≠ ∅ the partition X_1 = ∅ is also allowed, so for fixed X_2 ≠ ∅ the inner sum runs over all partitions of P, and from equation (5.3) it follows that<br />
Σ_{P_2^0} T̃(X_1) T(Y_1) = 0, (5.19)<br />
the sum being over all partitions P = X_1 ∪ Y_1, the empty X_1 included. As a consequence, in (5.18) only terms with X_2 = ∅, and hence Y_2 = Q, remain. That is, remembering that T_0(∅) = 1 = T̃_0(∅),<br />
A′_{n_1+1}(Y, x) = (Σ_{P_2} T̃(X_1) T(Y_1)) T(Q, x),<br />
where P_2 is the set of all partitions P = X_1 ∪ Y_1 with X_1 ≠ ∅. Including ∅ would give 0 according to (5.19); hence<br />
A′_{n_1+1}(Y, x) = (Σ_{P_2^0} T̃(X_1) T(Y_1) − T̃_0(∅) T(P)) T(Q, x) = (0 − T(P)) T(Q, x) = −T(P) T(Q, x),<br />
as wanted.<br />

Corollary 5.4 (Support property 1). The supports of A and R are respectively advanced and retarded, that is,<br />
supp A_{n_1+1}(Y, x) ⊂ Γ⁻_{n_1+1}(x)<br />
and<br />
supp R_{n_1+1}(Y, x) ⊂ Γ⁺_{n_1+1}(x).<br />
Proof. If {Q, x} < P, equation (5.9) together with (5.17) and Theorem 5.1 gives<br />
A_{n_1+1}(Y, x) = A′_{n_1+1}(Y, x) + T_{n_1+1}(Y, x)<br />
= −T_{n_1−n_2}(P) T_{n_2+1}(Q, x) + T_{n_1+1}(P ∪ Q, x)<br />
= −T_{n_1−n_2}(P) T_{n_2+1}(Q, x) + T_{n_1−n_2}(P) T_{n_2+1}(Q, x) = 0.<br />
The statement for R follows in the same way from (5.10) and (5.16).<br />
The corollary explains why A and R are called advanced and retarded distributions: A_{n+1}(X, x) vanishes if there exists a point x′ ∈ X with x′ > x, i.e. if such a P exists, while R_{n+1}(X, x) vanishes if there exists a point x′ ∈ X with x′ < x.<br />

Theorem 5.5 (Support property 2). If n ≥ 3, then<br />
supp D_n(x_1, …, x_{n−1}, x_n) ⊆ Γ⁻_n(x_n) ∪ Γ⁺_n(x_n). (5.20)<br />

39


A proof of this is found in [6] page 167. For n ≤ 2 the property has to be verified explicitly.<br />
It is here that the difficult part of the method of Epstein and Glaser described in [6] lies: in decomposing<br />
D_n(x_1, …, x_n) = R_n(x_1, …, x_n) − A_n(x_1, …, x_n)<br />
such that<br />
supp R_n ⊆ Γ⁺_{n−1}(x_n) and supp A_n ⊆ Γ⁻_{n−1}(x_n).<br />

40


Chapter 6<br />

Regularly Varying Functions<br />

Before going on with the splitting process we need to consider some properties of a class of functions which will become important.<br />
Definition 6.1. A positive function ρ(t) is called regularly varying at t = 0 if it is measurable on (0, t_0] for some t_0 > 0, and there exists ω ∈ R such that the limit<br />
lim_{t→0} ρ(at)/ρ(t) = a^ω (6.1)<br />
exists for all a > 0. In this case ω is called the order of ρ. If ω = 0 the function is said to be slowly varying.<br />
Note that a regularly varying function can always be reduced to a slowly varying one: writing<br />
ρ(t) = t^ω r(t), (6.2)<br />
equation (6.1) gives<br />
lim_{t→0} r(at)/r(t) = 1 (6.3)<br />
for all a > 0.<br />
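A concrete family of examples: ρ(t) = t^ω (−ln t)^β is regularly varying of order ω for every β, the logarithmic factor being slowly varying. The sketch below (ω, β and a are arbitrary choices) probes the defining limit in log-space to avoid floating-point underflow at tiny t:<br />

```python
from math import exp, log

def log_rho(t, omega=1.5, beta=2.0):
    """log of rho(t) = t^omega * (-ln t)^beta for 0 < t < 1."""
    return omega * log(t) + beta * log(-log(t))

def ratio(a, t, omega=1.5, beta=2.0):
    """rho(a t) / rho(t), computed via logarithms."""
    return exp(log_rho(a * t, omega, beta) - log_rho(t, omega, beta))
```

As t → 0 the ratio approaches a^ω; for a = 2 and ω = 1.5 that limit is 2^{1.5} ≈ 2.828, and the approximation improves as t shrinks.<br />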

The following representation theorem will be important for our use.<br />
Theorem 6.2. Let r : (0, t_0] → R_+ be a slowly varying function. Then there exists some t_1 ∈ (0, t_0) such that<br />
r(t) = exp( η(t) + ∫_t^{t_1} h(s)/s ds )<br />
holds for all 0 < t < t_1. Here η is bounded and measurable on (0, t_1] and η(t) → c for t → 0 (|c| < ∞); further, h is continuous on (0, t_1] and h(t) → 0 for t → 0.<br />

Before proving the theorem we will need some lemmas.<br />

41


Lemma 6.3. Let f : [x_0, ∞) → R be a measurable function such that<br />
f(x + k) − f(x) → 0 for x → ∞ (6.4)<br />
for all k ∈ R.¹ Let I = [a, b] ⊂ R be a closed interval. Then<br />
sup_{k∈I} |f(x + k) − f(x)| → 0 for x → ∞.<br />
Proof. Let first I = [0, 1] and assume that<br />
sup_{k∈I} |f(x + k) − f(x)| does not tend to 0 for x → ∞. (6.5)<br />
Then there exist an ε > 0 and a sequence (x_n, k_n), with x_n → ∞ and k_n ∈ I, such that |f(x_n + k_n) − f(x_n)| ≥ ε for all n ∈ N.<br />
Now consider the sets<br />
U_n = {s ∈ [0, 2] | |f(x_m + s) − f(x_m)| < ε/2 for all m ≥ n}<br />
and<br />
V_n = {t ∈ [0, 2] | |f(x_m + k_m + t) − f(x_m + k_m)| < ε/2 for all m ≥ n},<br />
which are measurable, being countable intersections of preimages of open sets under measurable functions.<br />
By (6.4) these sets increase monotonically towards [0, 2]; hence, letting m denote the Lebesgue measure, there exists an N ∈ N such that m(U_N) > 3/2 and m(V_N) > 3/2.<br />
Consider the translate V′_N = V_N + k_N. Since m(V′_N) > 3/2 and U_N, V′_N ⊂ [0, 3], the two sets must intersect in some point k. Now since k ∈ U_N,<br />
|f(x_N + k) − f(x_N)| < ε/2,<br />
and since k − k_N ∈ V_N,<br />
|f(x_N + k) − f(x_N + k_N)| = |f(x_N + k_N + (k − k_N)) − f(x_N + k_N)| < ε/2.<br />
Hence<br />
|f(x_N + k_N) − f(x_N)| = |f(x_N + k) − f(x_N) − (f(x_N + k) − f(x_N + k_N))|<br />
≤ |f(x_N + k) − f(x_N)| + |f(x_N + k) − f(x_N + k_N)| < ε/2 + ε/2 = ε,<br />
contradicting the choice of the sequence (x_n, k_n), and hence (6.5). This proves the lemma for I = [0, 1].<br />
¹ For k < 0 we may only consider x large enough that f is defined. This is no problem, though, since we are only concerned with the limit x → ∞.<br />

42


Now we want to use this to prove the statement for an arbitrary interval [a, b]. Let<br />
g(x) := f((b − a)x).<br />
Then, for fixed a,<br />
f(x + k) − f(x) = f(x + a + (k − a)) − f(x + a) + f(x + a) − f(x)<br />
= g((x + a)/(b − a) + (k − a)/(b − a)) − g((x + a)/(b − a)) + f(x + a) − f(x)<br />
= g(y + h) − g(y) + f(x + a) − f(x), (6.6)<br />
where y and h are defined in the obvious way. Since k ∈ [a, b] we see that h ∈ [0, 1], and y → ∞ is equivalent to x → ∞. Now by (6.4), f(x + a) − f(x) → 0 for x → ∞, and by the proof above for I = [0, 1],<br />
sup_{h∈[0,1]} |g(y + h) − g(y)| → 0 for y → ∞.<br />
Hence by (6.6),<br />
sup_{k∈[a,b]} |f(x + k) − f(x)| → 0 for x → ∞,<br />
as claimed.<br />

Lemma 6.4. Let f(x) = ln r(e −x ), where r is the slowly varying function<br />

defined by (6.2). Then f satisfies (6.4) and there exists a constant x0 such<br />

that f is bounded on every interval [a, b] where x0 ≤ a.<br />

FIXME: x0 comes from lemma 6.3 and tells us what the domain of f is.<br />

Proof.<br />

lim (f(x + k) − f(x)) = lim<br />

x→∞ x→∞ (ln r(e−(x+k) ) − ln r(e x ))<br />

r(e<br />

= ln lim<br />

x→∞<br />

−(x+k) )<br />

= ln lim<br />

t→0<br />

= ln 1 = 0,<br />

r(e −x )<br />

since (6.3) holds for r(t). Therefore f satisfies (6.4).<br />

By Lemma 6.3 there exists a such that<br />

for all x ≥ a and for all k ∈ [0, 1].<br />

r(at)<br />

r(t) , substituting t = e−x and a = e −k<br />

|f(x + k) − f(x)| < 1 (6.7)<br />

43


We want to show by induction that for x ∈ [a + m − 1, a + m],<br />
|f(x)| ≤ |f(a)| + m. (6.8)<br />
For m = 1, let x ∈ [a, a + 1]; then we may write x = a + k for some<br />
k ∈ [0, 1]. Then by (6.7),<br />
|f(a + k)| = |f(a + k) − f(a) + f(a)| ≤ |f(a + k) − f(a)| + |f(a)|<br />
< |f(a)| + 1.<br />
For the inductive step, let x ∈ [a + n, a + n + 1]. Then there exists<br />
kn ∈ [0, 1] such that x = a + n + kn and<br />
|f(x)| = |f(a + n + kn)| = |f(a + n + kn) − f(a + n) + f(a + n)|<br />
≤ |f(a + n + kn) − f(a + n)| + |f(a + n)|<br />
< |f(a)| + n + 1<br />
by (6.7) and the induction hypothesis. This concludes the proof by induction.<br />
Now given b, choose m such that b ≤ a + m. Then (6.8) shows that<br />
|f(y)| ≤ |f(a)| + m for all y ∈ [a, a + m], so f is bounded on the interval [a, b].<br />
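The induction bound (6.8) can likewise be checked numerically for the same assumed example f(x) = ln x, for which any a ≥ 1 satisfies (6.7), since ln(x + k) − ln(x) = ln(1 + k/x) < 1 for k ∈ [0, 1]:<br />

```python
import math

# Illustration (assumed example: r(t) = ln(1/t), hence f(x) = ln x) of the
# induction bound (6.8): |f(x)| <= |f(a)| + m on [a + m - 1, a + m].
def f(x):
    return math.log(x)

a = 1.0
def bound_holds(m, steps=100):
    xs = [a + m - 1 + i / steps for i in range(steps + 1)]
    return all(abs(f(x)) <= abs(f(a)) + m for x in xs)

print(all(bound_holds(m) for m in range(1, 60)))
```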

Corollary 6.5. Let f(x) = ln r(e^{−x}). Then there exists a constant x0 such<br />
that f is integrable on every interval [a, b] with x0 ≤ a.<br />
Proof. This follows directly from the definition of the integral, since f is<br />
measurable and bounded on [a, b] by Lemma 6.4.<br />

Lemma 6.6. Let a ≥ x0 from Lemma 6.4. Then f(x) = ln r(e^{−x}) can be<br />
represented for all x ≥ a as<br />
f(x) = g(x) + ∫_a^x h(s) ds, (6.9)<br />
where g and h are measurable and bounded on every interval [a, b], and<br />
g(x) → g0 (|g0| < ∞) and h(x) → 0 for x → ∞.<br />

Proof. Let x ≥ a. We may write f(x) in the form (6.9) by letting<br />
g(x) := ∫_x^{x+1} (f(x) − f(s)) ds + ∫_a^{a+1} f(s) ds and h(x) := f(x + 1) − f(x).<br />
Indeed,<br />
∫_a^x h(s) ds = ∫_a^x (f(s + 1) − f(s)) ds = ∫_{a+1}^{x+1} f(s) ds − ∫_a^x f(s) ds<br />
= ∫_x^{x+1} f(s) ds − ∫_a^{a+1} f(s) ds = f(x) − g(x).<br />
By (6.4), h(x) → 0 for x → ∞.<br />


By Lemma 6.4 the second integral in g(x) is a bounded (indeed constant) term.<br />
Consider the first integral:<br />
∫_x^{x+1} (f(x) − f(s)) ds = ∫_0^1 (f(x) − f(x + k)) dk, substituting s = x + k,<br />
≤ ∫_0^1 |f(x) − f(x + k)| dk < 1,<br />
by (6.7). Hence g is bounded. Further,<br />
lim_{x→∞} ∫_0^1 (f(x) − f(x + k)) dk = ∫_0^1 lim_{x→∞} (f(x) − f(x + k)) dk = 0,<br />
by (6.4), using the Dominated Convergence Theorem with the bound (6.7). We<br />
conclude that<br />
g(x) → ∫_a^{a+1} f(s) ds =: g0 for x → ∞, (6.10)<br />
as claimed.<br />
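The decomposition (6.9) can be made explicit for the assumed example r(t) = ln(1/t): then h(x) = ln((x + 1)/x), which has the elementary antiderivative H(s) = (s + 1) ln(s + 1) − s ln s, so g(x) = f(x) − (H(x) − H(a)) should converge to a finite limit g0. The following sketch checks this numerically:<br />

```python
import math

# Numerical illustration of (6.9) for the assumed example f(x) = ln x:
# h(x) = f(x+1) - f(x) = ln((x+1)/x) -> 0, H below is an antiderivative
# of h, and g(x) = f(x) - (H(x) - H(a)) should converge as x -> infinity.
def H(s):
    return (s + 1) * math.log(s + 1) - s * math.log(s)

a = 2.0
def g(x):
    return math.log(x) - (H(x) - H(a))

vals = [g(10.0**k) for k in range(1, 7)]
print(vals)  # successive differences shrink: g converges to a finite g0
```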

Lemma 6.7. There exists a constant a* such that for all x ≥ a*,<br />
f(x) = g*(x) + ∫_{a*}^x h*(s) ds,<br />
where g* and h* are measurable and bounded on every interval [a*, b],<br />
g*(x) → g0* (|g0*| < ∞) and h*(x) → 0 for x → ∞. In fact, h* is continuous<br />
on every interval [a*, b].<br />

Proof. Let f*(x) = ∫_a^x h(s) ds. Then by (6.9) and (6.10),<br />
f(x) − f*(x) = g(x) → g0 for x → ∞.<br />
For all k > 0,<br />
f*(x + k) − f*(x) = ∫_a^{x+k} h(s) ds − ∫_a^x h(s) ds = ∫_x^{x+k} h(s) ds<br />
= ∫_x^{x+k} (f(s + 1) − f(s)) ds<br />
= ∫_0^k (f(t + x + 1) − f(t + x)) dt,<br />
substituting t := s − x. We may write<br />
f(t + x + 1) − f(t + x) = [f(x + t + 1) − f(x)] − [f(x + t) − f(x)].<br />
Both terms on the right hand side converge uniformly in t towards 0 on<br />
[0, k] as x → ∞ by Lemma 6.3. Hence<br />
|f*(x + k) − f*(x)| = |∫_0^k (f(x + t + 1) − f(x + t)) dt|<br />
≤ k sup_{t∈[0,k]} |f(x + t + 1) − f(x + t)| → 0<br />
for x → ∞. The same way it can be shown to hold for k < 0 as well.<br />
Hence f* satisfies (6.4).<br />

Note that f* is continuous on any interval [a, b]: given ɛ > 0,<br />
choose δ = ɛ / sup_{s∈[a,b]} |f(s + 1) − f(s)|. Then for |x0 − x| < δ,<br />
|f*(x0) − f*(x)| = |∫_x^{x0} (f(s + 1) − f(s)) ds|<br />
≤ |∫_x^{x0} |f(s + 1) − f(s)| ds|<br />
≤ |x0 − x| sup_{s∈[a,b]} |f(s + 1) − f(s)|<br />
< δ sup_{s∈[a,b]} |f(s + 1) − f(s)| = ɛ.<br />

Now we want to construct a representation of f*.<br />
First note that since f* is continuous it is measurable.<br />
Further, since f* satisfies (6.4), Lemma 6.3 yields<br />
sup_{k∈[c,d]} |f*(x + k) − f*(x)| → 0 for x → ∞.<br />
Moreover, since by Lemma 6.4 f is bounded on every closed interval, so is f*,<br />
because<br />
sup_{x∈[c,d]} |f*(x)| = sup_{x∈[c,d]} |∫_a^x (f(s + 1) − f(s)) ds|<br />
≤ sup_{x∈[c,d]} ∫_a^x |f(s + 1) − f(s)| ds<br />
≤ (d − a) sup_{t∈[a,d]} |f(t + 1) − f(t)| < ∞.<br />
Since f* is measurable and bounded on closed intervals, it is integrable.<br />
Hence the construction of Lemma 6.6 applies to f* as well: there is a<br />
constant a* ≥ a (playing the role of x0 in Lemma 6.4 for f*) such that f*<br />
has the following representation for x ≥ a*,<br />



f*(x) = k*(x) + ∫_{a*}^x h*(s) ds,<br />
where<br />
k*(x) = ∫_x^{x+1} (f*(x) − f*(s)) ds + ∫_{a*}^{a*+1} f*(s) ds<br />
and<br />
h*(x) = f*(x + 1) − f*(x).<br />
Note that h* is continuous, since f* is continuous.<br />
Now (6.9) implies<br />
f(x) = g(x) + f*(x) = g(x) + k*(x) + ∫_{a*}^x h*(s) ds.<br />
Finally, let g*(x) := g(x) + k*(x).<br />

Now we are finally ready to prove the representation theorem.<br />
Proof of Theorem 6.2. Let f(x) := ln r(e^{−x}). By Lemma 6.7 f has the<br />
representation<br />
ln(r(e^{−x})) = g*(x) + ∫_{a*}^x h*(s) ds.<br />
That is,<br />
ln r(t) = g*(−ln t) + ∫_{a*}^{−ln t} h*(s) ds.<br />
Letting η(t) := g*(−ln t) and substituting s := −ln t′, h(t′) := h*(−ln t′),<br />
ln r(t) = η(t) − ∫_{t1}^t (h(t′)/t′) dt′ = η(t) + ∫_t^{t1} (h(t′)/t′) dt′.<br />
Thus<br />
r(t) = exp(η(t) + ∫_t^{t1} (h(t′)/t′) dt′),<br />
where t1 = e^{−a*}, as claimed.<br />

Note that the representation of Theorem 6.2 only holds for t < t1, but since we are<br />
only concerned with the limit t → 0 we will usually not have any problem<br />
using the representation theorem. The following result will be important<br />
later when we shall split distributions.<br />
Corollary 6.8. Let ρ be a regularly varying function of order ω and let ɛ > 0. Then<br />
there exist some s0(ɛ) > 0 and positive constants C and C′ such that<br />
C′ t^{ω+ɛ} ≤ ρ(t) ≤ C t^{ω−ɛ}, for 0 < t ≤ s0(ɛ). (6.11)<br />



Proof. Reduce ρ to a slowly varying function,<br />
ρ(t) = t^ω r(t) = t^ω exp(η(t) + ∫_t^{t1} (h(s)/s) ds), for 0 < t < t1,<br />
where we substitute for r the representation given by Theorem 6.2. Since<br />
h(s) → 0 for s → 0 there exists s0(ɛ) ≤ t1 such that |h(s)| ≤ ɛ for s ≤ s0(ɛ);<br />
absorbing the constant ∫_{s0(ɛ)}^{t1} (h(s)/s) ds into η(t), we may take t1 = s0(ɛ).<br />
So if 0 < t ≤ s0(ɛ), then<br />
|∫_t^{s0(ɛ)} (h(s)/s) ds| ≤ ∫_t^{s0(ɛ)} (|h(s)|/s) ds ≤ ∫_t^{s0(ɛ)} (ɛ/s) ds = ɛ ln s0(ɛ) − ɛ ln t = ɛ ln(s0(ɛ)/t).<br />
Hence<br />
ρ(t) = t^ω exp(η(t) + ∫_t^{s0(ɛ)} (h(s)/s) ds) ≤ t^ω exp(η(t) + ɛ ln(s0(ɛ)/t)) = t^{ω−ɛ} e^{η(t)} s0(ɛ)^ɛ ≤ C t^{ω−ɛ},<br />
where C = s0(ɛ)^ɛ sup_{0&lt;t&lt;t1} e^{η(t)} < ∞, since η is bounded. In the same way,<br />
ρ(t) ≥ t^ω exp(η(t) − ɛ ln(s0(ɛ)/t)) = t^{ω+ɛ} e^{η(t)} s0(ɛ)^{−ɛ} ≥ C′ t^{ω+ɛ},<br />
with C′ = s0(ɛ)^{−ɛ} inf_{0&lt;t&lt;t1} e^{η(t)} > 0, since η is also bounded below.<br />
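The bound (6.11) is easy to check numerically for a sample regularly varying function; the choice ρ(t) = t^ω ln(1/t) with ω = 2 below is an assumption for illustration, for which one may take C = C′ = 1 and s0 = e^{−1} when ɛ = 1/2:<br />

```python
import math

# Numerical check of the power-counting bound (6.11) for an assumed sample
# regularly varying function rho(t) = t^omega * ln(1/t) with omega = 2.
# With epsilon = 1/2, C = C' = 1 and s0 = exp(-1):
#   t^(omega+eps) <= rho(t) <= t^(omega-eps)  on (0, s0].
omega, eps = 2.0, 0.5

def rho(t):
    return t**omega * math.log(1.0 / t)

s0 = math.exp(-1.0)
ok = all(
    t**(omega + eps) <= rho(t) <= t**(omega - eps)
    for t in (s0 * 10.0**(-k) for k in range(0, 12))
)
print(ok)
```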


Chapter 7<br />

Splitting of Numerical Distributions<br />

7.1 The Singular Order of a Distribution<br />

We expect the operator-valued distributions to be expanded in terms of<br />
free fields as follows¹:<br />
Dn(x1, . . . , xn) = Σ_k : Π_j ψ̄(xj) d^k_n(x1, . . . , xn) Π_l ψ(xl) : : Π_m A(xm) :,<br />
where the numerical distributions d^k_n ∈ S′(R^{4n}) have the causal support<br />
property (5.20). We have to assume them to be tempered in order to use<br />
the Fourier transformation. Our aim is to split these distributions,<br />
d^k_n(x) = rn(x) − an(x),<br />
where<br />
supp rn ⊂ Γ^+_{n−1}(xn) and supp an ⊂ Γ^−_{n−1}(xn).<br />
We have assumed translational invariance, thus we may assume xn = 0 and<br />
thus only consider<br />
d(x) := d(x1, . . . , x_{n−1}, 0) ∈ S′(R^m), m = 4(n − 1).<br />
In the Hilbert space setting of our earlier example we would do the<br />
splitting as<br />
rn(x) = χn(x) d^k_n(x), where χn(x) = Π_{j=1}^{n−1} Θ(x^0_j − x^0_n) = Π_{j=1}^{n−1} Θ(x^0_j)².<br />
¹ Note that the double dots denote normal ordering. See Definition 4.4.<br />
² We write it like this to show what it would be if xn ≠ 0.<br />


In our setting this is not defined, but it might help us get an idea of what<br />
to do.<br />
The discontinuity plane of χn(x) is<br />
P_{χn} = {x = (x1, . . . , x_{n−1}, 0) | x^0_j = 0 for all j}.<br />
Now if y ∈ supp d ⊂ Γ^+_{n−1}(0) ∪ Γ^−_{n−1}(0), then y_j² = (y^0_j)² − Σ_{i=1}^3 (y^i_j)² ≥ 0 for all j.<br />
If also y ∈ P_{χn}, then y^0_j = 0 for all j, hence<br />
−Σ_{i=1}^3 (y^i_j)² ≥ 0.<br />
This is only possible if y^i_j = 0 for i = 1, 2, 3. Hence the intersection is the<br />
origin, that is, supp d ∩ P_{χn} = {0}.<br />

We now define some tools to investigate the singularity at the origin.<br />
Definition 7.1. The distribution d(x) ∈ S′(R^m) is said to have a<br />
quasi-asymptotics d0(x) at x = 0 with respect to a positive continuous<br />
function ρ(δ), δ > 0, if the limit<br />
lim_{δ→0} ρ(δ) δ^m d(δx) = d0(x) ≠ 0 (7.1)<br />
exists in S′(R^m)³.<br />
The equivalent definition in momentum space reads:<br />
Definition 7.2. The distribution d̂(p) ∈ S′(R^m) has quasi-asymptotics<br />
d̂0(p) ≠ 0 at p = ∞ if<br />
lim_{δ→0} ρ(δ) 〈d̂(p/δ), φ̌(p)〉 = 〈d̂0, φ̌〉 (7.2)<br />
exists for all φ̌ ∈ S(R^m).<br />

We should show these definitions are indeed equivalent. This follows since<br />
δ^m 〈d(δx), φ(x)〉 = 〈d(x), λδ φ(x)〉<br />
= 〈d̂(p), (λδ φ)ˇ(p)〉<br />
= 〈d̂(p), δ^m (λ_{1/δ} φ̌)(p)〉, by (7.4),<br />
= 〈d̂(p/δ), φ̌(p)〉, (7.3)<br />
by Proposition 1.12 and using<br />
(λδ φ)ˇ(p) = (2π)^{−m} ∫ e^{ix·p} φ(x/δ) dx = (2π)^{−m} δ^m ∫ e^{iy·δp} φ(y) dy<br />
= δ^m (λ_{1/δ} φ̌)(p). (7.4)<br />
³ An explanation of why we require d0 ≠ 0 is found after Definition 7.4.<br />



Let’s show that quasi-asymptotics do in fact say something about the<br />
origin only.<br />
Say d(x) = d1(x) + d2(x), where supp d1 is compact and contains {0} and<br />
supp d2 is bounded away from {0}. Then<br />
lim_{δ→0} ρ(δ) δ^m 〈d2(δx), φ0(x)〉 = lim_{δ→0} ρ(δ) 〈d2(x), φ0(x/δ)〉 = 0<br />
for all φ0 ∈ C^∞_0(R^m), due to Proposition 1.12 and the support properties of<br />
d2: since φ0 has compact support and supp d2 is bounded away from 0, the<br />
pairing 〈d2(x), φ0(x/δ)〉 vanishes identically for δ small enough. Since C^∞_0 is<br />
dense in S the limit also vanishes on S, hence<br />
lim_{δ→0} ρ(δ) δ^m 〈d1(δx), φ(x)〉 = 〈d0, φ〉 for all φ ∈ S.<br />

FIXME: why does K0 need to be compact?<br />
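A one-dimensional toy example (m = 1, an assumed illustration not taken from the text) of Definitions 7.1/7.2: for d(x) = Θ(x)e^{−x} with power-counting function ρ(δ) = 1/δ one gets ρ(δ) δ d(δx) = Θ(x)e^{−δx} → Θ(x), so d0 = Θ, the singular order is ω = −1, and d0 is homogeneous of degree 0 = −(m + ω), consistent with Lemma 7.5 below:<br />

```python
import math

# Toy 1-D illustration (m = 1, assumed example) of quasi-asymptotics:
# d(x) = theta(x) e^{-x}, rho(delta) = 1/delta, phi(x) = exp(-x^2).
# Then rho(delta)*delta*<d(delta x), phi> -> <theta, phi> = sqrt(pi)/2.
def pairing(delta, n=20000, xmax=12.0):
    # trapezoidal approximation of int_0^inf e^{-delta x} e^{-x^2} dx
    h = xmax / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-delta * x) * math.exp(-x * x)
    return s * h

limit = math.sqrt(math.pi) / 2.0   # <theta, phi> = int_0^inf e^{-x^2} dx
print(pairing(1e-3), limit)
```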

Lemma 7.3. Let ρ be the positive continuous function of Definitions 7.1<br />
and 7.2. Then there exists ω ∈ R such that<br />
lim_{δ→0} ρ(aδ)/ρ(δ) = a^ω (7.5)<br />
for all a > 0.<br />
Proof. By Proposition 1.12,<br />

lim_{δ→0} ρ(δ) 〈d̂(p/δ), (λ_{1/a} φ̌)(p)〉 = a^{−m} lim_{δ→0} ρ(δ) 〈d̂(p/(aδ)), φ̌(p)〉<br />
= a^{−m} lim_{δ→0} (ρ(δ)/ρ(aδ)) ρ(aδ) 〈d̂(p/(aδ)), φ̌(p)〉<br />
= a^{−m} lim_{δ→0} (ρ(δ)/ρ(aδ)) 〈d̂0(p), φ̌(p)〉,<br />
since the limit<br />
lim_{δ→0} ρ(aδ) 〈d̂(p/(aδ)), φ̌(p)〉 = 〈d̂0(p), φ̌(p)〉<br />
exists. Choosing φ̌ such that 〈d̂0(p), φ̌(ap)〉 ≠ 0, which is possible since d̂0 ≠ 0,<br />
the following limit is therefore defined:<br />
ρ0(a) := lim_{δ→0} ρ(aδ)/ρ(δ) = a^{−m} 〈d̂0(p), φ̌(p)〉 / 〈d̂0(p), φ̌(ap)〉. (7.6)<br />
Note that ρ0 is continuous, since λ_{1/a} : S → S is continuous, and that<br />
ρ0(a) ρ0(b) = lim_{δ→0} (ρ(aδ)/ρ(δ)) lim_{δ→0} (ρ(bδ)/ρ(δ)) = lim_{δ→0} (ρ(abδ)/ρ(bδ)) lim_{δ→0} (ρ(bδ)/ρ(δ)) = lim_{δ→0} ρ(abδ)/ρ(δ) = ρ0(ab).<br />
Now define a function by f(s) = ln(ρ0(e^s)). Then<br />
f(s) + f(t) = ln(ρ0(e^s)) + ln(ρ0(e^t)) = ln(ρ0(e^s) ρ0(e^t)) = ln(ρ0(e^{s+t})) = f(s + t).<br />


Let n, m ∈ N. Then<br />
m f(n/m) = f(n/m) + · · · + f(n/m) [m terms] = f(m · n/m) = f(n)<br />
and<br />
f(n) = n f(1).<br />
Hence<br />
f(n/m) = (n/m) f(1).<br />
Since f(0) = 0 gives f(−x) = −f(x), and ρ0 is continuous, f(x) = x f(1) for all x ∈ R.<br />
Thus<br />
ρ0(e^x) = e^{f(x)} = e^{x f(1)},<br />
that is, substituting x = ln a,<br />
ρ0(a) = e^{f(1) ln a} = a^{f(1)}.<br />
Let ω := f(1).<br />
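The limit (7.5) can be observed numerically for an assumed sample power-counting function:<br />

```python
# Numerical check of (7.5): for the assumed sample power-counting function
# rho(delta) = delta^(-1.5) * (1 + delta), the ratio rho(a*delta)/rho(delta)
# tends to a^omega with omega = -1.5 as delta -> 0.
def rho(delta):
    return delta**-1.5 * (1.0 + delta)

def ratio(a, delta):
    return rho(a * delta) / rho(delta)

a = 2.0
for delta in (1e-2, 1e-4, 1e-6):
    print(delta, ratio(a, delta), a**-1.5)
```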

Note that (7.5) is precisely the defining condition of a regularly varying function.<br />
We regard ω as an indication of how singular a causal distribution is near<br />
{0}. This leads us to define:<br />
Definition 7.4. The distribution d ∈ S′(R^m) is called singular of order ω<br />
if it has a quasi-asymptotics d0(x) at x = 0, or its Fourier transform has<br />
quasi-asymptotics d̂0(p) at p = ∞, respectively, with power-counting<br />
function ρ(δ) satisfying<br />
lim_{δ→0} ρ(aδ)/ρ(δ) = a^ω<br />
for each a > 0.<br />
Note that the requirements d0 ≠ 0 in (7.1) and d̂0 ≠ 0 in (7.2),<br />
respectively, are needed to make sure that the singular order is uniquely<br />
defined: if the limit were allowed to vanish, then for any ω′ ≥ ω the<br />
function ρ′(δ) = δ^{ω′−ω} ρ(δ) would satisfy (7.5) with exponent ω′ and still give<br />
lim_{δ→0} ρ′(δ) δ^m d(δx) = lim_{δ→0} δ^{ω′−ω} ρ(δ) δ^m d(δx) = 0,<br />
so any ω′ ≥ ω could be chosen as singular order.<br />

Lemma 7.5. The distributions d0 and d̂0 are homogeneous of degree<br />
−(m + ω) and ω respectively.<br />
Proof. From (7.6) and (7.5),<br />
a^{−ω} 〈d̂0(p), φ̌(p)〉 = a^m 〈d̂0(p), φ̌(ap)〉.<br />
Hence by Proposition 1.12,<br />
〈d̂0(p/a), φ̌(p)〉 = a^m 〈d̂0(p), φ̌(ap)〉 = a^{−ω} 〈d̂0(p), φ̌(p)〉, (7.7)<br />
so the condition of Definition 1.14 is satisfied by d̂0.<br />
Further, from (7.3),<br />
a^m 〈d0(ax), φ(x)〉 = 〈d0(x), φ(x/a)〉 = 〈d̂0(p/a), φ̌(p)〉<br />
and<br />
〈d̂0(p), φ̌(p)〉 = 〈d0(x), φ(x)〉,<br />
hence by (7.7),<br />
〈d0(ax), φ(x)〉 = a^{−(m+ω)} 〈d0(x), φ(x)〉.<br />
The condition of Definition 1.14 is satisfied by d0.<br />
The singular order now gives us a way to distinguish two different<br />
situations in the splitting process. We investigate the splitting first in the<br />
case ω < 0 and then for ω ≥ 0.<br />



7.2 Case I: Negative Singular Order<br />
7.2.1 Existence<br />
In this subsection we will show that there exists a decomposition of d into<br />
an advanced part a and a retarded part r.<br />
Choose ɛ > 0 such that ω + ɛ < 0; then t^{ω+ɛ} → ∞ for t → 0⁺, hence by<br />
(6.11) ρ(t) → ∞ for t → 0. Hence by (7.1),<br />
lim_{δ→0} 〈d(x), φ(x/δ)〉 = lim_{δ→0} δ^m 〈d(δx), φ(x)〉 = lim_{δ→0} ρ(δ) δ^m 〈d(δx), φ(x)〉 / ρ(δ) = 0. (7.8)<br />

Let v = (v1, . . . , v_{n−1}) ∈ Γ^+_{n−1}(0) be an arbitrary but fixed vector and<br />
define a hyperplane by<br />
P_H := {x ∈ M^{n−1} | v · x := Σ_{j=1}^{n−1} 〈vj, xj〉 = 0}.<br />
Lemma 7.6. All products satisfy<br />
〈vj, xj〉 ≥ 0 if x ∈ Γ^+_{n−1}(0) (7.9)<br />
and<br />
〈vj, xj〉 ≤ 0 if x ∈ Γ^−_{n−1}(0). (7.10)<br />

Proof. For all j, (v^0_j)² − Σ_{i=1}^3 (v^i_j)² ≥ 0, hence v^0_j ≥ √(Σ_{i=1}^3 (v^i_j)²) since<br />
v^0_j ≥ 0. Now if x ∈ Γ^+_{n−1}(0), then x^0_j ≥ √(Σ_{i=1}^3 (x^i_j)²) for all j, and by the<br />
Cauchy–Schwarz inequality<br />
〈vj, xj〉 = v^0_j x^0_j − Σ_{i=1}^3 v^i_j x^i_j ≥ v^0_j x^0_j − √(Σ_{i=1}^3 (v^i_j)²) √(Σ_{i=1}^3 (x^i_j)²) ≥ v^0_j x^0_j − v^0_j x^0_j = 0.<br />
If on the other hand x ∈ Γ^−_{n−1}(0), then x^0_j ≤ −√(Σ_{i=1}^3 (x^i_j)²) ≤ 0 for all j, hence<br />
〈vj, xj〉 = v^0_j x^0_j − Σ_{i=1}^3 v^i_j x^i_j ≤ v^0_j x^0_j − x^0_j √(Σ_{i=1}^3 (v^i_j)²) ≤ x^0_j √(Σ_{i=1}^3 (v^i_j)²) − x^0_j √(Σ_{i=1}^3 (v^i_j)²) = 0.<br />
Thus P_H splits the causal support (5.20) of d.<br />


Now choose a monotone function χ0 ∈ C^∞(R) such that<br />
χ0(t) = { 0, t ≤ 0; s ∈ [0, 1), 0 < t < 1; 1, t ≥ 1 }.<br />
The following theorem gives us a retarded distribution.<br />

Theorem 7.7. The limit<br />

exists.<br />

v · x def<br />

lim χ0( )d(x) = θ(v · x)d(x) (7.11)<br />

δ→0 δ<br />

In order to prove this we need the following lemma.<br />

Lemma 7.8. Let a1 > 1, then<br />

ψ0( x<br />

)d(x) :=<br />

δ<br />

uniformly in a ≥ a1.<br />

<br />

χ0(a<br />

v · x v · x<br />

) − χ0(<br />

δ δ )<br />

<br />

d(x) → 0 for δ → 0<br />

Proof. Let U be a neighborhood of Γ⁺(0) ∪ Γ⁻(0). Choose a function<br />
ψ1 ∈ C^∞(R^m) such that<br />
ψ1(x) = { 1, x ∈ Γ⁺(0) ∪ Γ⁻(0); s ∈ (0, 1], x ∈ U \ (Γ⁺(0) ∪ Γ⁻(0)); 0, x ∈ R^m \ U }.<br />
Then, by (5.20),<br />
〈ψ0(x/δ) d(x), φ(x)〉 = 〈ψ0(x/δ) d(x), ψ1(x/δ) φ(x)〉 = 〈φ(x) d(x), ψ0(x/δ) ψ1(x/δ)〉, (7.12)<br />
for any φ ∈ S(R^m), since ψ0 ψ1 ∈ C^∞_0(R^m).<br />
By (7.1),<br />
dδ := ρ(δ) δ^m d(δx) → d0<br />
for δ → 0. Further,<br />
φ(δx) → φ(0) for δ → 0<br />
in C^∞(R^m), since<br />
D^α(φ(δx) − φ(0)) = δ^{|α|} (D^α φ)(δx) − D^α φ(0) → 0 uniformly on compact sets<br />
(for |α| ≥ 1 the constant term drops out).<br />
Therefore, by [12] Theorem 6.1.8,<br />
φ(δx) dδ → φ(0) d0 for δ → 0. (7.13)<br />



Therefore φ(x) d(x) also has quasi-asymptotics of order ω < 0 at x = 0<br />
with respect to ρ.<br />
Hence by (7.8) and (7.12),<br />
〈ψ0(x/δ) d(x), φ(x)〉 → 0 for δ → 0 (7.14)<br />
for all φ ∈ S(R^m).<br />
Now for fixed a1 > 1 the convergence is uniform on the compact interval<br />
1 ≤ a ≤ a1.<br />

We want to show the uniformity of the limit in a′ ≥ a1 > 1.<br />
To this end, we claim that<br />
Claim.<br />
χ0(a^n v · x / δ) − χ0(v · x / δ) = Σ_{j=0}^{n−1} ψ0(a^j x / δ),<br />
where, as above, ψ0(a^j x / δ) stands for χ0(a · a^j v · x / δ) − χ0(a^j v · x / δ).<br />
We prove this by induction. It holds for n = 1 by definition. Assume it<br />
holds for n = k − 1. Then<br />
χ0(a^k v · x / δ) − χ0(v · x / δ)<br />
= χ0(a^k v · x / δ) − χ0(a^{k−1} v · x / δ) + χ0(a^{k−1} v · x / δ) − χ0(v · x / δ)<br />
= ψ0(a^{k−1} x / δ) + Σ_{j=0}^{k−2} ψ0(a^j x / δ),<br />
which proves the claim.<br />
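The telescoping identity of the claim is purely algebraic and can be spot-checked numerically; the C¹ smooth step below stands in for the C^∞ function χ0 (an assumption for illustration only):<br />

```python
# Numerical check of the telescoping identity used in the proof of Lemma 7.8:
#   chi0(a^n t) - chi0(t) = sum_{j=0}^{n-1} [chi0(a^{j+1} t) - chi0(a^j t)].
# chi0 below is a C^1 smooth step (assumed stand-in for the C-infinity
# function of the text; the identity itself is purely algebraic).
def chi0(t):
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    return 3.0 * t * t - 2.0 * t ** 3

def psi0(j, t, a):
    return chi0(a ** (j + 1) * t) - chi0(a ** j * t)

def telescoped(n, t, a):
    return sum(psi0(j, t, a) for j in range(n))

a, n = 1.7, 6
checks = [abs((chi0(a**n * t) - chi0(t)) - telescoped(n, t, a))
          for t in (-0.5, 0.01, 0.1, 0.4, 0.9, 2.0)]
print(max(checks))
```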

We write a′ = a^n, choosing n minimal with a′ ≤ a1^n; then for a′ ≥ a1 we have<br />
a = a′^{1/n} ∈ [√a1, a1], so a^{−ω} ≥ (√a1)^{−ω} =: c > 1 for ω < 0. Now the<br />
problem is translated into showing uniformity in n. By (7.13) we may choose δ0<br />
such that for δ < δ0,<br />
|ρ(δ) 〈φ(x)d(x), ψ1(x/δ) ψ0(x/δ)〉| ≤ |〈φ(0)d0(x), ψ1(x) ψ0(x)〉| + 1 =: K.<br />
Note that δ/a^j ≤ δ < δ0. Thus<br />
|〈φ(x)d(x), ψ1(a^j x/δ) ψ0(a^j x/δ)〉| ≤ K / ρ(δ/a^j).<br />
From (7.5) we know that ρ(δ/a)/ρ(δ) → a^{−ω} ≥ c, so we may assume that for δ < δ0,<br />
ρ(δ/a) ≥ c ρ(δ).<br />
Since δ/a^q < δ for all q ≥ 1,<br />
ρ(δ/a^j) ≥ c ρ(δ/a^{j−1}) ≥ c² ρ(δ/a^{j−2}) ≥ · · · ≥ c^j ρ(δ).<br />
Thus<br />
|Σ_{j=0}^{n−1} 〈φ(x)d(x), ψ1(a^j x/δ) ψ0(a^j x/δ)〉| ≤ Σ_{j=0}^{n−1} K / ρ(δ/a^j) ≤ K ρ(δ)^{−1} Σ_{j=0}^{∞} c^{−j} = K ρ(δ)^{−1} (1 − c^{−1})^{−1},<br />
uniformly in n. But ρ(δ) → ∞, hence the convergence is uniform.<br />
We now prove the theorem.<br />

Proof of Theorem 7.7. Another way to state the theorem is that the<br />
limit in (7.11) exists for all sequences δn → 0. This is done by showing that<br />
〈χ0(v · x / δn) d(x), φ(x)〉 is a Cauchy sequence. Consider<br />
χ0(v · x / δm) − χ0(v · x / δn) = χ0(a_{n,m} v · x / δn) − χ0(v · x / δn), (7.15)<br />
where a_{n,m} = δn/δm. Assuming δm < δn, we know from Lemma 7.8 that<br />
for all φ and for all ɛ > 0 there exists some δ0 > 0 such that for all δ < δ0,<br />
|〈(χ0(a_{n,m} v · x / δ) − χ0(v · x / δ)) d(x), φ(x)〉| < ɛ.<br />
Hence given φ and ɛ > 0 we may choose δ0 and N such that δn < δ0 for all<br />
n, m ≥ N and<br />
|〈(χ0(v · x / δm) − χ0(v · x / δn)) d(x), φ(x)〉| < ɛ.<br />
Hence 〈χ0(v · x / δn) d(x), φ(x)〉 is a Cauchy sequence.<br />
Now suppose δn and δn′ are two sequences converging towards 0. Then as<br />
above, with a_{n,n′} = δn′/δn in (7.15),<br />
〈χ0(v · x / δn) d(x), φ(x)〉 − 〈χ0(v · x / δn′) d(x), φ(x)〉 → 0<br />
as n → ∞. Hence the limit exists.<br />

Now we may define:<br />
Definition 7.9.<br />
r(x) := lim_{δ→0} χ0(v · x / δ) d(x) = θ(v · x) d(x).<br />
Similarly there exists an advanced distribution.<br />



Corollary 7.10. The limit<br />
lim_{δ→0} χ0(−v · x / δ) d(x) =: −a(x)<br />
exists.<br />
Proof. This follows by the proofs of Lemma 7.8 and Theorem 7.7 with χ0(t)<br />
substituted by χ0(−t).<br />

FIXME: think about this.<br />

Note that by (7.9) and (7.10),<br />
supp r ⊂ supp d ∩ supp θ(v · x) ⊂ Γ^+_{n−1}(0)<br />
and<br />
supp a ⊂ supp d ∩ supp θ(−v · x) ⊂ Γ^−_{n−1}(0).<br />
Hence<br />
r(x) − a(x) = Θ(v · x) d(x) + Θ(−v · x) d(x) = d(x). (7.16)<br />
These support properties allow us to apply a and r to discontinuous<br />
test-functions as follows:<br />
〈r, φ〉 = 〈r(x), Θ(v · x) φ(x)〉 and 〈r, (1 − Θ)φ〉 = 0,<br />
〈a, φ〉 = 〈a(x), (1 − Θ(v · x)) φ(x)〉 and 〈a, Θφ〉 = 0.<br />
Thus by (7.16) we may split d in a retarded and an advanced part, simply<br />
by multiplication with Θ:<br />
〈d, Θφ〉 = 〈r, Θφ〉 − 〈a, Θφ〉 = 〈r, φ〉<br />
and<br />
〈d, (1 − Θ)φ〉 = 〈r, (1 − Θ)φ〉 − 〈a, (1 − Θ)φ〉 = −〈a, φ〉.<br />
Note that by (7.1),<br />
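A one-dimensional toy version of the splitting by multiplication with Θ (assumed purely for illustration, not the thesis setting): for the locally integrable d(x) = |x|^{−1/2} e^{−x²}, which has singular order ω = −1/2 < 0, the distributions r = Θ(x)d and a = −Θ(−x)d satisfy 〈r, φ〉 − 〈a, φ〉 = 〈d, φ〉:<br />

```python
import math

# 1-D toy splitting (assumed illustration): d(x) = |x|^{-1/2} e^{-x^2},
# r = theta(x) d, a = -theta(-x) d, and <r,phi> - <a,phi> = <d,phi>.
# phi is a simple polynomial stand-in for a test function; the Gaussian
# in d ensures convergence.  The substitution x = u^2 removes the
# integrable singularity before quadrature.
def phi(x):
    return 1.0 + x

def half_pairing(sign, n=20000, umax=4.0):
    # int_0^inf x^{-1/2} e^{-x^2} phi(sign*x) dx via x = u^2, dx = 2u du
    h = umax / n
    s = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0
        x = u * u
        s += w * 2.0 * math.exp(-x * x) * phi(sign * x)
    return s * h

r_phi = half_pairing(+1.0)        # <theta(x) d, phi> = <r, phi>
minus_a_phi = half_pairing(-1.0)  # <theta(-x) d, phi> = -<a, phi>
d_phi = r_phi + minus_a_phi       # equals <d, phi>; exact value Gamma(1/4)
print(r_phi, minus_a_phi, d_phi)
```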

〈d0, Θφ〉 = lim_{δ→0} ρ(δ) δ^m 〈d(δx), Θφ(x)〉 = lim_{δ→0} ρ(δ) 〈d(x), Θ(v · x) φ(x/δ)〉<br />
= lim_{δ→0} ρ(δ) 〈r(x), φ(x/δ)〉 = lim_{δ→0} ρ(δ) δ^m 〈r(δx), φ(x)〉<br />
and<br />
〈d0, (1 − Θ)φ〉 = lim_{δ→0} ρ(δ) δ^m 〈d(δx), (1 − Θ)φ(x)〉 = −lim_{δ→0} ρ(δ) 〈a(x), φ(x/δ)〉<br />
= −lim_{δ→0} ρ(δ) δ^m 〈a(δx), φ(x)〉.<br />
We see that r and a have the same singular order as d.<br />


7.2.2 Uniqueness<br />
We still need to prove that the decomposition of d is unique. To this end<br />
we need the following result.<br />
Theorem 7.11. Let u ∈ D′(R^n) with support supp u = {0}. Then there<br />
exists an N ∈ N such that<br />
u = Σ_{|α|≤N} cα ∂^α δ,<br />
where cα ∈ C.<br />

This theorem is a consequence of the following lemmas.<br />
Lemma 7.12.<br />
(1/k!) ∂_t^k f(xt) = Σ_{|α|=k} (x^α / α!) f^{(α)}(xt). (7.17)<br />
Proof. First we note that, by the chain rule, we may write the left hand<br />
side as<br />
(1/k!) ∂_t^k f(xt) = (1/k!) (Σ_{i=1}^n x_i ∂/∂y_i)^k f(y) |_{y=xt}.<br />
It is this form we will use when proving (7.17) by induction. That is, we<br />
want to prove<br />
(1/k!) (Σ_{i=1}^n x_i ∂/∂y_i)^k f(y) = Σ_{|α|=k} (x^α / α!) ∂^α f(y).<br />


It is obvious for k = 0 and k = 1. Now, assume it holds for k = m. Then<br />
(1/(m + 1)!) (Σ_{i=1}^n x_i ∂/∂y_i)^{m+1} f(y)<br />
= (1/(m + 1)) Σ_{j=1}^n x_j (∂/∂y_j) [ (1/m!) (Σ_{i=1}^n x_i ∂/∂y_i)^m f(y) ]<br />
= (1/(m + 1)) Σ_{j=1}^n x_j (∂/∂y_j) Σ_{|α|=m} (x^α / α!) ∂^α f(y)<br />
= (1/(m + 1)) Σ_{|α|=m} Σ_{j=1}^n (x^{α+e_j} / α!) ∂^{α+e_j} f(y)<br />
= (1/(m + 1)) Σ_{|α|=m} Σ_{j=1}^n (α_j + 1) (x^{α+e_j} / (α + e_j)!) ∂^{α+e_j} f(y)<br />
= (1/(m + 1)) Σ_{|β|=m+1} Σ_{j=1}^n β_j (x^β / β!) ∂^β f(y), where β = α + e_j,<br />
= Σ_{|β|=m+1} (x^β / β!) ∂^β f(y),<br />
where e_j = (0, . . . , 0, 1, 0, . . . , 0) has its only non-zero entry in the jth place,<br />
and where we have used<br />
(α + e_j)! = α1! α2! · · · (α_j + 1)! · · · αn! = (α_j + 1) α!<br />
and, in the last step, Σ_{j=1}^n β_j = |β| = m + 1. Finally, rename β = α.<br />
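The identity (7.17) can be spot-checked numerically; the sample function f(u, v) = (u + 2v)³ with n = 2 and k = 2 below is an assumption for illustration:<br />

```python
# Numerical spot-check of Lemma 7.12 (identity (7.17)) for the assumed
# sample f(u, v) = (u + 2v)^3, n = 2, k = 2:
#   (1/2!) d^2/dt^2 f(t*x) = sum_{|alpha|=2} x^alpha/alpha! (D^alpha f)(t*x).
def lhs(x1, x2, t):
    # f(t*x) = t^3 (x1 + 2 x2)^3, so (1/2) d^2/dt^2 = 3 t (x1 + 2 x2)^3
    return 3.0 * t * (x1 + 2.0 * x2) ** 3

def rhs(x1, x2, t):
    s = t * x1 + 2.0 * t * x2      # value of u + 2v at (t x1, t x2)
    d_uu, d_uv, d_vv = 6.0 * s, 12.0 * s, 24.0 * s   # second partials of f
    return (x1 * x1 / 2.0) * d_uu + x1 * x2 * d_uv + (x2 * x2 / 2.0) * d_vv

samples = [(1.0, 0.5, 2.0), (-1.0, 3.0, 0.7), (0.3, -0.2, -1.5)]
err = max(abs(lhs(*s) - rhs(*s)) for s in samples)
print(err)
```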

Lemma 7.13. Let f ∈ C^∞_0(R^n). Then for any N ≥ 1,<br />
|f(x) − Σ_{|α|≤N−1} (x^α / α!) f^{(α)}(0)| ≤ |x|^N Σ_{|α|=N} (N / α!) sup{|f^{(α)}(x′)| : |x′| ≤ |x|}.<br />



Proof. Let F : t ↦ f(xt). By application of Taylor’s formula [10] to F,<br />
f(x) = F(1) = Σ_{k=0}^{N−1} (1/k!) ∂_t^k F(0) + (1/(N − 1)!) ∫_0^1 (1 − t)^{N−1} ∂_t^N F(t) dt<br />
= Σ_{|α|≤N−1} (x^α / α!) f^{(α)}(0) + N ∫_0^1 (1 − t)^{N−1} Σ_{|α|=N} (x^α / α!) f^{(α)}(xt) dt<br />
= Σ_{|α|≤N−1} (x^α / α!) f^{(α)}(0) + Σ_{|α|=N} (x^α / α!) fα(x), (7.18)<br />
where fα(x) = N ∫_0^1 (1 − t)^{N−1} f^{(α)}(xt) dt and where we have used (7.17).<br />
Now, since |x^α| ≤ max_j |x_j|^N ≤ |x|^N when |α| = N, we can make the<br />
following estimate:<br />
|Σ_{|α|=N} (x^α / α!) fα(x)| ≤ |x|^N Σ_{|α|=N} (1/α!) |fα(x)|<br />
= |x|^N Σ_{|α|=N} (1/α!) |N ∫_0^1 (1 − t)^{N−1} f^{(α)}(xt) dt|<br />
≤ |x|^N Σ_{|α|=N} (N/α!) sup_{t∈[0,1]} |(1 − t)^{N−1} f^{(α)}(xt)|<br />
≤ |x|^N Σ_{|α|=N} (N/α!) sup{|f^{(α)}(x′)| : |x′| ≤ |x|},<br />
as wanted.<br />
Lemma 7.14. Let u ∈ D′(R^n) with support supp u = {0}. Then there exists<br />
an N ≥ 0 such that 〈u, φ〉 = 0 for all φ ∈ C^∞_0(R^n) with ∂^α φ(0) = 0 for<br />
|α| ≤ N.<br />
Proof. Let ψ ∈ C^∞_0(R^n) be defined such that<br />
ψ(x) = { 1, |x| < 1/2; s ∈ [0, 1], 1/2 ≤ |x| ≤ 1; 0, |x| > 1 }.<br />
Let ɛ ∈ (0, 1). Then<br />
〈u, φ〉 = 〈u(x), φ(x) ψ(x/ɛ)〉, for all φ ∈ C^∞_0(R^n),<br />
since supp u = {0} ⊂ {x ∈ R^n | |x| < ɛ/2} =: U and ψ(x/ɛ) = 1 on U.<br />
If |x| > ɛ, then |x/ɛ| > 1, hence the map x ↦ φ(x) ψ(x/ɛ) is supported in<br />
the ball of radius ɛ. From now on we may therefore assume that |x| ≤ ɛ.<br />



By compactness of the unit ball and the definition of a distribution there<br />
exist a real number C ≥ 0 and a non-negative integer N such that<br />
|〈u, φ〉| ≤ C Σ_{|α|≤N} sup |∂^α(φ(x) ψ(x/ɛ))|, for all φ ∈ C^∞_0(R^n). (7.19)<br />
If ∂^α φ(0) = 0 for |α| ≤ N, then for |β| ≤ N we may apply Lemma 7.13 to<br />
∂^β φ as follows:<br />
|∂^β φ(x)| ≤ |x|^{N+1−|β|} Σ_{|γ|=N+1−|β|} ((N + 1 − |β|)/γ!) sup{|∂^{β+γ} φ(x′)| : |x′| ≤ |x|}<br />
≤ ɛ^{N+1−|β|} Σ_{|γ|=N+1−|β|} ((N + 1 − |β|)/γ!) sup{|∂^{β+γ} φ(x)| : |x| ≤ 1}. (7.20)<br />
By Leibniz’ formula,<br />
∂^α(φ(x) ψ(x/ɛ)) = Σ_{α=β+γ} (α!/(β! γ!)) ∂^β φ(x) ∂^γ(ψ(x/ɛ)) = Σ_{α=β+γ} (α!/(β! γ!)) ɛ^{−|γ|} ∂^β φ(x) (∂^γ ψ)(x/ɛ). (7.21)<br />
Combining (7.20) and (7.21), and using that each (∂^γ ψ)(x/ɛ) has compact<br />
support contained in the unit ball and hence attains its maximum there (for<br />
a fixed α there are only finitely many possible γ), we see that there exist<br />
constants Cα for each |α| ≤ N, independent of ɛ, such that<br />
|∂^α(φ(x) ψ(x/ɛ))| ≤ Cα ɛ^{N+1−|β|} ɛ^{−|γ|} = Cα ɛ^{N+1−|α|}.<br />
Plugging this into (7.19),<br />
|〈u, φ〉| ≤ C Σ_{|α|≤N} Cα ɛ^{N+1−|α|} ≤ K ɛ<br />
for a suitable constant K. Finally, let ɛ → 0.<br />

Proof of Theorem 7.11. Pick N and ψ as in Lemma 7.14.<br />
By (7.18) every φ ∈ C^∞_0(R^n) can be written on the form<br />
φ = ψ(x) Σ_{|α|≤N} (x^α / α!) ∂^α φ(0) + φ′,<br />
where φ′ ∈ C^∞_0(R^n) and ∂^α φ′(0) = 0 for |α| ≤ N. Hence, by Lemma 7.14,<br />
〈u, φ′〉 = 0. Thus<br />
〈u, φ〉 = 〈u(x), ψ(x) Σ_{|α|≤N} (x^α / α!) ∂^α φ(0) + φ′〉<br />
= 〈u(x), ψ(x) Σ_{|α|≤N} (x^α / α!) ∂^α φ(0)〉<br />
= Σ_{|α|≤N} (〈u(x), ψ(x) x^α〉 / α!) ∂^α φ(0)<br />
= Σ_{|α|≤N} cα 〈∂^α δ, φ〉,<br />
with cα = (−1)^{|α|} 〈u(x), ψ(x) x^α〉 / α!.<br />

Now, to prove that the decomposition d = r − a is unique, assume there are two<br />
solutions {r1, a1} and {r2, a2} to the splitting problem. That is,<br />
d = r1 − a1 = r2 − a2, hence r1 − r2 = a1 − a2.<br />
The support of the difference is {0}, since supp(r1 − r2) ⊂ Γ^+_{n−1}(0) and<br />
supp(a1 − a2) ⊂ Γ^−_{n−1}(0), and Γ^+_{n−1}(0) ∩ Γ^−_{n−1}(0) = {0}.<br />
Hence by Theorem 7.11,<br />
r1 − r2 = a1 − a2 = Σ_{|α|≤N} cα ∂^α δ. (7.22)<br />
We want to investigate each term of the sum. From (8.6) below we know<br />
that δ̂ = 1, hence by Theorem 5.17 in [2],<br />
(∂^α δ)ˆ(p) = i^{|α|} p^α δ̂(p) = i^{|α|} p^α.<br />
Hence,<br />
ρ(δ) 〈cα (∂^α δ)ˆ(p/δ), φ̌(p)〉 = ρ(δ) cα i^{|α|} 〈(p/δ)^α, φ̌(p)〉 = (ρ(δ)/δ^{|α|}) 〈cα (∂^α δ)ˆ(p), φ̌(p)〉.<br />
Thus ρ(δ) = δ^{|α|} works for dα(x) = cα ∂^α δ(x) by (7.2), and then clearly the<br />
singular order is ωα = |α|.<br />
For r1 − r2 = Σ_{|α|≤N} cα ∂^α δ we then must have |α| = ωα ≤ ω for all α with cα ≠ 0.<br />
This implies<br />
r1 − r2 = Σ_{|α|≤ω} cα ∂^α δ. (7.23)<br />
Further, in our case ω < 0. Hence all the cα must vanish in (7.22). This<br />
shows r1 = r2 and a1 = a2. Hence the splitting is unique.<br />



7.3 Case II: Positive Singular Order<br />
Choose a positive ɛ < 1. Then, since C′ δ^{ɛ−1} = C′ δ^{ω+ɛ} / δ^{ω+1} ≤ ρ(δ)/δ^{ω+1} by (6.11),<br />
ρ(δ)/δ^{ω+1} → ∞ for δ → 0.<br />
Hence if we choose a multi-index b such that |b| = ω + 1, then<br />
lim_{δ→0} 〈d(x) x^b, ψ(x/δ)〉 = lim_{δ→0} δ^{m+ω+1} 〈d(δx) x^b, ψ(x)〉<br />
= lim_{δ→0} (δ^{ω+1}/ρ(δ)) 〈ρ(δ) δ^m d(δx), x^b ψ(x)〉 = lim_{δ→0} (δ^{ω+1}/ρ(δ)) 〈d0(x), x^b ψ(x)〉 = 0, (7.24)<br />
by (7.1).<br />
We want to show the splitting can be done for test-functions φ satisfying<br />
D^α φ(0) = 0 for |α| ≤ ω.<br />
Definition 7.15. We define⁴<br />
S^ω(R^m) = {φ ∈ S(R^m) | D^α φ(0) = 0 for all α with |α| ≤ ω}.<br />
This is the subspace of S(R^m) of test-functions which vanish up to order ω<br />
at 0.<br />
FIXME: show ∞ > sing order(d) = ω ≥ 0 ⇒ d ∈ S′^ω(R^m).<br />
Definition 7.16. We define operators W_{(ω,w)} : S(R^m) → S^ω(R^m) by<br />
(W_{(ω,w)} φ)(x) := φ(x) − w(x) Σ_{|a|=0}^{ω} (x^a / a!) (D^a φ)(0) (7.25)<br />
for each w(x) ∈ S(R^m) such that<br />
w(0) = 1 and D^α w(0) = 0 for 1 ≤ |α| ≤ ω.<br />
Recognizing Taylor’s formula given by (7.18), we note that W is actually a<br />
modified Taylor subtraction operator which projects S(R^m) into the space<br />
of test-functions which vanish up to order ω at 0.<br />
Note that we may write<br />
(W φ)(x) = Σ_{|b|=ω+1} x^b ψb(x)<br />
by (7.18).<br />
We now want to apply θ from Theorem 7.7.<br />
⁴ If ω is not an integer, use [ω], the largest integer with [ω] < ω. The reason I<br />
haven’t written it like this straight away is that ω always seems to be an integer in QED.<br />
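A one-dimensional sketch of the Taylor subtraction operator W of Definition 7.16, with ω = 1 and the auxiliary function w(x) = e^{−x²} (an assumed choice satisfying w(0) = 1 and w′(0) = 0); the subtracted function vanishes up to order ω at 0, as required for membership in S^ω:<br />

```python
import math

# Sketch of the operator W of Definition 7.16 in one dimension, omega = 1,
# with the assumed auxiliary function w(x) = exp(-x^2) (w(0)=1, w'(0)=0):
#   (W phi)(x) = phi(x) - w(x) * (phi(0) + x * phi'(0)).
def w(x):
    return math.exp(-x * x)

def W_phi(phi, dphi0, x):
    return phi(x) - w(x) * (phi(0.0) + x * dphi0)

phi = lambda x: math.cos(x) + x       # phi(0) = 1, phi'(0) = 1
h = 1e-3
val0 = W_phi(phi, 1.0, 0.0)
dval0 = (W_phi(phi, 1.0, h) - W_phi(phi, 1.0, -h)) / (2.0 * h)
d2val0 = (W_phi(phi, 1.0, h) - 2.0 * val0 + W_phi(phi, 1.0, -h)) / (h * h)
print(val0, dval0, d2val0)  # value and first derivative vanish at 0
```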



Lemma 7.17. Let d^{(b)}(x) = x^b d(x). Then d^{(b)} has quasi-asymptotics<br />
x^b d0(x) with respect to ρ^{(b)}(δ) = δ^{−|b|} ρ(δ), and it has singular order ω − |b|.<br />
Proof. The quasi-asymptotics follows from<br />
lim_{δ→0} ρ^{(b)}(δ) δ^m d^{(b)}(δx) = lim_{δ→0} ρ^{(b)}(δ) δ^{m+|b|} x^b d(δx) = lim_{δ→0} ρ(δ) δ^m x^b d(δx) = x^b d0(x).<br />
The singular order is given by<br />
lim_{δ→0} ρ^{(b)}(aδ)/ρ^{(b)}(δ) = lim_{δ→0} ((aδ)^{−|b|} ρ(aδ)) / (δ^{−|b|} ρ(δ)) = a^{−|b|} a^ω = a^{ω−|b|}<br />
for all a > 0.<br />

By the previous lemma x b d(x) has singular order ω − |b|. Hence it is<br />

negative for b ≥ ω + 1. Thus we may define<br />

Definition 7.18.

$$\langle r(x), \varphi\rangle := \langle d,\ \theta(v\cdot x)\,W\varphi\rangle$$

and

$$a(x) := d(x) - r(x).$$

By construction supp r ⊂ Γ⁺ₙ₋₁(0). Further, r(x) = d(x) for x ∈ Γ⁺ₙ₋₁(0) \ {0} by (7.25), since a test-function φ ∈ S supported in Γ⁺ₙ₋₁(0) \ {0} satisfies (Dᵅφ)(0) = 0 for all α.

lim ρ(δ)〈r(x), φ(x/δ)〉 = lim ρ(δ)〈d(x), (θW φ)(x/δ)〉<br />

δ→0 δ→0<br />

= 〈d0(x), θW φ(x)〉.<br />

In contrast to the case ω < 0, the splitting is not unique for ω ≥ 0 because of the dependence on w. Again the support of the difference between two solutions is {0}, hence by (7.23)

$$r_1 - r_2 = \sum_{|\alpha|\le\omega} C_\alpha\,\partial^\alpha\delta.$$

The coefficients Cα have to be fixed by additional normalization conditions.
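The w-dependence can be made explicit in one dimension. For two admissible choices w₁, w₂ (both my own examples, satisfying w(0) = 1, w′(0) = 0 for ω = 1), the difference W_{w₁}φ − W_{w₂}φ depends on φ only through its derivatives at 0 up to order ω, so pairing with d yields precisely a combination of δ-derivatives as above:

```python
import sympy as sp

x = sp.symbols('x')
omega = 1
w1 = sp.exp(-x**2)     # admissible: w1(0)=1, w1'(0)=0
w2 = 1 / (1 + x**2)    # admissible: w2(0)=1, w2'(0)=0

def W(phi, w):
    # modified Taylor subtraction with auxiliary function w
    taylor = sum(sp.diff(phi, x, a).subs(x, 0) * x**a / sp.factorial(a)
                 for a in range(omega + 1))
    return phi - w * taylor

phi = sp.exp(-x**2 / 2) * (1 + x + x**3)
d = sp.simplify(W(phi, w1) - W(phi, w2))

# The difference is (w2 - w1) times the omega-jet of phi at 0:
expected = (w2 - w1) * (phi.subs(x, 0) + x * sp.diff(phi, x).subs(x, 0))
assert sp.simplify(d - expected) == 0
```

Since the difference involves φ only via φ(0), …, φ⁽ᵚ⁾(0), the corresponding r₁ − r₂ is a linear combination of ∂ᵅδ with |α| ≤ ω, with coefficients depending on d, θ and the choice of w.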



Chapter 8

Application to QED

8.1 Using the Game Plan

In this section we will use the game plan from the end of Section 5.1 to find D₂ from T₁, and then show how to find T₂.

In QED¹,

$$T_1(x) = ie :\bar\psi(x)\gamma^\mu\psi(x):\,A_\mu(x).$$

By (5.1) this means that

$$\tilde T_1(x) = -T_1(x) = -ie :\bar\psi(x)\gamma^\mu\psi(x):\,A_\mu(x).$$

We now want to construct an advanced distribution according to (5.7). This is done using Theorem 4.6, remembering that the only contractions existing in QED are given by equations (4.1), (4.2) and (4.3).

¹ [6] page 183



Expanding the normally ordered products by Wick's theorem, the contractions give

$$\begin{aligned}
A'_2(x_1,x_2) &= \tilde T_1(x_1)T_1(x_2) = -T_1(x_1)T_1(x_2)\\
&= e^2\gamma^\mu_{ab}\gamma^\nu_{cd}\, :\bar\psi_a(x_1)\psi_b(x_1):\,:\bar\psi_c(x_2)\psi_d(x_2):\,A_\mu(x_1)A_\nu(x_2)\\
&= e^2\gamma^\mu_{ab}\gamma^\nu_{cd}\Big[:\bar\psi_a(x_1)\psi_b(x_1)\bar\psi_c(x_2)\psi_d(x_2):\\
&\qquad\quad +\tfrac{1}{i}S^{(+)}_{bc}(x_1-x_2):\bar\psi_a(x_1)\psi_d(x_2):\\
&\qquad\quad +\tfrac{1}{i}S^{(-)}_{da}(x_2-x_1):\psi_b(x_1)\bar\psi_c(x_2):\\
&\qquad\quad -S^{(+)}_{bc}(x_1-x_2)\,S^{(-)}_{da}(x_2-x_1)\Big]\\
&\qquad\times\Big[:A_\mu(x_1)A_\nu(x_2): + g_{\mu\nu}\,iD^{(+)}_0(x_1-x_2)\Big]. \qquad (8.1)
\end{aligned}$$

From (5.8) we see that the retarded distribution is given by (8.1) simply by substituting x₁ ↔ x₂. For convenience we also interchange the indices μ ↔ ν, a ↔ c and b ↔ d, arriving at

$$\begin{aligned}
R'_2(x_1,x_2) &= T_1(x_2)\tilde T_1(x_1) = -T_1(x_2)T_1(x_1)\\
&= e^2\gamma^\mu_{ab}\gamma^\nu_{cd}\Big[:\bar\psi_a(x_1)\psi_b(x_1)\bar\psi_c(x_2)\psi_d(x_2):\\
&\qquad\quad -\tfrac{1}{i}S^{(+)}_{da}(x_2-x_1):\psi_b(x_1)\bar\psi_c(x_2):\\
&\qquad\quad -\tfrac{1}{i}S^{(-)}_{bc}(x_1-x_2):\bar\psi_a(x_1)\psi_d(x_2):\\
&\qquad\quad -S^{(+)}_{da}(x_2-x_1)\,S^{(-)}_{bc}(x_1-x_2)\Big]\\
&\qquad\times\Big[:A_\mu(x_1)A_\nu(x_2): + g_{\mu\nu}\,iD^{(+)}_0(x_2-x_1)\Big]. \qquad (8.2)
\end{aligned}$$


Finally we can calculate the difference D(x₁, x₂) by (5.11):

$$\begin{aligned}
D(x_1,x_2) &= R'_2(x_1,x_2) - A'_2(x_1,x_2)\\
&= e^2\gamma^\mu_{ab}\gamma^\nu_{cd}\Big\{
:\bar\psi_a(x_1)\psi_b(x_1)\bar\psi_c(x_2)\psi_d(x_2):\,g_{\mu\nu}\,i\big(D^{(+)}_0(x_2-x_1)-D^{(+)}_0(x_1-x_2)\big)\\
&\quad -\tfrac{1}{i}\big[S^{(+)}_{da}(x_2-x_1)+S^{(-)}_{da}(x_2-x_1)\big]:\psi_b(x_1)\bar\psi_c(x_2):\,:A_\mu(x_1)A_\nu(x_2):\\
&\quad -g_{\mu\nu}\big[S^{(+)}_{da}(x_2-x_1)D^{(+)}_0(x_2-x_1)+S^{(-)}_{da}(x_2-x_1)D^{(+)}_0(x_1-x_2)\big]:\psi_b(x_1)\bar\psi_c(x_2):\\
&\quad -\tfrac{1}{i}\big[S^{(-)}_{bc}(x_1-x_2)+S^{(+)}_{bc}(x_1-x_2)\big]:\bar\psi_a(x_1)\psi_d(x_2):\,:A_\mu(x_1)A_\nu(x_2):\\
&\quad -g_{\mu\nu}\big[S^{(-)}_{bc}(x_1-x_2)D^{(+)}_0(x_2-x_1)+S^{(+)}_{bc}(x_1-x_2)D^{(+)}_0(x_1-x_2)\big]:\bar\psi_a(x_1)\psi_d(x_2):\\
&\quad +\big[S^{(+)}_{bc}(x_1-x_2)S^{(-)}_{da}(x_2-x_1)-S^{(-)}_{bc}(x_1-x_2)S^{(+)}_{da}(x_2-x_1)\big]:A_\mu(x_1)A_\nu(x_2):\\
&\quad +\tfrac{1}{i}g_{\mu\nu}\big[S^{(-)}_{bc}(x_1-x_2)S^{(+)}_{da}(x_2-x_1)D^{(+)}_0(x_2-x_1)\\
&\qquad\qquad -S^{(+)}_{bc}(x_1-x_2)S^{(-)}_{da}(x_2-x_1)D^{(+)}_0(x_1-x_2)\big]\Big\}. \qquad (8.3)
\end{aligned}$$

This formula may be regarded as the analogue of the Feynman rules. Each term can be treated separately and represents a different scattering scenario, in which the field operators create and annihilate particles. Like the Feynman rules, the terms may be illustrated by graphs. But note the crucial difference: while all our terms are well-defined distributions, the Feynman rules are ill-defined for closed loops, which here are represented by terms with products of the propagators S and D.

Let us consider the first term. First we note that D⁽⁺⁾₀(x₁ − x₂) − D⁽⁺⁾₀(x₂ − x₁) is in fact the Jordan-Pauli distribution for mass m = 0, where the full Jordan-Pauli distribution is given by²

$$D(x) = \frac{\operatorname{sgn} x^0}{2\pi}\Big(\delta(x^2) - \Theta(x^2)\,\frac{m}{2\sqrt{x^2}}\,J_1(m\sqrt{x^2})\Big).$$

Note that for the 1-dimensional δ-distribution, δ(δ²x²) = δ⁻²δ(x²). Therefore

$$D_0(\delta x) = \frac{\operatorname{sgn} x^0}{2\pi}\,\delta(\delta^2 x^2) = \frac{\operatorname{sgn} x^0}{2\pi}\,\frac{\delta(x^2)}{\delta^2}.$$

Hence if we choose ρ(δ) = δ²⁻ⁿ we get the quasi-asymptotics

$$\lim_{\delta\to 0}\rho(\delta)\,\delta^n D_0(\delta x) = \frac{\operatorname{sgn} x^0}{2\pi}\,\delta(x^2).$$

² [6] page 89.



For each a > 0,

$$\frac{\rho(a\delta)}{\rho(\delta)} = a^{2-n}.$$

Thus in ℝ⁴, ω = −2. Since the singular order is negative, the splitting may be done by multiplication by the Θ-function (7.16). Hence the retarded part of the first term of (8.3) is

$$R_2(x_1,x_2) = -ie^2 :\bar\psi(x_1)\gamma^\mu\psi(x_1)\bar\psi(x_2)\gamma_\mu\psi(x_2):\,\Theta D_0(x_1-x_2).$$

Further, R′₂(x₁, x₂) is given by (8.2) by inspecting each term. By (5.12),

$$\begin{aligned}
T_2(x_1,x_2) &= R_2(x_1,x_2) - R'_2(x_1,x_2)\\
&= -ie^2 :\bar\psi(x_1)\gamma^\mu\psi(x_1)\bar\psi(x_2)\gamma_\mu\psi(x_2):\Big[\Theta D_0(x_1-x_2) + D^{(+)}_0(x_2-x_1)\Big].
\end{aligned}$$

It is not the aim of this project to introduce Feynman propagators, but the reader with knowledge of these might notice that the expression in brackets is actually a Feynman propagator, describing the exchange of a photon between electrons.

Figure 8.1: Photon exchange.

8.2 The Adiabatic Limit

In the introduction of the S-matrix we used test-functions g ∈ S(ℝ⁴), so-called switching functions. As the name suggests, they switch off long-range interactions to prevent infrared divergences³. Of course this is not a good model, and we need to consider the so-called adiabatic limit g → 1 to take long-range interactions like the Coulomb potential into account. We show how the adiabatic limit may be carried out. In practice it can be done by taking the so-called scaling limit.

Let g₀ ∈ S(ℝ⁴) be a fixed test-function such that g₀(0) = 1. Then we let g(x) := g₀(εx) and take the scaling limit, that is, we let ε → 0.

Calculations are in practice usually done in momentum space. With this in mind we note that

$$\hat g(k) = \int g(x)\,e^{-ik\cdot x}\,d^4x = \int g_0(\varepsilon x)\,e^{-ik\cdot x}\,d^4x = \frac{1}{\varepsilon^4}\,\hat g_0\Big(\frac{k}{\varepsilon}\Big). \qquad (8.4)$$

³ By infrared divergence we simply mean a divergence due to physical phenomena at very long distances, or to contributions from objects with very small energy.
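The scaling relation (8.4) can be checked symbolically in one dimension. The Gaussian g₀ below is my own choice of test function; the Fourier convention ĝ(k) = ∫ g(x)e⁻ⁱᵏˣ dx matches the text:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
eps = sp.symbols('epsilon', positive=True)
g0 = sp.exp(-x**2)   # a fixed test function with g0(0) = 1

def hat(f):
    # 1-dimensional Fourier transform with the convention of (8.4)
    return sp.integrate(f * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo))

ghat = hat(g0.subs(x, eps * x))   # Fourier transform of g(x) = g0(eps x)
ghat0 = hat(g0)

# one-dimensional analogue of (8.4): ghat(k) = (1/eps) * ghat0(k/eps)
assert sp.simplify(ghat - ghat0.subs(k, k / eps) / eps) == 0
```

As ε → 0 the peak ĝ₀(k/ε)/ε becomes ever narrower and taller while its integral stays fixed, which is the weak convergence to a multiple of δ(k) proved in Theorem 8.1 below for the four-dimensional case.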



Theorem 8.1. Identify ĝ(k) with the distribution it induces. Then

$$\lim_{\varepsilon\to 0}\hat g(k) = (2\pi)^4\,\delta(k).$$

Proof. For all φ ∈ S(ℝ⁴),

$$\lim_{\varepsilon\to 0}\langle\hat g, \varphi\rangle
= \lim_{\varepsilon\to 0}\langle g, \hat\varphi\rangle
= \lim_{\varepsilon\to 0}\int g(k)\,\hat\varphi(k)\,d^4k
= \lim_{\varepsilon\to 0}\int g_0(\varepsilon k)\,\hat\varphi(k)\,d^4k
= \int\hat\varphi(k)\,d^4k, \qquad (8.5)$$

by the Dominated Convergence Theorem.

Now note that generally

$$\langle\hat\delta, \varphi\rangle = \langle\delta, \hat\varphi\rangle = \hat\varphi(0) = \int\varphi(x)\,dx = \langle 1, \varphi\rangle. \qquad (8.6)$$

Therefore δ̂ = 1. Similarly,

$$\langle\check\delta, \varphi\rangle = \langle\delta, \check\varphi\rangle = \check\varphi(0) = (2\pi)^{-n}\int\varphi(x)\,dx = (2\pi)^{-n}\langle 1, \varphi\rangle,$$

so 1 = (2π)ⁿ δ̌, hence

$$\langle\hat 1, \varphi\rangle = (2\pi)^n\langle\mathcal{F}(\check\delta), \varphi\rangle = (2\pi)^n\langle\delta, \varphi\rangle. \qquad (8.7)$$

That is, 1̂ = (2π)ⁿδ. It then follows from (8.5) that

$$\lim_{\varepsilon\to 0}\langle\hat g, \varphi\rangle = \int\hat\varphi(k)\,d^4k = \langle 1, \hat\varphi\rangle = \langle\hat 1, \varphi\rangle = (2\pi)^4\langle\delta, \varphi\rangle,$$

as wanted. □

Now by (8.4),

$$\begin{aligned}
\int d^4k_1\cdots d^4k_n\,\hat T(k_1,\dots,k_n)\,\hat g(k_1)\cdots\hat g(k_n)
&= \frac{1}{\varepsilon^{4n}}\int d^4k_1\cdots d^4k_n\,\hat T(k_1,\dots,k_n)\,\hat g_0(k_1/\varepsilon)\cdots\hat g_0(k_n/\varepsilon)\\
&= \int d^4k_1\cdots d^4k_n\,\hat T(\varepsilon k_1,\dots,\varepsilon k_n)\,\hat g_0(k_1)\cdots\hat g_0(k_n),
\end{aligned}$$

by substituting kᵢ by εkᵢ. Finally, we may calculate in the usual way and in the end take the limit ε → 0 without problems.



Chapter 9

The Microlocal Approach - A Condition on the Wave Front Set

In this section I will use the notation T*(X) even though X will be an open subset of ℝⁿ (and thus T*(X) = X × ℝⁿ). I use this notation to emphasize that the results can be generalized to manifolds¹. As the results are based on [4], where we only considered the case of X open in ℝⁿ, it would not make sense to actually work with manifolds here.

The Method of Epstein and Glaser may be generalized to work on manifolds using microlocal analysis.

Definition 9.1. Let Ψ ∈ H. Then the operator-valued function

$$\Psi(f) := \,:W(f):\Psi, \qquad\text{where}\quad W(f)\varphi := \exp\big(i\varphi(f)^{**}\big),$$

is said to be infinitely often differentiable at f = 0 if for each n:

1. For all h ∈ D(M) the map t ↦ Ψ(th), t ∈ ℝ, is n times norm-differentiable at 0.

2. The derivatives at 0 define symmetric operator-valued distributions.

With Ψ as above we define a distribution

$$:\varphi(x_1)\cdots\varphi(x_n): \;:=\; \frac{\partial^n :W(f):}{i^n\,\partial f(x_1)\cdots\partial f(x_n)}\bigg|_{f=0}$$

¹ In fact, globally hyperbolic manifolds. See [3] page 7.


in the sense that if h ∈ D(M), then

$$\langle :\varphi(x_1)\cdots\varphi(x_n):, \Psi\rangle(h^n)
= \int :\varphi(x_1)\cdots\varphi(x_n):\Psi\;h(x_1)\cdots h(x_n)
= i^{-n}\int \frac{\partial^n\Psi}{\partial f(x_1)\cdots\partial f(x_n)}\,h(x_1)\cdots h(x_n). \qquad (9.1)$$
We want to generalize this.<br />

Definition 9.2. The microlocal domain of smoothness D is defined by<br />

where<br />

D = {Ψ ∈ H|Φ(f) is infinitely often differentiable at f = 0<br />

<br />

∂nΨ and for all n ∈ N, WF<br />

∂f n<br />

<br />

(T ∗ M n )± def<br />

={x, k) ∈ T ∗ M|ki ∈ V − (0)}.<br />

⊂ (T ∗ M n )−},<br />

Now we may use the idea of equation (9.1) to define an operator-valued distribution on D.

Definition 9.3. We call the operator-valued distribution :φ(x₁)···φ(xₙ): on D defined by (9.1) a Wick monomial.²

Now we would like to generalize the Wick monomial to a polynomial. That is, for arbitrary indices l = (l₁, …, lₙ) we would like to make sense of

$$:\varphi^{l_1}(x_1)\cdots\varphi^{l_n}(x_n):\,. \qquad (9.2)$$

To this end we need the following definition.

Definition 9.4. A partial diagonal ∆_{l₁,…,l_j}, where l₁ + ··· + l_j = n, is a subset of Mⁿ of the form

$$\Delta_{l_1,\dots,l_j}(M) = \big\{(\underbrace{x_1,\dots,x_1}_{l_1\text{ times}},\,\dots,\,\underbrace{x_j,\dots,x_j}_{l_j\text{ times}}) \,\big|\, x_i\in M,\ i = 1,\dots,j\big\} \cong M^j.$$

Now we want to define the polynomial (9.2) as a restriction of ∂ⁿΨ/∂f(x₁)···∂f(xₙ) to arbitrary partial diagonals. This is done in practice by a pullback.³

Define the partial diagonal map

$$\delta_{n,l} : M^n \to \Delta_{l_1,\dots,l_n}(M)$$

by

$$\delta_{n,l} : (x_1,\dots,x_n) \mapsto (\underbrace{x_1,\dots,x_1}_{l_1\text{ times}},\,\dots,\,\underbrace{x_n,\dots,x_n}_{l_n\text{ times}}).$$

² Note that there are different definitions of a Wick monomial and polynomial in the literature. The definitions used here are not the most common ones.

³ Remember that we considered the pullback of a distribution by a function in Section 4.2 of [4].



Now, by Theorem 4.5 from [4], we may define a Wick polynomial as an operator-valued distribution on D, given as the pullback of a Wick monomial by the partial diagonal map δₙ,ₗ.⁴

First we calculate the set of normals of the partial diagonal map. The transpose of the Jacobi matrix of the partial diagonal map is

$$\delta'_{n,l}(x_1,\dots,x_n)^t =
\begin{pmatrix}
\partial_{x_1}(\delta_{n,l})_1 & \cdots & \partial_{x_n}(\delta_{n,l})_1\\
\vdots & \ddots & \vdots\\
\partial_{x_1}(\delta_{n,l})_n & \cdots & \partial_{x_n}(\delta_{n,l})_n
\end{pmatrix}^t
=
\begin{pmatrix}
1\cdots 1 & 0\cdots 0 & \cdots & 0\cdots 0\\
0\cdots 0 & 1\cdots 1 & \cdots & 0\cdots 0\\
\vdots & & \ddots & \vdots\\
0\cdots 0 & 0\cdots 0 & \cdots & 1\cdots 1
\end{pmatrix},$$

where row j contains l_j ones. Hence

$$\delta'_{n,l}(x_1,\dots,x_n)^t\,\eta =
\begin{pmatrix}
\eta_1+\cdots+\eta_{l_1}\\
\eta_{l_1+1}+\cdots+\eta_{l_1+l_2}\\
\vdots\\
\eta_{l_1+\cdots+l_{n-1}+1}+\cdots+\eta_{l_1+\cdots+l_n}
\end{pmatrix}.$$

Note that the matrix product is independent of x. The set of normals is

$$\begin{aligned}
N_{\delta_{n,l}} &= \{(\delta_{n,l}(x), \eta)\in \Delta_l(M)\times\mathbb{R}^n \mid \delta'_{n,l}(x)^t\eta = 0\}\\
&= \Big\{(y,\eta)\in \Delta_l(M)\times\mathbb{R}^n \,\Big|\, \sum_{j=1}^{l_{i+1}} \eta_{l_0+\cdots+l_i+j} = 0\ \text{for all } i = 0,\dots,n-1\Big\},
\end{aligned}$$

where we have set l₀ := 0.
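As a finite-dimensional sketch (treating each point xᵢ as a single real coordinate, so M = ℝ; the concrete l and η below are my own examples), the block structure of δ′ₙ,ₗᵗ and the normal condition can be checked directly:

```python
import numpy as np

def jacobian_transpose(l):
    """Transpose of the Jacobian of delta_{n,l}: row j has l_j ones
    in the j-th consecutive block of columns."""
    n, N = len(l), sum(l)
    J = np.zeros((n, N), dtype=int)
    start = 0
    for j, lj in enumerate(l):
        J[j, start:start + lj] = 1
        start += lj
    return J

l = (2, 1, 3)
J = jacobian_transpose(l)

eta = np.array([1, 2, 3, 4, 5, 6])
print(J @ eta)   # the block sums (1+2, 3, 4+5+6), i.e. [ 3  3 15]

# eta is a normal of delta_{n,l} exactly when every block sum vanishes
eta0 = np.array([1, -1, 0, 2, -3, 1])
assert np.all(J @ eta0 == 0)
```

The matrix-vector product reproduces the block sums of the display above, so the kernel condition δ′ᵗη = 0 is exactly "each block of η sums to zero".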

Definition 9.5. By a Wick polynomial :φ^{l₁}(x₁)···φ^{lₙ}(xₙ): we mean the operator-valued distribution

$$:\varphi^{l_1}(x_1)\cdots\varphi^{l_n}(x_n): \;:=\; \delta_{n,l}^*\,:\varphi(x_1)\cdots\varphi(x_n):$$

when it exists, that is, if

$$N_{\delta_{n,l}} \cap \mathrm{WF}\big(:\varphi(x_1)\cdots\varphi(x_n):\big) = \emptyset.$$

⁴ Note that we proved the theorem exactly for the case of a diagonal map.



Note that the definition agrees with Definition 9.3, since the restriction to the total diagonal ∆_{(1,…,1)}(M) simply returns the monomial itself.

Translational invariance has been an important ingredient in the development of the method of Epstein and Glaser. Unfortunately, translations are not well-defined on general manifolds and have to be replaced by parallel transport. Therefore, one of the key problems in creating a microlocal formulation of the method (and thus making it work on curved spaces) is to find a suitable smoothness condition to replace translational invariance.

The condition is related to graph theory. Letting G denote the set of all graphs, we define

$$\mathcal{G}_n := \{G\in\mathcal{G} \mid G\ \text{is non-oriented with vertices}\ \{1,\dots,n\}\}$$

and

$$E_G := \{e \mid e\ \text{is an edge of}\ G\}.$$

Further, given e ∈ E_G connecting i and j with i < j, we set s(e) := i and r(e) := j, the source and range of e, respectively.

Definition 9.6. Let G ∈ Gₙ and M a manifold. An immersion (x, γ, k) is a triplet consisting of

1. a map x mapping the vertices v of G to x(v) ∈ M,

2. a map γ mapping the edges e ∈ E_G to null-geodesics γ(e) ⊂ M connecting x_{s(e)} and x_{r(e)},

3. a map k mapping the edges e ∈ E_G to future-directed covector fields k_{γ(e)} := k(e), which are coparallel to the tangent vector γ̇ₑ of the null-geodesic.

Along with the symmetry of Tₙ and the causality we expect the S-matrix to have, we must add one more property:


Microlocal Spectrum Condition. For the numerical distributions tₙ ∈ D′(Mⁿ), n ≥ 2, of any time-ordered product,

$$\mathrm{WF}(t_n)\subset\Gamma_n,$$

where

$$\Gamma_n = \Big\{(x_1,k_1;\dots;x_n,k_n)\in T^*M^n\setminus\{0\} \,\Big|\, \exists G\in\mathcal{G}_n\ \text{and an immersion}\ (x,\gamma,k)\ \text{of}\ G,\ \text{in which}\ k_e\ \text{is future directed whenever}\ x_{s(e)}\notin J^-(x_{r(e)}),\ \text{and such that}\ k_i = \sum_{e:\,s(e)=i} k_e(x_i) - \sum_{e:\,r(e)=i} k_e(x_i)\Big\},$$

where the sums run over all edges starting and terminating at xᵢ, respectively, and

$$J^-(x) := \{y\in M \mid y < x\ \text{and there exists a causal curve}\ \gamma\ \text{connecting}\ x\ \text{and}\ y\}$$

is the set of all points in M in the past of x that can be connected to x by a causal curve.

The microlocal spectrum condition may be seen as a replacement for the spectrum condition (Axiom 4iii of the Wightman Axioms). Further, it ensures that the wave front set has the following property.

Figure 9.1: Illustration of Lemma 9.7.

Lemma 9.7. Let (x, k) ∈ Γₙ. Then there exists a pair (xₘ, kₘ) such that kₘ ∉ V₊(0).



Proof. Let xₘ be a maximal point, that is, xₘ ∉ J⁻(xᵢ) for all i. Note that we may write

$$k_m = -k_{1,m} - \cdots - k_{m-1,m} + k_{m+1,m} + \cdots + k_{n,m},$$

some of which might be zero, but not all, since xₘ is connected to at least one point. Now since xₘ is maximal, $k^0_{1,m},\dots,k^0_{m-1,m} > 0$ and $k^0_{m+1,m},\dots,k^0_{n,m} < 0$. Hence k⁰ₘ < 0, so kₘ ∉ V₊(0). □
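The bookkeeping in this proof can be illustrated numerically. The graph and covectors below are my own toy example in two spacetime dimensions: each directed edge carries a future-directed covector k_e with k⁰ₑ > 0, the covector at vertex i is the sum over edges leaving i minus the sum over edges arriving at i, and a vertex that only receives edges ends up with k⁰ < 0, i.e. outside V₊(0):

```python
import numpy as np

# Directed edges (s(e), r(e)) with s(e) < r(e), each carrying a
# future-directed covector k_e = (k^0, k^1) with k^0 > 0.
edges = {(1, 2): np.array([1.0, 0.5]),
         (1, 3): np.array([2.0, -1.0]),
         (2, 3): np.array([1.5, 1.5])}

def vertex_covector(i):
    # k_i = sum over edges with s(e) = i minus sum over edges with r(e) = i
    k = np.zeros(2)
    for (s, r), ke in edges.items():
        if s == i:
            k += ke
        if r == i:
            k -= ke
    return k

# Vertex 3 is maximal (no edge leaves it), so its covector points backwards:
assert vertex_covector(3)[0] < 0
```

Here vertex 3 plays the role of xₘ: all its edge covectors enter with a minus sign, so its time component is strictly negative.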

The spectrum condition is important for the construction of the scattering matrix, as the following theorem shows.

Theorem 9.8. If the spectral condition is satisfied, then

$$T_n(x_1,\dots,x_n)\,:\varphi^{l_1}(x_1)\cdots\varphi^{l_n}(x_n): \qquad (9.3)$$

is a well-defined operator-valued distribution on D for any n and any choice of indices l₁, …, lₙ. FIXME: dense invariant?

Proof. Let Ψ ∈ D. Then by Definition 9.5,

$$:\varphi^{l_1}(x_1)\cdots\varphi^{l_n}(x_n):\Psi$$

is an operator-valued distribution. By Definition 9.2,

$$\mathrm{WF}(:\varphi(x_1)\cdots\varphi(x_n):\Psi) \subset (T^*M^n)_-.$$

Thus (FIXME: check the following)

$$\mathrm{WF}(:\varphi^{l_1}(x_1)\cdots\varphi^{l_n}(x_n):\Psi)
= \mathrm{WF}(\delta_{n,l}^*\,:\varphi(x_1)\cdots\varphi(x_n):\Psi)
\subset \delta_{n,l}^*\,\mathrm{WF}(:\varphi(x_1)\cdots\varphi(x_n):\Psi)
\subset M^n\times \bar V_-^{\times n},$$

where we have used that WF(f*u) ⊂ f*WF(u) by Theorem 4.2 in [4].

Now remember our main result, Theorem 4.10 of [4]: the product of two distributions u and v exists unless (x, k) ∈ WF(u) and (x, −k) ∈ WF(v) for some (x, k). But in our case the spectral condition ensures that WF(Tₙ) ⊂ Γₙ, and therefore by Lemma 9.7 there does not exist (x₁, k₁, …, xₙ, kₙ) ∈ Γₙ with kᵢ ∈ V₊(0) for all i. Thus the product (9.3) is well-defined. □



Bibliography

[1] J. Foster and J.D. Nightingale: A Short Course in General Relativity, 2nd edition, Springer-Verlag, New York, 1995.

[2] Gerd Grubb: Introduction to Distribution Theory - Lecture Notes, http://www.math.ku.dk/~grubb/distribution.htm, Copenhagen, 2003.

[3] Romeo Brunetti and Klaus Fredenhagen: Microlocal Analysis and Interacting Quantum Field Theories: Renormalization on Physical Backgrounds, http://arxiv.org/abs/math-ph/9903028, Hamburg, 1999.

[4] Asger Jacobsen and Morten Bakkedal: Wave Front Sets, http://www.ajac.dk/projects/wavefront/wavefront.pdf, Copenhagen, 2005.

[5] Robert J. Zimmer: Essential Results of Functional Analysis, The University of Chicago Press, Chicago, 1990.

[6] G. Scharf: Finite Quantum Electrodynamics - The Causal Approach, 2nd edition, Springer-Verlag, Berlin, 1995.

[7] Richard L. Liboff: Introductory Quantum Mechanics, 3rd edition, Addison-Wesley, 1998.

[8] J.J. Sakurai: Modern Quantum Mechanics, Revised Edition, Addison-Wesley, 1994.

[9] Huzihiro Araki: Mathematical Theory of Quantum Fields, Oxford University Press, Oxford, 1999.

[10] F.G. Friedlander and M. Joshi: The Theory of Distributions, 2nd edition, Cambridge University Press, Cambridge, 1998.

[11] Lars Hörmander: The Analysis of Linear Partial Differential Operators I - Distribution Theory and Fourier Analysis, 2nd edition, Springer-Verlag, Berlin Heidelberg, 1990.

[12] Walter Rudin: Functional Analysis, 2nd edition, McGraw-Hill, New York, 1991.
