1 Lévy Processes and Infinite Divisibility - Department of ...

The term “Lévy process” honours the work of the French mathematician Paul Lévy who, although not alone in his contribution, played an instrumental role in bringing together an understanding and characterisation of processes with stationary independent increments. In earlier literature, Lévy processes can be found under a number of different names. In the 1940s, Lévy himself referred to them as a sub-class of processus additifs (additive processes), that is, processes with independent increments. For the most part, however, research literature through the 1960s and 1970s refers to Lévy processes simply as processes with stationary independent increments. One sees a change in language through the 1980s, and by the 1990s the use of the term “Lévy process” had become standard.

From Definition 1.1 alone it is difficult to see just how rich a class of processes the class of Lévy processes forms. De Finetti [4] introduced the notion of an infinitely divisible distribution and showed that they have an intimate relationship with Lévy processes. This relationship gives a reasonably good impression of how varied the class of Lévy processes really is. To this end, let us now devote a little time to discussing infinitely divisible distributions.

Definition 1.2 We say that a real-valued random variable Θ has an infinitely divisible distribution if for each n = 1,2,... there exists a sequence of i.i.d. random variables Θ_{1,n}, ..., Θ_{n,n} such that

Θ =^d Θ_{1,n} + ··· + Θ_{n,n},

where =^d denotes equality in distribution.
Alternatively, we could have expressed this relation in terms of probability laws. That is to say, the law µ of a real-valued random variable is infinitely divisible if for each n = 1,2,... there exists another law µ_n of a real-valued random variable such that µ = µ_n^{*n}. (Here µ_n^{*n} denotes the n-fold convolution of µ_n.)

In view of the above definition, one way to establish whether a given random variable has an infinitely divisible distribution is via its characteristic exponent. Suppose that Θ has characteristic exponent Ψ(u) := −log E(e^{iuΘ}) for all u ∈ R. Then Θ has an infinitely divisible distribution if for all n ≥ 1 there exists a characteristic exponent of a probability distribution, say Ψ_n, such that Ψ(u) = nΨ_n(u) for all u ∈ R.

The full extent to which we may characterise infinitely divisible distributions is described by the characteristic exponent Ψ and an expression known as the Lévy–Khintchine formula.²

Theorem 1.1 (Lévy–Khintchine formula) A probability law µ of a real-valued random variable is infinitely divisible with characteristic exponent Ψ,

∫_R e^{iθx} µ(dx) = e^{−Ψ(θ)} for θ ∈ R,

²Note, although it is a trivial fact, it is always worth reminding oneself that one works in general with the Fourier transform of a measure as opposed to the Laplace transform, on account of the fact that the latter may not exist.


if and only if there exists a triple (a, σ, Π), where a ∈ R, σ ≥ 0 and Π is a measure concentrated on R\{0} satisfying ∫_R (1 ∧ x²) Π(dx) < ∞, such that

Ψ(θ) = iaθ + (1/2)σ²θ² + ∫_R (1 − e^{iθx} + iθx 1_{(|x|<1)}) Π(dx)

for every θ ∈ R.

From the definition of a Lévy process we see that for any t > 0, X_t is a random variable belonging to the class of infinitely divisible distributions. This follows from the fact that for any n = 1,2,...,

X_t = X_{t/n} + (X_{2t/n} − X_{t/n}) + ··· + (X_t − X_{(n−1)t/n})   (1.1)

together with the fact that X has stationary independent increments. Suppose now that we define for all θ ∈ R, t ≥ 0,

Ψ_t(θ) = −log E(e^{iθX_t});

then using (1.1) twice we have for any two positive integers m, n that

mΨ_1(θ) = Ψ_m(θ) = nΨ_{m/n}(θ),

and hence for any rational t > 0,

Ψ_t(θ) = tΨ_1(θ).   (1.2)

If t is an irrational number, then we can choose a decreasing sequence of rationals {t_n : n ≥ 1} such that t_n ↓ t as n tends to infinity. Almost sure right continuity of X implies right continuity of exp{−Ψ_t(θ)} (by dominated convergence) and hence (1.2) holds for all t ≥ 0.

In conclusion, any Lévy process has the property that for all t ≥ 0,

E(e^{iθX_t}) = e^{−tΨ(θ)},

where Ψ(θ) := Ψ_1(θ) is the characteristic exponent of X_1, which has an infinitely divisible distribution.

Definition 1.3 In the sequel we shall also refer to Ψ(θ) as the characteristic exponent of the Lévy process.

It is now clear that each Lévy process can be associated with an infinitely divisible distribution. What is not clear is whether, given an infinitely divisible distribution, one may construct a Lévy process X such that X_1 has that distribution. This latter issue is affirmed by the following theorem, which gives the Lévy–Khintchine formula for Lévy processes.


Theorem 1.2 (Lévy–Khintchine formula for Lévy processes) Suppose that a ∈ R, σ ≥ 0 and Π is a measure concentrated on R\{0} such that ∫_R (1 ∧ x²) Π(dx) < ∞. From this triple define for each θ ∈ R,

Ψ(θ) = iaθ + (1/2)σ²θ² + ∫_R (1 − e^{iθx} + iθx 1_{(|x|<1)}) Π(dx).

Then there exists a Lévy process having characteristic exponent Ψ.

2 Examples of Lévy Processes

2.1 Poisson Processes

For each λ > 0 consider a probability distribution µ_λ which is concentrated on k = 0,1,2,... such that µ_λ({k}) = e^{−λ}λ^k/k!. That is to say, the Poisson


distribution. An easy calculation reveals that

∑_{k≥0} e^{iθk} µ_λ({k}) = e^{−λ(1−e^{iθ})} = [e^{−(λ/n)(1−e^{iθ})}]^n.

The right-hand side is the characteristic function of the sum of n independent Poisson random variables, each with parameter λ/n. In the Lévy–Khintchine decomposition we see that a = σ = 0 and Π = λδ_1, the Dirac measure supported on {1}.

Recall that a Poisson process, {N_t : t ≥ 0}, is a Lévy process whose distribution at time t > 0 is Poisson with parameter λt. From the above calculations we have

E(e^{iθN_t}) = e^{−λt(1−e^{iθ})}

and hence its characteristic exponent is given by Ψ(θ) = λ(1 − e^{iθ}) for θ ∈ R.

2.2 Compound Poisson Processes

Suppose now that N is a Poisson random variable with parameter λ > 0 and that {ξ_i : i ≥ 1} is a sequence of i.i.d. random variables (independent of N) with common law F having no atom at zero. By first conditioning on N, we have for θ ∈ R,

E(e^{iθ∑_{i=1}^{N} ξ_i}) = ∑_{n≥0} E(e^{iθ∑_{i=1}^{n} ξ_i}) e^{−λ} λ^n/n!
  = ∑_{n≥0} (∫_R e^{iθx} F(dx))^n e^{−λ} λ^n/n!
  = e^{−λ∫_R (1−e^{iθx}) F(dx)}.   (2.1)

Note we use the convention here and later that for any n = 0,1,2,..., a sum of the form ∑_{i=n+1}^{n} is understood to be zero. We see from (2.1) that distributions of the form ∑_{i=1}^{N} ξ_i are infinitely divisible, with triple a = −λ∫_{0<|x|<1} x F(dx), σ = 0 and Π(dx) = λF(dx). Now suppose that N = {N_t : t ≥ 0} is a Poisson process with rate λ and define the compound Poisson process

X_t := ∑_{i=1}^{N_t} ξ_i,  t ≥ 0.
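The identity (2.1) lends itself to a quick numerical sanity check. The following sketch (an illustration only; the two-point jump law F is an arbitrary choice) compares a Monte Carlo estimate of E(e^{iθ∑_{i=1}^{N} ξ_i}) with the closed form e^{−λ∫_R(1−e^{iθx})F(dx)}:

```python
import cmath
import random

random.seed(7)

lam, theta = 2.0, 0.8
# Hypothetical two-point jump law F: P(xi = 1) = P(xi = -2) = 1/2 (no atom at zero).
atoms = [1.0, -2.0]

# Right-hand side of (2.1): exp(-lam * integral of (1 - e^{i theta x}) F(dx)).
integral = sum(0.5 * (1 - cmath.exp(1j * theta * x)) for x in atoms)
exact = cmath.exp(-lam * integral)

def poisson(rate):
    """Sample N ~ Poisson(rate) by counting exponential inter-arrival times in [0, 1]."""
    n, t = 0, random.expovariate(rate)
    while t <= 1.0:
        n += 1
        t += random.expovariate(rate)
    return n

# Monte Carlo estimate of E exp(i theta sum_{i=1}^N xi_i).
trials = 100_000
est = sum(
    cmath.exp(1j * theta * sum(random.choice(atoms) for _ in range(poisson(lam))))
    for _ in range(trials)
) / trials

print(abs(est - exact))  # small Monte Carlo error
```

The two quantities agree up to the usual O(1/√trials) sampling error.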


Using the fact that N has stationary independent increments together with the mutual independence of the random variables {ξ_i : i ≥ 1}, by writing, for 0 ≤ s < t < ∞,

X_t = X_s + ∑_{i=N_s+1}^{N_t} ξ_i,

it is clear that X_t is the sum of X_s and an independent copy of X_{t−s}. Right continuity and left limits of the process N also ensure right continuity and left limits of X. Thus compound Poisson processes are Lévy processes. From the calculations in the previous paragraph, for each t ≥ 0 we may substitute N_t for the variable N to discover that the Lévy–Khintchine formula for a compound Poisson process takes the form Ψ(θ) = λ∫_R (1−e^{iθx}) F(dx). Note in particular that the Lévy measure of a compound Poisson process is always finite, with total mass equal to the rate λ of the underlying process N.

If a drift of rate c ∈ R is added to a compound Poisson process so that now

X_t = ∑_{i=1}^{N_t} ξ_i + ct,  t ≥ 0,

then it is straightforward to see that the resulting process is again a Lévy process. The associated infinitely divisible distribution is nothing more than a shifted compound Poisson distribution with shift c. The Lévy–Khintchine exponent is given by

Ψ(θ) = λ∫_R (1−e^{iθx}) F(dx) − icθ.

If further the shift is chosen to centre the compound Poisson distribution, then c = −λ∫_R x F(dx) and

Ψ(θ) = ∫_R (1−e^{iθx}+iθx) λF(dx).

2.3 Linear Brownian Motion

Take the probability law

µ_{s,γ}(dx) := (1/√(2πs²)) e^{−(x−γ)²/2s²} dx

supported on R, where γ ∈ R and s > 0: the well-known Gaussian distribution with mean γ and variance s². It is well known that

∫_R e^{iθx} µ_{s,γ}(dx) = e^{−(1/2)s²θ² + iθγ} = [e^{−(1/2)(s/√n)²θ² + iθγ/n}]^n,
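To see the centring in action, here is a small simulation sketch (illustrative assumptions: jumps uniform on (0, 0.6), so E(ξ) = 0.3): with drift rate c = −λ∫_R x F(dx) the compound Poisson process with drift has zero mean at every fixed time.

```python
import random

random.seed(1)

lam, T = 5.0, 1.0
jump_mean = 0.3  # E(xi) for xi ~ Uniform(0, 0.6), an arbitrary illustrative choice

def jump_sum(lam, T):
    """Total jump mass of a rate-lam compound Poisson process over [0, T]."""
    total, t = 0.0, random.expovariate(lam)
    while t <= T:
        total += random.uniform(0.0, 0.6)
        t += random.expovariate(lam)
    return total

c = -lam * jump_mean  # centring drift: c = -lam * E(xi)
trials = 20_000
mean_X_T = sum(jump_sum(lam, T) + c * T for _ in range(trials)) / trials
print(mean_X_T)  # close to zero
```

The empirical mean of X_T is zero up to Monte Carlo error, consistent with the centred exponent above.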


showing again that it is an infinitely divisible distribution, this time with a = −γ, σ = s and Π = 0.

We immediately recognise the characteristic exponent Ψ(θ) = s²θ²/2 − iθγ as also that of a linear Brownian motion,

X_t := sB_t + γt,  t ≥ 0,

where B = {B_t : t ≥ 0} is a standard Brownian motion. It is a trivial exercise to verify that X has stationary independent increments with continuous paths as a consequence of the fact that B does.

2.4 Gamma Processes

For α, β > 0 define the probability measure

µ_{α,β}(dx) = (α^β/Γ(β)) x^{β−1} e^{−αx} dx,

concentrated on (0,∞): the gamma-(α,β) distribution. Note that when β = 1 this is the exponential distribution. We have

∫_0^∞ e^{iθx} µ_{α,β}(dx) = 1/(1−iθ/α)^β = [1/(1−iθ/α)^{β/n}]^n

and infinite divisibility follows. For the Lévy–Khintchine decomposition we have σ = 0, Π(dx) = βx^{−1}e^{−αx} dx concentrated on (0,∞), and a = −∫_0^1 x Π(dx). However this is not immediately obvious. The following lemma proves to be useful in establishing the above triple (a, σ, Π). Its proof is Exercise 3.

Lemma 2.1 (Frullani integral) For all α, β > 0 and z ∈ C such that ℜz ≤ 0 we have

1/(1−z/α)^β = e^{−∫_0^∞ (1−e^{zx}) βx^{−1} e^{−αx} dx}.

To see how this lemma helps, note that the Lévy–Khintchine formula for a gamma distribution takes the form

Ψ(θ) = β ∫_0^∞ (1−e^{iθx}) x^{−1} e^{−αx} dx = β log(1−iθ/α)

for θ ∈ R. The choice of a in the Lévy–Khintchine formula is precisely the quantity needed to cancel the term coming from iθx 1_{(|x|<1)} in the integral against Π.
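Lemma 2.1 is easy to probe numerically. The sketch below (a rough check, not a proof; the parameter values are arbitrary and the improper integral is truncated at a finite upper limit) compares both sides of the Frullani identity for a purely imaginary z:

```python
import cmath
import math

def lhs(z, alpha, beta):
    """(1 - z/alpha)^(-beta) via the principal branch of the logarithm."""
    return cmath.exp(-beta * cmath.log(1 - z / alpha))

def rhs(z, alpha, beta, n=100_000, upper=60.0):
    """Trapezoidal approximation of exp(-int_0^inf (1 - e^{zx}) beta x^{-1} e^{-alpha x} dx)."""
    h = upper / n

    def f(x):
        if x == 0.0:
            return -z * beta  # limit of (1 - e^{zx}) beta / x as x -> 0
        return (1 - cmath.exp(z * x)) * (beta / x) * math.exp(-alpha * x)

    total = 0.5 * (f(0.0) + f(upper)) + sum(f(k * h) for k in range(1, n))
    return cmath.exp(-h * total)

z, alpha, beta = 0.7j, 1.5, 2.0  # Re z = 0 <= 0, as the lemma requires
print(abs(lhs(z, alpha, beta) - rhs(z, alpha, beta)))  # tiny discretisation error
```

The discrepancy is dominated by the quadrature step and the truncation of the integral, both negligible at these settings.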


According to Theorem 1.2 there exists a Lévy process whose Lévy–Khintchine formula is given by Ψ: the so-called gamma process.

Suppose now that X = {X_t : t ≥ 0} is a gamma process. Stationary independent increments tell us that for all 0 ≤ s < t < ∞, X_t = X_s + X̃_{t−s}, where X̃_{t−s} is an independent copy of X_{t−s}. The fact that the latter is strictly positive with probability one (on account of it being gamma distributed) implies that X_t > X_s almost surely. Hence a gamma process is an example of a Lévy process with almost surely non-decreasing paths (in fact its paths are strictly increasing). Another example of a Lévy process with non-decreasing paths is a compound Poisson process whose jump distribution F is concentrated on (0,∞). Note however that a gamma process is not a compound Poisson process on two counts. Firstly, its Lévy measure has infinite total mass, unlike the Lévy measure of a compound Poisson process, which is necessarily finite (and equal to the arrival rate of jumps). Secondly, whilst a compound Poisson process with positive jumps does have paths which are almost surely non-decreasing, it does not have paths that are almost surely strictly increasing.

Lévy processes whose paths are almost surely non-decreasing (or simply non-decreasing for short) are called subordinators.

2.5 Inverse Gaussian Processes

Suppose as usual that B = {B_t : t ≥ 0} is a standard Brownian motion. Define the first passage time

τ_s = inf{t > 0 : B_t + bt > s},   (2.2)

that is, the first time a Brownian motion with linear drift b > 0 crosses above level s. Recall that τ_s is a stopping time.³ Moreover, since Brownian motion has continuous paths we know that B_{τ_s} + bτ_s = s almost surely.
From the Strong Markov Property it is known that {B_{τ_s+t} + b(τ_s+t) − s : t ≥ 0} is equal in law to {B_t + bt : t ≥ 0}, and hence for all 0 ≤ s < t,

τ_t = τ_s + τ̃_{t−s},

where τ̃_{t−s} is an independent copy of τ_{t−s}. This shows that the process τ := {τ_t : t ≥ 0} has stationary independent increments. Continuity of the paths of {B_t + bt : t ≥ 0} ensures that τ has right continuous paths. Further, it is clear that τ has almost surely non-decreasing paths, which guarantees its paths have left limits as well as making it yet another example of a subordinator. According to its definition as a sequence of first passage times, τ is also the almost sure right inverse of the path of {B_t + bt : t ≥ 0} in the sense of (2.2). From this, τ earns its name as the inverse Gaussian process.

³We assume that the reader is familiar with the notion of a stopping time for a Markov process. By definition, the random time τ is a stopping time with respect to the filtration {G_t : t ≥ 0} if for all t ≥ 0, {τ ≤ t} ∈ G_t.
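The description of τ as a right inverse suggests a direct, if crude, simulation: run a random walk approximation of {B_t + bt : t ≥ 0} on a fine grid and record the first grid time at which each level s is exceeded. This is only an illustrative sketch (the discretisation slightly overshoots the true passage times; the parameter values are arbitrary):

```python
import math
import random

random.seed(5)

b, dt = 1.0, 1e-4
levels = [0.5, 1.0, 1.5, 2.0]  # values of s at which to record tau_s

def first_passage_times(levels, b, dt, horizon=200.0):
    """Approximate tau_s = inf{t > 0 : B_t + b t > s} along one discretised path."""
    taus, x, t, i = [], 0.0, 0.0, 0
    sq = math.sqrt(dt)
    while i < len(levels) and t < horizon:
        t += dt
        x += b * dt + sq * random.gauss(0.0, 1.0)  # increment of B_t + b t
        while i < len(levels) and x > levels[i]:
            taus.append(t)  # one grid time may serve several levels (a jump of tau)
            i += 1
    return taus

taus = first_passage_times(levels, b, dt)
assert len(taus) == len(levels) and taus == sorted(taus)  # non-decreasing in s
print(taus)
```

The recorded passage times are non-decreasing in s, as they must be for a subordinator path, and a single large excursion of the walk can make τ jump: several levels are crossed at once.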


According to the discussion following Theorem 1.1 it is now immediate that for each fixed s > 0, the random variable τ_s is infinitely divisible. Its characteristic exponent takes the form

Ψ(θ) = s(√(−2iθ + b²) − b)

for all θ ∈ R and corresponds to a triple a = −2sb^{−1} ∫_0^b (2π)^{−1/2} e^{−y²/2} dy, σ = 0 and

Π(dx) = s (2πx³)^{−1/2} e^{−b²x/2} dx,

concentrated on (0,∞). The law of τ_s can also be computed explicitly as

µ_s(dx) = s (2πx³)^{−1/2} e^{sb} e^{−(1/2)(s²x^{−1} + b²x)} dx

for x > 0. The proof of these facts forms Exercise 6.

2.6 Stable Processes

Stable processes are the class of Lévy processes whose characteristic exponents correspond to those of stable distributions. Stable distributions were introduced by [17, 18] as a third example of infinitely divisible distributions after the Gaussian and Poisson distributions. A random variable Y is said to have a stable distribution if for all n ≥ 1 it observes the distributional equality

Y_1 + ··· + Y_n =^d a_n Y + b_n,   (2.3)

where Y_1, ..., Y_n are independent copies of Y, a_n > 0 and b_n ∈ R. By subtracting b_n/n from each of the terms on the left-hand side of (2.3) one sees in particular that this definition implies that any stable random variable is infinitely divisible. It turns out that necessarily a_n = n^{1/α} for α ∈ (0,2]; see [3], Sect. VI.1. In that case we refer to the parameter α as the index. A smaller class of distributions are the strictly stable distributions. A random variable Y is said to have a strictly stable distribution if it observes (2.3) but with b_n = 0. In that case, we necessarily have

Y_1 + ··· + Y_n =^d n^{1/α} Y.   (2.4)
The case α = 2 corresponds to zero mean Gaussian random variables and is excluded in the remainder of the discussion as it has essentially been dealt with in Sect. 2.3.

Stable random variables observing the relation (2.3) for α ∈ (0,1) ∪ (1,2) have characteristic exponents of the form

Ψ(θ) = c|θ|^α (1 − iβ tan(πα/2) sgn θ) + iθη,   (2.5)

where β ∈ [−1,1], η ∈ R and c > 0. Stable random variables observing the relation (2.3) for α = 1 have characteristic exponents of the form

Ψ(θ) = c|θ| (1 + iβ(2/π) sgn θ log|θ|) + iθη,   (2.6)
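Relation (2.4) can be read off from (2.5) directly: taking η = 0, the exponent satisfies Ψ(kθ) = k^α Ψ(θ) for every k > 0, which is exactly strict stability, since E e^{iθ(Y_1+···+Y_n)} = e^{−nΨ(θ)} while E e^{iθ n^{1/α} Y} = e^{−Ψ(n^{1/α}θ)}. A short numerical confirmation of the scaling identity (parameter values arbitrary):

```python
import math

def stable_exponent(theta, c, alpha, beta):
    """Characteristic exponent (2.5) with eta = 0 (strictly stable, alpha != 1)."""
    sgn = (theta > 0) - (theta < 0)
    return c * abs(theta) ** alpha * (1 - 1j * beta * math.tan(math.pi * alpha / 2) * sgn)

c, alpha, beta = 1.3, 1.5, 0.4
theta, k = 0.7, 3.0

scaled = stable_exponent(k * theta, c, alpha, beta)
expected = k ** alpha * stable_exponent(theta, c, alpha, beta)
assert abs(scaled - expected) < 1e-12  # Psi(k*theta) = k^alpha * Psi(theta)
```

The check is exact up to floating-point rounding, since the identity holds term by term in (2.5).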


where β ∈ [−1,1], η ∈ R and c > 0. Here we work with the definition of the sign function sgn θ = 1_{(θ>0)} − 1_{(θ<0)}. For each choice of c > 0, α ∈ (0,2), β ∈ [−1,1] and η ∈ R, Theorem 1.2 tells us that there exists a Lévy process with characteristic exponent given by (2.5) or (2.6) according to this choice of parameters. Further, from the definition of its characteristic exponent it is clear that at each fixed time t the α-stable process will have distribution S(ct, α, β, η).

2.7 Other Examples

There are many more known examples of infinitely divisible distributions (and hence Lévy processes). Of the many known proofs of infinite divisibility for


Figure 1: A sample path of a Poisson process; Ψ(θ) = λ(1−e^{iθ}), where λ is the jump rate.

Figure 2: A sample path of a compound Poisson process; Ψ(θ) = λ∫_R (1−e^{iθx}) F(dx), where λ is the jump rate and F is the common distribution of the jumps.


Figure 3: A sample path of a Brownian motion; Ψ(θ) = θ²/2.

Figure 4: A sample path of the independent sum of a Brownian motion and a compound Poisson process; Ψ(θ) = θ²/2 + ∫_R (1−e^{iθx}) F(dx).


Figure 5: A sample path of a variance gamma process. The latter has characteristic exponent given by Ψ(θ) = β log(1 − iθc/α + β²θ²/2α), where c ∈ R and β > 0.

Figure 6: A sample path of a normal inverse Gaussian process; Ψ(θ) = δ(√(α²−(β+iθ)²) − √(α²−β²)), where α, δ > 0 and |β| < α.


specific distributions, most are non-trivial, often requiring intimate knowledge of special functions. A brief list of such distributions might include: generalised inverse Gaussian (see [5] and [14]), truncated stable (see [30], [8], [15], [1] and [2]), generalised hyperbolic (see [7]), Meixner (see [24]), Pareto (see [25] and [28]), F-distributions (see [10]), Gumbel (see [13] and [26]), Weibull (see [13] and [25]), lognormal (see [29]) and Student t-distribution (see [6] and [9]). See also the book of Steutel [27].

Despite being able to identify a large number of infinitely divisible distributions and hence associated Lévy processes, it is not clear at this point what the paths of Lévy processes look like. The task of giving a mathematically precise account of this lies ahead in the next section. In the meantime, let us make the following informal remarks concerning paths of Lévy processes.

Exercise 1 shows that a linear combination of a finite number of independent Lévy processes is again a Lévy process. It turns out that one may consider any Lévy process as an independent sum of a Brownian motion with drift and a countable number of independent compound Poisson processes with different jump rates, jump distributions and drifts.
The superposition occurs in such a way that the resulting path remains almost surely finite at all times and, for each ε > 0, the process experiences at most a countably infinite number of jumps of magnitude ε or less with probability one and an almost surely finite number of jumps of magnitude greater than ε, over all fixed finite time intervals. If in the latter description there is always an almost surely finite number of jumps over each fixed time interval, then it is necessary and sufficient that one has the independent linear combination of a Brownian motion with drift and a compound Poisson process. Depending on the underlying structure of the jumps and the presence of a Brownian motion in the described linear combination, a Lévy process will either have paths of bounded variation on all finite time intervals or paths of unbounded variation on all finite time intervals.

We include six computer simulations to give a rough sense of how the paths of Lévy processes look. Figs. 1 and 2 depict the paths of a Poisson process and a compound Poisson process, respectively. Figs. 3 and 4 show the paths of a Brownian motion and the independent sum of a Brownian motion and a compound Poisson process, respectively. Finally, Figs. 5 and 6 show the paths of a variance gamma process and a normal inverse Gaussian process. Both are pure jump processes (no Brownian component as described above).
Variance gamma processes are discussed in more detail later; normal inverse Gaussian processes are Lévy processes whose jump measure is given by Π(dx) = (δα/π|x|) exp{βx} K_1(α|x|) dx for x ∈ R, where α, δ > 0, |β| < α and K_1(x) is the modified Bessel function of the third kind with index 1 (the precise definition of the latter is not worth the detail at this moment in the text). Both experience an infinite number of jumps over a finite time horizon. However, variance gamma processes have paths of bounded variation, whereas normal inverse Gaussian processes have paths of unbounded variation. The reader should be warned, however, that computer simulations can ultimately only depict a finite number of jumps in any given path. All figures were very kindly produced by Antonis Papapantoleon.


3 The Lévy-Itô decomposition intuitively

One of our main objectives for the remainder of this text is to establish a rigorous understanding of the structure of the paths of Lévy processes. The way we shall do this is to prove the assertion in Theorem 1.2 that, given any characteristic exponent Ψ belonging to an infinitely divisible distribution, there exists a Lévy process with the same characteristic exponent. This will be done by establishing the so-called Lévy–Itô decomposition, which describes the structure of a general Lévy process in terms of three independent auxiliary Lévy processes, each with a different type of path behaviour.

According to Theorem 1.1, any characteristic exponent Ψ belonging to an infinitely divisible distribution can be written, after some simple reorganisation, in the form

Ψ(θ) = {iaθ + (1/2)σ²θ²}
 + {Π(R\(−1,1)) ∫_{|x|≥1} (1−e^{iθx}) Π(dx)/Π(R\(−1,1))}
 + {∫_{0<|x|<1} (1−e^{iθx}+iθx) Π(dx)}   (3.1)

for all θ ∈ R. Call the three brackets on the right-hand side Ψ^{(1)}, Ψ^{(2)} and Ψ^{(3)}, respectively. The first bracket is the characteristic exponent of a linear Brownian motion, while the second is that of a compound Poisson process with arrival rate Π(R\(−1,1)) and jump distribution Π(dx)/Π(R\(−1,1)) restricted to {x : |x| ≥ 1}.


The proof of existence of a Lévy process with characteristic exponent given by (3.1) thus boils down to showing the existence of a Lévy process, X^{(3)}, whose characteristic exponent is given by Ψ^{(3)}. Noting that the remaining integral,

∫_{0<|x|<1} (1−e^{iθx}+iθx) Π(dx),

may be split into a countable sum of integrals over the annuli {x : 2^{−(n+1)} ≤ |x| < 2^{−n}}, n ≥ 0, one is led to think of X^{(3)} as the superposition of countably many independent compound Poisson processes, each accounting for the jumps in one annulus and each compensated by its own drift so as to form a martingale. Making rigorous sense of this superposition is the purpose of the next section.


4 The Lévy-Itô decomposition rigorously

Fix a time horizon T > 0. Let us assume that (Ω, F, {F_t : t ∈ [0,T]}, P) is a filtered probability space in which the filtration {F_t : t ≥ 0} is complete with respect to the null sets of P and right continuous in the sense that F_t = ⋂_{s>t} F_s.

Definition 4.1 Fix T > 0. Define M²_T = M²_T(Ω, F, {F_t : t ∈ [0,T]}, P) to be the space of real-valued, zero mean, almost surely right-continuous, square integrable P-martingales with respect to the given filtration over the finite time period [0,T].

One luxury that follows from the assumptions on {F_t : t ≥ 0} is that any zero mean square integrable martingale with respect to this filtration has a right continuous modification⁴ which is also a member of M²_T.

If we quotient out the equivalence classes of versions⁵ of each martingale, it is straightforward to deduce that M²_T is a vector space over the real numbers with zero element M_t = 0 for all t ∈ [0,T] and all ω ∈ Ω. In fact, as we shall shortly see, M²_T is a Hilbert space⁶ with respect to the inner product

⟨M,N⟩ = E(M_T N_T),

where M, N ∈ M²_T. It is left to the reader to verify that ⟨·,·⟩ forms an inner product. The only mild technical difficulty in this verification is showing that for M ∈ M²_T, ⟨M,M⟩ = 0 implies that M = 0, the zero element. Note however that if ⟨M,M⟩ = 0, then by Doob's Maximal Inequality, which says that for M ∈ M²_T,

E(sup_{0≤s≤T} M_s²) ≤ 4E(M_T²),

we have that sup_{0≤t≤T} |M_t| = 0 almost surely. It follows necessarily that M_t = 0 for all t ∈ [0,T] with probability one.

As alluded to above, we can show without too much difficulty that M²_T is a Hilbert space.
To do that we are required to show that, given a Cauchy sequence of martingales {M^{(n)} : n = 1,2,...} taken from M²_T, there exists an M ∈ M²_T such that

‖M^{(n)} − M‖ → 0

⁴Recall that M′ = {M′_t : t ∈ [0,T]} is a modification of M if for every t ≥ 0 we have P(M′_t = M_t) = 1.

⁵Recall that M′ = {M′_t : t ∈ [0,T]} is a version of M if it is defined on the same probability space and {∃t ∈ [0,T] : M′_t ≠ M_t} is measurable with zero probability. Note that if M′ is a modification of M then it is not necessarily a version of M. However, it is obviously the case that if M′ is a version of M then it also fulfils the requirement of being a modification.

⁶Recall that ⟨·,·⟩ : L × L → R is an inner product on a vector space L over the reals if it satisfies the following properties for f, g ∈ L and a, b ∈ R: (i) ⟨af+bg, h⟩ = a⟨f,h⟩ + b⟨g,h⟩ for all h ∈ L, (ii) ⟨f,g⟩ = ⟨g,f⟩, (iii) ⟨f,f⟩ ≥ 0, and (iv) ⟨f,f⟩ = 0 if and only if f = 0. For each f ∈ L let ‖f‖ = ⟨f,f⟩^{1/2}. The pair (L, ⟨·,·⟩) is said to form a Hilbert space if all sequences {f_n : n = 1,2,...} in L satisfying ‖f_n − f_m‖ → 0 as m,n → ∞, so-called Cauchy sequences, have a limit which exists in L.


as n ↑ ∞, where ‖·‖ := ⟨·,·⟩^{1/2}. To this end let us assume that the sequence of processes {M^{(n)} : n = 1,2,...} is a Cauchy sequence, in other words,

{E[(M^{(m)}_T − M^{(n)}_T)²]}^{1/2} → 0 as m,n ↑ ∞.

Necessarily then the sequence of random variables {M^{(k)}_T : k ≥ 1} is a Cauchy sequence in the Hilbert space of zero mean, square integrable random variables defined on (Ω, F_T, P), say L²(Ω, F_T, P), endowed with the inner product ⟨M,N⟩ = E(MN). Hence there exists a limiting variable, say M_T, in L²(Ω, F_T, P) satisfying

{E[(M^{(n)}_T − M_T)²]}^{1/2} → 0

as n ↑ ∞. Define the martingale M to be the right continuous version⁷ of

E(M_T | F*_t) for t ∈ [0,T]

and note that by definition

‖M^{(n)} − M‖ → 0

as n tends to infinity. Clearly it is an F*_t-adapted process and by Jensen's inequality

E(M_t²) = E(E(M_T | F*_t)²) ≤ E(E(M_T² | F*_t)) = E(M_T²),

which is finite. Hence Cauchy sequences converge in M²_T and we see that M²_T is indeed a Hilbert space.

Having reminded ourselves of some relevant properties of the space of square integrable martingales, let us consider a special class of martingales within the latter class that are also Lévy processes and which are key to the proof of the Lévy-Itô decomposition.

Henceforth, we shall suppose that {ξ_i : i ≥ 1} is a sequence of i.i.d.
random variables with common law F (which does not assign mass to the origin) and that N = {N_t : t ≥ 0} is a Poisson process with rate λ > 0.

Lemma 4.1 Suppose that ∫_R |x| F(dx) < ∞.

(i) The process M = {M_t : t ≥ 0}, where

M_t := ∑_{i=1}^{N_t} ξ_i − λt ∫_R x F(dx),

is a martingale with respect to its natural filtration.

⁷Here we use the fact that {F*_t : t ∈ [0,T]} is complete and right continuous.


(ii) If moreover ∫_R x² F(dx) < ∞, then M is a square integrable martingale such that

E(M_t²) = λt ∫_R x² F(dx).

Proof. For the first part, note that M has stationary and independent increments (in fact, as seen earlier, it is a Lévy process). Hence if we define F_t = σ(M_s : s ≤ t), then for t ≥ s ≥ 0,

E(M_t | F_s) = M_s + E(M_t − M_s | F_s) = M_s + E(M_{t−s}).

Hence it suffices to prove that for all u ≥ 0, E(M_u) = 0. However, the latter is evident since, by a classical computation (left to the reader) which utilises the independence of N and {ξ_i : i ≥ 1},

E(∑_{i=1}^{N_t} ξ_i) = λt E(ξ_1) = λt ∫_R x F(dx).

Note that this computation also shows that E|M_t| < ∞ for each t ≥ 0.

For the second part, using the independence and distributional properties of N and {ξ_i : i ≥ 1} we have that

E(M_t²) = E[(∑_{i=1}^{N_t} ξ_i)²] − λ²t² (∫_R x F(dx))²
 = E(∑_{i=1}^{N_t} ξ_i²) + E(∑_{i=1}^{N_t} ∑_{j=1}^{N_t} 1_{(j≠i)} ξ_i ξ_j) − λ²t² (∫_R x F(dx))²
 = λt ∫_R x² F(dx) + E(N_t² − N_t) (∫_R x F(dx))² − λ²t² (∫_R x F(dx))²
 = λt ∫_R x² F(dx) + λ²t² (∫_R x F(dx))² − λ²t² (∫_R x F(dx))²
 = λt ∫_R x² F(dx),

as required.

If we recall the intuition in the discussion following (3.4), then our interest in the type of martingales described in Lemma 4.1 (which are also Lévy processes), as far as the Lévy-Itô decomposition is concerned, is tantamount to understanding how to superimpose a countably infinite number of such processes so that the resulting sum converges in an appropriate sense. Precisely this point is tackled in the next theorem.

We need to introduce some notation first. Suppose that for each n = 1,2,3,... the process N^{(n)} = {N^{(n)}_t : t ≥ 0} is a Poisson process with rate λ_n ≥ 0. Here we understand N^{(n)}_t = 0 for all t ≥ 0 if λ_n = 0. We shall also
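Part (ii) of Lemma 4.1 can be checked by simulation. In this sketch (illustrative choices: jumps uniform on (−1, 2), so E(ξ) = 1/2 and E(ξ²) = 1) the empirical second moment of M_t is compared with λt∫_R x² F(dx):

```python
import random

random.seed(11)

lam, t = 3.0, 1.0
mean_x, mean_x2 = 0.5, 1.0  # E(xi), E(xi^2) for xi ~ Uniform(-1, 2)

def M():
    """One sample of the compensated compound Poisson martingale at time t."""
    total, tau = 0.0, random.expovariate(lam)
    while tau <= t:
        total += random.uniform(-1.0, 2.0)
        tau += random.expovariate(lam)
    return total - lam * t * mean_x

trials = 100_000
emp = sum(M() ** 2 for _ in range(trials)) / trials
print(emp, lam * t * mean_x2)  # both close to 3.0
```

The empirical second moment agrees with λt E(ξ²) = 3 up to Monte Carlo error.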


assume that the processes $N^{(n)}$ are mutually independent. Moreover, for each $n = 1,2,3,\ldots$ let us introduce the i.i.d. sequences $\{\xi^{(n)}_i : i = 1,2,3,\ldots\}$ (which are themselves mutually independent sequences) with common distribution $F_n$, which does not assign mass to the origin and which satisfies $\int_{\mathbb{R}} x^2\,F_n(dx) < \infty$. Associated with each pair $(\lambda_n, F_n)$ is the square integrable martingale described in Lemma 4.1, which we denote by $M^{(n)} = \{M^{(n)}_t : t \geq 0\}$. If we denote by $\{\mathcal{F}^{(n)}_t : t \geq 0\}$ the natural filtration generated by the process $M^{(n)}$, then we can put all the processes $\{M^{(n)} : n \geq 1\}$ on the same probability space and take them as martingales with respect to the common filtration
\[
\mathcal{F}_t := \sigma\Big(\bigcup_{n \geq 1} \mathcal{F}^{(n)}_t\Big),
\]
which we may also assume, without loss of generality, is taken in its completed and right-continuous form.[8]

Theorem 4.1 If
\[
\sum_{n \geq 1} \lambda_n \int_{\mathbb{R}} x^2\,F_n(dx) < \infty \tag{4.1}
\]
then there exists a Lévy process $X = \{X_t : t \geq 0\}$, defined on the same probability space as the processes $\{M^{(n)} : n \geq 1\}$, which is also a square integrable martingale and whose characteristic exponent is given by
\[
\Psi(\theta) = \int_{\mathbb{R}} (1 - e^{i\theta x} + i\theta x) \sum_{n \geq 1} \lambda_n F_n(dx)
\]
for all $\theta \in \mathbb{R}$, and such that for each fixed $T > 0$,
\[
\lim_{k \uparrow \infty} E\Big[\sup_{t \leq T} \Big(X_t - \sum_{n=1}^k M^{(n)}_t\Big)^2\Big] = 0.
\]

Proof. First note that, by linearity of conditional expectation, $\sum_{n=1}^k M^{(n)}$ is a square integrable martingale. In fact, since, by independence and the martingale property, for $i \neq j$ we have $E(M^{(i)}_t M^{(j)}_t) = E(M^{(i)}_t)E(M^{(j)}_t) = 0$, it follows that
\[
E\Big[\Big(\sum_{n=1}^k M^{(n)}_t\Big)^2\Big] = \sum_{n=1}^k E\big[(M^{(n)}_t)^2\big] = t \sum_{n=1}^k \lambda_n \int_{\mathbb{R}} x^2\,F_n(dx) < \infty \tag{4.2}
\]
where the final bound follows by the assumption (4.1).

Fix $T > 0$.
We now claim that the sequence $\{X^{(k)} : k \geq 1\}$ is a Cauchy sequence with respect to $\|\cdot\|$, where $X^{(k)} = \{X^{(k)}_t : 0 \leq t \leq T\}$ and
\[
X^{(k)}_t = \sum_{n=1}^k M^{(n)}_t.
\]

[8] That is to say, if $\mathcal{F}_t$ is not right continuous then we may work instead with $\mathcal{F}^*_t = \bigcap_{s > t} \mathcal{F}_s$.


To see why, note that for $k \geq l$, similar calculations to those in (4.2) show that
\[
\|X^{(k)} - X^{(l)}\|^2 = E\big[(X^{(k)}_T - X^{(l)}_T)^2\big] = T \sum_{n=l+1}^k \lambda_n \int_{\mathbb{R}} x^2\,F_n(dx),
\]
which tends to zero as $k, l \uparrow \infty$ by the assumption (4.1). It follows that there exists a martingale, say $X = \{X_t : 0 \leq t \leq T\}$, with respect to the filtration $\{\mathcal{F}_t : 0 \leq t \leq T\}$, such that $\|X - X^{(k)}\| \to 0$ as $k \uparrow \infty$. Thanks to Doob's maximal inequality, it follows moreover that
\[
\lim_{k \uparrow \infty} E\Big(\sup_{0 \leq t \leq T} (X_t - X^{(k)}_t)^2\Big) = 0. \tag{4.3}
\]
From (4.3) it is implicit that the one-dimensional, and hence the finite-dimensional, distributions of $X^{(k)}$ converge to those of $X$. Consequently, since the processes $X^{(k)}$ are all Lévy processes, for all $0 \leq s \leq t \leq T$,
\[
E\big(e^{i\theta(X_t - X_s)}\big) = \lim_{k \uparrow \infty} E\big(e^{i\theta(X^{(k)}_t - X^{(k)}_s)}\big) = \lim_{k \uparrow \infty} E\big(e^{i\theta X^{(k)}_{t-s}}\big) = E\big(e^{i\theta X_{t-s}}\big),
\]
showing that $X$ has stationary and independent increments. From Section 2.2 we see that for all $t \geq 0$,
\[
E\big(e^{i\theta X^{(k)}_t}\big) = \prod_{n=1}^k E\big(e^{i\theta M^{(n)}_t}\big) = \exp\Big\{-t \int_{\mathbb{R}} (1 - e^{i\theta x} + i\theta x) \sum_{n=1}^k \lambda_n F_n(dx)\Big\},
\]
which converges to $\exp\{-\Psi(\theta)t\}$ as $k \uparrow \infty$. Note in particular that the last integral converges thanks to the assumption (4.1).

To show that $X$ is a Lévy process we need only deduce that the paths of $X$ are right continuous with left limits almost surely. However, this is a direct result of the fact that, if $D[0,T]$ is the space of functions $f : [0,T] \to \mathbb{R}$ which are right continuous with left limits, then $D[0,T]$ is closed under the metric $d(f,g) = \sup_{t \in [0,T]} |f(t) - g(t)|$ for $f, g \in D[0,T]$. See Exercise 8 for a proof of this fact.

There is one outstanding issue that needs to be dealt with to complete the proof. In principle the limiting process $X$ depends on $T$, and we need to dismiss this eventuality. Suppose we index $X$ by the time horizon $T$, say $X^T$.
Using the facts that, for any two sequences of real numbers $\{a_n\}$ and $\{b_n\}$, $\sup_n a_n^2 = (\sup_n |a_n|)^2$ and $\sup_n |a_n + b_n| \leq \sup_n |a_n| + \sup_n |b_n|$, together with an application of Minkowski's inequality, we have, when $T_1 \leq T_2$,
\[
E\Big[\sup_{s \leq T_1} (X^{T_1}_s - X^{T_2}_s)^2\Big]^{1/2} \leq E\Big[\sup_{s \leq T_1} (X^{T_1}_s - X^{(k)}_s)^2\Big]^{1/2} + E\Big[\sup_{s \leq T_1} (X^{T_2}_s - X^{(k)}_s)^2\Big]^{1/2}.
\]
Hence, taking limits as $k \uparrow \infty$, we may appeal to (4.3) to deduce that the right-hand side tends to 0. Consequently the two processes agree almost surely on the time horizon $[0, T_1]$, and we may comfortably say that the limit $X$ does indeed not depend on the time horizon $T$. $\square$
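The building blocks in the proof above are the compensated compound Poisson martingales of Lemma 4.1, and the two moment identities there are easy to check by simulation. The following Monte Carlo sketch is purely illustrative; all concrete choices (the rate `lam`, horizon `t`, the Exp(1) jump law and the sample size) are ours, not from the text:

```python
import numpy as np

# Monte Carlo sketch of the moment identities in Lemma 4.1. With jump law
# F = Exp(1) we have E(xi) = 1 and E(xi^2) = 2, so the compensated process
# M_t = sum_{i <= N_t} xi_i - lam*t should satisfy E(M_t) = 0 and
# E(M_t^2) = lam*t*E(xi^2) = 2*lam*t.
rng = np.random.default_rng(42)
lam, t, n_paths = 2.0, 1.0, 100_000

N = rng.poisson(lam * t, size=n_paths)                     # jump counts N_t
M = np.array([rng.exponential(size=n).sum() for n in N]) - lam * t

print(M.mean())        # should be close to 0
print((M**2).mean())   # should be close to 2*lam*t = 4
```

With the seeded generator the run is reproducible, and the sample averages land within a few standard errors of the exact values $0$ and $4$.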


4.1 Proof of the Lévy–Itô decomposition

We are now in a position to prove the Lévy–Itô decomposition.

Proof of Theorem 3.1. Recalling the decomposition (3.1) and the discussion thereafter, it suffices to prove the existence of the process $X^{(3)}$ and that it has a countable number of discontinuities, which are less than unity in magnitude, over each finite time horizon. However, this follows immediately from Theorem 4.1 when we take
\[
\lambda_n = \Pi(\{x : 2^{-(n+1)} \leq |x| < 2^{-n}\}) \quad \text{and} \quad F_n(dx) = \lambda_n^{-1}\,\Pi(dx)\big|_{\{x : 2^{-(n+1)} \leq |x| < 2^{-n}\}}.
\]
Note in particular that
\[
\sum_n \lambda_n \int_{\mathbb{R}} x^2\,F_n(dx) = \int_{0 < |x| < 1} x^2\,\Pi(dx) < \infty,
\]
so that the assumption (4.1) of Theorem 4.1 is satisfied.
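The dyadic choice of $\lambda_n$ slices the small jumps into bands $2^{-(n+1)} \leq |x| < 2^{-n}$, and the banded second moments then sum back to $\int_{0<|x|<1} x^2\,\Pi(dx)$. A closed-form sketch with an assumed one-sided stable-type measure $\Pi(dx) = x^{-3/2}\,dx$ on $(0,1)$ (our choice, for illustration) shows the telescoping:

```python
# Dyadic slicing sketch for the (assumed) Lévy measure Pi(dx) = x^{-3/2} dx
# on (0,1). Band n carries rate lambda_n = Pi([2^{-(n+1)}, 2^{-n})); its
# contribution to (4.1) is lambda_n * int x^2 F_n(dx) = int_band x^2 Pi(dx),
# which for this Pi is the integral of x^{1/2} over the band, in closed form.
def band_second_moment(n):
    a, b = 2.0 ** -(n + 1), 2.0 ** -n
    return (2.0 / 3.0) * (b ** 1.5 - a ** 1.5)

total = sum(band_second_moment(n) for n in range(60))
print(total)  # telescopes to int_0^1 x^{1/2} dx = 2/3
```

Although $\Pi$ here has infinite total mass near the origin, each band has finite mass and the second moments sum to a finite value, which is exactly what (4.1) requires.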


5 Path variation

The proof of the Lévy–Itô decomposition reveals a little more than the decomposition in Theorem 3.1. It provides us with the necessary point of view to characterise precisely the almost sure path variation of the process.

Suppose that $X$ is any Lévy process whose characteristic exponent we shall always refer to in the form (3.5). Moreover, we shall talk about $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$ as the three components of the Lévy–Itô decomposition mentioned in Theorem 3.1.

If $\sigma > 0$ then, since it is known that Brownian motion has unbounded variation and since the continuous and discontinuous elements of the path of $X$ make independent contributions to its variation, the latter must also have unbounded variation. In the case that $\sigma = 0$, however, the Lévy process may have either bounded or unbounded variation. Since the process $X^{(2)}$ is a compound Poisson process, which obviously has paths of bounded variation, the variation of $X$ boils down to the variation of $X^{(3)}$ in this case.

Recall that we may see the process $X^{(3)}$ as the uniform $L^2$-limit over finite time horizons of a sequence of partial sums of compound Poisson martingales $X^{(k)}_t = \sum_{n=0}^k M^{(n)}_t$ for $t \geq 0$. Suppose that in the aforementioned analysis we split the compound Poisson martingales into two sub-processes containing the positive and negative jumps respectively.
To be more precise, define
\[
M^{(n,+)}_t = \sum_{i=1}^{N^{(n)}_t} \xi^{(n)}_i \mathbf{1}_{(\xi^{(n)}_i > 0)} - t\lambda_n \int_{(0,\infty)} x\,F_n(dx)
\]
and
\[
M^{(n,-)}_t = \sum_{i=1}^{N^{(n)}_t} |\xi^{(n)}_i| \mathbf{1}_{(\xi^{(n)}_i < 0)} - t\lambda_n \int_{(-\infty,0)} |x|\,F_n(dx),
\]
so that $M^{(n)} = M^{(n,+)} - M^{(n,-)}$. Writing $X^{(k,+)}_t = \sum_{n=0}^k M^{(n,+)}_t$, the process of accumulated positive jumps
\[
C^{(k,+)}_t := \sum_{n=0}^k \sum_{i=1}^{N^{(n)}_t} \xi^{(n)}_i \mathbf{1}_{(\xi^{(n)}_i > 0)}
\]
is increasing in $k$ and therefore has a limit almost surely (which may be infinite). Since $X^{(k,+)}_t$ converges in distribution to a non-degenerate random variable, it follows that
\[
\lim_{k \uparrow \infty} C^{(k,+)}_t < \infty \text{ a.s.} \iff \int_{(0,\infty)} x \sum_{n=0}^\infty \lambda_n F_n(dx) < \infty.
\]


With $C^{(k,-)}$ defined in the obvious way, it also follows that
\[
\lim_{k \uparrow \infty} C^{(k,-)}_t < \infty \text{ a.s.} \iff \int_{(-\infty,0)} |x| \sum_{n=0}^\infty \lambda_n F_n(dx) < \infty,
\]
and hence, since
\[
\sum_{n=0}^k \sum_{i=1}^{N^{(n)}_t} \xi^{(n)}_i = X^{(k)}_t + t \int_{\mathbb{R}} x \sum_{n=0}^k \lambda_n F_n(dx) = C^{(k,+)}_t - C^{(k,-)}_t,
\]
the left-hand side above is almost surely absolutely convergent if and only if
\[
\int_{\mathbb{R}} |x| \sum_{n=0}^\infty \lambda_n F_n(dx) < \infty.
\]
The latter integral test is tantamount to saying
\[
\int_{(-1,1)} |x|\,\Pi(dx) < \infty. \tag{5.1}
\]
The importance of the previous calculations is that it is straightforward to check that the variation of $X^{(3)}$ over $[0,t]$ is at least $C^{(\infty,+)}_t + C^{(\infty,-)}_t$, which explodes if and only if (5.1) fails, and, when it is well defined, is equal to
\[
C^{(\infty,+)}_t + C^{(\infty,-)}_t + t \int_{\mathbb{R}} x \sum_{n=0}^\infty \lambda_n F_n(dx),
\]
which is almost surely finite if and only if (5.1) holds.

In conclusion, we have the following result.

Lemma 5.1 Any Lévy process has paths of bounded variation if and only if
\[
\sigma = 0 \quad \text{and} \quad \int_{(-1,1)} |x|\,\Pi(dx) < \infty.
\]

In the case that $X$ has bounded variation, we may express the process $X^{(3)}$ in the, now meaningful, way
\[
X^{(3)}_t = \sum_{s \leq t} \Delta X_s \mathbf{1}_{(|\Delta X_s| < 1)} - t \int_{(-1,1)} x\,\Pi(dx). \tag{5.2}
\]


Indeed, the calculations above have shown that the sum $\sum_{s \leq t} \Delta X_s$ is necessarily absolutely convergent almost surely.

Note that monotone functions necessarily have bounded variation, and hence a Lévy process is a subordinator only if it has paths of bounded variation. In that case, inspecting (5.2) tells us the following.

Lemma 5.2 A Lévy process is a subordinator if and only if it has paths of bounded variation, $\Pi(-\infty, 0) = 0$ and
\[
\delta := -a - \int_{(0,1)} x\,\Pi(dx) \geq 0.
\]
In that case the characteristic exponent may be written
\[
\Psi(\theta) = -i\delta\theta + \int_{(0,\infty)} (1 - e^{i\theta x})\,\Pi(dx).
\]
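Lemma 5.1 is easy to apply to concrete Lévy measures. As a sketch, take an assumed symmetric $\alpha$-stable-type measure $\Pi(dx) = |x|^{-1-\alpha}\,dx$ (our illustrative choice): the test (5.1) reads $2\int_0^1 x^{-\alpha}\,dx < \infty$, which holds precisely when $\alpha < 1$. The truncated integrals below, available in closed form, show both regimes:

```python
import math

# Closed-form sketch of the integral test (5.1) of Lemma 5.1 for the
# (assumed) symmetric Lévy measure Pi(dx) = |x|^{-1-alpha} dx:
# int_{(-1,1)} |x| Pi(dx) = 2 * int_0^1 x^{-alpha} dx, finite iff alpha < 1.
# We truncate at eps > 0 and watch the behaviour as eps -> 0.
def truncated_test(alpha, eps):
    if alpha == 1.0:
        return -2.0 * math.log(eps)
    return 2.0 * (1.0 - eps ** (1.0 - alpha)) / (1.0 - alpha)

for eps in (1e-2, 1e-4, 1e-8):
    print(truncated_test(0.5, eps), truncated_test(1.5, eps))
# alpha = 0.5: converges (to 4), so bounded variation when sigma = 0;
# alpha = 1.5: diverges like 4/sqrt(eps), so unbounded variation.
```

This matches the classical fact that stable processes have bounded variation exactly when the stability index is below one.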


Exercises

Exercise 1 Using Definition 1.1, show that the sum of two (or indeed any finite number of) independent Lévy processes is again a Lévy process.

Exercise 2 Suppose that $S = \{S_n : n \geq 0\}$ is any random walk[9] and $\Gamma_p$ is an independent random variable with a geometric distribution on $\{0,1,2,\ldots\}$ with parameter $p$.

(i) Show that $\Gamma_p$ is infinitely divisible.

(ii) Show that $S_{\Gamma_p}$ is infinitely divisible.

Exercise 3 [Proof of Lemma 2.1] In this exercise we derive the Frullani identity.

(i) Show, for any function $f$ such that $f'$ exists and is continuous and $f(0)$ and $f(\infty)$ are finite, that
\[
\int_0^\infty \frac{f(ax) - f(bx)}{x}\,dx = (f(0) - f(\infty))\log\Big(\frac{b}{a}\Big),
\]
where $b > a > 0$.

(ii) By choosing $f(x) = e^{-x}$, $a = \alpha > 0$ and $b = \alpha - z$ where $z < 0$, show that
\[
\frac{1}{(1 - z/\alpha)^\beta} = e^{-\int_0^\infty (1 - e^{zx}) \frac{\beta}{x} e^{-\alpha x}\,dx}
\]
and hence, by analytic extension, show that the above identity is still valid for all $z \in \mathbb{C}$ such that $\Re z \leq 0$.

Exercise 4 Establishing formulae (2.5) and (2.6) from the Lévy measure given in (2.7) is the result of a series of technical manipulations of special integrals. In this exercise we work through them. In the following text we will use the gamma function $\Gamma(z)$, defined by
\[
\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt
\]
for $z > 0$. Note that the gamma function can also be analytically extended so that it is also defined on $\mathbb{R}\setminus\{0,-1,-2,\ldots\}$ (see [16]). Whilst the specific definition of the gamma function for negative numbers will not play an important role in this exercise, the following two facts that can be derived from it will. For $z \in$
For z ∈R\{0,−1,−2,...} the gamma function observes the recursion Γ(1+z) = zΓ(z)<strong>and</strong> Γ(1/2) = √ π.9 Recall that S = {S n : n ≥ 0} is a r<strong>and</strong>om walk if S 0 = 0 <strong>and</strong> for n = 1,2,3,... theincrements S n −S n−1 are independent <strong>and</strong> identically distributed.27


(i) Suppose that $0 < \alpha < 1$. Prove that for $u > 0$,
\[
\int_0^\infty (e^{-ur} - 1)\,r^{-\alpha-1}\,dr = \Gamma(-\alpha)\,u^\alpha
\]
and show that the same equality is valid when $-u$ is replaced by any complex number $w \neq 0$ with $\Re w \leq 0$. Conclude by considering $w = i$ that
\[
\int_0^\infty (1 - e^{ir})\,r^{-\alpha-1}\,dr = -\Gamma(-\alpha)\,e^{-i\pi\alpha/2} \tag{E.1}
\]
as well as the complex conjugate of both sides being equal. Deduce (2.5) by considering the integral
\[
\int_0^\infty (1 - e^{i\xi\theta r})\,r^{-\alpha-1}\,dr
\]
for $\xi = \pm 1$ and $\theta \in \mathbb{R}$. Note that you will have to take $a = \eta - \int_{\mathbb{R}} x\,\mathbf{1}_{(|x| < 1)}\,\Pi(dx)$.


Exercise 8 Recall that $D[0,1]$ is the space of functions $f : [0,1] \to \mathbb{R}$ which are right continuous with left limits.

(i) Define the norm $\|f\| = \sup_{x \in [0,1]} |f(x)|$. Use the triangle inequality to deduce that $D[0,1]$ is closed under uniform convergence with respect to the norm $\|\cdot\|$. That is to say, show that if $\{f_n : n \geq 1\}$ is a sequence in $D[0,1]$ and $f : [0,1] \to \mathbb{R}$ is such that $\lim_{n \uparrow \infty} \|f_n - f\| = 0$, then $f \in D[0,1]$.

(ii) Suppose that $f \in D[0,1]$ and let $\Delta = \{t \in [0,1] : |f(t) - f(t-)| \neq 0\}$ (the set of discontinuity points). Show that $\Delta$ is countable if $\Delta_c$ is countable for all $c > 0$, where $\Delta_c = \{t \in [0,1] : |f(t) - f(t-)| > c\}$. Next fix $c > 0$. By supposing for contradiction that $\Delta_c$ has an accumulation point, say $x$, show that the existence of either a left or right limit of $f$ at $x$ fails. Deduce that $\Delta_c$, and hence $\Delta$, is countable.

Exercise 9 The explicit construction of a Lévy process given in the Lévy–Itô decomposition begs the question as to whether one may construct examples of deterministic functions which have similar properties to those of the paths of Lévy processes. The objective of this exercise is to do precisely that.
The readeris warned however, that this is purely an analytical exercise <strong>and</strong> one should notnecessarily think <strong>of</strong> the paths <strong>of</strong> <strong>Lévy</strong> processes as being entirely similar to thefunctions constructed below in all respects.(i) Let us recall the definition <strong>of</strong> the Cantor function which we shall use toconstruct a deterministic function which has bounded variation <strong>and</strong> thatis right continuous with left limits <strong>and</strong> whose points <strong>of</strong> discontinuity aredense in its domain. Take the interval C 0 := [0,1] <strong>and</strong> perform the followingiteration. For n ≥ 0 define C n as the union <strong>of</strong> intervals whichremain when removing the middle third <strong>of</strong> each <strong>of</strong> the intervals whichmake up C n−1 . The Cantor set C is the limiting object, ⋂ n≥0 C n <strong>and</strong> canbe described byC = {x ∈ [0,1] : x = ∑ k≥1α k3 k such that α k ∈ {0,2} for each k ≥ 1}.One sees then that the Cantor set is simply the points in [0,1] whichomits numbers whose tertiary expansion contain the digit 1. To describethe Cantor function, for each x ∈ [0,1] let j(x) be the smallest j for whichα j = 1 in the tertiary expansion <strong>of</strong> ∑ k≥1 α k/3 k <strong>of</strong> x. If x ∈ C thenj(x) = ∞ <strong>and</strong> otherwise if x ∈ [0,1]\C then 1 ≤ j(x) < ∞. The Cantorfunction is defined as followsf(x) = 1j(x)−12 + ∑ α ifor x ∈ [0,1].j(x) 2i+1 i=1Now consider the function g : [0,1] → [0,1] given by g(x) = f −1 (x)−axfor a ∈ R. Here we underst<strong>and</strong> f −1 (x) = inf{θ : f(θ) > x}. Note that g is30


monotone if and only if $a \leq 0$. Show that $g$ has only positive jumps and that the values of $x$ at which $g$ jumps form a dense set in $[0,1]$. Show further that $g$ has bounded variation on $[0,1]$.

(ii) Now let us construct an example of a deterministic function which has unbounded variation, whose points of discontinuity are dense in its domain, and which is right continuous with left limits. Denote by $\mathbb{Q}_2$ the dyadic rationals. Consider a function $J : [0,\infty) \to \mathbb{R}$ defined as follows. For all $x \geq 0$ which are not in $\mathbb{Q}_2$, set $J(x) = 0$. It remains to assign a value for each $x = a/2^n$ where $a = 1,3,5,\ldots$ (even values of $a$ cancel). Let
\[
J(a/2^n) = \begin{cases} 2^{-n} & \text{if } a = 1,5,9,\ldots \\ -2^{-n} & \text{if } a = 3,7,11,\ldots \end{cases}
\]
and define
\[
f(x) = \sum_{s \in [0,x] \cap \mathbb{Q}_2} J(s).
\]
Show that $f$ is uniformly bounded on $[0,1]$, is right continuous with left limits, and has unbounded variation over $[0,1]$.

Exercise 10 Show that any Lévy process of bounded variation may be written as the difference of two independent subordinators.

Exercise 11 Suppose that $X = \{X_t : t \geq 0\}$ is a Lévy process with characteristic exponent $\Psi$ and $\tau = \{\tau_s : s \geq 0\}$ is an independent subordinator with characteristic exponent $\Xi$. Show that $Y = \{X_{\tau_s} : s \geq 0\}$ is again a Lévy process with characteristic exponent $\Xi \circ i\Psi$.

Exercise 12 This exercise gives another explicit example of a Lévy process: the variance gamma process, introduced by [21] for modelling financial data.

(i) Suppose that $\Gamma = \{\Gamma_t : t \geq 0\}$ is a gamma subordinator with parameters $\alpha, \beta$ and that $B = \{B_t : t \geq 0\}$ is an independent standard Brownian motion.
Show that for $c \in \mathbb{R}$ and $\sigma > 0$, the variance gamma process
\[
X_t := c\Gamma_t + \sigma B_{\Gamma_t}, \quad t \geq 0,
\]
is a Lévy process with characteristic exponent
\[
\Psi(\theta) = \beta \log\Big(1 - \frac{i\theta c}{\alpha} + \frac{\sigma^2\theta^2}{2\alpha}\Big).
\]

(ii) Show that the variance gamma process is equal in law to the Lévy process $\Gamma^{(1)} - \Gamma^{(2)} = \{\Gamma^{(1)}_t - \Gamma^{(2)}_t : t \geq 0\}$, where $\Gamma^{(1)}$ is a gamma subordinator with parameters
\[
\alpha^{(1)} = \Big(\sqrt{\frac{c^2}{4\alpha^2} + \frac{\sigma^2}{2\alpha}} + \frac{c}{2\alpha}\Big)^{-1} \quad \text{and} \quad \beta^{(1)} = \beta
\]


and $\Gamma^{(2)}$ is a gamma subordinator, independent of $\Gamma^{(1)}$, with parameters
\[
\alpha^{(2)} = \Big(\sqrt{\frac{c^2}{4\alpha^2} + \frac{\sigma^2}{2\alpha}} - \frac{c}{2\alpha}\Big)^{-1} \quad \text{and} \quad \beta^{(2)} = \beta.
\]

Exercise 13 Suppose that $d$ is an integer greater than one. Choose $a \in \mathbb{R}^d$ and let $\Pi$ be a measure concentrated on $\mathbb{R}^d\setminus\{0\}$ satisfying
\[
\int_{\mathbb{R}^d} (1 \wedge |x|^2)\,\Pi(dx) < \infty,
\]
where $|\cdot|$ is the standard Euclidean norm. Show that it is possible to construct a $d$-dimensional process $X = \{X_t : t \geq 0\}$ on a probability space $(\Omega, \mathcal{F}, P)$ having the following properties.

(i) The paths of $X$ are right continuous with left limits $P$-almost surely, in the sense that for each $t \geq 0$, $P(\lim_{s \downarrow t} X_s = X_t) = 1$ and $P(\lim_{s \uparrow t} X_s \text{ exists}) = 1$.

(ii) $P(X_0 = 0) = 1$, where $0$ is the zero vector in $\mathbb{R}^d$.

(iii) For $0 \leq s \leq t$, $X_t - X_s$ is independent of $\sigma(X_u : u \leq s)$.

(iv) For $0 \leq s \leq t$, $X_t - X_s$ is equal in distribution to $X_{t-s}$.

(v) For any $t \geq 0$ and $\theta \in \mathbb{R}^d$,
\[
E(e^{i\theta \cdot X_t}) = e^{-\Psi(\theta)t}
\]
where, for some symmetric, non-negative definite $d \times d$ matrix $A$,
\[
\Psi(\theta) = ia \cdot \theta + \frac{1}{2}\theta \cdot A\theta + \int_{\mathbb{R}^d} \big(1 - e^{i\theta \cdot x} + i(\theta \cdot x)\mathbf{1}_{(|x| < 1)}\big)\,\Pi(dx).
\]
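The algebra behind Exercise 12(ii) can be sanity-checked numerically: with $\alpha^{(1)}, \alpha^{(2)}$ as above, $1/\alpha^{(1)} - 1/\alpha^{(2)} = c/\alpha$ and $1/(\alpha^{(1)}\alpha^{(2)}) = \sigma^2/(2\alpha)$, so the variance gamma exponent factorises into the exponents of the two gamma subordinators (gamma exponent $\beta\log(1 - i\theta/\alpha)$ as in Lemma 2.1). A sketch, where all numerical values are arbitrary test choices of ours:

```python
import cmath
import math

# Check that beta*log(1 - i*theta*c/alpha + sigma^2*theta^2/(2*alpha))
# equals beta*log(1 - i*theta/alpha1) + beta*log(1 + i*theta/alpha2),
# i.e. the exponent of Gamma^(1) plus that of -Gamma^(2).
alpha, beta, c, sigma = 1.3, 0.7, 0.4, 1.1

root = math.sqrt(c ** 2 / (4 * alpha ** 2) + sigma ** 2 / (2 * alpha))
alpha1 = 1.0 / (root + c / (2 * alpha))   # alpha^(1)
alpha2 = 1.0 / (root - c / (2 * alpha))   # alpha^(2)

theta = 0.9
vg = beta * cmath.log(1 - 1j * theta * c / alpha + sigma ** 2 * theta ** 2 / (2 * alpha))
split = beta * (cmath.log(1 - 1j * theta / alpha1) + cmath.log(1 + 1j * theta / alpha2))
print(abs(vg - split))  # should vanish up to floating-point rounding
```

Both complex factors have real part 1, so no branch-cut issues arise in splitting the logarithm.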


References

[1] Boyarchenko, S.I. and Levendorskii, S.Z. (2002a) Perpetual American options under Lévy processes. SIAM J. Control Optim. 40, 1663–1696.

[2] Carr, P., Geman, H., Madan, D. and Yor, M. (2003) Stochastic volatility for Lévy processes. Math. Finance 13, 345–382.

[3] Feller, W. (1971) An Introduction to Probability Theory and its Applications. Vol. II, 2nd edition. Wiley, New York.

[4] De Finetti, B. (1929) Sulle funzioni ad incremento aleatorio. Rend. Acc. Naz. Lincei 10, 163–168.

[5] Good, I.J. (1953) The population frequencies of species and the estimation of population parameters. Biometrika 40, 237–264.

[6] Grosswald, E. (1976) The Student t-distribution of any degree of freedom is infinitely divisible. Zeit. Wahrsch. Verw. Gebiete 36, 103–109.

[7] Halgreen, C. (1979) Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions. Zeit. Wahrsch. Verw. Gebiete 47, 13–18.

[8] Hougaard, P. (1986) Survival models for heterogeneous populations derived from stable distributions. Biometrika 73, 386–396.

[9] Ismail, M.E.H. (1977) Bessel functions and the infinite divisibility of the Student t-distribution. Ann. Probab. 5, 582–585.

[10] Ismail, M.E.H. and Kelker, D.H. (1979) Special functions, Stieltjes transforms and infinite divisibility. SIAM J. Math. Anal. 10, 884–901.

[11] Itô, K. (1942) On stochastic processes. I. (Infinitely divisible laws of probability). Jpn. J. Math. 18, 261–301.

[12] Itô, K. (2004) Stochastic Processes. Springer, Berlin Heidelberg New York.

[13] Johnson, N.L. and Kotz, S. (1970) Distributions in Statistics. Continuous Univariate Distributions. Vol. 1. Wiley, New York.

[14] Jørgensen, B.
(1982) Statistical Properties of the Generalized Inverse Gaussian Distribution. Lecture Notes in Statistics, vol. 9. Springer, Berlin Heidelberg New York.

[15] Koponen, I. (1995) Analytic approach to the problem of convergence of truncated Lévy flights towards the Gaussian stochastic process. Phys. Rev. E 52, 1197–1199.

[16] Lebedev, N.N. (1972) Special Functions and Their Applications. Dover Publications, New York.


[17] Lévy, P. (1924) Théorie des erreurs. La loi de Gauss et les lois exceptionelles. Bull. Soc. Math. France 52, 49–85.

[18] Lévy, P. (1925) Calcul des Probabilités. Gauthier–Villars, Paris.

[19] Lévy, P. (1954) Théorie de l'addition des variables aléatoires, 2nd edition. Gauthier–Villars, Paris.

[20] Lukacs, E. (1970) Characteristic Functions. Second edition, revised and enlarged. Hafner Publishing Co., New York.

[21] Madan, D.B. and Seneta, E. (1990) The variance gamma (V.G.) model for share market returns. J. Business 63, 511–524.

[22] Samorodnitsky, G. and Taqqu, M.S. (1994) Stable Non-Gaussian Random Processes. Chapman and Hall/CRC, Boca Raton, FL.

[23] Sato, K. (1999) Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.

[24] Schoutens, W. and Teugels, J.L. (1998) Lévy processes, polynomials and martingales. Commun. Stat.–Stochast. Models 14, 335–349.

[25] Steutel, F.W. (1970) Preservation of Infinite Divisibility under Mixing and Related Topics. Math. Centre Tracts, No. 33, Math. Centrum, Amsterdam.

[26] Steutel, F.W. (1973) Some recent results in infinite divisibility. Stochast. Process. Appl. 1, 125–143.

[27] Steutel, F.W. (2004) Infinite Divisibility of Probability Distributions on the Real Line. Marcel Dekker.

[28] Thorin, O. (1977a) On the infinite divisibility of the Pareto distribution. Scand. Actuarial J. 31–40.

[29] Thorin, O. (1977b) On the infinite divisibility of the lognormal distribution. Scand. Actuarial J. 121–148.

[30] Tweedie, M.C.K.
(1984) An index which distinguishes between some important exponential families. Statistics: Applications and New Directions (Calcutta, 1981), 579–604. Indian Statist. Inst., Calcutta.

[31] Zolotarev, V.M. (1986) One-Dimensional Stable Distributions. American Mathematical Society, Providence, RI.
