
Solution to selected problems.

Chapter 1. Preliminaries

1. ∀A ∈ F_S, ∀t ≥ 0, A ∩ {T ≤ t} = (A ∩ {S ≤ t}) ∩ {T ≤ t}, since {T ≤ t} ⊂ {S ≤ t}. Since A ∩ {S ≤ t} ∈ F_t and {T ≤ t} ∈ F_t, A ∩ {T ≤ t} ∈ F_t. Thus F_S ⊂ F_T.

2. Let Ω = N and F = P(N) be the power set of the natural numbers. Let F_n = σ({2}, {3}, ..., {n+1}), ∀n. Then (F_n)_{n≥1} is a filtration. Let S = 3·1_{{3}} + 4·1_{{3}^c} and T = 4. Then S ≤ T and

{ω : S(ω) = n} = {3} if n = 3, Ω \ {3} if n = 4, ∅ otherwise,
{ω : T(ω) = n} = Ω if n = 4, ∅ otherwise.

Hence {S = n} ∈ F_n and {T = n} ∈ F_n, ∀n, and S, T are both stopping times. However {ω : T − S = 1} = {ω : 1_{{3}}(ω) = 1} = {3} ∉ F_1. Therefore T − S is not a stopping time.
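The counterexample is finite enough to verify mechanically; a sketch (truncating Ω = N to {1, ..., 10}, an assumption that does not affect the sets involved) which enumerates F_n as all unions of the atoms of its generating partition:

```python
from itertools import chain, combinations

def sigma_algebra(omega, generators):
    # Atoms of the partition generated by the generator sets, then all unions.
    atoms = {frozenset(w for w in omega
                       if all((w in g) == (v in g) for g in generators))
             for v in omega}
    return {frozenset(chain.from_iterable(c))
            for r in range(len(atoms) + 1)
            for c in combinations(atoms, r)}

omega = set(range(1, 11))              # truncation of Omega = N
F = {n: sigma_algebra(omega, [{k} for k in range(2, n + 2)]) for n in (1, 3)}

# S = 3 on {3} and 4 elsewhere, T = 4: {S = 3} = {3} lies in F_3 ...
assert frozenset({3}) in F[3]
# ... but {T - S = 1} = {3} does not lie in F_1 = {∅, {2}, {2}^c, Ω}
assert frozenset({3}) not in F[1]
```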

3. Observe that {T_n ≤ t} ∈ F_t and {T_n < t} ∈ F_t for all n ∈ N, t ∈ R_+, since each T_n is a stopping time and we assume the usual hypotheses. Then
(1) sup_n T_n is a stopping time since ∀t ≥ 0, {sup_n T_n ≤ t} = ∩_n {T_n ≤ t} ∈ F_t.
(2) inf_n T_n is a stopping time since {inf_n T_n < t} = ∪_n {T_n < t} ∈ F_t.
(3) lim sup_{n→∞} T_n is a stopping time since lim sup_{n→∞} T_n = inf_m sup_{n≥m} T_n (and (1), (2)).
(4) lim inf_{n→∞} T_n is a stopping time since lim inf_{n→∞} T_n = sup_m inf_{n≥m} T_n (and (1), (2)).

4. T is clearly a stopping time by exercise 3, since T = lim inf_n T_n. F_T ⊂ F_{T_n}, ∀n, since T ≤ T_n, and F_T ⊂ ∩_n F_{T_n} by exercise 1. Pick a set A ∈ ∩_n F_{T_n}. ∀n ≥ 1, A ∈ F_{T_n} and A ∩ {T_n ≤ t} ∈ F_t. Therefore A ∩ {T ≤ t} = A ∩ (∩_n {T_n ≤ t}) = ∩_n (A ∩ {T_n ≤ t}) ∈ F_t. This implies ∩_n F_{T_n} ⊂ F_T and completes the proof.

5. (a) By completeness of L^p space, X ∈ L^p. By Jensen's inequality, E|M_t|^p = E|E(X|F_t)|^p ≤ E[E(|X|^p|F_t)] = E|X|^p < ∞ for p > 1.
(b) By (a), M_t ∈ L^p ⊂ L^1. For t ≥ s ≥ 0, E(M_t|F_s) = E(E(X|F_t)|F_s) = E(X|F_s) = M_s a.s., so {M_t} is a martingale. Next, we show that {M_t} is continuous. By Jensen's inequality, for p > 1,

E|M^n_t − M_t|^p = E|E(M^n_∞ − X|F_t)|^p ≤ E|M^n_∞ − X|^p, ∀t ≥ 0.   (1)

It follows that sup_t E|M^n_t − M_t|^p ≤ E|M^n_∞ − X|^p → 0 as n → ∞. Fix arbitrary ε > 0. By Chebyshev's and Doob's inequalities,

P(sup_t |M^n_t − M_t| > ε) ≤ (1/ε^p) E(sup_t |M^n_t − M_t|^p) ≤ (p/(p−1))^p (sup_t E|M^n_t − M_t|^p)/ε^p → 0.   (2)

Therefore M^n converges to M uniformly in probability. There exists a subsequence {n_k} such that M^{n_k} converges uniformly to M with probability 1. Then M is continuous since for almost all ω, it is a limit of uniformly convergent continuous paths.



6. Let p(n) denote the probability mass function of a Poisson distribution with parameter λt. Assume λt is an integer, as given. Then

E|N_t − λt| = E(N_t − λt) + 2E(N_t − λt)^− = 2E(N_t − λt)^− = 2 Σ_{n=0}^{λt} (λt − n) p(n)
= 2e^{−λt} Σ_{n=0}^{λt} (λt − n) (λt)^n / n!
= 2λt e^{−λt} ( Σ_{n=0}^{λt} (λt)^n/n! − Σ_{n=0}^{λt−1} (λt)^n/n! )
= 2e^{−λt} (λt)^{λt} / (λt − 1)!.   (3)
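After cancelling the common factor 2e^{−λt}, (3) amounts to the combinatorial identity Σ_{n=0}^{m} (m − n) m^n/n! = m^m/(m−1)! with m = λt; a quick exact-arithmetic check of the first few integer values:

```python
from fractions import Fraction
from math import factorial

def lhs(m):
    # sum_{n=0}^{m} (m - n) m^n / n!, computed exactly
    return sum(Fraction((m - n) * m**n, factorial(n)) for n in range(m + 1))

def rhs(m):
    # m^m / (m - 1)!
    return Fraction(m**m, factorial(m - 1))

assert all(lhs(m) == rhs(m) for m in range(1, 12))
```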

7. Since N has stationary increments, for t ≥ s ≥ 0,

E(N_t − N_s)^2 = EN^2_{t−s} = Var(N_{t−s}) + (EN_{t−s})^2 = λ(t − s)[1 + λ(t − s)].   (4)

As t ↓ s (or s ↑ t), E(N_t − N_s)^2 → 0. N is continuous in L^2 and therefore in probability.

8. Suppose τ_α is a stopping time. Since a 3-dimensional Brownian motion is a strong Markov process, we know that P(τ_α < ∞) = 1. Let's define W_t := B_{τ_α + t} − B_{τ_α}. W is also a 3-dimensional Brownian motion and its augmented filtration F^W_t is independent of F_{τ_α+} = F_{τ_α}. Observe ‖W_0‖ = ‖B_{τ_α}‖ = α. Let S = inf{t > 0 : ‖W_t‖ ≤ α}. Then S is an F^W_t stopping time and {S ≤ s} ∈ F^W_s. So {S ≤ s} has to be independent of any set in F_{τ_α}. However {τ_α ≤ t} ∩ {S ≤ s} = ∅, which implies that {τ_α ≤ t} and {S ≤ s} are dependent. Since {τ_α ≤ t} ∈ F_{τ_α}, this contradicts the fact that F_{τ_α} and F^W_t are independent. Hence τ_α is not a stopping time.

9. (a) Since L^2-space is complete, it suffices to show that S^n_t = Σ_{i=1}^n M^i_t is a Cauchy sequence w.r.t. n. For m ≥ n, by independence of {M^i},

E(S^n_t − S^m_t)^2 = E( Σ_{i=n+1}^m M^i_t )^2 = Σ_{i=n+1}^m E(M^i_t)^2 = t Σ_{i=n+1}^m 1/i^2.   (5)

Since Σ_{i=1}^∞ 1/i^2 < ∞, as n, m → ∞, E(S^n_t − S^m_t)^2 → 0. {S^n_t}_n is Cauchy and its limit M_t is well defined for all t ≥ 0.

(b) First, recall Kolmogorov's convergence criterion: Suppose {X_i}_{i≥1} is a sequence of independent random variables. If Σ_i Var(X_i) < ∞, then Σ_i (X_i − EX_i) converges a.s.
For all i and t, ΔM^i_t = (1/i)ΔN^i_t and ΔM^i_t ≥ 0. By Fubini's theorem and the monotone convergence theorem,

Σ_{s≤t} ΔM_s = Σ_{s≤t} Σ_{i=1}^∞ ΔM^i_s = Σ_{i=1}^∞ Σ_{s≤t} ΔM^i_s = Σ_{i=1}^∞ Σ_{s≤t} ΔN^i_s / i = Σ_{i=1}^∞ N^i_t / i.   (6)

Let X_i = (N^i_t − t)/i. Then {X_i}_i is a sequence of independent random variables such that EX_i = 0, Var(X_i) = t/i^2 and hence Σ_{i=1}^∞ Var(X_i) < ∞. Therefore Kolmogorov's criterion implies that Σ_{i=1}^∞ X_i converges almost surely. On the other hand, Σ_{i=1}^∞ t/i = ∞. Therefore, Σ_{s≤t} ΔM_s = Σ_{i=1}^∞ N^i_t / i = ∞.



10. (a) Let N_t = Σ_i (1/i)(N^i_t − t) and L_t = Σ_i (1/i)(L^i_t − t). As shown in exercise 9(a), N and L are well defined in the L^2 sense. Then by linearity of L^2 space, M is also well defined in the L^2 sense, since

M_t = Σ_i (1/i)[(N^i_t − t) − (L^i_t − t)] = Σ_i (1/i)(N^i_t − t) − Σ_i (1/i)(L^i_t − t) = N_t − L_t.   (7)

Both terms on the right-hand side are martingales which change only by jumps, as shown in exercise 9(b). Hence M_t is a martingale which changes only by jumps.

(b) First we show that given two independent Poisson processes N and L, Σ_{s>0} ΔN_s ΔL_s = 0 a.s., i.e. N and L almost surely don't jump simultaneously. Let {T_n}_{n≥1} be the sequence of jump times of the process L. Then Σ_{s>0} ΔN_s ΔL_s = Σ_n ΔN_{T_n}. We want to show that Σ_n ΔN_{T_n} = 0 a.s. Since ΔN_{T_n} ≥ 0, it is enough to show that EΔN_{T_n} = 0 for ∀n ≥ 1.
Fix n ≥ 1 and let μ_{T_n} be the induced probability measure on R_+ of T_n. By conditioning,

E(ΔN_{T_n}) = E[E(ΔN_{T_n}|T_n)] = ∫_0^∞ E(ΔN_{T_n}|T_n = t) μ_{T_n}(dt) = ∫_0^∞ E(ΔN_t) μ_{T_n}(dt),   (8)

where the last equality is by independence of N and T_n. Since ΔN_t ∈ L^1 and P(ΔN_t = 0) = 1 by problem 25, EΔN_t = 0, hence EΔN_{T_n} = 0.
Next we show that the previous claim holds even when there are countably many Poisson processes. Assume that there exist countably many independent Poisson processes {N^i}_{i≥1}. Let A ⊂ Ω be the set on which at least two of the processes {N^i}_{i≥1} jump simultaneously, and let Ω_{ij} denote the set on which N^i and N^j don't jump simultaneously. Then P(Ω_{ij}) = 1 for i ≠ j by the previous assertion. Since A ⊂ ∪_{i>j} Ω^c_{ij}, P(A) ≤ Σ_{i>j} P(Ω^c_{ij}) = 0. Therefore jumps don't happen simultaneously almost surely.
Going back to the main proof, by (a) and the fact that the N^i and L^i don't jump simultaneously, ∀t > 0,

Σ_{s≤t} |ΔM_s| = Σ_{s≤t} |ΔN_s| + Σ_{s≤t} |ΔL_s| = ∞ a.s.   (9)

11. Continuity: We use the notation adopted in Example 2 of section 4 (p. 33). Assume E|U_1| < ∞. By independence of the U_i, an elementary inequality, Markov's inequality, and the properties of the Poisson process, we observe

lim_{s→t} P(|Z_t − Z_s| > ε) = lim_{s→t} Σ_k P(|Z_t − Z_s| > ε | N_t − N_s = k) P(N_t − N_s = k)
≤ lim_{s→t} Σ_k P( Σ_{i=1}^k |U_i| > ε ) P(N_t − N_s = k)
≤ lim_{s→t} Σ_k [k P(|U_1| > ε/k)] P(N_t − N_s = k)
≤ lim_{s→t} (E|U_1|/ε) Σ_k k^2 P(N_t − N_s = k) = lim_{s→t} (E|U_1|/ε) λ(t − s)[1 + λ(t − s)] = 0.   (10)



Independent increments: Let F be the distribution function of U. Using independence of {U_k}_k and the strong Markov property of N, for arbitrary t, s with t ≥ s ≥ 0,

E( e^{iu(Z_t − Z_s) + ivZ_s} ) = E( E( e^{iu Σ_{k=N_s+1}^{N_t} U_k + iv Σ_{k=1}^{N_s} U_k} | F_s ) )
= E( e^{iv Σ_{k=1}^{N_s} U_k} E( e^{iu Σ_{k=N_s+1}^{N_t} U_k} | F_s ) )
= E( e^{iv Σ_{k=1}^{N_s} U_k} ) E( e^{iu Σ_{k=N_s+1}^{N_t} U_k} )
= E( e^{ivZ_s} ) E( e^{iu(Z_t − Z_s)} ).   (11)

This shows that Z has independent increments.

Stationary increments: Since the {U_k}_k are i.i.d. and independent of N_t,

E( e^{iuZ_t} ) = E( E( e^{iuZ_t} | N_t ) ) = E[ ( ∫ e^{iux} F(dx) )^{N_t} ] = Σ_{n≥0} ((λt)^n e^{−λt} / n!) ( ∫ e^{iux} F(dx) )^n
= exp{ −tλ ∫ [1 − e^{iux}] F(dx) }.   (12)

By (11) and (12), setting v = u,

E( e^{iu(Z_t − Z_s)} ) = E( e^{iuZ_t} ) / E( e^{iuZ_s} ) = exp{ −(t − s)λ ∫ [1 − e^{iux}] F(dx) } = E( e^{iuZ_{t−s}} ).   (13)

Hence Z has stationary increments.

12. By exercise 11, a compound Poisson process is a Lévy process and has independent, stationary increments. Moreover,

E|Z_t − λtEU_1| ≤ E(E(|Z_t| | N_t)) + λtE|U_1| ≤ Σ_{n=1}^∞ E( Σ_{i=1}^n |U_i| | N_t = n ) P(N_t = n) + λtE|U_1|
= E|U_1| EN_t + λtE|U_1| = 2λtE|U_1| < ∞.   (14)

For t ≥ s, E(Z_t|F_s) = E(Z_t − Z_s + Z_s|F_s) = Z_s + E(Z_{t−s}). Since

EZ_t = E(E(Z_t|N_t)) = Σ_{n=1}^∞ E( Σ_{i=1}^n U_i | N_t = n ) P(N_t = n) = λtEU_1,   (15)

E(Z_t − EU_1 λt|F_s) = Z_s − EU_1 λs a.s., and {Z_t − EU_1 λt}_{t≥0} is a martingale.
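The centering EZ_t = λt EU_1 behind the martingale can be sanity-checked by a seeded Monte Carlo sketch (the Exp(1) jump law and the values λ = 3, t = 2 are illustrative assumptions, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 3.0, 2.0, 50_000

N = rng.poisson(lam * t, size=n_paths)     # N_t for each path
jumps = rng.exponential(1.0, N.sum())      # all jump sizes U_i ~ Exp(1), so E U_1 = 1

Z_mean = jumps.sum() / n_paths             # Monte Carlo estimate of E Z_t
assert abs(Z_mean - lam * t * 1.0) < 0.1   # E Z_t = lam * t * E U_1 = 6
```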

13. By the Lévy decomposition theorem and the hypothesis, Z_t can be decomposed as

Z_t = ∫_R x N_t(·, dx) + t( α − ∫_{|x|≥1} x ν(dx) ) = Z'_t + βt,   (16)

where Z'_t = ∫_R x N_t(·, dx) and β = α − ∫_{|x|≥1} x ν(dx). By theorem 43, E(e^{iuZ'_t}) = exp( −t ∫_R (1 − e^{iux}) ν(dx) ). Z'_t is a compound Poisson process (see problem 11). Its arrival rate (intensity) is λ, since E( ∫_R N_t(·, dx) ) = t ∫_R ν(dx) = λt. Since Z_t is a martingale, EZ'_t = −βt. Since EZ'_t = E( ∫_R x N_t(·, dx) ) = t ∫_R x ν(dx), β = −∫_R x ν(dx). It follows that Z_t = Z'_t − λt ∫_R x (ν/λ)(dx) is a compensated compound Poisson process. Then problem 12 shows that EU_1 = ∫_R x (ν/λ)(dx). It follows that the distribution of jumps is μ = (1/λ)ν.

14. Suppose EN_t < ∞. At first we show that Z_t ∈ L^1:

E|Z_t| ≤ E( Σ_{i=1}^{N_t} |U_i| ) = Σ_{n=0}^∞ E( Σ_{i=1}^n |U_i| ) P(N_t = n) ≤ sup_i E|U_i| Σ_{n=0}^∞ n P(N_t = n) = EN_t sup_i E|U_i| < ∞.

Then

E(Z_t|F_s) = E( Z_s + Σ_{i=1}^∞ U_i 1_{{s < T_i ≤ t}} | F_s ).


16. Let F_t be the natural filtration of B_t satisfying the usual hypotheses. By the stationary increments property of Brownian motion and symmetry,

W_t = B_{1−t} − B_1 =^d B_1 − B_{1−t} =^d B_{1−(1−t)} = B_t.   (21)

This shows W_t is Gaussian. W_t has stationary increments because W_t − W_s = B_{1−s} − B_{1−t} =^d B_{t−s}. Let G_t be the natural filtration of W_t. Then G_t = σ(−(B_1 − B_{1−s}) : 0 ≤ s ≤ t). By the independent increments property of B_t, G_t is independent of F_{1−t}. For s > t, W_s − W_t = −(B_{1−t} − B_{1−s}) ∈ F_{1−t} and hence is independent of G_t.
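Since B and W are both centered Gaussian processes, the identity in law used above reduces to equality of covariance functions; a small deterministic check, using Cov(B_a, B_b) = min(a, b) and bilinearity of covariance:

```python
def bb_cov(a, b):
    # Covariance of standard Brownian motion: Cov(B_a, B_b) = min(a, b)
    return min(a, b)

def w_cov(s, t):
    # W_u = B_{1-u} - B_1; expand Cov(W_s, W_t) by bilinearity
    return bb_cov(1 - s, 1 - t) - bb_cov(1 - s, 1) - bb_cov(1, 1 - t) + bb_cov(1, 1)

grid = [i / 10 for i in range(11)]
assert all(abs(w_cov(s, t) - min(s, t)) < 1e-12 for s in grid for t in grid)
```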

17. a) Fix ε > 0 and ω such that X·(ω) has a sample path which is right continuous with left limits. Suppose there exist infinitely many jumps larger than ε, at times {s_n}_{n≥1} ⊂ [0, t]. (If there are uncountably many such jumps, we can arbitrarily choose countably many of them.) Since [0, t] is compact, there exists a subsequence {s_{n_k}}_{k≥1} converging to a cluster point s* ∈ [0, t]. Clearly we can take a further subsequence converging to s* monotonically, either from above or from below. To simplify notation, suppose ∃{s_n}_{n≥1} ↑ s*. (The other case {s_n}_{n≥1} ↓ s* is similar.) By the existence of the left limit of X at s*, there exists δ > 0 such that s ∈ (s* − δ, s*) implies |X_s − X_{s*−}| < ε/3 and |X_{s−} − X_{s*−}| < ε/3. However for s_n ∈ (s* − δ, s*),

|X_{s_n} − X_{s*−}| = |X_{s_n−} − X_{s*−} + ΔX_{s_n}| ≥ |ΔX_{s_n}| − |X_{s_n−} − X_{s*−}| > 2ε/3.   (22)

This is a contradiction and the claim is shown.
b) By a), for each n there are only finitely many jumps of size larger than 1/n. But J = {s ∈ [0, t] : |ΔX_s| > 0} = ∪_{n=1}^∞ {s ∈ [0, t] : |ΔX_s| > 1/n}. We see that the cardinality of J is countable.

18. By the corollary to theorem 36 and theorem 37, we can immediately see that J_ε and Z − J_ε are Lévy processes. By the Lévy-Khintchine formula (theorem 43), we can see that ψ_{J_ε} ψ_{Z−J_ε} = ψ_Z. Thus J_ε and Z − J_ε are independent. (For an alternative rigorous solution without the Lévy-Khintchine formula, see the proof of theorem 36.)

19. Let T_n = inf{t > 0 : |X_t| > n}. Then T_n is a stopping time. Let S_n = T_n 1_{{X_0 ≤ n}}. We have {S_n ≤ t} = {T_n ≤ t, X_0 ≤ n} ∪ {X_0 > n} = {T_n ≤ t} ∪ {X_0 > n} ∈ F_t, so S_n is a stopping time. Since X is continuous, S_n → ∞ and |X^{S_n} 1_{{S_n > 0}}| ≤ n, ∀n ≥ 1. Therefore X is locally bounded.

20. Let H be an arbitrary unbounded random variable (e.g. normal) and let T be a positive random variable, independent of H, such that P(T ≥ t) > 0 for all t > 0 and P(T < ∞) = 1 (e.g. exponential). Define the process Z_t = H1_{{t ≥ T}}. Z is a càdlàg process, adapted to its natural filtration, with Z_0 = 0. Suppose there exists a sequence of stopping times T_n ↑ ∞ such that each Z^{T_n} is bounded by some K_n ∈ R. Observe that

Z^{T_n}_t = H if t ∧ T_n ≥ T, and Z^{T_n}_t = 0 otherwise.   (23)

Since Z^{T_n} is bounded by K_n, P(|Z^{T_n}_t| > K_n) = P(|H| > K_n) P(t ∧ T_n ≥ T) = 0. It follows that P(t ∧ T_n ≥ T) = 0, ∀t, ∀n, and hence P(T_n < T) = 1. Moreover P(∩_n {T_n < T}) = P(T = ∞) = 1. This is a contradiction.



21. a) Let a = (1 − t)^{−1} ∫_t^1 Y(s)ds and let M_t = Y(ω)1_{(0,t)}(ω) + a1_{[t,1)}(ω). For arbitrary B ∈ B([0,1]), {ω : M_t ∈ B} = ((0,t) ∩ {Y ∈ B}) ∪ ({a ∈ B} ∩ [t,1)). (0,t) ∩ {Y ∈ B} ⊂ (0,t) and hence is in F_t. {a ∈ B} ∩ [t,1) is either [t,1) or ∅ depending on B, and in either case is in F_t. Therefore M_t is adapted.
Pick A ∈ F_t. Suppose A ⊂ (0,t). Then clearly E(M_t : A) = E(Y : A). Suppose A ⊃ (t,1). Then

E(M_t : A) = E(Y : A ∩ (0,t)) + E( (1 − t)^{−1} ∫_t^1 Y(s)ds : (t,1) )
= E(Y : A ∩ (0,t)) + ∫_t^1 Y(s)ds = E(Y : A ∩ (0,t)) + E(Y : (t,1)) = E(Y : A).   (24)

Therefore for ∀A ∈ F_t, E(M_t : A) = E(Y : A) and hence M_t = E(Y|F_t) a.s.
b) By a simple calculation EY^2 = 1/(1 − 2α) < ∞, so Y ∈ L^2 ⊂ L^1. It is also straightforward to compute M_t = (1 − t)^{−1} ∫_t^1 Y(s)ds = (1 − α)^{−1} Y(t) for ω ∈ (t,1).
c) From b), M_t(ω) = Y(ω)1_{(0,t)}(ω) + (1 − α)^{−1} Y(t)1_{(t,1)}(ω). Fix ω ∈ (0,1). Since 0 < α < 1/2, Y/(1 − α) > Y and Y is increasing on (0,1). Therefore,

sup_{0<t<1} M_t = ( sup_{t<ω} (1 − α)^{−1} Y(t) ) ∨ Y(ω) = (1 − α)^{−1} Y(ω) ∨ Y(ω) = (1 − α)^{−1} Y(ω).   (25)

22. b) Suppose {T > ε} ⊂ (0, ε] for all ε > 0. Then T(ω) ≤ ε on (ε, 1) for all ε > 0 and it follows that T ≡ 0. This is a contradiction. Therefore there exists ε_0 such that {T > ε_0} ⊃ (ε_0, 1). Fix ε ∈ (0, ε_0). Then {T > ε} ⊃ {T > ε_0} ⊃ (ε_0, 1). On the other hand, since T is a stopping time, {T > ε} ⊂ (0, ε] or {T > ε} ⊃ (ε, 1). Combining these observations, we know that {T > ε} ⊃ (ε, 1) for all ε ∈ (0, ε_0). ∀ε ∈ (0, ε_0), there exists δ > 0 such that ε − δ > 0 and {T > ε − δ} ⊃ (ε − δ, 1). In particular T(ε) > ε − δ. Taking the limit δ ↓ 0, we observe T(ε) ≥ ε on (0, ε_0).

c) Using the notation in b), T(ω) ≥ ω on (0, ε_0) and hence M_T(ω) = ω^{−1/2} on (0, ε_0). Therefore, EM_T^2 ≥ E(1/ω : (0, ε_0)) = ∫_0^{ε_0} ω^{−1} dω = ∞. Since this is true for all stopping times not identically equal to 0, M cannot be locally an L^2 martingale.
d) By a), |M_t(ω)| ≤ ω^{−1/2} ∨ 2 and M has a bounded path for each ω. If M_T 1_{{T>0}} were a bounded random variable, then M_T would be bounded as well, since M_0 = 2. However, from c), M_T ∉ L^2 unless T ≡ 0 a.s., and hence M_T 1_{{T>0}} is unbounded.

23. Let M be a positive local martingale and {T_n} its fundamental sequence. Then for t ≥ s ≥ 0, E(M^{T_n}_t 1_{{T_n>0}}|F_s) = M^{T_n}_s 1_{{T_n>0}}. By applying Fatou's lemma, E(M_t|F_s) ≤ M_s a.s. Therefore a positive local martingale is a supermartingale. By Doob's supermartingale convergence theorem, a positive supermartingale converges almost surely to X_∞ ∈ L^1 and is closable. Then by Doob's optional stopping theorem, E(M_T|F_S) ≤ M_S a.s. for all stopping times S ≤ T < ∞. If equality holds in the last inequality for all S ≤ T, then M is clearly a martingale, since deterministic times 0 ≤ s ≤ t are also stopping times. Therefore any positive local martingale which is not a true martingale makes a desired example. For a concrete example, see the example at the beginning of section 5, chapter 1 (p. 37).

24. a) To simplify notation, set F^1 = σ(Z^{T−}, T) ∨ N. At first, we show G_{T−} ⊃ F^1. Since N ⊂ G_0, N ⊂ G_{T−}. Clearly T ∈ G_{T−} since {T ≤ t} = (Ω ∩ {t < T})^c ∈ G_{T−}.
∀t > 0, {Z^{T−}_t ∈ B} = ({Z_t ∈ B} ∩ {t < T}) ∪ ({Z_{T−} ∈ B} ∩ {t ≥ T} ∩ {T < ∞}).
{Z_t ∈ B} ∩ {t < T} ∈ G_{T−} by definition of G_{T−}. Define a mapping f : {T < ∞} → {T < ∞} × R_+ by f(ω) = (ω, T(ω)), and define the σ-algebra P^1 of subsets of Ω × R_+ that makes all left continuous processes with right limits (càglàd processes) measurable. In particular {Z_{t−}}_{t≥0} is P-measurable, i.e. {(ω, t) : Z_{t−}(ω) ∈ B} ∈ P. Without proof, we cite a fact about P, namely f^{−1}(P) = G_{T−} ∩ {T < ∞}. Since Z_{T(ω)−}(ω) = Z_− ∘ f(ω), {Z_{T−} ∈ B} ∩ {T < ∞} = f^{−1}({(ω, t) : Z_{t−}(ω) ∈ B}) ∩ {T < ∞} ∈ G_{T−} ∩ {T < ∞}. Note that {T < ∞} = (∩_n {n < T})^c ∈ G_{T−}. Then Z_{T−} ∈ G_{T−} and G_{T−} ⊃ F^1.
Next we show G_{T−} ⊂ F^1. G_0 ⊂ F^1, since Z^{T−}_0 = Z_0. Fix t > 0. We want to show that for all A ∈ G_t, A ∩ {t < T} ∈ F^1. Let Λ = {A : A ∩ {t < T} ∈ F^1} and let

Π = { ∩_{i=1}^n {Z_{s_i} ≤ x_i} : n ∈ N_+, 0 ≤ s_i ≤ t, x_i ∈ R } ∪ N.   (27)

Then Π is a π-system and σ(Π) = G_t. Observe N ⊂ F^1 and

( ∩_{i=1}^n {Z_{s_i} ≤ x_i} ) ∩ {t < T} = ( ∩_{i=1}^n {Z^{T−}_{s_i} ≤ x_i} ) ∩ {t < T} ∈ F^1,   (28)

so Π ⊂ Λ. By Dynkin's theorem (π-λ theorem), G_t = σ(Π) ⊂ Λ, hence the claim is shown.

b) To simplify the notation, let H = σ(T, Z^T) ∨ N. Then clearly H ⊂ G_T, since N ⊂ G_T, T ∈ G_T, and Z^T_t ∈ G_T for all t. It suffices to show that G_T ⊂ H. Let

L = { A ∈ G_∞ : E[1_A|G_T] = E[1_A|H] }.   (29)

Observe that L is a λ-system (Dynkin system) and contains all the null sets, since so does G_∞. Let

C = { ∩_{j=1}^n {Z_{t_j} ∈ B_j} : n ∈ N, t_j ∈ [0, ∞), B_j ∈ B(R) }.   (30)

Then C is a π-system such that σ(C) ∨ N = G_∞. Therefore by Dynkin's theorem, provided that C ⊂ L, σ(C) ⊂ L and thus G_∞ ⊂ L. For arbitrary A ∈ G_T ⊂ G_∞, 1_A = E[1_A|G_T] = E[1_A|H] ∈ H and hence A ∈ H.
It remains to show that C ⊂ L. Fix n ∈ N. Since H ⊂ G_T, it suffices to show that

E[ ∏_{j=1}^n 1_{{Z_{t_j} ∈ B_j}} | G_T ] ∈ H.   (31)

^1 P is called a predictable σ-algebra. Its definition and properties will be discussed in chapter 3.



For this, let t_0 = 0 and t_{n+1} = ∞, and write

E[ ∏_{j=1}^n 1_{{Z_{t_j} ∈ B_j}} | G_T ] = Σ_{k=1}^{n+1} 1_{{T ∈ [t_{k−1}, t_k)}} ( ∏_{j<k} 1_{{Z_{t_j} ∈ B_j}} ) E[ ∏_{j≥k} 1_{{Z_{t_j} ∈ B_j}} | G_T ].   (32)

25. Fix ε > 0 and t > 0. Then {|ΔZ_t| > ε} ⊂ ∪_m ∩_{n≥m} {|Z_t − Z_{t−1/n}| > ε}. Therefore, since a Lévy process is continuous in probability,

P(|ΔZ_t| > ε) ≤ lim inf_{n→∞} P(|Z_t − Z_{t−1/n}| > ε) = 0.   (33)

Since this is true for all ε > 0, P(|ΔZ_t| > 0) = 0, ∀t.

26. To apply the results of exercises 24 and 25, we first show the following almost trivial lemma.
Lemma. Let T be a stopping time and t ∈ R_+. If T ≡ t, then G_T = G_t and G_{T−} = G_{t−}.
Proof. G_{T−} = {A ∩ {T > s} : A ∈ G_s} = {A ∩ {t > s} : A ∈ G_s} = {A : A ∈ G_s, s < t} = G_{t−}. Fix A ∈ G_t. Then A ∩ {t ≤ s} = A ∈ G_t ⊂ G_s (t ≤ s), or = ∅ ∈ G_s (t > s), and hence G_t ⊂ G_T. Fix A ∈ G_T. Then A ∩ {t ≤ s} ∈ G_s, ∀s > 0. In particular, A ∩ {t ≤ t} = A ∈ G_t and G_T ⊂ G_t. Therefore G_T = G_t.
Fix t > 0. By exercise 25, Z_t = Z_{t−} a.s. By exercise 24 (d) and the lemma, G_t = G_{t−}, since t is a bounded stopping time.

27. Let A ∈ F_t. Then A ∩ {t < S} = (A ∩ {t < S}) ∩ {t < T} ∈ F_{T−}, since A ∩ {t < S} ∈ F_t. Thus F_{S−} ⊂ F_{T−}.
Since T_n ≤ T, F_{T_n−} ⊂ F_{T−} for all n, as shown above. Therefore ∨_n F_{T_n−} ⊂ F_{T−}. Let A ∈ F_t. Then A ∩ {t < T} = ∪_n (A ∩ {t < T_n}) ∈ ∨_n F_{T_n−}, so F_{T−} ⊂ ∨_n F_{T_n−}. Therefore F_{T−} = ∨_n F_{T_n−}.

28. Observe that the equation in theorem 38 depends only on the existence of an approximation of f1_Λ ≥ 0 by simple functions and the convergence of both sides of E{ Σ_j a_j N^{Λ_j}_t } = t Σ_j a_j ν(Λ_j). For this, f1_Λ ∈ L^1 is enough. (Note that we need f1_Λ ∈ L^2 to show the second equation.)


29. Let M_t be a Lévy process and a local martingale. M_t has a representation of the form

M_t = B_t + ∫_{{|x|≤1}} x [N_t(·, dx) − tν(dx)] + αt + ∫_{{|x|>1}} x N_t(·, dx).   (34)

The first two terms are martingales. Therefore WLOG we can assume that

M_t = αt + ∫_{{|x|>1}} x N_t(·, dx).   (35)

M_t has only finitely many jumps on each interval [0, t] by exercise 17 (a). Let {T_n}_{n≥1} be the sequence of jump times of M_t. Then P(T_n < ∞) = 1, ∀n, and T_n ↑ ∞. We can express M_t as the sum of a compound Poisson process and a drift term (see the Example on p. 33):

M_t = Σ_{i=1}^∞ U_i 1_{{t ≥ T_i}} + αt.   (36)

Since M_t is a local martingale by hypothesis, M is a martingale if and only if M_t ∈ L^1 and E[M_t] = 0, ∀t. There exists a fundamental sequence {S_n} such that M^{S_n} is a martingale, ∀n, and WLOG we can assume that each M^{S_n} is uniformly integrable. If U_1 ∈ L^1, then M_t ∈ L^1 for every α, and M_t becomes a martingale with α* = −EN_t EU_1/t. Furthermore, for any other α, M_t can't be a local martingale, since for any stopping time T, M^T_t = L^T_t + (α − α*)(t ∧ T), where L_t is the martingale given by α = α*, and M^T_t can't be a martingale if α ≠ α*. It follows that if M_t is only a local martingale, then U_1 ∉ L^1.

By exercise 24,

F_{T_1−} = σ(M^{T_1−}, T_1) ∨ N = σ(T_1) ∨ N,   (37)

since M^{T_1−}_t = M_t 1_{{t < T_1}} + M_{T_1−} 1_{{t ≥ T_1}}, which is a deterministic function of T_1 (before the first jump, M moves only by its drift).

30. Let T_z = inf{t > 0 : Z_t ≥ z}. Then T_z is a stopping time and P(T_z < ∞) = 1. Let's define the coordinate map ω(t) = Z_t(ω). Let R = inf{s < t : Z_s ≥ z}. We let

Y_s(ω) = 1 if s ≤ t and ω(t − s) < z − y, 0 otherwise;
Y'_s(ω) = 1 if s ≤ t and ω(t − s) > z + y, 0 otherwise.   (40)


So that

Y_R ∘ θ_R(ω) = 1 if R ≤ t and Z_t < z − y, 0 otherwise;
Y'_R ∘ θ_R(ω) = 1 if R ≤ t and Z_t > z + y, 0 otherwise.   (41)

The strong Markov property implies that on {R < ∞} = {T_z ≤ t} = {S_t ≥ z},

E_0(Y_R ∘ θ_R | F_R) = E_{Z_R} Y_R,   E_0(Y'_R ∘ θ_R | F_R) = E_{Z_R} Y'_R.   (42)

Since Z_R ≥ z and Z is symmetric,

E_a Y_s = E_a 1_{{Z_{t−s} < z−y}} ≤ E_a 1_{{Z_{t−s} > z+y}} = E_a Y'_s,   ∀a ≥ z, s < t.   (43)

By taking expectations,

P_0(T_z ≤ t, Z_t < z − y) = E(E_0(Y_R ∘ θ_R | F_R) : R < ∞) = E(E_{Z_R} Y_R : R < ∞)
≤ E(E_{Z_R} Y'_R : R < ∞) = E(E_0(Y'_R ∘ θ_R | F_R) : R < ∞) = P_0(T_z ≤ t, Z_t > z + y)
= P(Z_t > z + y).   (44)
31. We define T_z, R, ω(t) as in exercise 30. We let

Y_s(ω) = 1 if s ≤ t and ω(t − s) ≥ z, 0 otherwise.   (45)

Since Z_R ≥ z and Z is symmetric,

E_a Y_s = E_a 1_{{Z_{t−s} ≥ z}} ≥ 1/2,   ∀a ≥ z, s < t.   (46)

By the same reasoning as in exercise 30, taking expectations yields

P_0(Z_t ≥ z) = P_0(T_z ≤ t, Z_t ≥ z) = E(E_0(Y_R ∘ θ_R | F_R) : R < ∞) = E(E_{Z_R} Y_R : R < ∞)
≥ E(1/2 : R < ∞) = (1/2) P(R < ∞) = (1/2) P(S_t ≥ z).   (47)
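For Brownian motion, a symmetric Lévy process for which the bound P(S_t ≥ z) ≤ 2P(Z_t ≥ z) of (47) is classical (the reflection principle even gives equality), a seeded Euler-grid sketch can check the inequality; the grid size, seed, and level z are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t, z = 20_000, 300, 1.0, 1.0

# Z: standard Brownian motion on an Euler grid; S: its running supremum
dZ = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
Z = np.cumsum(dZ, axis=1)
S = Z.max(axis=1)

p_sup = (S >= z).mean()          # P(S_t >= z); slightly undercounted by the grid
p_end = (Z[:, -1] >= z).mean()   # P(Z_t >= z)
assert p_sup <= 2 * p_end + 0.02
```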


Chapter 2. Semimartingales and Stochastic Integrals

1. Let x_0 ∈ R be a discontinuity point of f. WLOG, we can assume that f is right continuous with left limits and Δf(x_0) = d > 0. Since inf{t : B_t = x_0} < ∞ a.s. and by the strong Markov property of B_t, we can assume x_0 = 0. Almost every Brownian path has no point of decrease (or increase), and B is a continuous process. So B visits x_0 = 0 and changes its sign infinitely many times on [0, ε] for any ε > 0. Therefore Y_t = f(B_t) has infinitely many jumps on any compact interval almost surely. Therefore,

Σ_{s≤t} (ΔY_s)^2 = ∞ a.s.

2. By the dominated convergence theorem,

lim_{n→∞} E_Q[|X_n − X| ∧ 1] = lim_{n→∞} E_P[ (|X_n − X| ∧ 1) · dQ/dP ] = 0.

|X_n − X| ∧ 1 → 0 in L^1(Q) implies that X_n → X in Q-probability.

3. B_t is a continuous local martingale by construction, and by the independence of X and Y,

[B, B]_t = α^2 [X, X]_t + (1 − α^2)[Y, Y]_t = t.

So by Lévy's theorem, B is a standard 1-dimensional Brownian motion. Moreover,

[X, B]_t = αt,   [Y, B]_t = √(1 − α^2) t.
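A seeded one-dimensional check of the covariance structure at a fixed time (this only checks the marginals at t, not the full Lévy characterization; α = 0.6 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, t, n = 0.6, 1.0, 200_000

X = rng.normal(0.0, np.sqrt(t), n)   # X_t and Y_t: independent BMs sampled at time t
Y = rng.normal(0.0, np.sqrt(t), n)
B = alpha * X + np.sqrt(1.0 - alpha**2) * Y

assert abs(B.var() - t) < 0.02                  # consistent with [B, B]_t = t
assert abs(np.mean(B * X) - alpha * t) < 0.02   # consistent with [X, B]_t = alpha t
```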

4. Assume first that f(0) = 0. Let M_t = B_{f(t)} and G_t = F_{f(t)}, where B is a one-dimensional standard Brownian motion and F_t is the corresponding filtration. Then

E[M_t | G_s] = E[B_{f(t)} | F_{f(s)}] = B_{f(s)} = M_s.

Therefore M_t is a martingale w.r.t. G_t. Since f is continuous and B_t is a continuous process, M_t is clearly a continuous process. Finally,

[M, M]_t = [B, B]_{f(t)} = f(t).

If f(0) > 0, then we only need to add a constant process A_t to B_{f(t)} such that 2A_t M_0 + A^2_t = −B^2_{f(0)} for each ω to get the desired result.

5. Since B is a continuous process with ΔB_0 = 0, M is also continuous. M is a local martingale since B is a locally square integrable local martingale. Moreover,

[M, M]_t = ∫_0^t H^2_s ds = ∫_0^t 1 ds = t.

So by Lévy's characterization theorem, M is also a Brownian motion.
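A seeded discretization sketch, assuming the particular admissible choice H_s = sign(B_s) (any adapted ±1-valued H works): the Riemann-Itô sums should be centered with variance t.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, t = 20_000, 200, 1.0

dB = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

H = np.where(B[:, :-1] >= 0, 1.0, -1.0)   # adapted: H at step i uses B before the increment
M = (H * dB).sum(axis=1)                  # discretized M_t = int_0^t H_s dB_s

assert abs(M.mean()) < 0.03               # E M_t = 0
assert abs(M.var() - t) < 0.05            # Var M_t = [M, M]_t = t
```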



6. Pick an arbitrary t_0 > 1. Let X^n_t = 1_{(t_0 − 1/n, ∞)}(t) for n ≥ 1, X_t = 1_{[t_0, ∞)}(t), Y_t = 1_{[t_0, ∞)}(t). Then X^n, X, Y are finite variation processes and semimartingales, and lim_n X^n_t = X_t almost surely. But

lim_n [X^n, Y]_{t_0} = 0 ≠ 1 = [X, Y]_{t_0}.
7. Observe that

[X^n, Z] = [H^n · Y, Z] = H^n · [Y, Z],   [X, Z] = [H · Y, Z] = H · [Y, Z],

and [Y, Z] is a semimartingale. Then by the continuity of the stochastic integral, H^n → H in ucp implies H^n · [Y, Z] → H · [Y, Z], and hence [X^n, Z] → [X, Z] in ucp.

8. By applying Itô's formula,

[f_n(X) − f(X), Y]_t = (f_n(X_0) − f(X_0))Y_0 + ∫_0^t (f'_n(X_s) − f'(X_s)) d[X, Y]_s + (1/2) ∫_0^t (f''_n(X_s) − f''(X_s)) d[[X, X], Y]_s
= (f_n(X_0) − f(X_0))Y_0 + ∫_0^t (f'_n(X_s) − f'(X_s)) d[X, Y]_s.

Note that [[X, X], Y] ≡ 0, since X and Y are continuous semimartingales and in particular [X, X] is a finite variation process. As n → ∞, (f_n(X_0) − f(X_0))Y_0 → 0 for every ω ∈ Ω. Since [X, Y] has paths of finite variation on compacts, we can treat f'_n(X) · [X, Y] and f'(X) · [X, Y] as Lebesgue-Stieltjes integrals computed path by path. So fix ω ∈ Ω. Then as n → ∞,

sup_{0≤s≤t} |f'_n(X_s) − f'(X_s)| = sup_{inf_{s≤t} X_s ≤ x ≤ sup_{s≤t} X_s} |f'_n(x) − f'(x)| → 0,

since f'_n → f' uniformly on compacts. Therefore

| ∫_0^t (f'_n(X_s) − f'(X_s)) d[X, Y]_s | ≤ sup_{0≤s≤t} |f'_n(X_s) − f'(X_s)| ∫_0^t d|[X, Y]|_s → 0.
9. Let σ_n be a sequence of random partitions tending to the identity. By theorem 22 (using A_0 = 0),

[M, A] = M_0 A_0 + lim_{n→∞} Σ_i ( M^{T^n_{i+1}} − M^{T^n_i} )( A^{T^n_{i+1}} − A^{T^n_i} )
≤ 0 + lim_{n→∞} sup_i | M^{T^n_{i+1}} − M^{T^n_i} | Σ_i | A^{T^n_{i+1}} − A^{T^n_i} | = 0,

since M is a continuous process and Σ_i |A^{T^n_{i+1}} − A^{T^n_i}| < ∞ by hypothesis. Similarly,

[A, A] = A_0^2 + lim_{n→∞} Σ_i ( A^{T^n_{i+1}} − A^{T^n_i} )( A^{T^n_{i+1}} − A^{T^n_i} )
≤ 0 + lim_{n→∞} sup_i | A^{T^n_{i+1}} − A^{T^n_i} | Σ_i | A^{T^n_{i+1}} − A^{T^n_i} | = 0.

Therefore,

[X, X] = [M, M] + 2[M, A] + [A, A] = [M, M].


10. X^2 is a P-semimartingale and hence a Q-semimartingale by theorem 2. By the corollary of theorem 15, (X_− · X)_Q is indistinguishable from (X_− · X)_P. Then by the definition of quadratic variation,

[X, X]_P = X^2 − 2(X_− · X)_P = X^2 − 2(X_− · X)_Q = [X, X]_Q

up to an evanescent set.

11. a) Λ = [−2, 1] is a closed set and B has continuous paths, so by theorem 4 in chapter 1, T(ω) = inf{t : B_t ∉ Λ} is a stopping time.
b) M_t is a uniformly integrable martingale since M_t = E[B_T | F_t]. Clearly M is continuous, and clearly N is a continuous martingale as well. By theorem 23, [M, M]_t = [B, B]^T_t = t ∧ T and [N, N]_t = [−B, −B]^T_t = t ∧ T. Thus [M, M] = [N, N]. However P(M_t > 1) = 0 ≠ P(N_t > 1). M and N do not have the same law.

12. Fix t ∈ R_+ and ω ∈ Ω. WLOG we can assume that X(ω) has a càdlàg path and Σ_{s≤t} |△X_s(ω)| < ∞. Then on [0, t] the continuous part of X is bounded by continuity and the jump part of X is bounded by hypothesis, so {X_s}_{s≤t} is bounded. Let K ⊂ R be K = [inf_{s≤t} X_s(ω), sup_{s≤t} X_s(ω)]. Then f, f′, f′′ are bounded on K. Since Σ_{s≤t} {f(X_s) − f(X_{s−}) − f′(X_{s−})△X_s} is an absolutely convergent series (see the proof of Itô's formula, Theorem 32), it suffices to show that Σ_{s≤t} |f′(X_{s−})△X_s| < ∞. By hypothesis,

Σ_{s≤t} |f′(X_{s−})△X_s| ≤ sup_{x∈K} |f′(x)| Σ_{s≤t} |△X_s| < ∞.

Since this is true for all t ∈ R_+ and almost all ω ∈ Ω, the claim is shown.

13. By definition, the stochastic integral is a continuous linear mapping J_X : S_ucp → D_ucp (Section 4). By continuity, H^n → H in the ucp topology implies H^n · X → H · X in the ucp topology.

14. Let  = 1 + A to simplify notation. Fix ω ∈ Ω and let Z = Â^{−1} · X. Then Z_∞(ω) < ∞ by hypothesis and  · Z = X. Applying integration by parts to X and then dividing both sides by  yields

X_t/Â_t = Z_t − (1/Â_t) ∫_0^t Z_s dÂ_s.

Since Z_∞ < ∞ exists, for any ε > 0 there exists τ such that |Z_t − Z_∞| < ε for all t ≥ τ. Then for t > τ,

(1/Â_t) ∫_0^t Z_s dÂ_s = (1/Â_t) ∫_0^τ Z_s dÂ_s + (1/Â_t) ∫_τ^t (Z_s − Z_∞) dÂ_s + Z_∞ (Â_t − Â_τ)/Â_t.

Let us evaluate the right-hand side. As t → ∞, the first term goes to 0 while the last term converges to Z_∞ by hypothesis. By the construction of τ, the second term is smaller than ε(Â_t − Â_τ)/Â_t, which converges to ε. Since this is true for all ε > 0,

lim_{t→∞} (1/Â_t) ∫_0^t Z_s dÂ_s = Z_∞.

Thus lim_{t→∞} X_t/Â_t = 0.


15. If M_0 = 0, then by Theorem 42 there exists a Brownian motion B such that M_t = B_{[M,M]_t}. By the law of the iterated logarithm,

lim_{t→∞} M_t/[M, M]_t = lim_{t→∞} B_{[M,M]_t}/[M, M]_t = lim_{τ→∞} B_τ/τ = 0.

If M_0 ≠ 0, let X_t = M_t − M_0. Then [X, X]_t = [M, M]_t − M_0^2, so [X, X]_t → ∞ and

M_t/[M, M]_t = (X_t + M_0)/([X, X]_t + M_0^2) → 0.

17. Since the integral is taken over [t − 1/n, t] and Y is adapted, X^n_t is also an adapted process. For t > s ≥ 1/n such that |t − s| ≤ 1/n,

|X^n_t − X^n_s| = n |∫_s^t Y_τ dτ − ∫_{s−1/n}^{t−1/n} Y_τ dτ| ≤ 2n(t − s) sup_{s−1/n ≤ τ ≤ t} |Y_τ|.

Since X^n is constant on [0, t] and [1, ∞), let π be a partition of [t, 1] and M = sup_{0≤τ≤1} |Y_τ| < ∞. Then for each n the total variation of X^n is finite, since

Σ_π |X^n_{t_{i+1}} − X^n_{t_i}| ≤ 2nM Σ_π (t_{i+1} − t_i) = 2nM(1 − t).

Therefore X^n is a finite variation process and in particular a semimartingale. By the fundamental theorem of calculus and the continuity of Y, Y_t = lim_{n→∞} n ∫_{t−1/n}^t Y_u du = lim_{n→∞} X^n_t for each t > 0. At t = 0, lim_n X^n_0 = 0 = Y_0. So X^n → Y for all t ≥ 0. However, Y need not be a semimartingale: Y_t = (B_t)^{1/3}, where B is a standard Brownian motion, satisfies all the requirements but is not a semimartingale.

18. B has continuous paths almost surely. Fix ω such that B_·(ω) is continuous. Then lim_{n→∞} X^n_t(ω) = B_t(ω) by Exercise 17. By Theorem 37, the solution of dZ^n_t = Z^n_{t−} dX^n_t, Z^n_0 = 1, is

Z^n_t = exp(X^n_t − (1/2)[X^n, X^n]_t) = exp(X^n_t),

since X^n is a continuous finite variation process. On the other hand, Z_t = exp(B_t − t/2) and

lim_{n→∞} Z^n_t = exp(B_t) ≠ Z_t.
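A quick numerical illustration of why Z^n and Z differ: the averaged paths X^n have vanishing quadratic variation, so the exponential Z^n = exp(X^n) never acquires the −t/2 correction that [B, B]_t = t forces on Z. A minimal sketch, assuming NumPy; the step count and averaging window are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000                       # time steps on [0, 1]
dt = 1.0 / N
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

# X^n_t = n * integral_{t-1/n}^t B_s ds, approximated by a causal moving average
n = 100
w = N // n                       # window of length 1/n in grid steps
X = np.convolve(B, np.ones(w) / w, mode="full")[: N + 1]

qv_B = np.sum(np.diff(B) ** 2)   # [B, B]_1, close to 1
qv_X = np.sum(np.diff(X) ** 2)   # quadratic variation of the smoothed path
print(qv_B, qv_X)                # qv_B near 1, qv_X near 0
```

The smoothed path is of finite variation in the limit, and its quadratic variation is already orders of magnitude smaller on this grid.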

19. A^n is clearly càdlàg and adapted. By the periodicity and symmetry of the triangle wave,

∫_0^{π/2} |dA^n_s| = (1/n) ∫_0^{π/2} |d sin(ns)| = (1/n) · n ∫_0^{π/(2n)} d(sin(ns)) = ∫_0^{π/2} d(sin s) = 1.

Since the total variation is finite, A^n_t is a semimartingale. On the other hand,

lim_{n→∞} sup_{0≤t≤π/2} |A^n_t| = lim_{n→∞} 1/n = 0.


20. Applying Itô's formula to u ∈ C^2(R^3 − {0}) and B_t ∈ R^3∖{0}, ∀t, a.s.,

u(B_t) = u(x) + ∫_0^t ∇u(B_s) · dB_s + (1/2) ∫_0^t △u(B_s) ds = 1/‖x‖ + Σ_{i=1}^3 ∫_0^t ∂_i u(B_s) dB^i_s,

and M_t = u(B_t) is a local martingale (△u = 0 away from the origin). This solves (a). Fix 1 ≤ α < 3. Observe that E(u(B_0)^α) < ∞. Let p, q be positive numbers such that 1/p + 1/q = 1. Then

E^x(u(B_t)^α) = ∫_{R^3} (1/‖y‖^α) (2πt)^{−3/2} e^{−‖x−y‖^2/(2t)} dy
= ∫_{B(0;1)∩B^c(x;δ)} (1/‖y‖^α) (2πt)^{−3/2} e^{−‖x−y‖^2/(2t)} dy + ∫_{R^3∖{B(0;1)∩B^c(x;δ)}} (1/‖y‖^α) (2πt)^{−3/2} e^{−‖x−y‖^2/(2t)} dy
≤ sup_{y∈B(0;1)∩B^c(x;δ)} [(2πt)^{−3/2} e^{−‖x−y‖^2/(2t)}] ∫_{B(0;1)} (1/‖y‖^α) dy
+ (∫_{R^3∖{B(0;1)∩B^c(x;δ)}} (1/‖y‖^{αp}) dy)^{1/p} (∫_{R^3∖{B(0;1)∩B^c(x;δ)}} ((2πt)^{−3/2} e^{−‖x−y‖^2/(2t)})^q dy)^{1/q}.

Pick p > 3/α > 1. Then the first term goes to 0 as t → ∞; in particular it is finite for all t ≥ 0. The first factor in the second term is finite, while the second factor goes to 0 as t → ∞, since

∫_{R^3} (2πt)^{−3q/2} e^{−q‖x−y‖^2/(2t)} dy = (2πt)^{−3(q−1)/2} q^{−3/2} ∫_{R^3} (2πt/q)^{−3/2} e^{−‖x−y‖^2/(2t/q)} dy = (2πt)^{−3(q−1)/2} q^{−3/2},

which is finite for all t > 0. (b) is shown with α = 1 and (c) with α = 2. The case α = 2 also shows that

lim_{t→∞} E^x(u(B_t)^2) = 0,

which in turn shows (d).

21. Applying integration by parts to (A_∞ − A_·)(C_∞ − C_·),

(A_∞ − A_t)(C_∞ − C_t) = A_∞C_∞ − ∫_0^t (A_∞ − A_{s−}) dC_s − ∫_0^t (C_∞ − C_{s−}) dA_s + [A, C]_t.

Since A and C are processes with finite variation,

[A, C]_t = Σ_{s≤t} △A_s △C_s.

Substituting these equations gives the desired identity.


22. Claim that for every integer k ≥ 1

(A_∞ − A_s)^k = k! ∫_s^∞ dA_{s_1} ∫_{s_1}^∞ dA_{s_2} · · · ∫_{s_{k−1}}^∞ dA_{s_k}, (48)

and prove it by induction. For k = 1, equation (48) clearly holds. Assume that it holds for k. Then

(k + 1)! ∫_s^∞ dA_{s_1} ∫_{s_1}^∞ dA_{s_2} · · · ∫_{s_k}^∞ dA_{s_{k+1}} = (k + 1) ∫_s^∞ (A_∞ − A_{s_1})^k dA_{s_1} = (A_∞ − A_s)^{k+1}, (49)

since A is a non-decreasing continuous process, so (A_∞ − A)^k · A is indistinguishable from the Lebesgue-Stieltjes integral calculated path by path. Thus equation (48) holds for k + 1 and hence for all integers k ≥ 1. Then setting s = 0 yields the desired result, since A_0 = 0.

24. a)

∫_0^t (1/(1 − s)) dβ_s = ∫_0^t (1/(1 − s)) dB_s − ∫_0^t ((B_1 − B_s)/(1 − s)^2) ds
= ∫_0^t (1/(1 − s)) dB_s − B_1 ∫_0^t (1/(1 − s)^2) ds + ∫_0^t B_s d(1/(1 − s))
= ∫_0^t (1/(1 − s)) dB_s − B_1 (1/(1 − t) − 1) + B_t/(1 − t) − ∫_0^t (1/(1 − s)) dB_s
= (B_t − B_1)/(1 − t) + B_1,

where the third equality uses integration by parts ([1/(1 − ·), B]_t = 0, since s ↦ 1/(1 − s) is continuous and of finite variation). Rearranging terms gives the desired result.

b) Using integration by parts, since [1 − ·, ∫_0^· (1 − u)^{−1} dβ_u]_t = 0,

X_t = (1 − t) ∫_0^t (1/(1 − s)) dβ_s = ∫_0^t (1 − s)(1/(1 − s)) dβ_s + ∫_0^t (∫_0^s (1/(1 − u)) dβ_u)(−1) ds = β_t + ∫_0^t (−X_s/(1 − s)) ds,

as required. The last equality holds by the result of (a).
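The process X above is the Brownian bridge, which also admits the elementary representation X_t = B_t − tB_1, pinned to 0 at both endpoints. A minimal sketch of that pinning, assuming NumPy; the grid size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
t = np.linspace(0.0, 1.0, N + 1)
dB = rng.normal(0.0, np.sqrt(1.0 / N), N)
B = np.concatenate([[0.0], np.cumsum(dB)])   # Brownian path on [0, 1]

X = B - t * B[-1]       # Brownian bridge: X_0 = X_1 = 0 exactly
print(X[0], X[-1])
```

Simulating the SDE dX_t = dβ_t − X_t/(1 − t) dt directly is delicate near t = 1 because of the singular drift; the representation above sidesteps that.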

27. By Itô's formula and the given SDE,

d(e^{αt}X_t) = αe^{αt}X_t dt + e^{αt} dX_t = αe^{αt}X_t dt + e^{αt}(−αX_t dt + σ dB_t) = σe^{αt} dB_t.

Integrating both sides and rearranging terms yields

X_t = e^{−αt}(X_0 + σ ∫_0^t e^{αs} dB_s).
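The closed form above can be checked numerically with an exact one-step update for the Ornstein-Uhlenbeck process; in particular, with σ = 0 it reduces to the deterministic decay e^{−αt}X_0, which the simulation reproduces to floating-point precision. A sketch, assuming NumPy; all parameter values are arbitrary:

```python
import numpy as np

def simulate_ou(x0, alpha, sigma, T, n, rng):
    """Exact-in-distribution update for dX = -alpha*X dt + sigma dB."""
    dt = T / n
    decay = np.exp(-alpha * dt)
    # innovation standard deviation consistent with the exact transition law
    sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * alpha))
    x = x0
    for _ in range(n):
        x = decay * x + sd * rng.normal()
    return x

rng = np.random.default_rng(2)
x_det = simulate_ou(3.0, 0.5, 0.0, 2.0, 1000, rng)  # sigma = 0: pure decay
print(x_det, 3.0 * np.exp(-0.5 * 2.0))              # the two agree
```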


28. By the law of the iterated logarithm, lim_{t→∞} B_t/t = 0 a.s. In particular, for almost all ω there exists t_0(ω) such that t > t_0(ω) implies B_t(ω)/t < 1/2 − ε for any ε ∈ (0, 1/2). Then

lim_{t→∞} E(B)_t = lim_{t→∞} exp{t(B_t/t − 1/2)} ≤ lim_{t→∞} e^{−εt} = 0, a.s.

29. E(X)^{−1} = E(−X + [X, X]) by the corollary of Theorem 38. This implies that E(X)^{−1} is the solution of the stochastic differential equation

E(X)^{−1}_t = 1 + ∫_0^t E(X)^{−1}_{s−} d(−X_s + [X, X]_s),

which is the desired result. Note that the continuity assumed in the corollary is not necessary if we assume △X_s ≠ −1 instead, so that E(X)^{−1} is well defined.

30. a) By Itô's formula and the continuity of M, M_t = 1 + ∫_0^t M_s dB_s. B is a locally square integrable local martingale and M ∈ L, so by Theorem 20, M is also a locally square integrable local martingale.
b)

[M, M]_t = ∫_0^t M_s^2 ds = ∫_0^t e^{2B_s − s} ds,

and for all t ≥ 0,

E([M, M]_t) = ∫_0^t E[e^{2B_s}] e^{−s} ds = ∫_0^t e^s ds < ∞.

Then by Corollary 3 of Theorem 27, M is a martingale.
c) Ee^{B_t} can be calculated directly using the density function. Alternatively, using the result of b),

Ee^{B_t} = E(M_t e^{t/2}) = e^{t/2} EM_t = e^{t/2} EM_0 = e^{t/2}.
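The identity in part c), Ee^{B_t} = e^{t/2}, is easy to confirm by Monte Carlo, since B_t ~ N(0, t). A sketch, assuming NumPy; the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)
t = 1.0
B_t = rng.normal(0.0, np.sqrt(t), 200_000)   # B_t ~ N(0, t)
mc = np.exp(B_t).mean()                       # Monte Carlo estimate of E e^{B_t}
print(mc, np.exp(t / 2))                      # both close to 1.6487
```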

31. Pick A ∈ F such that P(A) = 0 and fix t ≥ 0. Then A ∩ {R_t ≤ s} ∈ F_s, since F_s contains all P-null sets. Then A ∈ G_t = F_{R_t}. If t_n ↓ t, then by the right continuity of R, R_{t_n} ↓ R_t. Then

G_t = F_{R_t} = ∩_n F_{R_{t_n}} = ∩_n G_{t_n} = ∩_{s>t} G_s

by the right continuity of {F_t}_t and Exercise 4, Chapter 1. Thus {G_t}_t satisfies the usual hypotheses.

32. If M has càdlàg paths and R_t is right continuous, then ¯M_t = M_{R_t} has càdlàg paths. ¯M_t = M_{R_t} ∈ F_{R_t} = G_t, so ¯M is adapted to {G_t}. For all 0 ≤ s ≤ t, R_s ≤ R_t < ∞. Since M is a uniformly integrable martingale, the optional sampling theorem gives

¯M_s = M_{R_s} = E(M_{R_t} | F_{R_s}) = E(¯M_t | G_s) a.s.

So ¯M is a G-martingale.


Chapter 3. Semimartingales and Decomposable Processes

1. Let {T_i}_{i=1}^n be a finite set of predictable stopping times. For each i, T_i has an announcing sequence {T_{i,j}}_{j=1}^∞. Let S_k := max_i T_{i,k} and R_k := min_i T_{i,k}. Then S_k and R_k are stopping times, and {S_k} and {R_k} are announcing sequences for the maximum and the minimum of {T_i}_i respectively.

2. Let T_n = S + (1 − 1/n) for each n. Then each T_n is a stopping time and T_n ↑ T = S + 1 as n ↑ ∞, with T_n < T. Therefore T is a predictable stopping time.

3. Let S, T be two predictable stopping times. Then S ∧ T and S ∨ T are predictable stopping times, as shown in Exercise 1, and Λ = {S ∨ T = S ∧ T} = {S = T}. Therefore, without loss of generality, we can assume that S ≤ T. Let {T_n} be an announcing sequence of T and let R_n = T_n + n 1_{{T_n ≥ S}}. Then R_n is a stopping time, since

{R_n ≤ t} = ({T_n ≤ t} ∩ {T_n < S}) ∪ ({T_n ≤ t − n} ∩ {S ≤ T_n}) ∈ F_t.

The sequence {R_n} is increasing and lim_n R_n = T_Λ = S_Λ.

4. For each X ∈ L, define a new process X^n by X^n = Σ_{k∈N} X_{k/2^n} 1_{[k/2^n,(k+1)/2^n)}. Since each summand is an optional process (see Exercise 6), X^n is an optional process. As a mapping on the product space, X is a pointwise limit of the X^n by left continuity. Therefore X is optional, and by definition P ⊂ O.

5. It suffices to show that all càdlàg processes are progressively measurable (then we can apply a monotone class argument). For a càdlàg process X on [0, t], define a new process X^n by putting X^n_u = X_{kt/2^n} for u ∈ [(k − 1)t/2^n, kt/2^n), k ∈ {1, . . . , 2^n}. Then on Ω × [0, t],

{X^n ∈ B} = ∪_k [{ω : X_{kt/2^n}(ω) ∈ B} × [(k − 1)t/2^n, kt/2^n)] ∈ F_t ⊗ B([0, t]), (50)

since X is adapted; hence X^n is a progressively measurable process. Since X is càdlàg, {X^n} converges pointwise to X, which is therefore also F_t ⊗ B([0, t]) measurable.

6. (1) (S, T]: since 1_{(S,T]} ∈ L, (S, T] = {(t, ω) : 1_{(S,T]}(t, ω) = 1} ∈ P by definition.
(2) [S, T): since 1_{[S,T)} ∈ D, [S, T) = {(t, ω) : 1_{[S,T)}(t, ω) = 1} ∈ O by definition.
(3) (S, T): since [S, T] = ∩_n [S, T + 1/n), [S, T] is an optional set. Then (S, T) = [0, T) ∩ [0, S]^c, so (S, T) is optional as well.
(4) [S, T) when S, T are predictable: let {S_n} and {T_n} be announcing sequences of S and T. Then [S, T) = ∩_m ∪_n (S_m, T_n]. Since (S_m, T_n] is predictable by (1) for all m, n, [S, T) is predictable.

7. Pick a set A ∈ F_{S_n}. Then A = A ∩ {S_n < T} ∈ F_{T−} by Theorem 7 and the definition of {S_n}_n. (Note: the proof of Theorem 7 does not require Theorem 6.) Since this is true for all n, ∨_n F_{S_n} ⊂ F_{T−}. To show the converse, let Π = {B ∩ {t < T} : B ∈ F_t}. Then Π is closed under finite intersections and B ∩ {t < T} = ∪_n (B ∩ {t < S_n}). Since (B ∩ {t < S_n}) ∩ {S_n ≤ t} = ∅ ∈ F_t, we have B ∩ {t < S_n} ∈ F_{S_n}. Therefore B ∩ {t < T} ∈ ∨_n F_{S_n} and Π ⊂ ∨_n F_{S_n}. Then by Dynkin's theorem, F_{T−} ⊂ ∨_n F_{S_n}.


8. Since S_n ≤ T for all n, F_{S_n} ⊂ F_T and hence ∨_n F_{S_n} ⊂ F_T. By the same argument as in Exercise 7, F_{T−} ⊂ ∨_n F_{S_n}. Since F_{T−} = F_T by hypothesis, we have the desired result. (Note: {S_n} is not necessarily an announcing sequence, since S_n = T is possible; therefore ∨_n F_{S_n} ≠ F_{T−} in general.)

9. Let X be a Lévy process, G its natural filtration, and T a stopping time. Then by Exercise 24(c) in Chapter 1, G_T = σ(G_{T−}, X_T). Since X jumps only at totally inaccessible times (a consequence of Theorem 4), X_T = X_{T−} for every predictable stopping time T. Therefore, if T is a predictable time, G_T = σ(G_{T−}, X_T) = σ(G_{T−}, X_{T−}) = G_{T−}, since X_{T−} ∈ G_{T−}. Therefore the completed natural filtration of a Lévy process is quasi-left-continuous.

10. As given in the hint, [M, A] is a local martingale. Let {T_n} be its localizing sequence, that is, [M, A]^{T_n} is a martingale for each n. Then E([X, X]^{T_n}_t) = E([M, M]^{T_n}_t) + E([A, A]^{T_n}_t), since the cross term satisfies E([M, A]^{T_n}_t) = E([M, A]_0) = 0. Since quadratic variation is a non-decreasing non-negative process, first letting n → ∞ and then letting t → ∞, we obtain the desired result by the monotone convergence theorem.

11. By the Kunita-Watanabe inequality for square bracket processes, ([X + Y, X + Y])^{1/2} ≤ ([X, X])^{1/2} + ([Y, Y])^{1/2}, and it follows that [X + Y, X + Y] ≤ 2([X, X] + [Y, Y]). This implies that [X + Y, X + Y] is locally integrable, so the sharp bracket process ⟨X + Y, X + Y⟩ exists. Since the sharp bracket process is the compensator of the square bracket process, we obtain the polarization identity for sharp brackets from that for square brackets, namely

⟨X, Y⟩ = (1/2)(⟨X + Y, X + Y⟩ − ⟨X, X⟩ − ⟨Y, Y⟩). (51)

Then the rest of the proof is exactly the same as that of Theorem 25, Chapter 2, with square bracket processes replaced by sharp bracket processes.

12. Since a continuous finite variation process always has locally integrable variation, without loss of generality we can assume that the value of A changes only by jumps. Thus A can be represented as A_t = Σ_{s≤t} △A_s. Assume that C_· = ∫_0^· |dA_s| is predictable. Then T_n = inf{t > 0 : C_t ≥ n} is a predictable time, since it is the debut of a right-closed predictable set. Let {T_{n,m}}_m be an announcing sequence of T_n for each n and define S_n by S_n = sup_{1≤k≤n} T_{k,n}. Then {S_n} is a sequence of stopping times increasing to ∞ with S_n < T_n, and hence C_{S_n} ≤ n. Thus E C_{S_n} ≤ n and C is locally integrable. To prove that C is predictable, we introduce two standard results.

Lemma. Suppose that A is the union of graphs of a sequence of predictable times. Then there exists a sequence of predictable times {T_n} such that A ⊂ ∪_n [T_n] and [T_n] ∩ [T_m] = ∅ for n ≠ m.
Proof. Let {S_n}_n be a sequence of predictable stopping times such that A ⊂ ∪_n [S_n]. Put T_1 = S_1 and, for n ≥ 2, B_n = ∩_{k=1}^{n−1} {S_k ≠ S_n} and T_n = (S_n)_{B_n}. Then B_n ∈ F_{S_n−}, T_n is predictable, [T_n] ∩ [T_m] = ∅ when n ≠ m, and A ⊂ ∪_{n≥1} [T_n]. (Note: by the definition of the graph, [T_n] ∩ [T_m] = ∅ even if P(T_n = T_m = ∞) > 0, as long as [T_n] and [T_m] are disjoint on Ω × R_+.)


Lemma. Let X be a càdlàg adapted predictable process. Then there exists a sequence of strictly positive predictable times {T_n} such that {△X ≠ 0} ⊂ ∪_n [T_n].
Proof. Let S^{1/k}_{n+1} = inf{t > S^{1/k}_n(ω) : |X_{S^{1/k}_n} − X_t| > 1/k or |X_{S^{1/k}_n} − X_{t−}| > 1/k}. Since X is predictable, we can show by induction that each S^{1/k}_n is a predictable stopping time. In addition, {△X ≠ 0} ⊂ ∪_{n,k≥1} [S^{1/k}_n]. Then by the previous lemma there exists a sequence of predictable stopping times {T_n} such that ∪_{n,k≥1} [S^{1/k}_n] ⊂ ∪_n [T_n], and hence {△X ≠ 0} ⊂ ∪_n [T_n].

Proof of the main claim: combining the two lemmas, we see that {△A ≠ 0} is contained in the union of a sequence of disjoint graphs of predictable stopping times. Since A_t = Σ_{s≤t} △A_s is absolutely convergent for each ω, it is invariant under changes of the order of summation. Therefore A_t = Σ_n △A_{S_n} 1_{{S_n ≤ t}}, where each S_n is a predictable time. △A_{S_n} 1_{{S_n ≤ t}} is a predictable process, since S_n is predictable and △A_{S_n} ∈ F_{S_n−}. This clearly implies that |△A_{S_n}| 1_{{S_n ≤ t}} is predictable. Then C is predictable as well.

Note: as for the second lemma, the following more general claim holds.
Lemma. Let X be a càdlàg adapted process. Then X is predictable if and only if X satisfies the following conditions: (1) there exists a sequence of strictly positive predictable times {T_n} such that {△X ≠ 0} ⊂ ∪_n [T_n]; (2) for each predictable time T, X_T 1_{{T<∞}} is F_{T−} measurable.


… Poisson process,

E[N_t − N_s | F_s] = E[Σ_{i=1}^{C_{t−s}} U_i] = Σ_{k=1}^∞ E[Σ_{i=1}^{C_{t−s}} U_i | C_{t−s} = k] P(C_{t−s} = k) = μ Σ_{k=1}^∞ k P(C_{t−s} = k) = μλ(t − s). (53)

Therefore N_t − μλt is a martingale and the claim is shown.
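The compensator μλt of the compound Poisson process N_t = Σ_{i≤C_t} U_i can be checked by simulation: the sample mean of N_t should be close to μλt. A sketch, assuming NumPy; λ, μ, and t are arbitrary choices, and the U_i are taken exponential with mean μ for concreteness:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, mu, t, n_paths = 2.0, 0.5, 1.0, 50_000

counts = rng.poisson(lam * t, n_paths)            # C_t for each path
# N_t = sum of C_t i.i.d. exponential(mean mu) jump sizes per path
N_t = np.array([rng.exponential(mu, c).sum() for c in counts])
print(N_t.mean(), mu * lam * t)                   # both close to 1.0
```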

16. By direct computation,

λ(t) = lim_{h→0} (1/h) P(t ≤ T < t + h | T ≥ t) = lim_{h→0} (1 − e^{−λh})/h = λ. (54)

The case where T is Weibull is similar and omitted.
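The limit in (54) — the exponential distribution has constant hazard rate λ — can be checked numerically from the defining conditional probability. A sketch in pure Python; λ, t, and the sequence of h values are arbitrary:

```python
import math

def hazard_exponential(lam, t, h):
    """Discrete hazard P(t <= T < t+h | T >= t) / h for T ~ Exp(lam)."""
    surv = lambda u: math.exp(-lam * u)            # P(T > u)
    return (surv(t) - surv(t + h)) / (surv(t) * h)

lam = 1.7
for h in (1e-2, 1e-4, 1e-6):
    print(hazard_exponential(lam, 3.0, h))         # approaches 1.7 as h -> 0
```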

17. Observe that P(U > s) = P(T > 0, U > s) = exp{−μs} and P(T ≤ t, U > s) = P(U > s) − P(T > t, U > s). Then

P(t ≤ T < t + h | T ≥ t, U ≥ t) = P(t ≤ T < t + h, t ≤ U)/P(t ≤ T, t ≤ U) = ∫_t^{t+h} e^{−μt}(λ + θt)e^{−(λ+θt)u} du / exp{−λt − μt − θt^2}
= −[exp{−λ(t + h) − θt(t + h)} − exp{−λt − θt^2}] / exp{−λt − θt^2} = 1 − exp{−(λ + θt)h}, (55)

and

λ^#(t) = lim_{h→0} (1 − exp{−(λ + θt)h})/h = λ + θt. (56)

18. There exists a disjoint sequence of predictable times {T_n} such that A = {t > 0 : △⟨M, M⟩_t ≠ 0} ⊂ ∪_n [T_n] (see the discussion in the solution of Exercise 12 for details). In particular, every jump time of ⟨M, M⟩ is a predictable time. Let T be a predictable time such that △⟨M, M⟩_T ≠ 0. Let N = [M, M] − ⟨M, M⟩. Then N is a martingale with finite variation, since ⟨M, M⟩ is the compensator of [M, M]. Since T is predictable, E[N_T | F_{T−}] = N_{T−}. On the other hand, since {F_t} is a quasi-left-continuous filtration, N_{T−} = E[N_T | F_{T−}] = E[N_T | F_T] = N_T. This implies that △⟨M, M⟩_T = △[M, M]_T = (△M_T)^2. Recall that M itself is a martingale, so M_T = E[M_T | F_T] = E[M_T | F_{T−}] = M_{T−} and △M_T = 0. Therefore △⟨M, M⟩_T = 0, and ⟨M, M⟩ is continuous.

19. By Theorem 36, X is a special semimartingale. Then by Theorem 34 it has a unique decomposition X = M + A, where M is a local martingale and A is a predictable finite variation process. Let X = N + C be an arbitrary decomposition of X. Then M − N = C − A, which implies that A is the compensator of C. It suffices to show that a local martingale with finite variation has locally integrable variation. Set Y = M − N and Z_t = ∫_0^t |dY_s|. Let {S_n} be a fundamental sequence of Y and set T_n = S_n ∧ n ∧ inf{t : Z_t > n}. Then Y_{T_n} ∈ L^1 (see the proof of Theorem 38) and Z_{T_n} ≤ n + |△Y_{T_n}| ∈ L^1. Thus Y = M − N has locally integrable variation. Then C is the sum of two processes with locally integrable variation and the claim holds.


20. Let {T_n} be an increasing sequence of stopping times such that X^{T_n} is a special semimartingale, as in the statement. Then by Theorem 37, X^*_{t∧T_n} = sup_{s≤t} |X^{T_n}_s| is locally integrable; namely, there exists an increasing sequence of stopping times {R_n} such that (X^*_{t∧T_n})^{R_n} is integrable. Let S_n = T_n ∧ R_n. Then {S_n} is an increasing sequence of stopping times such that (X^*_t)^{S_n} is integrable. Then X^*_t is locally integrable and, by Theorem 37, X is a special semimartingale.

21. Since Q ∼ P, dQ/dP > 0 and Z > 0. Clearly M_t ∈ L^1(Q) if and only if M_tZ_t ∈ L^1(P). By the generalized Bayes formula,

E_Q[M_t | F_s] = E_P[M_tZ_t | F_s] / E_P[Z_t | F_s] = E_P[M_tZ_t | F_s] / Z_s,  t ≥ s. (57)

Thus E_P[M_tZ_t | F_s] = M_sZ_s if and only if E_Q[M_t | F_s] = M_s.

22. By Rao's theorem, X has a unique decomposition X = M + A, where M is a (G, P)-local martingale and A is a predictable process with paths of locally integrable variation. E[X_{t_{i+1}} − X_{t_i} | G_{t_i}] = E[A_{t_{i+1}} − A_{t_i} | G_{t_i}], so

sup_τ E[Σ_{i=0}^n |E[A_{t_i} − A_{t_{i+1}} | G_{t_i}]|] < ∞. (58)

Y_t = E[X_t | F_t] = E[M_t | F_t] + E[A_t | F_t]. Since E[E[M_t | F_t] | F_s] = E[M_t | F_s] = E[E[M_t | G_s] | F_s] = E[M_s | F_s], E[M_t | F_t] is a martingale. Therefore

Var_τ(Y) = E[Σ_{i=1}^n |E[E[A_{t_i} | F_{t_i}] − E[A_{t_{i+1}} | F_{t_{i+1}}] | F_{t_i}]|] = Σ_{i=1}^n E[|E[A_{t_i} − A_{t_{i+1}} | F_{t_i}]|]. (59)

For arbitrary σ-algebras F ⊂ G and X ∈ L^1,

E(|E[X | F]|) = E(|E(E[X | G] | F)|) ≤ E(E(|E[X | G]| | F)) = E(|E[X | G]|). (60)

Thus for every τ and t_i, E[|E[A_{t_i} − A_{t_{i+1}} | F_{t_i}]|] ≤ E[|E[A_{t_i} − A_{t_{i+1}} | G_{t_i}]|]. Therefore Var(X) < ∞ (w.r.t. {G_t}) implies Var(Y) < ∞ (w.r.t. {F_t}), and Y is an ({F_t}, P)-quasimartingale.

23. We introduce the following standard result without proof.

Lemma. Let N be a local martingale. If E([N, N]^{1/2}_∞) < ∞, or alternatively N ∈ H^1, then N is a uniformly integrable martingale.

This is a direct consequence of Fefferman's inequality (Theorem 52 in Chapter 4; see Chapter 4, Section 4 for the definition of H^1 and related topics). We also take the liberty of assuming the Burkholder-Davis-Gundy inequality (Theorem 48 in Chapter 4) in the following discussion.
Once we accept this lemma, it suffices to show that the local martingale [A, M] is in H^1. By the Kunita-Watanabe inequality, [A, M]^{1/2}_∞ ≤ [A, A]^{1/4}_∞ [M, M]^{1/4}_∞. Then by Hölder's inequality,

E([A, M]^{1/2}_∞) ≤ (E [A, A]^{1/2}_∞)^{1/2} (E [M, M]^{1/2}_∞)^{1/2}. (61)


By hypothesis, (E [A, A]^{1/2}_∞)^{1/2} < ∞. By the BDG inequality and the fact that M is a bounded martingale, E([M, M]^{1/2}_∞) ≤ c_1 E[M^*_∞] < ∞ for some positive constant c_1. This completes the proof.

24. Since we assume that the usual hypotheses hold throughout this book (see page 3), let F^0_t = σ(T ∧ s : s ≤ t) and redefine F_t by F_t = ∩_ε F^0_{t+ε} ∨ N. Since {T < t} = {T ∧ t < t} ∈ F_t, T is an F-stopping time. Let G = {G_t} be the smallest filtration such that T is a stopping time, that is, the natural filtration of the process X_t = 1_{{T ≤ t}}. Then G ⊂ F.
For the converse, observe that {T ∧ s ∈ B} = ({T ≤ s} ∩ {T ∈ B}) ∪ ({T > s} ∩ {s ∈ B}) ∈ G_s, since {s ∈ B} is ∅ or Ω, while {T ≤ s} ∩ {T ∈ B} ∈ G_s and {T > s} ∈ G_s. Therefore, for all t, T ∧ s (∀s ≤ t) is G_t measurable. Hence F^0_t ⊂ G_t, which shows F ⊂ G. (Note: we assume that G satisfies the usual hypotheses as well.)

25. Recall that {(ω, t) : △A_t(ω) ≠ 0} is a subset of a union of disjoint graphs of predictable times, and in particular we can assume that a jump time of a predictable process is a predictable time (see the discussion in the solution of Exercise 12). For any predictable time T such that E[△Z_T | F_{T−}] = 0,

△A_T = E[△A_T | F_{T−}] = E[△Z_T | F_{T−}] − E[△M_T | F_{T−}] = 0 a.s. (62)

26. Without loss of generality, we can assume that A_0 = 0, since E[△A_0 | F_{0−}] = E[△A_0 | F_0] = △A_0 = 0. For any finite valued stopping time S, E[A_S] ≤ E[A_∞], since A is an increasing process. Observe that A_∞ ∈ L^1 because A is a process with integrable variation. Therefore {A_S}_S is uniformly integrable and A is of class (D). Applying Theorem 11 (the Doob-Meyer decomposition) to −A, we see that M = A − Ã is a uniformly integrable martingale. Then

0 = E[△M_T | F_{T−}] = E[△A_T | F_{T−}] − E[△Ã_T | F_{T−}] = 0 − E[△Ã_T | F_{T−}] a.s. (63)

Therefore Ã is continuous at time T.

27. Assume A is continuous. Consider an arbitrary increasing sequence of stopping times {T_n} ↑ T, where T is a finite stopping time. M is a uniformly integrable martingale by Theorem 11 and the hypothesis that Z is a supermartingale of class (D). Then EA_T = EM_T − EZ_T < ∞, and in particular A_T ∈ L^1, with A_T ≥ A_{T_n} for each n. Therefore Doob's optional sampling theorem, Lebesgue's dominated convergence theorem, and the continuity of A yield

lim_n E[Z_T − Z_{T_n}] = lim_n E[M_T − M_{T_n}] − lim_n E[A_T − A_{T_n}] = −E[lim_n (A_T − A_{T_n})] = 0. (64)

Therefore Z is regular.
Conversely, suppose that Z is regular and assume that A is not continuous at time T. Since A is predictable, so are A_− and △A; in particular, T can be taken to be a predictable time. Then there exists an announcing sequence {T_n} ↑ T. Since Z is regular,

0 = lim_n E[Z_T − Z_{T_n}] = lim_n E[M_T − M_{T_n}] − lim_n E[A_T − A_{T_n}] = −E[△A_T]. (65)

Since A is an increasing process, △A_T ≥ 0; therefore △A_T = 0 a.s., a contradiction. Thus A is continuous.


31. Let T be an arbitrary F^μ stopping time and Λ = {ω : X_T(ω) ≠ X_{T−}(ω)}. Then by Meyer's theorem, T = T_Λ ∧ T_{Λ^c}, where T_Λ is a totally inaccessible time and T_{Λ^c} is a predictable time. By the continuity of X, Λ = ∅ and T_Λ = ∞. Therefore T = T_{Λ^c}. It follows that all stopping times are predictable and there is no totally inaccessible stopping time.

32. By Exercise 31, the standard Brownian space supports only predictable times, since Brownian motion is clearly a strong Markov Feller process. Since O = σ([S, T[ : S, T stopping times with S ≤ T) and P = σ([S, T[ : S, T predictable times with S ≤ T), if all stopping times are predictable then O = P.

35. E(M)_t ≥ 0 and E(M)_t = exp[B^τ_t − (1/2)(t ∧ τ)] ≤ e, so E(M) is a bounded local martingale and hence a martingale. If E(−M) were a uniformly integrable martingale, E(−M)_∞ would exist with E[E(−M)_∞] = 1. By the law of the iterated logarithm, exp(−B^τ_t − (1/2)(t ∧ τ)) 1_{{τ=∞}} → 0 a.s. Then

E[E(−M)_∞] = E[exp(−1 − τ/2) 1_{{τ<∞}}] ≤ e^{−1} < 1,

a contradiction. Hence E(−M) is not a uniformly integrable martingale.
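A numerical illustration of the mechanism behind Exercises 28 and 35: the stochastic exponential exp(B_t − t/2) keeps expectation 1 for every t, yet almost every path collapses to 0 as t grows, so the limit cannot close the martingale. A sketch, assuming NumPy; t and the path count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n_paths = 100.0, 10_000
B_t = rng.normal(0.0, np.sqrt(t), n_paths)
E_t = np.exp(B_t - t / 2)          # stochastic exponential at time t

frac_tiny = (E_t < 1e-6).mean()    # fraction of paths that have collapsed
print(frac_tiny)                   # very close to 1
```

The expectation E E_t = 1 is carried by an exponentially rare set of paths, which is exactly why uniform integrability fails in the limit.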
