12.07.2015 Views

inequality for geometric Brownian motion

inequality for geometric Brownian motion

inequality for geometric Brownian motion

SHOW MORE
SHOW LESS

Create successful ePaper yourself

Turn your PDF publications into a flip-book with our unique Google optimized e-Paper software.

J. Appl. Probab. Vol. 35, No. 4, 1998, (856-872)Research Report No. 330, 1995, Dept. Theoret. Statist. AarhusOptimal Stopping and Maximal Inequalities<strong>for</strong> Geometric <strong>Brownian</strong> MotionS. E. GRAVERSEN and G. PESKIRExplicit <strong>for</strong>mulas are found <strong>for</strong> the payoff and the optimal stopping strategy ofthe optimal stopping problem:supEmax X t 0 c0twhere X = (X t ) t0 is <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong> with drift and volatility > 0 , and the supremum is taken over all stopping times <strong>for</strong> X . The payoff isshown to be finite, if and only if < 0 . The optimal stopping time is given by:3 = inft > 0 j X t = g3max0st X swhere s 7! g3(s) is the maximal solution of the (nonlinear) differential equation:@g@s = K g1+1s 1 0g 1 (s > 0)under the condition 0 < g(s) < s , where 1 = 1 0 2= 2 and K = 1 2 =2c .The estimate is established: 101g3(s) K1 1 0 21=1s 101=1as s ! 1. Applying these results we prove the following maximal <strong>inequality</strong>:Emax0t X t2 + 22 exp 0( 2 02) 22 2E tsup X = 1 0 2t>02E() 0 1where may be any stopping time <strong>for</strong> X . This extends the well-known identity:and is shown to be sharp. The method of proof relies upon a smooth pasting guess(<strong>for</strong> the Stephan problem with moving boundary) and Itˆo-Tanaka’s <strong>for</strong>mula (beingapplied two-dimensionally). The key point and main novelty in our approach is themaximality principle <strong>for</strong> the moving boundary (the optimal stopping boundary is themaximal solution of the differential equation obtained by a smooth pasting guess).We think that this principle is by itself of theoretical and practical interest.1. IntroductionThe main purpose of the paper is to describe the structure and derive explicit <strong>for</strong>mulas <strong>for</strong>the payoff and the optimal stopping strategy in the optimal stopping problem associated with a<strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong>, where the gain is given by the maximum of the process, while theAMS 1980 subject classifications. Primary 60G40, 60J65, 34A34. Secondary 60G44, 60J60, 34A10.Key words and phrases: Optimal stopping, <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong>, the payoff, stopping time, Stephan’s problem with moving boundary,the principle of smooth fit, maximality principle <strong>for</strong> the optimal stopping boundary, Picard’s method of successive approximations, Itô-Tanaka’s<strong>for</strong>mula, Burkholder-Gundy’s <strong>inequality</strong>, Doob’s theorem, continuous local martingale. (Second edition) © goran@imf.au.dk1


cost is proportional to the duration of the observation time. The main interest <strong>for</strong> such a “regretless”class of optional stopping problems comes from option pricing theory (see [9], [6], [2]).The results obtained are used in the last section to derive a maximal <strong>inequality</strong> <strong>for</strong> <strong>geometric</strong><strong>Brownian</strong> <strong>motion</strong>. To the best of our knowledge it appears to be the first of its kind.The method of proof relies upon a smooth pasting guess (<strong>for</strong> the Stephan problem with movingboundary) and a maximality principle <strong>for</strong> the moving boundary (the optimal stopping boundaryturns out to be a maximal solution of the differential equation obtained from the smooth pastingguess). The smooth pasting guess allows us to apply Itô-Tanaka’s <strong>for</strong>mula (in a two-dimensionalsetting), while the maximality principle enables us to pick out the optimal stopping boundary fromamongst all possible ones in a unique way. It is this maximality principle which is the key pointand main novelty in our approach (compare with [5] and [9]), and it is not clear how the problemcould be solved without its use. It should be noted that the optimal stopping boundary found here isnontrivial, and it is probably impossible in a closed <strong>for</strong>m to find even a particular (nonzero) solutionof the differential equation it solves, thus showing the full power of the method. Moreover, wefeel that this maximality principle might be fulfilled in a similar setting involving other diffusionsas well, and there<strong>for</strong>e find it by itself of theoretical and practical interest.The problem is more precisely <strong>for</strong>mulated in the beginning of Section 2. The remaining partof Section 2 is devoted to the smooth pasting guess and <strong>for</strong>mulation of the maximality principle.In Section 3 we prove the existence and uniqueness of the maximal solution to the differentialequation obtained by the smooth pasting guess, and derive sharp estimates <strong>for</strong> this solution. Thesefacts are used in Section 4 where the proof is presented <strong>for</strong> the prescribed <strong>for</strong>m of the payoff and theoptimal stopping strategy. Applying these results in Section 5, we derive a new type of maximal<strong>inequality</strong> <strong>for</strong> <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong>.2. The maximality principleIn this section we shall introduce the setting and <strong>for</strong>mulate the main problem under consideration.Applying the so-called principle of smooth fit (smooth pasting) which was enunciated byA: Kolmogorov in the 1950’s and used later by many authors (see [5] and [9] <strong>for</strong> more details),we shall first guess a solution (up to the choice of the optimal stopping boundary satisfying a(nonlinear) differential equation obtained by the smooth pasting guess). Then we will <strong>for</strong>mulatethe maximality principle which will enable us to select the optimal stopping boundary in a uniqueway. The work started here will be completed in the following two sections. The next sectionis devoted to the differential equation itself, while the following section contains a proof that theprinciple of smooth fit and the maximality principle are indeed satisfied.1. Suppose we are given a <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong> X = (X t ) t0 with drift < 0 andvolatility > 0 , being defined on the probability space (;F; P ) , and X under P startsat x > 0 . Thus, we have:(2.1) X t = x exp B t +0 22t!, t 0where B = (B t ) t0 is standard <strong>Brownian</strong> <strong>motion</strong> which under P starts at 0 . The process Xsatisfies the stochastic differential equation:2


(2.2) dX t = X t dt + X t dB t .The infinitesimal operator of X in ]0;1[ equals:(2.3) L X = 22x2@2@x 2+ x @@x .Given s x > 0 , introduce the maximum process associated with X :(2.4) S t =max0rt X r_ s<strong>for</strong> t 0 . Then under P x;s := P the process (X; S) starts at (x; s) .2. The main problem under consideration in this paper is to find explicit <strong>for</strong>mulas <strong>for</strong> the payoffand the optimal stopping strategy in the optimal stopping problem:E x;s0S 0c1(2.5) V (x; s) := supP 0(2.6) sup Bt 0t1t>0 = e 02where the supremum is taken over all stopping times <strong>for</strong> X . Recalling Doob’s theorem:where ; > 0 , we easily find out that:E t(2.7) sup X = 1 0 2t>02<strong>for</strong> < 0 , while this expectation equals +1 if 0 . Thus V (x; s) < 1 <strong>for</strong> all < 0 .Moreover, we will obtain as a consequence of Theorem 4.1 below that the converse is also true. Inother words, we have V (x; s) < 1 if and only if < 0 . This explains why we do assume fromthe very beginning that the drift is strictly negative ( <strong>for</strong> 0 we have V (x; s) = +1 ).Finally, note that since V (x; s) E x;s (S 0 0 c 0) = s > 01 , by (2.7) we see that the supremumin (2.5) could equivalently be taken over all stopping times <strong>for</strong> X <strong>for</strong> which E() < 1 .3. In order to guess candidates <strong>for</strong> the payoff V (x; s) and the optimal stopping strategy 3 , weshould note that the state space of the Markov process (X; S) equals E = f (x; s) j 0 < x sg ,and inside E the process moves horizontally (only the first coordinate changes) when being off thediagonal x = s , while it could increase (in the second coordinate) only after hitting the diagonal.Moreover, if analyzing the structure of the expression <strong>for</strong> V (x; s) in (2.5) we may note that thereis a very strong intuitive argument <strong>for</strong> the existence of a point g3(s) 2 ]0; s[ ( <strong>for</strong> the givenvertical level s > 0 ) at which (or being on the left from it) we should stop the process instantly.Otherwise, we could wait too long (be<strong>for</strong>e stopping the process) which eventually would make thepayoff negative, due (in part) to its cost which is proportional to the duration of the observationtime and (on the other hand) to a rather moderate increase of the process X . ( Considering theexit times of X from small enough intervals around the given starting point x > 0 , one mayeasily find that V (x; x) > x . Thus we cannot stop the process at the diagonal, and there<strong>for</strong>e themap s 7! g3(s) is to lie strictly below the diagonal in E .) For the reasons just explained itseems reasonable to assume that the optimal stopping strategy <strong>for</strong> the problem (2.5) is of the <strong>for</strong>m:3


(2.8) 3 = inf8t > 0 j X t g3(S t )9G3 =80g 3(s); s1 j s > 09 E n fdiagonalg is the optimal stopping boundary. Thus,wherethe state space E of the process (X; S) is to be divided into two parts: the domain ofthe continued observationC3 = f (x; s) j 0 < g3(s) < x s g and the stopping domainD3 = f (x; s) j 0 < x g3(s) < s g . The border line between C3 and D3 is the optimalstopping boundary G3 which is to be determined.4. If our premises were true, then the payoff would be given by:(2.9) V (x; s) = E x;s0S3 0c3 1 .Since 3 may be viewed as the exit time of the “diffusion” (X; S) from an open set, and sincethe infinitesimal operator of (X; S) at each horizontal level g3(s) < x < s coincides with theinfinitesimal operator of X (at each such level only the first coordinate of (X; S) changes), bygeneral Markov process theory it is clear that V (x; s) is to satisfy the differential equation:(2.10) L X V (x; s) = c<strong>for</strong> x 2 ]g3(s); s[ . It is a Stephan problem with moving (free) boundary ( the boundary s 7! g3(s)of the problem is unknown as well ). In order to approach possible solutions to (2.10), we shoulddetermine some smoothness conditions at the boundaries. One condition is obvious: if the process(X; S) starts at the optimal stopping boundary G3 , we must stop instantly:(2.11) V (x; s)x=g3(s)+= s .The next condition might be viewed as its refinement. It is the condition of smooth pasting at theoptimal stopping boundary:(2.12)@V@x (x; s) = 0 .x=g3(s)+The last condition we shall make use of is the condition of normal reflection at the diagonal:(2.13)@V@s (x; s) x=s0 = 0 .5. In this context it is very instructive to look below into the proof of Theorem 4.1 in orderto see how (2.10)-(2.13) are matched together when applying Itô-Tanaka’s <strong>for</strong>mula to the processV3(X t ; S t ) <strong>for</strong> t 0 (see (4.16)+(4.17)). It should be noted that the candidate V3(x; s)(which will shortly be given) is twice continuously differentiable inEnG3 , while the smoothpasting condition (2.12) implies that x 7! V (x; s) at the vertical level s > 0 is convex andcontinuously differentiable at g3(s) . Thus, when the process (X; S) is “away” from the optimalstopping boundary G3 , Itô’s <strong>for</strong>mula can be applied. Moreover, when the process approaches the“dangerous” place g3(s) at the vertical level s > 0 , then only the first coordinate X of theprocess changes, and thus Itô-Tanaka’s <strong>for</strong>mula may be applied ( the measure associated with the4


first derivative, due to the smooth pasting condition, has no atoms).6. The equation (2.10) with L X from (2.3) is of Cauchy’s type:(2.14) 22 x2 y 00 + xy 0 = c .Since 1 := 1 0 2= 2 6= 0 , the general solution to (2.14) is easily verified to be given by:(2.15) y = Ax 1 0 2c1 2 log(x) + B .In other words, the payoff is of the <strong>for</strong>m:(2.16) V (x; s) = A(s)x 1 0 2c log(x) + B(s)12 <strong>for</strong>0 < g3(s) < x < s . By (2.11) and (2.12) we readily find:(2.17) A(s) =2c1 2 2 g3(s) 1(2.18) B(s) = s + 2c1 2 log g3(s)02c1 2 2 .Inserting this into (2.16) we get:(2.19) V (x; s) = 2c1 2 2 xg3(s)!10 logxg3(s)!10 1!<strong>for</strong> 0 < g3(s) < x < s . Moreover, it is clear from our reasoning above that V (x; s) = s<strong>for</strong> 0 < x g3(s) . Finally, by using condition (2.13) we easily find that the optimal stoppingboundary s 7! g3(s) is to satisfy the differential equation:(2.20) g 0 (s) = K g(s)1+1s 1 0g(s) 1 (s > 0)where K = 1 2 =2c . This is the maximum in<strong>for</strong>mation obtained by (2.10)-(2.13).7. In this way we have arrived at the crucial point of our approach: how to determine theoptimal stopping boundary s 7! g3(s) ? All we know so far is that g3(s) is to satisfy the equation(2.20). But how to choose the right solution from (2.20)? A priori this is not clear. In a similarcontext of [5] and [9], the authors were gifted enough to be able to pick up the right solution byguessing, but this approach breaks down here, since it is impossible to guess even a particular(nonzero) solution to (2.20) under our condition 1 > 1 ( see Section 3 ). For these reasons wewere <strong>for</strong>ced to search <strong>for</strong> an argument which would help us to select the right solution from (2.20).A very simple observation in this direction is already stated above when deriving (2.8). Namely, itis clear from the structure of the payoff that we shouldn’t wait too long be<strong>for</strong>e stopping the process.(By analytic methods it is possible to clarify this intuitive argument, and make it more quantitative,but <strong>for</strong> simplicity we shall refrain from presenting the details.) Such an argument leads us to theconclusion that the optimal stopping boundary s 7! g3(s) should be the maximal solution to (2.20)which lies below the diagonal. (In this context it is instructive to examine the last part of the proof+ s5


of Theorem 4.1 below. There, it is easily seen that we are <strong>for</strong>ced to choose the optimal stoppingboundary s 7! g3(s) as large as possible in value (see (4.18)+(4.20)+(4.21) and (4.22)+(4.24)).In such a way the expected waiting time E(3) to hit the boundary is minimal (see (4.20)).8. It is our pleasure to report that this maximality principle turns out to be true. We moreoverfeel that such a principle could be fulfilled in a similar setting <strong>for</strong> other diffusions as well, butwill not enter into such a discussion here. Instead, motivated by the maximality principle in ourparticular case, we shall in the following section devote our attention to the equation (2.20) itself.It is a first order nonlinear differential equation, and we were unable to find any disscusion of itin the existing literature.3. Existence and uniqueness of the maximal solution(3.1)1. Throughout we shall consider the first order nonlinear differential equation:@y@x = K y1+1x 1 0y 1 ( x > 0 )where K > 0 and 1 > 1 are given and fixed constants. Motivated by the maximality principleexplained in the preceding section, we shall show that there exists a maximal non-negative solutionto (3.1) which lies below the diagonal y = x . This solution is given by:(3.2) y3 = sup8 y j y solves (3.1) and satisfies 0 < y < x <strong>for</strong> all x > 09where the supremum is attained and there<strong>for</strong>e may be written as a maximum. A more constructiveway of obtaining y3 is to choose any x 0 > 0 and find a sequence of solutions y n to (3.1)satisfying y3(x 0 ) = lim n!1 y n (x 0 ) . Then we get:(3.3) y3(x) = limn!1 y n(x)<strong>for</strong> all x > 0 . (In the end of this section we indicate how to compute the value of y3 at anygiven point as exactly as desired.) Finally, we shall show that the following estimates are valid:(3.4) x1 +101 K11=1 y 1013(x) x K11=1x 101=1<strong>for</strong> all x > (101)=K1 ( the left-hand <strong>inequality</strong> holds <strong>for</strong> all x>0 as well ). These estimatesare sharp and the following consequences are valid: 1=1101(3.5) y3(x) x 101=1 ( x ! 1 )K1(3.6) lim1!1+ y 3;1(x) = 0 ( x > 0 )( compare (3.6) with the solution <strong>for</strong> the case 1=1 below ). To conclude we shall note that anysolution Y K to (3.1) with K > 0 given and fixed is self-similar in the following sense:(3.7) Y K (x) = 1 K Y 1(Kx)6


<strong>for</strong> all x > 0 . Thus in our proof below it is no restriction to assume in (3.1) that K = 1 .2. The proof of the facts just presented is essentially based upon a well-known theorem on theexistence and uniqueness of solutions <strong>for</strong> the first order nonlinear differential equations (of normal<strong>for</strong>m). For convenience we shall recall its statement. The initial value problem:(3.8)@y@x = f(x; y) ; y(x 0) = y 0has a unique solution defined on an interval containing x 0 , whenever f(x; y) and (@f=@y)(x; y)are continuous on some open rectangle containing (x 0 ; y 0 ) . The method of successive approximationsdue to Picard is used in its proof, and there<strong>for</strong>e this theorem will be referred to as Picard’stheorem below.First note that Picard’s theorem applies to the case of (3.1) above, whenever x 0 and y 0are taken strictly positive, and this explains the constructive way <strong>for</strong> obtainingy3 as describedfollowing (3.2) above. Moreover, to verify that y3 defines a solution to (3.1), it is enough to replacethe equation (3.1) by an equivalent integral equation and use monotone convergence theorem ( thesequence y n converging to y3 in (3.3) above may be taken increasing ). The arguments justdescribed are to be completed by showing that there exists at least one (global) solution to (3.1)satisfying 0 < y < x <strong>for</strong> all x > 0 . To show its existence, we find it convenient to replace theequation (3.1) under 0 < y < x with the equivalent system:(3.9)@z@x =1z2x 1 0z( 0 < z < x 1 )upon identifying z = y 1 . In order to show the global existence of a solution <strong>for</strong> (3.9), we shallmake use of the following maps:(3.10) w (x) = x 101being defined <strong>for</strong> all x > 0 with > 0 . Since w 0 (x) = (101)x 102 , in view of applyingPicard’s theorem, we shall determine those > 0 <strong>for</strong> which there exists x 0 () > 0 such that:(3.11)1 (w (x)) 2x 1 0w (x) (101) x102<strong>for</strong> all x x 0 () . From (3.11) we find that such ’s must satisfy:(3.12) 0 < 1101 + 1 x 01<strong>for</strong> all x x 0 () . Thus, <strong>for</strong> 0 < < (101)=1 given, we may take:(3.13) x 0 () = 1 0 1101 01and (3.11) will hold. Note that < x 0 () , thus w (x) < x 1<strong>for</strong> x x 0 () . Now applyingPicard’s theorem repeatedly and using the fact that z 7! 1z 2 =(x 1 0z) is increasing, the solutionz to (3.9) satisfying z(x 0 ()) = (x 0 ()) 101 can be continued <strong>for</strong> all x x 0 () to stay below7


3. We shall conclude this section with two supplements. The first is devoted to the equation(3.1) in the case 1 = 1 . We shall show that in this case there is no strictly positive solution whichstays below the diagonal. ( In the context of our main problem in Section 2, this corresponds tothe case when the drift equals zero and the payoff (2.5) is infinite ). In the second remark weshall indicate how estimates (3.4) can be used to compute the maximal solution y3 of (3.1) lyingbelow the diagonal as close to the exact values as desired. This is directly applied to the exactcomputation of the optimal stopping boundary s 7! g3(s) <strong>for</strong> our main problem in Section 2.4. The equation (3.1) <strong>for</strong> 1 = 1 is equivalently ( up to dividing by zero ) written as follows( <strong>for</strong> simplicity we shall assume that K = 1 as well ):(3.22) y +10 x y @y@x = 0 ( x > 0 ) .It is easily seen that this equation is not exact. However, multiplying through in (3.22) by(1=y) exp(1=y) , the equation obtained becomes exact, due to the identity:@(3.23)@yexp0 011y = exp0 01@xy @ 01 y0111 0 xy .Hence by a standard method we find that each (nonzero) solution y to (3.22) satisfies the <strong>for</strong>mula:(3.24) xe 1=y +Zye 1=tt 0tdt = C<strong>for</strong> some constant C and any t 0 > 0 . Thus, the equation (3.1) with 1 = 1 admits a closed<strong>for</strong>m <strong>for</strong> its solutions. (This is very likely false in the main case 1 > 1 treated above.)Moreover, let us show that there is no solution to (3.22) satisfying 0 < y < x <strong>for</strong> all x > 0 .Indeed, if so, since y 0 > 0 and thus y is increasing, by (3.24) we would have:(3.25) e 1=y 1= xe 1=y +Zye 1=ty 1tdt xe 1=x + log(y=y 1 ) x<strong>for</strong> all x 1 where y 1 = y(1) . Letting x ! 1 this leads to a contradiction.5. Here we indicate how estimates (3.4) can be used to approximate the value of y3 evaluatedat any given point as close as desired. Let x 0 and " > 0 be given and fixed. Suppose wewant to compute y3(x 0 ) within an error up to the size of " . For this, we shall first find apoint x " > x 0 such that the difference of the two bounds found in (3.4) when being evaluatedat x " is smaller than " . ( Note that this difference tends to zero when the argument tends toinfinity.) Next we shall solve (numerically) the differential equation (3.1) by going from x " tox 0 backward, firstly under the initial condition y(x " ) being equal to the left-hand bound in (3.4)evaluated at x " , and then to the right-hand bound, respectively. In this way we get two solutionsy 1 and y 2 of (3.1) satisfying:(3.26) y 1 (x) y3(x) y 2 (x)<strong>for</strong> allx 0 x x " . It remains to be noted from (3.1) that:9


(3.27) y 0 2(x) 0 y 0 1(x) 0<strong>for</strong> all x > 0 . Thus the difference x 7! y 2 (x) 0 y 1 (x) is increasing on ]0;1[ . There<strong>for</strong>e:(3.28) max8)9y 3(x 0 ) 0 y 1 (x 0 ) ; y 2 (x 0 ) 0 y3(x 0 y 2 (x 0 ) 0 y 1 (x 0 ) y 2 (x " ) 0 y 1 (x " ) " .Thus, either y 1 (x 0 ) or y 2 (x 0 ) is within " of y3(x 0 ) . This completes the claim.4. The payoff and the optimal stopping strategyIn this section we prove that the explicit <strong>for</strong>mulas <strong>for</strong> the payoff and the optimal stoppingstrategy guessed in Section 2 are correct. This establishes the validity of the principle of smoothfit and the maximality principle <strong>for</strong> the optimal stopping problem under consideration. The resultsobtained here are further applied in the following section. The fundamental result of the paper is<strong>for</strong>mulated in the theorem as follows.Theorem 4.1Let X = (X t ) t0 be <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong> with drift < 0 and volatility > 0 asdefined in (2.1), and let S = (S t ) t0 be the maximum process associated with X as defined in(2.4). Consider the optimal stopping problem with the payoff given by:(4.1) V (x; s) := sup Ex;s0S 0c1<strong>for</strong> s x > 0 given and fixed ( under P x;sthe process (X; S) starts at (x; s) ), where thesupremum is taken over all stopping times <strong>for</strong> X .1. Then V (x; s) < 1 if and only if < 0 . The optimal stopping strategy in (4.1) (thestopping time at which the supremum is attained) is given by the following <strong>for</strong>mula:(4.2) 3 = inf8t > 0 j X t g3(S t )9where s 7! g3(s) is the maximal solution of the differential equation:(4.3) g 0 (s) = K g(s)1+1s 1 (s > 0)0g(s) 1under the condition 0 < g(s) < s , with 1 = 1 0 2= 2given by the following <strong>for</strong>mula:(4.4) V (x; s) = 2c1 2 2 xg3(s)!10 logxg3(s)= s , if 0 < x g3(s) .and K = 1 2 =2c . The payoff is!10 1!+ s , if g3(s) < x s2. The optimal stopping boundary s 7! g3(s) is strictly increasing on ]0;1[ and satisfies0 < g3(s) < s <strong>for</strong> all s > 0 , as well as the following limiting conditions:10


(4.5) g3(0+) = g 0 3(+1) = 0(4.6) g 0 3(0+) = 1 .Moreover, the following estimates are shown to be valid:(4.7)s1 +101 K11=1 g 1013(s) s K11=1s 101=1<strong>for</strong> all s > (101)=K1 (the left-hand <strong>inequality</strong> holds <strong>for</strong> all s>0 as well). Finally, the optimalstopping strategy 3 is exponentially integrable, and the following estimate holds true:(4.8) E x;s exp 2 1 283! !1=2xg3(s)whenever g3(s) < x s . ( The left-hand side equals 1 <strong>for</strong> 0 < x g3(s) .)Proof. Since the facts from the second part of the theorem will be needed in the proof of thefirst part, we shall begin with the second part.For the second part it should be noted that (4.5)-(4.7) follow directly from our construction ofthe maximal solution which is presented in Section 3. ( The estimates (4.7) are nothing but theestimates (3.4) just rewritten, while (4.5)+(4.6) is easily deduced by (4.7).)To show (4.8) assume that g3(s) < x s are given and fixed. The key point is to note that:(4.9) 3 g 3(s) := inf8t > 0 j X t = g3(s)9since s 7! g3(s) is increasing. Moreover, by (2.1) we find:(4.10) g3(s) = inf8t > 0 j B t = at0b9:= a;bwith a = ( 2 02)=2 and b = log(x=g3(s)) 1= , both strictly positive. Finally, it is well-known(see [8], p.70) that we have:(4.11) E exp a22 a;b!= exp0 ab1.Now, matching (4.9)-(4.11) together, we get (4.8). This completes the proof <strong>for</strong> the second part.For the first part note that V (x; s) < 1 <strong>for</strong> < 0 follows by (2.7). To prove the converse,it is enough to show that V (x; s) = +1 when = 0 . For this, denote the payoff associatedwith the given by V (x; s) , and the corresponding optimal stopping boundary by g3;(s) .Then clearly V 0 (x; s) V (x; s) , while from (4.7) we find that lim "0 g3;(s) = 0 . There<strong>for</strong>efrom (4.4) (being proved below) we get lim "0 V (x; s) = +1 , thus proving the claim.It remains to show the validity of (4.2) and (4.4). For this, we shall denote the function on theright-hand side in (4.4) by V3(x; s) . It should be recalled that this function satisfies (2.10)-(2.13).Since clearly V3(x; s) s , we may write:11


(4.12) V (x; s) = sup Ex;s0S 0c1 sup Ex;s0)1S 0V3(X ; S supE x;s0V 3(X ; S )0c1 .+ supE x;s0V 3(X ; S )0c1Hence, to complete the proof, it is enough to show the following two facts:(4.13) supE x;s0V 3(X ; S )0c1 V 3(x; s)(4.14) E x;s0S3 0c3 1 = V3(x; s) .Proof of (4.13)+(4.14): By Itˆo’s <strong>for</strong>mula ( see Remark 4.2 below ) we have:(4.15) V3(X t ; S t ) = V3(X 0 ; S 0 ) +Zt+ 1 2Zt00@V3@x (X r; S r ) dX r +Zt@ 2 V3@x 2 (X r; S r ) d X; Xr0@V3@s (X r; S r ) dS rwhere we set (@ 2 V3=@x 2 )(g3(s); s) = 0 . By (2.2), (2.3), and the fact that dX; Xt = 2 X 2 t dt ,this can be written as follows:(4.16) V3(X t ; S t ) = V3(X 0 ; S 0 ) +ZtL X V3(X r ; S r ) dr +Zt+Zt00@V3@x (X r; S r ) X r dB r .0@V3@s (X r; S r ) dS rNext note that L X V3(x; s) = c <strong>for</strong> g3(s) < x < s and L X V3(x; s) = 0 <strong>for</strong> 0 < x g3(s) .Moreover, due to the normal reflection of X , the set of those r > 0 <strong>for</strong> which X r = S r isof Lebesgue measure zero. Finally, since the increment dS r equals 0 outside of the diagonalx = s , and V3(x; s) at the diagonal satisfies (2.13), we see that the second integral in (4.16) isidentically zero. These facts matched together in (4.16) show that:(4.17) V3(X ; S ) V3(x; s) + c + M <strong>for</strong> any (bounded) stopping time <strong>for</strong> X , where M = (M t ) t0is the local martingale given by:(4.18) M t =Zt0@V3@x (X r; S r ) X r dB r<strong>for</strong>t 0 . Moreover, this also shows that:(4.19) V3(X ; S ) = V3(x; s) + c + M .<strong>for</strong> any stopping time <strong>for</strong> X satisfying 3 .12


Since the suprema in (4.12) are equivalently taken over all bounded stopping times, and sinceby (4.8) we see that 3 has all moments finite, by (4.17) and (4.19) respectively, the proof of= 0(4.13) and (4.14) will be completed if we are able to show that:(4.20) E x;s0M1<strong>for</strong> any stopping time <strong>for</strong> X with all moments finite.To complete the proof, we shall show that (4.20) holds <strong>for</strong> any stopping time <strong>for</strong> whichthere is r > 1=2(101) such that E( r ) < 1 . So, let such a stopping time be given andfixed. Note from (4.4) that:(4.21)@V3(x; s) =2c@x!x 1010 1 1 2 g3(s) 1 x= 0 , if 0 < x g3(s) ., if g3(s) < x sThus, to get (4.20), by Burkholder-Gundy’s <strong>inequality</strong> <strong>for</strong> continuous local martingales (see [3]and [4]) it is enough to show that:(4.22) E x;sZ0 X1 rg3(S r ) 1 0 121fX rg3(S r)g dr!1=2:= I < 1 .(4.23) I E x;sZ0 2 K1101 S rdr!1=2K1101 E x;sS p .Now recall the left-hand estimate in (4.7). Since X r S r , this gives:Further, by the Hölder <strong>inequality</strong> we get:(4.24) I K1 E101 x;s0Sp11=pE x;s0q=211=q<strong>for</strong> any p > 1 with 1=p + 1=q = 1 . By Doob’s theorem (2.6) we easily find that:E(4.25) sup X tp< 1t>0provided that < 2 (10p)=2 . Thus, if p is taken close enough to 1 to satisfy the last <strong>inequality</strong>,then E x;s01Sp < 1 in (4.24). This imposes a condition on q which is easily verified to beq >1=(101) . Since by our hypothesis E( q=2 ) < 1 <strong>for</strong> all such q , this completes the proof.Remark 4.2Here we explain in more detail how to get (4.15) by means of Itˆo’s <strong>for</strong>mula. Note that (X; S)is a two-dimensional continuous semimartingale, where X is of diffusion type while S is anincreasing process. The state space of (X; S) is E = f(x; s) j 0 < x sg . The function V3(x; s)defined by (4.4) is twice continuously differentiable in E n80g 3(s); s1 j s > 09, while on each13


fixed vertical level s > 0 its restriction x 7! V3(x; s) is convex and continuously differentiableon ]0; s[ . These two facts together indicate that Itô’s <strong>for</strong>mula can be applied. Since away from thediagonal the process (X; S) moves only horizontally thus V3(X t ; S t ) can be controlled by the(one-dimensional) Itô-Tanaka <strong>for</strong>mula, while away from the optimal stopping boundary the functionV3(x; s) is C 2and there<strong>for</strong>e (two-dimensional) Itˆo <strong>for</strong>mula can be applied.In order to <strong>for</strong>malize this let us choose two continuous (increasing) maps s 7! g 1 (s) ands 7! g 2 (s) satisfying g3(s) < g 1 (s) < g 2 (s) < s <strong>for</strong> all s > 0 . Introduce the stopping times:(4.26) 1 = inf8t > 0 j X t g 1 (S t )9(4.27) 2 = inf8t > 0 j X t g 2 (S t )9and define 0 := 0 , 1 = 1 , 2 = 1 + 2 1 , 3 = 2 + 1 2 , 4 = 3 + 2 3 . . .Note that n " 1 . In order to prove (4.15) we only need to show that <strong>for</strong> each fixed t > 0we have <strong>for</strong> all k 0 :(4.28) V3(X t^ k; S t^ k) = V3(x; s) +Zt+Zt0@V3@s (X r; S r )1 [[0;k ]](r) dS r + 1 20@V3@x (X r; S r ) 1 [[0;k]](r) dX rZt0@ 2 V3@x 2 (X r; S r )1 [[0;k ]](r) d X; Xr .We prove this by induction. Fix t > 0 . For k = 0 it is trivial, so suppose it is true <strong>for</strong>k01 0 and consider the case k . First assume k is odd, and let ~V (x; s) be any C 2 -functionconiciding with V3(x; s) on G = f (x; s) 2 E j g 1 (s) xg ( such an extension evidently exists ).By applying the ordinary two-dimensional Itˆo’s <strong>for</strong>mula we get:(4.29) V3(X t^ k; S t^ k) 0 V3(X t^ k01; S t^ k01) = ~V3(X t^ k; S t^ k) 0 ~V3(X t^ k01; S t^ k01)=Zt0+ 1 2Zt0=Zt0+ 1 2Zt@ ~V3@x (X r; S r ) 1 ]]k01; k ]](r) dX r +Zt@ 2 ~V3@x 2 (X r; S r ) 1 ]]k01; k ]](r) dX; Xr@V3@x (X r; S r ) 1 ]]k01; k ]](r) dX r +Zt0@ 2 V3@x 2 (X r; S r ) 1 ]]k01; k ]](r) dX; Xr00@ ~V3@s (X r; S r ) 1 ]]k01; k ]](r) dS r@V3@s (X r; S r ) 1 ]]k01; k ]](r) dS rwhich proves the case k if k is odd. ( Observe that (X r ; S r ) 2 G if r 2 [ k01; k ] .)Now consider the case k even, and let (S (n)t ) t0 denote the right continuous increasingprocess defined by:(4.30) S (n)t =Xn2 nk=1k012 n 1 [(k01)=2 n ;k=2 n [(S t ) + n1 [n;1[(S t ) .By applying Itô-Tanaka’s <strong>for</strong>mula to the C 1 -convex function x 7! V3(x; s) <strong>for</strong> different s > 0we get ( r 7! S r is constant on [[ k01; k ]] ):14


(4.31) V3(X t^ k; S t^ k)0V3(X t^ k01; S t^ k01) = limn!1V3(X t^ k; S (n)t^ k)0V3(X t^ k01; S (n)t^ ) k01X n2 n 01= limn!11 fS (n) k01g =j=2nj=0= limn!1V3(X t^ k; S (n)k01) 0 V3(X t^ k01; S (n)X n2 n 01= limn!1= limn!1=Zt0=Zt0j=0+ 1 2ZtZt01 fS (n) k01 =j=2n g0+ 1 2Ztk01)) V3(X t^ k; j=2 n ) 0 V3(X t^ k01; j=2 nZt0@V3@x (X r; j=2 n ) 1 ]]k01; k ]](r) dX rd !@ 2 V3@x 2 (X r; j=2 n ) 1 ]]k01; k ]](r) X; Xr@V3@x (X r; S (n) k01)) 1 ]]k01; k ]](r) dX r0d !@ 2 V3@x 2 (X r; S (n)k01) 1 ]]k01; k]](r) X; Xr@V3@x (X r; S k01)) 1 ]]k01; k ]](r) dX r+ 1 2Zt0@ 2 V3@x 2 (X r; S k01) 1 ]]k01; k ]](r) dX; Xr@V3@x (X r; S r ) 1 ]]k01; k ]](r) dX r+ 1 2Zt0@ 2 V3@x 2 (X r; S r ) 1 ]]k01; k ]](r) d X; XrThe limit identification in the penultimate equality is, <strong>for</strong> the first integral just an application of thecontinuity of @V3=@x and properties of the stochastic integral with respect to X t , while <strong>for</strong> thesecond integral we use the fact that @ 2 V3=@x 2 is locally bounded and <strong>for</strong> almost all ! we haved X; Xr (!) 0 j X r(!) = g3(S r (!))g is zero.Remark 4.3It should be observed in the proof of (4.2)+(4.4) above that up to (4.19) we didn’t make any useof the specific <strong>for</strong>m of the optimal (maximal) stopping boundary s 7! g3(s) and the correspondingpayoff V3(x; s) . In other words, those arguments work out <strong>for</strong> any candidate <strong>for</strong> the payoff V (x; s)satisfying (2.10)-(2.13) associated with the stopping boundary s 7! g(s) satisfying (2.20). Thekey point about the optimal (maximal) stopping boundary s 7! g3(s) is to show (4.20). To obtainthis, it is clearly seen from (4.22) that we should look <strong>for</strong> the maximal s 7! g(s) satisfying (2.20)which stays below the diagonal. This offers a somewhat loose but indicative analytic argument<strong>for</strong> the maximality principle to hold.15


Remark 4.4The referee kindly pointed out that the stopping times (4.2), which appear optimal in ourproblem, have been studied earlier. Due to their significant role in the Skorokhod embeddingproblem [1], such stopping times are sometimes referred to as Azéma-Yor stopping times. We alsolearned that Jacka [7] makes use of these stopping times in optimal stopping.5. The maximal <strong>inequality</strong>In this section we shall apply the results obtained in the previous section and derive a maximal<strong>inequality</strong> <strong>for</strong> <strong>geometric</strong> <strong>Brownian</strong> <strong>motion</strong>. To the best of our knowledge this <strong>inequality</strong> has notbeen recorded earlier.Theorem 5.1Let B = (B t ) t0 be standard <strong>Brownian</strong> <strong>motion</strong>, and let < 0 and > 0 be given andfixed. Then the <strong>inequality</strong> is valid:(5.1) E max0t exp B t +0 22whenever is a stopping time <strong>for</strong> B .Proof. Recalling (2.1)+(2.4) and (4.1), we see that:(5.2) E x;x0S1 cE() + V (x; x)t! 1 1 0 22 + 22 exp 0( 2 02) 22 2 E() 0whenever is a stopping time <strong>for</strong> B (with finite expectation) being given and fixed. By (4.4)and (4.7) we obtain the estimate:(5.3) V (x; x) = 2c1 2 2 xg3(x)!10 logxg3(x)!10 1! 1101 x + 2c 2c (101)1 2 2 log 11 2 2 x+ x<strong>for</strong> allx > 2c(101)=1 2 2 . Introduce the function:(5.4) F (c) = cE() + 1101 x + 2c1 2 2 log 2c (101)1 2 2 1Then from (5.2) and (5.3) we find:(5.5) E x;x0S1 inf F (c) .c>0x.Solving F 0 (c) = 0 we find that Fattains its minimal value on ]0;1[ at the point: 1(5.6) c3 = x 12 22(101) exp 01 2 2E() 0 .216


( Note that x > 2c3(101)=1 2 2 as needed.) Moreover, it is easily computed that:(5.7) F (c3) = x10 1101 101 exp 01 2 22Recalling that 1 = 1 0 2= 2 and inserting (5.7) into (5.5) gives:(5.8) E x;x0S1 x 1 0 22 + 22 exp 0( 2 02) 2By the representation (2.1) this reduces to (5.1), and the proof is complete.E() 0 1!2 2 E() 0 1!..Remark 5.2The <strong>inequality</strong> (5.1) might be viewed as a refinement and extension of the equality (2.7)obtained by Doob’s theorem (2.6).Remark 5.3We find it interesting to record the following consequence of Theorem 4.1 ( the payoff is finiteif and only if the drift is strictly negative):(5.9) supEmax exp B t +0 20t 2which is to be compared with the well-known fact:(5.10) supEexpB +0 22t0 c!= +10 c!= 1 .( The supremums in (5.9) and (5.10) are taken over all stopping times <strong>for</strong> B .)From the proof of Theorem 4.1 it is clear what sequence of stopping times n <strong>for</strong> B thatthe supremum in (5.9) over equals +1 . If s 7! g3;(s) denotes the optimal stopping boundary<strong>for</strong> (4.1) with drift , and 3; denotes the corresponding optimal stopping strategy, then it isgood enough to take n equal to:(5.11) 3;01=n = inf8 t > 0 j X t g3;01=n(S t ) g<strong>for</strong> all n 1 . (Here X t and S t are given by (2.1) and (2.4) with any s x > 0 .) We were notable to find a simpler sequence of stopping times over which the supremum in (5.9) would be +1 .REFERENCES[1] AZEMA, J: and YOR, M: (1979): Une solution simple au probleme de Skorokhod: Sem:Probab: XIII, Lecture Notes in Math: 721, Springer (90-115).[2] BLACK, F: and SCHOLES, M: (1973): The pricing of options and corporate liabilities: J:Political Economy 81 (637-659).[3] BURKHOLDER, D: L: and GUNDY, R: F: (1970): Extrapolation and interpolation of quasi-17


linear operators on martingales: Acta Math: 124 (249-304).[4] DUBINS, L: E: and SCHWARZ, G: (1965): On continuous martingales: Proc: Nat: Acad: Sci:U.S.A: 53 (913-916).[5] DUBINS, L: E:, SHEPP, L: A: and SHIRYAEV, A: N: (1993): Optimal stopping rules andmaximal inequalities <strong>for</strong> Bessel processes: Theory Probab: Appl: 38 (226-261).[6] GRAVERSEN, S. E. and PESKIR, G. (1995). On the Russian option: The expected waitingtime. Research Report No. 325, Dept. Theoret. Statist. Aarhus, (11 pp). Theory Probab.Appl. 42, 1997 (564-575).[7] JACKA, S: D: (1991): Optimal stopping and best constants <strong>for</strong> Doob-like inequalities I: Thecase p = 1 : Ann: Probab: 19 (1798-1821).[8] REVUZ, D: and YOR, M: (1991): Continuous Martingales and <strong>Brownian</strong> Motion: Springer-Verlag.[9] SHEPP, L: A: and SHIRYAEV, A: N: (1993): The Russian option: Reduced regret: Ann:Appl: Probab: 3 (631-640).Svend Erik GraversenDepartment of Mathematical SciencesUniversity of Aarhus, DenmarkNy Munkegade, DK-8000 Aarhusmatseg@imf.au.dkGoran PeskirDepartment of Mathematical SciencesUniversity of Aarhus, DenmarkNy Munkegade, DK-8000 Aarhushome.imf.au.dk/gorangoran@imf.au.dk18

Hooray! Your file is uploaded and ready to be published.

Saved successfully!

Ooh no, something went wrong!