Local Behaviour of First Passage Probabilities

R. A. Doney

First version: 12 June 2010

Research Report No. 1, 2010, Probability and Statistics Group
School of Mathematics, The University of Manchester


2 Results

Notation. In what follows the phrase "S is an a.s.r.w." (asymptotically stable random walk) will have the following meaning. $S = (S_n;\ n\ge 0)$ is a 1-dimensional random walk with $S_0 = 0$ and $S_n = \sum_{r=1}^{n}X_r$ for $n\ge 1$, where $X_1, X_2,\dots$ are i.i.d. with $F(x) = P(X_1\le x)$. Moreover:

- $S$ is either non-lattice, or it takes values on the integers and is aperiodic;
- there is a monotone increasing continuous function $c(t)$ such that the process defined by $X^{(n)}_t = S_{[nt]}/c_n$ converges weakly as $n\to\infty$ to a stable process $Y = (Y_t;\ t\ge 0)$;
- the process $Y$ has index $\alpha\in(0,2]$, and $\rho := P(Y_1>0)\in(0,1)$.

Remark 1. The case $\alpha\rho = 1$, $\alpha\in(1,2]$, is the spectrally negative case, and we will sometimes need to treat this case separately. If $\alpha\rho < 1$ then $\alpha < 2$ and the Lévy measure of $Y$ has a density equal to $c_+x^{-(\alpha+1)}$ on $(0,\infty)$ with $c_+ > 0$, and then we can also assume that the norming sequence satisfies
$$n\overline F(c_n)\to 1\ \text{as}\ n\to\infty. \qquad (7)$$
But if $\alpha\rho = 1$ we will have
$$n\overline F(c_n)\to 0\ \text{as}\ n\to\infty. \qquad (8)$$

Here are our main results, where we recall that $h_y(\cdot)$ denotes the density function of the passage time over level $y > 0$ of the process $Y$. We will also adopt the convention that both $x$ and $\Delta$ are restricted to the integers in the lattice case.

Theorem 2. Assume that $S$ is an a.s.r.w. Then

(A) uniformly for $x$ such that $x/c_n\to 0$,
$$P(T_x = n)\sim U(x)P(T = n)\sim \rho\,U(x)\,n^{-(1+\rho)}L(n)\ \text{as}\ n\to\infty. \qquad (9)$$

(B) uniformly in $x_n := x/c(n)\in[D^{-1},D]$, for any $D>1$,
$$P(T_x = n)\sim n^{-1}h_{x_n}(1)\ \text{as}\ n\to\infty. \qquad (10)$$

If, in addition, $\alpha\rho < 1$, and
$$f_x := P(S_1\in[x,\ x+\Delta))\ \text{is regularly varying as}\ x\to\infty, \qquad (11)$$
then

(C) uniformly for $x$ such that $x/c_n\to\infty$,
$$P(T_x = n)\sim \overline F(x)\ \text{as}\ n\to\infty. \qquad (12)$$
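The quantities appearing in Theorem 2 can be computed exactly for small lattice examples. The sketch below is not from the paper: it takes the simple symmetric walk (a concrete a.s.r.w. with $\alpha = 2$, $\rho = 1/2$) as a hypothetical example, computes the law of $T_x = \min\{n : S_n > x\}$ by dynamic programming, and checks the case $x = 0$ against the classical first-passage probabilities of the simple random walk.

```python
from fractions import Fraction
from math import comb

def first_passage_law(x, n_max, steps):
    """Exact law of T_x = min{n : S_n > x} for a lattice walk with
    step distribution `steps`, by dynamic programming on the states <= x."""
    dist = {0: Fraction(1)}          # dist[s] = P(S_n = s, T_x > n)
    law = {}
    for n in range(1, n_max + 1):
        new, crossed = {}, Fraction(0)
        for s, pr in dist.items():
            for step, q in steps.items():
                s2 = s + step
                if s2 > x:
                    crossed += pr * q        # first passage occurs at time n
                else:
                    new[s2] = new.get(s2, Fraction(0)) + pr * q
        law[n], dist = crossed, new
    return law

# Simple symmetric walk: P(X = +1) = P(X = -1) = 1/2.
steps = {1: Fraction(1, 2), -1: Fraction(1, 2)}
law = first_passage_law(0, 9, steps)

# Classically P(T_0 = 2k-1) = binom(2k-1, k) / ((2k-1) 2^(2k-1)).
for k in range(1, 5):
    n = 2 * k - 1
    assert law[n] == Fraction(comb(n, k), n * 2 ** n)
```

For $x = 0$ the even times carry no mass, and the odd-time masses $1/2, 1/8, 1/16, 5/128,\dots$ decay like $n^{-3/2}$, in line with $P(T = n)\sim\rho n^{-(1+\rho)}L(n)$ for $\rho = 1/2$.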


From this we get immediately a strengthening of (2).

Corollary 3. If $S$ is an a.s.r.w., the estimate
$$P(T_x>n)\sim U(x)\,n^{-\rho}L(n)\quad\text{as}\ n\to\infty$$
holds uniformly as $x/c_n\to 0$.

Remark 4. In view of (3) the result (10) might seem obvious. However (3) could also be written as $P(\max_{r\le n}S_r\le x)\sim\int_0^{x_n}m(y)\,dy$, where $m$ denotes the density function of $\sup_{t\le 1}Y_t$, and in the recent paper [22] Wachtel has shown that the obvious local version of this is only valid under an additional hypothesis.

Remark 5. The asymptotic behaviour of $h_x(1)$ has been determined in [16], and is given by
$$h_x(1)\sim k_1x^{\alpha\rho}\ \text{as}\ x\downarrow 0,\qquad h_x(1)\sim k_2x^{-\alpha}\ \text{as}\ x\to\infty.$$
(We mention here that $k_1, k_2,\dots$ will denote particular fixed positive constants, whereas $C$ will denote a generic positive constant whose value can change from line to line.) It is therefore possible to compare the exact results in (9) and (12) with the behaviour of $n^{-1}h_{x_n}(1)$ when $x/c_n\to 0$ or $x/c_n\to\infty$. It turns out that the ratio of the two can tend to 0, or a finite constant, or oscillate, depending on the s.v. functions involved and the exact behaviour of $x$, except in the special case that $c_n\sim Cn^{\gamma}$. (Here, and throughout, we write $\gamma = 1/\alpha$.) In fact, if $\overline F(x)\sim C/(x^{\alpha}L_0(x))$, one can check that, when $x/c_n\to\infty$,
$$\frac{n\overline F(x)}{h_{x_n}(1)}\sim\frac{L_0(c_n)}{L_0(x)},$$
and of course $L_0$ is asymptotically constant only in the aforementioned special case. Similarly, the RHS of (9) only has the same asymptotic behaviour as $n^{-1}h_{x_n}(1)$ in this same special case.

Remark 6. In the spectrally negative case $\alpha\rho = 1$, without further assumptions we know little about the asymptotic behaviour of $\overline F$, so it is clear that (C) doesn't generally hold in this case, and indeed it is somewhat surprising that parts (A) and (B) do hold.

3 Preliminaries

Throughout this section it will be assumed that $S$ is an a.s.r.w. With $(\tau_0, H_0) := (0,0)$ we write $(\tau, H) = ((\tau_n, H_n);\ n\ge 0)$ for the bivariate renewal process of strict ladder times and heights, so that $\tau_1 = T$ and $H_1 = S_T$ is the first ladder height; we also write $\tau$ and $H$ for $\tau_1$ and $H_1$. It is known that there are sequences $a_n$ and $b_n$ such that $(\tau_n/a_n,\ H_n/b_n)$ converges in distribution to a bivariate law whose marginals are positive stable laws with parameters $\rho$ and $\alpha\rho$.


(ii) Since the probability of the limiting conditioning event is positive, this follows from the weak convergence of $X^{(n)}$ to $Y$.

Until further notice we assume we are in the lattice case, and $x, y, z$ will be assumed to take non-negative integer values only. We will write
$$g(m,y) = \sum_{n=0}^{\infty}P(\tau_n = m,\ H_n = y)\quad\text{and}\quad g(y) = \sum_{n=0}^{\infty}P(H_n = y)$$
for the bivariate renewal mass function of $(\tau, H)$ and the renewal mass function of $H$, respectively. Our proofs are based on the following obvious representation:
$$P(T_x = n+1) = \sum_{y\ge 0}P(T_x>n,\ S_n = x-y)\,\overline F(y),\quad x\ge 0,\ n>0. \qquad (16)$$
To exploit this we need good estimates of $P(T_x>n,\ S_n = x-y)$, and we derive these from the formula
$$P(S_n = x-y,\ T_x>n) = \sum_{z=0}^{y\wedge x}\sum_{r=0}^{n}g(r,\,x-z)\,g^*(n-r,\,y-z), \qquad (17)$$
where $g^*$ denotes the bivariate mass function in the weak downgoing ladder process of $S$. Formula (17), which extends a result originally due to Spitzer, follows by decomposing the event on the LHS according to the time and position of the maximum, and using the well-known duality result
$$g^*(m,u) = P(S_m = -u,\ \tau>m). \qquad (18)$$
(See Lemma 2.1 in [4].) Of course we also have
$$g(m,u) = P(S_m = u,\ \sigma>m), \qquad (19)$$
where $\sigma$ denotes the first weak downgoing ladder epoch, and our main tool in estimating $P(S_n = x-y,\ T_x>n)$ will be the following estimates for $g$ and $g^*$. The results for $g$ are established in [21], where they are stated as estimates for the conditional probability $P(S_m = u\mid\sigma>m)$. (See Theorems 5 and 6 in [21].) The results for $g^*$ can be derived by applying those results to $-S$, and then using the calculation given on page 100 of [3] to deduce the result for the weak ladder process. Recall that $V(x) = \sum_{m=0}^{\infty}\sum_{u=0}^{x}g^*(m,u)$, and write $f$ for the density of $Y_1$, where $Y$ is the limiting stable process. Also $p$ and $\tilde p$ stand for the densities of $Z^{(1)}_1$ and $\tilde Z^{(1)}_1$, the stable meanders of length 1 at time 1 corresponding to $Y$ and $-Y$.

Lemma 10. Uniformly in $x\ge 1$ and $x\ge 0$, respectively,
$$\frac{c_n\,g(n,x)}{P(\sigma>n)} = p(x/c_n)+o(1)\quad\text{and}\quad\frac{c_n\,g^*(n,x)}{P(\tau>n)} = \tilde p(x/c_n)+o(1)\quad\text{as}\ n\to\infty. \qquad (20)$$
Also, uniformly as $x/c_n\to 0$,
$$g(n,x)\sim\frac{f(0)U(x-1)}{nc_n}\ \text{for}\ x\ge 1\quad\text{and}\quad g^*(n,x)\sim\frac{f(0)V(x)}{nc_n}\ \text{for}\ x\ge 0. \qquad (21)$$
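The dualities (18) and (19) can be checked by brute force for short walks. The following sketch (the asymmetric step law is arbitrary test data, not taken from the paper) enumerates all $\pm 1$ paths of length $m$ and compares the renewal mass of a strict ascending (resp. weak descending) ladder point at time $m$ with the corresponding probability for the walk staying positive (resp. non-positive):

```python
from fractions import Fraction
from itertools import product

p, q = Fraction(2, 5), Fraction(3, 5)    # P(X = +1), P(X = -1): hypothetical values
m = 8

def paths(m):
    """Yield (walk, probability) over all +-1 paths of length m."""
    for signs in product((1, -1), repeat=m):
        prob = Fraction(1)
        walk = [0]
        for s in signs:
            prob *= p if s == 1 else q
            walk.append(walk[-1] + s)
        yield walk, prob

for u in range(0, m + 1):
    # g(m,u): (m,u) is a strict ascending ladder point.
    g = sum(pr for w, pr in paths(m)
            if w[m] == u and all(w[j] < u for j in range(m)))
    # Duality (19): g(m,u) = P(S_m = u, S_j > 0 for 1 <= j <= m).
    dual_g = sum(pr for w, pr in paths(m)
                 if w[m] == u and all(w[j] > 0 for j in range(1, m + 1)))
    assert g == dual_g
    # g*(m,u): (m,-u) is a weak descending ladder point.
    gs = sum(pr for w, pr in paths(m)
             if w[m] == -u and all(w[j] >= -u for j in range(m)))
    # Duality (18): g*(m,u) = P(S_m = -u, S_j <= 0 for 1 <= j <= m).
    dual_gs = sum(pr for w, pr in paths(m)
                  if w[m] == -u and all(w[j] <= 0 for j in range(1, m + 1)))
    assert gs == dual_gs
```

The check rests only on time reversal of the increments, so it works for any (not necessarily symmetric) step law.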


From this we can deduce the following result, which is a minor extension of Lemma 20 in [21].

Lemma 11. Given any constant $C_1$ there exists a constant $C_2$ such that for all $n\ge 1$ and $0\le x\le C_1c_n$,
$$g(n,x)\le\frac{C_2U(x)}{nc_n}\quad\text{and}\quad g^*(n,x)\le\frac{C_2V(x)}{nc_n}. \qquad (22)$$

Proof. Observe that on any interval $[c_n, C_1c_n]$, $p(x/c_n)$ is bounded, and $U(x)\ge U(c_n)$, so by Lemma 7 we see that the ratio
$$\frac{P(\sigma>n)/c_n}{U(x)/(nc_n)} = \frac{nP(\sigma>n)}{U(x)}\le\frac{nP(\sigma>n)}{U(c_n)}$$
is also bounded above. A similar proof works for $g^*$.

Just as these local estimates for the distribution of $S_n$ on the event $\sigma>n$ played a crucial rôle in the proof of (5) in [21], we need similar information on the event $T_x>n$. This is given in the following result, where for $x>0$ we write $q_x(\cdot)$ for the density of the measure $P(x-Y_1\in\cdot\,,\ \sup_{t\le 1}Y_t<x)$. We also write $x/c_n = x_n$ and $y/c_n = y_n$.

Proposition 12. (i) Uniformly as $x_n\vee y_n\to 0$,
$$P(S_n = x-y,\ T_x>n)\sim\frac{U(x)f(0)V(y)}{nc_n}. \qquad (23)$$
(ii) For any $D>1$, uniformly for $y_n\in[D^{-1},D]$,
$$P(S_n = x-y,\ T_x>n)\sim\frac{U(x)P(\tau>n)\tilde p(y_n)}{c_n}\quad\text{as}\ n\to\infty\ \text{and}\ x_n\to 0, \qquad (24)$$
and uniformly for $x_n\in[D^{-1},D]$,
$$P(S_n = x-y,\ T_x>n)\sim\frac{V(y)P(\sigma>n)p(x_n)}{c_n}\quad\text{as}\ n\to\infty\ \text{and}\ y_n\to 0. \qquad (25)$$
(iii) For any $D>1$, uniformly for $x_n\in[D^{-1},D]$ and $y_n\in[D^{-1},D]$,
$$P(S_n = x-y,\ T_x>n)\sim\frac{q_{x_n}(y_n)}{c_n}\quad\text{as}\ n\to\infty. \qquad (26)$$

The proof of this result is given in the next section. We can repeat the argument used in Lemma 11 to get the following corollary.

Corollary 13. Given any constant $C_1$ there exists a constant $C_2$ such that for all $n\ge 1$ and $0\le x\le C_1c_n$, $0\le y\le C_1c_n$,
$$P(S_n = x-y,\ T_x>n)\le\frac{C_2U(x)f(0)V(y)}{nc_n}.$$


It is apparent that we will also need information about the behaviour of $g(n,x)$, or equivalently of $P(S_n = x,\ \sigma>n)$, in the case $x/c_n\to\infty$. Fortunately this has been obtained recently in [19], and we quote Propositions 11 and 12 therein as (28) and (30). The related unconditional results (27) and (29) have been proved in special cases in [12], [13], and [19], and the general results can be deduced from Theorem 2.1 of [9].

Proposition 14. If $S$ is an a.s.r.w. with $\alpha\rho<1$ then, uniformly for $x$ such that $x/c_n\to\infty$,
$$P(S_n>x)\sim nP(S_1>x)\ \text{as}\ n\to\infty,\ \text{and} \qquad (27)$$
$$P(S_n>x,\ \sigma>n)\sim\rho^{-1}P(S_n>x)P(\sigma>n)\ \text{as}\ n\to\infty. \qquad (28)$$
If, additionally, (11) holds, then
$$P(S_n\in[x,\ x+\Delta))\sim nf_x\ \text{as}\ n\to\infty \qquad (29)$$
and
$$P(S_n\in[x,\ x+\Delta),\ \sigma>n)\sim\rho^{-1}nf_xP(\sigma>n)\ \text{as}\ n\to\infty. \qquad (30)$$

3.1 Some identities for stable processes

Proposition 15. (i) For any stable process $Y$ which has $\alpha\rho<1$ there are positive constants $k_7$ and $k_8$ such that the following identities hold:
$$h_x(t) = k_7\int_0^{\infty}q_x(t,w)\,w^{-\alpha}\,dw,\ \text{and} \qquad (31)$$
$$q_u(v) = k_8\int_0^{1}\int_0^{u\wedge v}u(t,\,u-z)\,u^*(1-t,\,v-z)\,dz\,dt, \qquad (32)$$
where $q_x(t,\cdot)$ denotes the analogue of $q_x(\cdot)$ at time $t$, and $u$ and $u^*$ denote the bivariate renewal densities for the increasing ladder processes of $Y$ and $-Y$.
(ii) If $\alpha\rho = 1$ there is a positive constant $k_9$ such that
$$p(x) = k_9h_x(1). \qquad (33)$$

Proof. All three results are special cases of results for Lévy processes. The general version of (31) is given in [15], and (32) follows from the following observation, which is a minor extension of Theorem 20, p. 176 of [5]. Assume $X$ is a Lévy process which is not compound Poisson. Then there is a constant $k_7>0$ such that for $x>0$ and $w<x$,
$$P(X_t\in dw,\ t<T_x)\,dt = k_7\int_{y=w^+}^{x}\int_{s=0}^{t}U(ds,dy)\,U^*(dt-s,\,y-dw), \qquad (34)$$
where $U$ and $U^*$ are the bivariate renewal measures in the increasing ladder processes of $X$ and $-X$. Clearly it suffices to prove that the Laplace transform,


in $t$, of the LHS of (34) is the same as that of the RHS, which is
$$k_7\int_{y=w^+}^{x}\int_{t=0}^{\infty}e^{-qt}\int_{s=0}^{t}U(ds,dy)\,U^*(dt-s,\,y-dw) = k_7\int_{y=w^+}^{x}\int_{t=0}^{\infty}e^{-qt}U(dt,dy)\int_{s=0}^{\infty}e^{-qs}U^*(ds,\,y-dw). \qquad (35)$$
Note that if we introduce an independent Exp(q) random variable $e_q$, the Wiener-Hopf factorisation allows us to write $X_{e_q}\overset{d}{=}S_{e_q}-\tilde S^*_{e_q}$, where $S$ denotes the supremum process of $X$ and $\tilde S^*$ denotes an independent copy of the supremum process $S^*$ of $-X$. Let $\kappa$ and $\kappa^*$ denote the bivariate Laplace exponents of the ladder processes of $X$ and $-X$. Then, using the identity $\kappa(q,0)\kappa^*(q,0) = q/k_7$, which follows from the Wiener-Hopf factorisation (see e.g. (3), p. 166 of [5]),
$$\int_{t=0}^{\infty}e^{-qt}P(X_t\in dw,\ t<T_x)\,dt = q^{-1}P(X_{e_q}\in dw,\ e_q<T_x) = q^{-1}P(S_{e_q}\le x,\ S_{e_q}-\tilde S^*_{e_q}\in dw)$$
$$= q^{-1}\int_{y=w^+}^{x}P(S_{e_q}\in dy)\,P(\tilde S^*_{e_q}\in y-dw) = k_7\int_{y=w^+}^{x}\frac{P(S_{e_q}\in dy)}{\kappa(q,0)}\cdot\frac{P(\tilde S^*_{e_q}\in y-dw)}{\kappa^*(q,0)}.$$
But (1), p. 163 of [5] gives
$$\frac{E(e^{-\beta S_{e_q}})}{\kappa(q,0)} = \frac{1}{\kappa(q,\beta)} = \int_0^{\infty}\int_0^{\infty}e^{-(qt+\beta y)}U(dt,dy),$$
so
$$\frac{P(S_{e_q}\in dy)}{\kappa(q,0)} = \int_0^{\infty}e^{-qt}U(dt,dy).$$
Using the analogous expression for $P(\tilde S^*_{e_q}\in y-dw)$, (35) is immediate, and then (34) follows. Specializing this to the stable case then gives (32). Finally, if we write $n$ for the characteristic measure of the excursions away from zero of $X-I$, with $I$ denoting the infimum process of $X$, then $p_t(dx) := n(\varepsilon_t\in dx)/n(\zeta>t)$ is a probability measure which coincides with that of the meander of length $t$ at time $t$ in the stable case. (Here $\zeta$ denotes the life length of the generic excursion $\varepsilon$.) In the special case of spectrally negative Lévy processes, we have
$$p_t(dx) = Cxt^{-1}P(X_t\in dx)/n(\zeta>t) = Ch_x(t)\,dx/n(\zeta>t).$$
The first equality here comes from Cor. 4 in [2], and the second is the Lévy version of the ballot theorem (see Corollary 3, p. 190 of [5]). Specialising to the stable case and $t = 1$ gives (33). (I owe this observation to Loïc Chaumont.)


4 Proof of Proposition 12

Proof. We will be applying the results in Lemma 10 to formula (17), and we write the RHS of (17) as $P_1+P_2+P_3$, where, with $\delta\in(0,1/2)$,
$$P_i = \sum_{r\in A_i}\sum_{z=0}^{y\wedge x}g(r,\,x-z)\,g^*(n-r,\,y-z),\quad 1\le i\le 3,$$
and
$$A_1 = \{0\le r\le\delta n\},\quad A_2 = \{\delta n<r\le(1-\delta)n\},\quad A_3 = \{(1-\delta)n<r\le n\}.$$

(i) We introduce $d(\cdot)$, an increasing and continuous function which satisfies $d(n) = nc_n$, and, with $\lfloor\cdot\rfloor$ standing for the integer part function, use Lemma 10 to write
$$P_1\le f(0)\sum_{z=0}^{y\wedge x}\sum_{r=0}^{\lfloor\delta n\rfloor}\frac{V(y-z)g(r,x-z)}{d(n-r)}\le\frac{f(0)}{d(n(1-\delta))}\sum_{z=0}^{y\wedge x}V(y-z)\sum_{r=0}^{\lfloor\delta n\rfloor}g(r,x-z)$$
$$=\frac{f(0)}{d(n(1-\delta))}\sum_{z=0}^{y\wedge x}V(y-z)\Big[u(x-z)-\sum_{r=\lfloor\delta n\rfloor+1}^{\infty}g(r,x-z)\Big].$$
We can apply Lemma 10 again to get the estimate, uniform for $w/c_n\to 0$,
$$\sum_{r=\lfloor\delta n\rfloor+1}^{\infty}g(r,w)\le C\sum_{r=\lfloor\delta n\rfloor+1}^{\infty}\frac{f(0)U(w)}{rc_r}\le\frac{Cf(0)U(w)}{c_n}.$$
Since we know that
$$\liminf_{n\to\infty}\frac{nu(n)}{U(n)}>0$$
(see e.g. Theorem 8.7.4 in [6]), we see that this is $o(u(w))$. Also we have
$$\sum_{z=0}^{y\wedge x}\sum_{r=0}^{\lfloor\delta n\rfloor}\frac{V(y-z)g(r,x-z)}{d(n-r)}\ge\frac{1}{d(n)}\sum_{z=0}^{y\wedge x}\sum_{r=0}^{\lfloor\delta n\rfloor}g(r,x-z)V(y-z),$$
so we see that
$$\lim_{n,\delta}\frac{d(n)P_1}{f(0)\sum_{z=0}^{y\wedge x}V(y-z)u(x-z)} = 1, \qquad (36)$$


where $\lim_{n,\delta}(\cdot) = 1$ is shorthand for $\lim_{\delta\to 0}\limsup_{n\to\infty}(\cdot) = \lim_{\delta\to 0}\liminf_{n\to\infty}(\cdot) = 1$. Similarly, once we observe that
$$P_3 = \sum_{z=0}^{y\wedge x}\sum_{r=n-\lfloor\delta n\rfloor}^{n}g(r,x-z)g^*(n-r,y-z) = \sum_{z=0}^{y\wedge x}\sum_{r=0}^{\lfloor\delta n\rfloor}g(n-r,x-z)g^*(r,y-z),$$
an entirely analogous argument gives
$$\lim_{n,\delta}\frac{d(n)P_3}{f(0)\sum_{z=0}^{y\wedge x}U(x-z-1)v(y-z)} = 1, \qquad (37)$$
where $v$ is the renewal mass function in the down-going ladder height process. Noting that
$$U(x-z-1)v(y-z)+V(y-z)u(x-z) = U(x-z)V(y-z)-U(x-z-1)V(y-z-1),$$
we get the formula
$$\sum_{z=0}^{y\wedge x}\{U(x-z-1)v(y-z)+V(y-z)u(x-z)\} = V(y)U(x),$$
and the result will follow by letting $n\to\infty$ and then $\delta\downarrow 0$, provided $d(n)P_2 = o(V(y)U(x))$ for each fixed $\delta>0$. In fact, using Lemma 10 again,
$$d(n)P_2 = d(n)\sum_{A_2}\sum_{z=0}^{y\wedge x}g(r,x-z)g^*(n-r,y-z)\le f(0)^2d(n)\sum_{z=0}^{y\wedge x}\sum_{A_2}\frac{V(y-z)U(x-z)}{d(r)d(n-r)}$$
$$\le\frac{(y\wedge x)f(0)^2d(n)\,n\,V(y)U(x)}{d(\lfloor\delta n\rfloor)^2} = O\Big(\frac{(y\wedge x)V(y)U(x)}{c_n}\Big) = o(V(y)U(x)),$$
and the result follows.

(ii) In this case we can assume WLOG that $y\wedge x = x = o(y)$, so that Lemma 10 gives $g^*(n-r,\,y-z)\sim P(\tau>n-r)\tilde p(y/c_{n-r})/c_{n-r}$ uniformly for $r\in A_1$ and $0\le z\le x$. With $e$ denoting a continuous and monotone interpolant of $c_m/P(\tau>m)$, and noting that $\tilde p(\cdot)$ is uniformly continuous and bounded away from zero and infinity on $[D^{-1},D]$ (see [16]), we can use a similar argument


to that in (i) to show that
$$P_1 = \sum_{r=0}^{\lfloor\delta n\rfloor}\sum_{z=0}^{x}\frac{\tilde p(y/c_{n-r})g(r,x-z)}{e(n-r)}(1+o(1))\le\frac{\tilde p^{+}_{\delta}(y)}{e(n(1-\delta))}\sum_{z=0}^{x}u(x-z)(1+o(1)) = \frac{U(x)\tilde p^{+}_{\delta}(y)}{e(n(1-\delta))}(1+o(1)),$$
where $\tilde p^{+}_{\delta}(y) = \sup_{0\le r\le\delta n}\tilde p(y/c_{n-r}) = \tilde p(y_n)+\varepsilon(n,\delta)$ and $\lim_{n,\delta}\varepsilon(n,\delta) = 0$. In the same way we get $P_1\ge U(x)\tilde p^{-}_{\delta}(y)(1+o(1))/e(n)$, where $\tilde p^{-}_{\delta}(y) = \inf_{0\le r\le\delta n}\tilde p(y/c_{n-r})$, and we deduce that
$$\lim_{n,\delta}\frac{e(n)P_1}{\tilde p(y_n)U(x)} = 1. \qquad (38)$$
Since $\inf\{\tilde p(y): y\in[D^{-1},D]\}>0$, the result will follow if we can show that for any fixed $\delta>0$,
$$\limsup_{n\to\infty}\frac{e(n)(P_2+P_3)}{U(x)} = 0. \qquad (39)$$
However (37) still holds, but note now that
$$\sum_{z=0}^{y\wedge x}U(x-1-z)v(y-z) = \sum_{z=0}^{x}U(x-1-z)v(y-z)\le U(x)(V(y)-V(y-x-1)) = o(U(x)V(y)),$$
and since the analogue of (14) holds, viz $V(c_n)\sim k_{10}/P(\sigma>n)$, we see that
$$\frac{e(n)P_3}{U(x)} = o\Big(\frac{e(n)V(y)}{d(n)}\Big) = o\Big(\frac{1}{nP(\tau>n)P(\sigma>n)}\Big) = o(1).$$
Finally,
$$e(n)P_2 = e(n)\sum_{A_2}\sum_{z=0}^{x}g(r,x-z)g^*(n-r,y-z)\le f(0)e(n)\sum_{A_2}\sum_{z=0}^{x}\frac{U(x-z)\tilde p((y-z)/c_{n-r})}{d(r)e(n-r)}$$
$$\le\frac{Ce(n)\,n\,xU(x)}{d(\lfloor\delta n\rfloor)e(\lfloor\delta n\rfloor)} = O\Big(\frac{xU(x)}{c_n}\Big) = o(U(x)).$$
Thus (39) is established, and the result (24) follows. Since (25) is (24) for $-S$ with $x$ and $y$ interchanged, modified to take account of the difference between strict and weak ladder epochs, we omit its proof.
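The renewal identity used in the proof of part (i), $\sum_{z=0}^{y\wedge x}\{U(x-z-1)v(y-z)+V(y-z)u(x-z)\} = U(x)V(y)$, is purely algebraic: with $u(k) = U(k)-U(k-1)$ and $v(k) = V(k)-V(k-1)$ each summand telescopes. A quick sketch (the sequences are arbitrary test data, assuming only $U(-1) = V(-1) = 0$):

```python
# Any nondecreasing sequences with U(-1) = V(-1) = 0 will do; these
# particular values are arbitrary test data, not from the paper.
U_vals = [1, 3, 4, 8, 9, 15, 20]
V_vals = [1, 2, 5, 7, 11, 12, 18]

def U(k): return U_vals[k] if k >= 0 else 0
def V(k): return V_vals[k] if k >= 0 else 0
def u(k): return U(k) - U(k - 1)   # renewal mass functions
def v(k): return V(k) - V(k - 1)

for x in range(6):
    for y in range(6):
        total = sum(U(x - z - 1) * v(y - z) + V(y - z) * u(x - z)
                    for z in range(min(x, y) + 1))
        assert total == U(x) * V(y)
```

Each summand equals $U(x-z)V(y-z)-U(x-z-1)V(y-z-1)$, so the sum telescopes to $U(x)V(y)$, the boundary term vanishing because one of $U(-1)$, $V(-1)$ appears in it.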


(iii) In this case it is $P_2$ that dominates. To see this, note that if we denote by $b(\cdot)$ a continuous interpolant of $c_n/P(\sigma>n)$, we have $b(n)e(n)\sim Cnc_n^2$. Then using Lemma 10 twice gives, uniformly for $x_n, y_n\in(D^{-1},D)$ and for any fixed $\delta\in(0,1/2)$,
$$c_nP_2 = c_n\sum_{A_2}\sum_{z=0}^{x\wedge y}\frac{p((x-z)/c_r)\,\tilde p((y-z)/c_{n-r})}{b(r)e(n-r)}+o\Big(c_n(x\wedge y)\sum_{A_2}\frac{1}{b(r)e(n-r)}\Big)$$
$$= c_n\sum_{A_2}\sum_{z=0}^{x\wedge y}\frac{p((x-z)/c_r)\,\tilde p((y-z)/c_{n-r})}{b(r)e(n-r)}+o(1).$$
Making the change of variables $r = nt$, $z = c_nw$, recalling that $p$ and $\tilde p$ are uniformly continuous on compacts, and that
$$\frac{b(r)e(n-r)}{b(n)e(n)}\to t^{1-\rho+\gamma}(1-t)^{\rho+\gamma}\quad\text{uniformly on}\ A_2,$$
we see that for each fixed $\delta>0$ we get the uniform estimate $c_nP_2 = I_{\delta}(x_n,y_n)+o(1)$, where
$$I_{\delta}(u,v) = \int_{\delta}^{1-\delta}\int_{0}^{u\wedge v}\frac{p\big((u-z)t^{-\gamma}\big)\,\tilde p\big((v-z)(1-t)^{-\gamma}\big)}{t^{1-\rho+\gamma}(1-t)^{\rho+\gamma}}\,dz\,dt.$$
If we introduce $p_t(z) = t^{-\gamma}p(zt^{-\gamma})$, which is the density function of $Z_t$, the meander of length $t$ at time $t$, then according to Lemma 8 of [16] the renewal measure of the increasing ladder process of $Y$ has a joint density which is given by $u(t,z) = Ct^{\rho-1}p_t(z) = Ct^{\rho-1-\gamma}p(zt^{-\gamma})$. Similarly, for the decreasing ladder process we have $u^*(t,z) = C^*t^{-\rho-\gamma}\tilde p(zt^{-\gamma})$, and (32) in Proposition 15 gives
$$I_0(u,v) = C\int_0^1\int_0^{u\wedge v}u(t,\,u-z)\,u^*(1-t,\,v-z)\,dz\,dt = Cq_u(v),$$
so we conclude that
$$\lim_{\delta\to 0}\lim_{n\to\infty}\frac{c_nP_2}{q_{x_n}(y_n)} = C. \qquad (40)$$
Turning to $P_1$, if $K = \sup_{y\ge 0}\tilde p(y)$, we have
$$c_nP_1\sim c_n\sum_{r=0}^{\lfloor\delta n\rfloor}\sum_{z=0}^{x}\frac{\tilde p((y-z)/c_{n-r})\,g(r,x-z)}{e(n-r)}\le\frac{Kc_n}{e(n(1-\delta))}\sum_{z=0}^{x}\sum_{r=0}^{\lfloor\delta n\rfloor}g(r,x-z)\le CP(\tau>n)\,\Phi(\lfloor\delta n\rfloor),$$
where $\Phi$ is the renewal function in the increasing ladder time process. Since $T\in D(\rho,1)$, we know that $\Phi(\lfloor\delta n\rfloor)\sim\delta^{\rho}\Phi(n)$ and $P(\tau>n)\Phi(n)\to C$ (see e.g.


p. 361 of [6]), so we conclude that
$$\lim_{\delta\to 0}\limsup_{n\to\infty}c_nP_1 = 0.$$
Exactly the same argument applies to $P_3$, and since $q_u(v)$ is clearly bounded below by a positive constant for $u, v\in[D^{-1},D]$, we have shown that (26) holds, except that the RHS is multiplied by some constant $C$. However, if $C\ne 1$, by summing over $y$ we easily get a contradiction, and this finishes the proof.

5 Proof of Theorem 2

5.1 Proof when x/c_n → 0

Proof. As already indicated, the proof involves applying the estimates in Proposition 12 to the representation (16), which we recall is
$$P(T_x = n+1) = \sum_{y\ge 0}P(S_n = x-y,\ T_x>n)\,\overline F(y),\quad x\ge 0,\ n>0. \qquad (41)$$
In the case $\alpha\rho<1$, given $\varepsilon>0$ we can find $K_{\varepsilon}$ and $n_{\varepsilon}$ such that $n\overline F(K_{\varepsilon}c_n)\le\varepsilon$ for $n\ge n_{\varepsilon}$ and, using (23) from Proposition 12,
$$P(S_n = x-y,\ T_x>n)\le\frac{2U(x)f(0)V(y)}{nc_n}\quad\text{for all}\ x\vee y\le\varepsilon c_n\ \text{and}\ n\ge n_{\varepsilon}. \qquad (42)$$
We can then use (24) of Proposition 12 to show that we can also assume, increasing the value of $n_{\varepsilon}$ if necessary, that for $x\le\varepsilon c_n$ and $y\in(\varepsilon c_n, K_{\varepsilon}c_n)$,
$$1-\varepsilon\le\frac{c_nP(S_n = x-y,\ T_x>n)}{U(x)P(\tau>n)\tilde p(y_n)}\le 1+\varepsilon\quad\text{for all}\ n\ge n_{\varepsilon}. \qquad (43)$$
For fixed $\varepsilon$ it is clear that as $n\to\infty$,
$$\sum_{y\in(\varepsilon c_n, K_{\varepsilon}c_n)}\tilde p(y_n)\overline F(y)/c_n\sim\int_{\varepsilon}^{K_{\varepsilon}}\tilde p(z)\overline F(c_nz)\,dz\sim n^{-1}\int_{\varepsilon}^{K_{\varepsilon}}\tilde p(z)z^{-\alpha}\,dz. \qquad (44)$$
Thus once we establish that
$$\lim_{\varepsilon\downarrow 0}\limsup_{n\to\infty}\frac{1}{U(x)P(\tau = n)}\sum_{y\in[0,\varepsilon c_n]\cup[K_{\varepsilon}c_n,\infty)}P(S_n = x-y,\ T_x>n)\overline F(y) = 0, \qquad (45)$$
it will follow from (43) that $P(T_x = n)\sim U(x)k_{11}\rho^{-1}P(\tau = n)$. Since this holds in particular for $x = 0$, we see that $k_{11} = \rho$, so it remains only to verify (45). Note first that
$$\sum_{y\in[K_{\varepsilon}c_n,\infty)}P(S_n = x-y,\ T_x>n)\overline F(y)\le\overline F(K_{\varepsilon}c_n)P(T_x>n)\le\varepsilon n^{-1}P(T_x>n),$$


so one part of (45) will follow if we can show that
$$P(T_x>n)\le CU(x)P(\tau>n). \qquad (46)$$
Obviously, for any $B>0$,
$$P(T_x>n) = \sum_{y\ge 0}P(S_n = x-y,\ T_x>n)\le\sum_{0\le y\le Bc_n}P(S_n = x-y,\ T_x>n)+P(S_n<x-Bc_n,\ T_x>n).$$
Since $x/c_n\to 0$, we can use the invariance principle in Proposition 9 to fix $B$ large enough to ensure that $P(S_n<x-Bc_n\mid T_x>n)\le 1/2$ for all sufficiently large $n$. We can also use Corollary 13 to get
$$\sum_{0\le y\le Bc_n}P(S_n = x-y,\ T_x>n)\le\frac{CU(x)}{nc_n}\sum_{0\le y\le Bc_n}V(y)\le\frac{CU(x)Bc_nV(Bc_n)}{nc_n}\le\frac{CU(x)}{nP(\sigma>n)}\le CU(x)P(\tau>n),$$
and thus (46) is established. (Note that this proof is also valid for the case $\alpha\rho = 1$.) Now (42) gives
$$\lim_{\varepsilon\downarrow 0}\limsup_{n\to\infty}\frac{1}{U(x)P(\tau = n)}\sum_{y\in[0,\varepsilon c_n]}P(S_n = x-y,\ T_x>n)\overline F(y)\le\lim_{\varepsilon\downarrow 0}\limsup_{n\to\infty}\frac{2f(0)}{nc_nP(\tau = n)}\sum_{y\in[0,\varepsilon c_n]}\overline F(y)V(y),$$
and since $\alpha\rho<1$, (105) of [21] shows that this is zero.

In the case $\alpha\rho = 1$, a different proof is required. We make use of the observation in [21] that there is a sequence $\varepsilon_n\downarrow 0$ with $\varepsilon_nc_n\to\infty$ and such that $n\overline F(\varepsilon_nc_n)\to 0$. This gives
$$\sum_{y>\varepsilon_nc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\le P(T_x>n)\overline F(\varepsilon_nc_n) = o(U(x)P(\tau>n)/n) = o(U(x)P(\tau = n)),$$
where we have used (46). Using (23) of Proposition 12 gives
$$\sum_{y\le\varepsilon_nc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\sim\frac{f(0)U(x)\omega(n)}{nc_n}\quad\text{as}\ n\to\infty,$$
uniformly in $x$, where $\omega(n) = \sum_{0\le y\le\varepsilon_nc_n}V(y)\overline F(y)$. But in [21] it is shown that $nc_nP(\tau = n)\sim f(0)\omega(n)$, so we deduce from (41) that $P(T_x = n+1)\sim U(x)P(\tau = n)$, as required.


5.2 Proof when x/c_n = O(1)

Proof. Again we treat the case $\alpha\rho<1$ first, and start by noting that for any $B>0$,
$$\sum_{y>Bc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\le\overline F(Bc_n)\sim\frac{1}{nB^{\alpha}}\quad\text{as}\ n\to\infty,$$
uniformly in $x\ge 0$. Also, by Corollary 13, for $b>0$,
$$\sum_{y\le bc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\le\frac{CU(x)\sum_{y\le bc_n}V(y)\overline F(y)}{nc_n}. \qquad (47)$$
Since $V\overline F\in RV(-\alpha\rho)$, $\sum_{y\le z}V(y)\overline F(y)\sim zV(z)\overline F(z)/(1-\alpha\rho)$, and we see that when $x\le Dc_n$,
$$\frac{U(x)\sum_{y\le bc_n}V(y)\overline F(y)}{c_n}\sim\frac{b^{1-\alpha\rho}U(x)V(c_n)\overline F(c_n)}{1-\alpha\rho}\le Cb^{1-\alpha\rho}D^{\alpha\rho}U(c_n)V(c_n)/n\le Cb^{1-\alpha\rho}D^{\alpha\rho},$$
where we have used (15) from Corollary 8. We conclude the proof by showing that
$$\lim_{b\downarrow 0,\ B\uparrow\infty}\lim_{n\to\infty}\frac{n}{h_{x_n}(1)}\sum_{bc_n<y\le Bc_n}P(S_n = x-y,\ T_x>n)\overline F(y) = C, \qquad (48)$$
which follows from (26) and the identity (31), by the change of variables used in (44).

In the case $\alpha\rho = 1$ we use the sequence $\varepsilon_n$ again:
$$n\sum_{y>\varepsilon_nc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\le n\overline F(\varepsilon_nc_n)\to 0,$$
and, by (25),
$$n\sum_{0\le y\le\varepsilon_nc_n}P(S_n = x-y,\ T_x>n)\overline F(y)\sim\frac{nP(\sigma>n)p(x_n)}{c_n}\sum_{0\le y\le\varepsilon_nc_n}V(y)\overline F(y) = \frac{nP(\sigma>n)p(x_n)\omega(n)}{c_n}\sim Cp(x_n),$$
where we have again used $nc_nP(\tau = n)\sim f(0)\omega(n)$, and the result follows from the identity (33) in Proposition 15, since again there would be a contradiction if $k_9\ne 1$.


5.3 Proof when x/c_n → ∞

Proof. This time we write $P(T_x = n+1) = \sum_{i=1}^{4}P^{(i)}$, where $P^{(i)} = P\{A^{(i)}\cap(T_x = n+1)\}$ and
$$A^{(1)} = \{S_n\le\delta x\},\quad A^{(2)} = \{\delta x<S_n\le x-Kc_n\},\quad A^{(3)} = \{x-Kc_n<S_n\le x-\varepsilon c_n\},\quad A^{(4)} = \{S_n>x-\varepsilon c_n\}.$$
We note first that for $\delta\in(0,1)$,
$$\limsup_{n\to\infty}\frac{P^{(1)}}{\overline F(x)}\le\limsup_{n\to\infty}\frac{P(S_n\le\delta x)\overline F((1-\delta)x)}{\overline F(x)} = (1-\delta)^{-\alpha},$$
$$\liminf_{n\to\infty}\frac{P^{(1)}}{\overline F(x)}\ge\liminf_{n\to\infty}\frac{P(\max_{r\le n}S_r\le x,\ |S_n|\le\delta x)\overline F((1+\delta)x)}{\overline F(x)} = (1+\delta)^{-\alpha}. \qquad (49)$$
Next, using (27),
$$\limsup_{n\to\infty}\frac{P^{(2)}}{\overline F(x)}\le\limsup_{n\to\infty}\frac{P(S_n>\delta x)\overline F(Kc_n)}{\overline F(x)}\le\limsup_{n\to\infty}\frac{n\overline F(\delta x)\,K^{-\alpha}}{n\overline F(x)} = C\delta^{-\alpha}K^{-\alpha}. \qquad (50)$$
To deal with the next term, we use (27) again to see that for any fixed $K$, $P(x-Kc_n<S_n\le x)$ is uniformly $o(n\overline F(x))$. Since $P^{(3)}\le\overline F(\varepsilon c_n)P(x-Kc_n<S_n\le x-\varepsilon c_n)$ and $\overline F(\varepsilon c_n)\le C\varepsilon^{-\alpha}n^{-1}$, we get
$$P^{(3)} = o(\overline F(x)). \qquad (51)$$
Finally, when (11) holds, (30) gives $g(n,x)\sim\rho^{-1}nf_xP(\sigma>n)\le CnP(\sigma>n)x^{-1}\overline F(x)$ uniformly as $x/c_n\to\infty$, so we can assume that
$$\sup_{n>0,\ x>Kc_n}\frac{x\,g(n,x)}{nP(\sigma>n)\overline F(x)}<\infty. \qquad (52)$$


Then for large enough $n$ and $x>Kc_n$,
$$P^{(4)} = \sum_{m=0}^{n}\sum_{y=0}^{\lfloor\varepsilon c_n\rfloor}\sum_{z=0}^{\lfloor\varepsilon c_n\rfloor-y}g(n-m,\,x-y)\,g^*(m,z)\,\overline F(y+z)\le\frac{CnP(\sigma>n)\overline F(x)}{x}\sum_{y=0}^{\lfloor\varepsilon c_n\rfloor}\sum_{z=0}^{\lfloor\varepsilon c_n\rfloor}v(z)\overline F(y+z),$$
using (52) and $\sum_{m}g^*(m,z) = v(z)$. Now a summation by parts and the fact that $V\overline F\in RV(-\alpha\rho)$ show that, as $y\to\infty$,
$$\sum_{z=0}^{y}v(z)\sum_{w=z}^{y}\overline F(w) = \sum_{w=0}^{y}V(w)\overline F(w)\sim yV(y)\overline F(y)/(1-\alpha\rho).$$
So for all large enough $n$ we have the bound
$$P^{(4)}\le\frac{C\varepsilon c_nV(\varepsilon c_n)\overline F(\varepsilon c_n)\,nP(\sigma>n)\overline F(x)}{x}\le\frac{C\varepsilon^{1-\alpha\rho}c_n\overline F(x)}{x}. \qquad (53)$$
The result follows from (49)-(53) and appropriate choice of $\delta$, $K$, and $\varepsilon$.

Remark 16. The assumption (11) is not strictly necessary for (52) to hold, and this is the only point where we use this assumption. In fact, if the following slightly weaker version of (52),
$$\sup_{n>0,\ x>Kc_n}\frac{c_n\,g(n,x)}{nP(\sigma>n)\overline F(x)}<\infty, \qquad (54)$$
were to hold, then (53) would hold with $c_n$ replacing $x$ in the denominator, and the proof would still be valid, by choosing $\varepsilon$ small.


6 The non-lattice case

We indicate here the main differences between the proofs in the lattice and non-lattice cases. First, we have
$$G(n,dy) := \sum_{r=0}^{\infty}P(\tau_r = n,\ H_r\in dy) = P(S_n\in dy,\ \sigma>n),$$
$$G^*(n,dy) := \sum_{r=0}^{\infty}P(\tau^*_r = n,\ H^*_r\in dy) = P(-S_n\in dy,\ \tau>n),$$
and the following analogue of Lemma 10 is given in Theorems 3 and 4 of [21].

Lemma 17. For any $\Delta_0>0$, uniformly in $x\ge 0$ and $0<\Delta\le\Delta_0$,
$$\frac{c_nG(n,[x,x+\Delta])}{\Delta P(\sigma>n)} = p(x/c_n)+o(1)\quad\text{and}\quad\frac{c_nG^*(n,[x,x+\Delta])}{\Delta P(\tau>n)} = \tilde p(x/c_n)+o(1)\quad\text{as}\ n\to\infty. \qquad (55)$$
Also, uniformly as $x/c_n\to 0$,
$$G(n,[x,x+\Delta])\sim\frac{f(0)\int_x^{x+\Delta}U(w)\,dw}{nc_n}\quad\text{and}\quad G^*(n,[x,x+\Delta])\sim\frac{f(0)\int_x^{x+\Delta}V(w)\,dw}{nc_n}. \qquad (56)$$

Remark 18. Again, only the results for $G$ are given in [21], but it is easy to get the results for $G^*$. Actually the result in [21] has $U(w-)$ rather than $U(w)$ in (56), but clearly the two integrals coincide. Finally, the uniformity in $\Delta$ is not mentioned in [21], but a perusal of the proof shows that this is true, essentially because it holds in Stone's local limit theorem. See e.g. Theorem 8.4.2 in [6].

In writing down the analogues of (16) and (17) care is required with the limits of integration, since the distribution of $S_n$ and the renewal measures are not necessarily diffuse. These analogues are
$$P(T_x = n+1) = \int_{[0,\infty)}P(S_n\in x-dy,\ T_x>n)\,\overline F(y), \qquad (57)$$
and, for $w\ge 0$,
$$P(S_n\in x-dw,\ T_x>n) = \sum_{r=0}^{n}\int_{[0,x)\cap[0,w]}G(r,\,x-dz)\,G^*(n-r,\,dw-z). \qquad (58)$$
The key result, the analogue of Proposition 12, is

Proposition 19. Fix $\Delta>0$. Then (i) uniformly as $x_n\vee y_n\to 0$,
$$P(S_n\in(x-y-\Delta,\ x-y],\ T_x>n)\sim\frac{U(x)f(0)\int_y^{y+\Delta}V(w)\,dw}{nc_n}. \qquad (59)$$


(ii) For any $D>1$, uniformly for $y_n\in[D^{-1},D]$,
$$P(S_n\in(x-y-\Delta,\ x-y],\ T_x>n)\sim\frac{\Delta U(x)P(\tau>n)\tilde p(y_n)}{c_n}\quad\text{as}\ n\to\infty\ \text{and}\ x_n\to 0, \qquad (60)$$
and uniformly for $x_n\in[D^{-1},D]$,
$$P(S_n\in(x-y-\Delta,\ x-y],\ T_x>n)\sim\frac{\Delta V(y)P(\sigma>n)p(x_n)}{c_n}\quad\text{as}\ n\to\infty\ \text{and}\ y_n\to 0. \qquad (61)$$
(iii) For any $D>1$, uniformly for $x_n\in[D^{-1},D]$ and $y_n\in[D^{-1},D]$,
$$P(S_n\in(x-y-\Delta,\ x-y],\ T_x>n)\sim\frac{\Delta q_{x_n}(y_n)}{c_n}\quad\text{as}\ n\to\infty. \qquad (62)$$
Once we have these results, we deduce Theorem 2 by applying them to a modified version of (57), viz
$$\sum_{m=0}^{\infty}P(S_n\in(x-(m+1)\Delta,\ x-m\Delta],\ T_x>n)\,\overline F(m\Delta)\ \ge\ P(T_x = n+1)\ \ge\ \sum_{m=0}^{\infty}P(S_n\in(x-(m+1)\Delta,\ x-m\Delta],\ T_x>n)\,\overline F((m+1)\Delta),$$
and letting $\Delta\to 0$. So the key step is establishing Proposition 19, and we illustrate how this can be done by proving (59).

Proof. We want to apply Lemma 17 to (58), but technically the problem is that we can't do this directly, as we did in the lattice case. The first step is to get an integrated form of (58), and it is useful to separate off the term $r = 0$, so that for $x, y\ge 0$,
$$P(S_n\in(x-y-\Delta,\ x-y],\ T_x>n) = G^*(n,[\{y-x\}^+,\ y+\Delta-x))\,\mathbf 1_{\{x\le y+\Delta\}}+\tilde P, \qquad (63)$$
where
$$\tilde P = \sum_{r=1}^{n}\int_{y\le w<y+\Delta}\int G(r,\,x-dz)\,G^*(n-r,\,dw-z).$$


An asymptotic lower bound is given by
$$\frac{f(0)}{d(n)}\sum_{r=1}^{\lfloor\delta n\rfloor}\int_{0\le z<x}\cdots$$


After reading off the asymptotic behaviour of the first term in (64) from (56), the proof is now completed by using (65), (66), (67), and the following result.

Lemma 20. For $x, y\ge 0$ and $\Delta>0$ the following identity holds:
$$\iint_{0\le z<x,\ y\le w<y+\Delta}\big\{U(x-z)\,dz\,V(dw-z)+U(x-dz)\,V(w-z)\,dw\big\} = U(x)\int_y^{y+\Delta}V(w)\,dw.$$


References

[1] V. I. Afanasyev, C. Boinghoff, G. Kersting, and V. A. Vatutin. Limit theorems for weakly subcritical branching processes in random environment. Preprint, (2010).
[2] L. Alili and L. Chaumont. A new fluctuation identity for Lévy processes and some applications. Bernoulli, 7, 557-569, (2001).
[3] L. Alili and R. A. Doney. Wiener-Hopf factorization revisited and some applications. Stochastics and Stochastics Reports, 66, 87-102, (1999).
[4] L. Alili and R. A. Doney. Martin boundaries associated with a killed random walk. Ann. I. H. Poincaré, 37, 313-338, (2001).
[5] J. Bertoin. Lévy Processes. Cambridge University Press, Cambridge, (1996).
[6] N. H. Bingham, C. M. Goldie, and J. L. Teugels. Regular Variation. Cambridge University Press, Cambridge, (1987).
[7] A. Bryn-Jones and R. A. Doney. A functional limit theorem for random walk conditioned to stay non-negative. J. Lond. Math. Soc., 74, 244-258, (2006).
[8] L. Chaumont and R. A. Doney. Invariance principles for local times at the supremum of random walks and Lévy processes. Ann. Probab., to appear, (2010).
[9] D. Denisov, A. B. Dieker, and V. Shneer. Large deviations for random walks under subexponentiality: the big-jump domain. Ann. Probab., 36, 1946-1991, (2008).
[10] R. A. Doney. On the exact asymptotic behaviour of the distribution of ladder epochs. Stoch. Proc. Appl., 12, 203-214, (1982).
[11] R. A. Doney. Conditional limit theorems for asymptotically stable random walks. Probab. Theory Related Fields, 70, 351-360, (1985).
[12] R. A. Doney. A large deviation local limit theorem. Math. Proc. Cambridge Philos. Soc., 105, 575-577, (1989).
[13] R. A. Doney. One-sided local large deviation and renewal theorems in the case of infinite mean. Probab. Theory Related Fields, 107, 451-465, (1997).
[14] R. A. Doney and P. E. Greenwood. On the joint distribution of ladder variables of random walk. Probab. Theory Related Fields, 94, 457-472, (1993).
[15] R. A. Doney and V. Rivero. First passage densities for Lévy processes. In preparation.
[16] R. A. Doney and M. S. Savov. The asymptotic behaviour of densities related to the supremum of a stable process. Ann. Probab., 38, 316-326, (2010).
[17] M. S. Eppel. A local limit theorem for first passage time. Siberian Math. J., 20, 181-191, (1979).
[18] K. B. Erickson. The strong law of large numbers when the mean is undefined. Trans. Amer. Math. Soc., 185, 371-381, (1973).
[19] E. Jones. Large deviations of random walks and Lévy processes. Thesis, University of Manchester, (2009).
[20] H. Kesten. Ratio theorems for random walks II. J. d'Analyse Math., 11, 323-379, (1963).
[21] V. A. Vatutin and V. Wachtel. Local probabilities for random walks conditioned to stay positive. Probab. Theory Related Fields, 143, 177-217, (2009).
[22] V. Wachtel. Local limit theorem for the maximum of asymptotically stable random walks. Preprint, (2010).
