On Maximum of Gaussian Non-centered Fields Indexed on Smooth Manifolds

Vladimir Piterbarg

Moscow Lomonosov State University

Faculty of Mechanics and Mathematics

Moscow 119899 Russia

e-mail: piter@mech.math.msu.su

Sinisha Stamatovich

University of Montenegro

Cetinjski put bb

81000 Podgorica, Yugoslavia

e-mail: sins.pmf@rc.pmf.cg.ac.yu

06 November, 1998

Key words and phrases. Gaussian fields, large deviations, exact asymptotics.

1991 Mathematics Subject Classification. 60G15, 60G60.

Research partially supported by DFG-RFFI grant 436RUS113/467/7 and RFFI Grants 97-01-00648, 98-01-00524.


Abstract

The double sum method of evaluating the probabilities of large deviations is developed for a class of Gaussian non-centered fields. The asymptotic behavior of the tail of the maximum of non-centered locally stationary Gaussian fields indexed on a smooth manifold is evaluated. In particular, smooth Gaussian fields on smooth manifolds are considered.

1 Introduction

The double-sum method is one of the main tools for evaluating the exact asymptotic behaviour of large deviation probabilities of Gaussian processes and fields; see [7] and the references therein. Until recently only centered processes and fields were considered. It can be seen from [7] and the present paper that the investigation of non-centered fields requires essential modifications of the method.

Furthermore, there are examples when the need for the asymptotic behaviour for non-centered fields arises naturally. In statistical applications such as the nonparametric tests of [8] and [9], asymptotic decision rules are based on tail distributions of Gaussian fields indexed on spheres or tori, and under an alternative hypothesis one might have to evaluate the asymptotic behaviour of such tail probabilities for non-centered fields.

In this paper we extend the double sum method to study non-centered Gaussian fields. We evaluate the asymptotic behavior of the tail of the maximum of an $(\alpha, D_t)$-stationary field indexed on a smooth manifold, as defined below. In particular, smooth Gaussian fields on smooth manifolds are considered.

2 Definitions, auxiliary results, main results

Let the collection of numbers $\alpha_1,\dots,\alpha_k \in (0,2]$ and the collection of positive integers $l_1,\dots,l_k$ with $l_1+\dots+l_k=n$ be given; the pair of collections is called a structure, [7]. For any vector $t=(t_1,\dots,t_n)^\top$ its structural module is defined by

\[ |t| = \sum_{i=1}^{k} \Bigg( \sum_{j=E(i-1)+1}^{E(i)} t_j^2 \Bigg)^{\alpha_i/2}, \tag{1} \]

where $E(i)=\sum_{j=0}^{i} l_j$, $i=1,\dots,k$, $l_0=0$. The structure defines a decomposition of $R^n$ into the direct sum $R^n=\bigoplus_{i=1}^{k} R^{l_i}$, such that the restriction of the structural module to each $R^{l_i}$ is the Euclidean norm in the power $\alpha_i$.

For $u>0$ denote by $G_u^i$ the homothety of $R^{l_i}$ with coefficient $u^{-2/\alpha_i}$, $i=1,\dots,k$, respectively, and by $g_u$ the superposition $g_u=\bigoplus_{i=1}^{k} G_u^i$. It is clear that for any $t\in R^n$,

\[ |g_u t| = u^{-2}|t|. \tag{2} \]
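To make the definitions concrete, here is a small numerical sketch (our illustration, not part of the paper) of the structural module (1) and the scaling identity (2); the particular structure, vector and level are chosen arbitrarily.

```python
def structural_module(t, alphas, lengths):
    """Structural module |t| of (1): Euclidean norm of each block to the power alpha_i."""
    assert len(t) == sum(lengths)
    total, pos = 0.0, 0
    for a, l in zip(alphas, lengths):
        block = t[pos:pos + l]
        total += sum(x * x for x in block) ** (a / 2.0)
        pos += l
    return total

def g_u(t, u, alphas, lengths):
    """Superposition g_u of the homotheties G_u^i with coefficients u**(-2/alpha_i)."""
    out, pos = [], 0
    for a, l in zip(alphas, lengths):
        out.extend(u ** (-2.0 / a) * x for x in t[pos:pos + l])
        pos += l
    return out

# structure: R^5 = R^2 (+) R^3 with alpha = (1, 1.5)
alphas, lengths = (1.0, 1.5), (2, 3)
t = [1.0, -2.0, 0.5, 3.0, -1.0]
u = 7.0
lhs = structural_module(g_u(t, u, alphas, lengths), alphas, lengths)
rhs = u ** (-2.0) * structural_module(t, alphas, lengths)
print(abs(lhs - rhs) < 1e-12)  # the identity |g_u t| = u^{-2} |t| of (2)
```

Each block of $g_u t$ has its squared norm multiplied by $u^{-4/\alpha_i}$, so after raising to the power $\alpha_i/2$ every summand of (1) is multiplied by exactly $u^{-2}$, which is what the check confirms.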


Let $\chi(t)$, $t\in R^n$, be a Gaussian field with continuous paths whose expected value and covariance function are given by

\[ E\chi(t) = -|t|, \qquad \mathrm{Cov}(\chi(t),\chi(s)) = |t|+|s|-|t-s|, \tag{3} \]

respectively. Thus $\chi(t)$ can be represented as a sum of independent drifted fractional Brownian motions (L\'evy--Schoenberg fields) indexed on $R^{l_i}$, with parameters $\alpha_i$.

To proceed, we need a generalization of the Pickands constant. Define the functional, on measurable subsets $B\subset R^n$,

\[ H_\alpha(B) = E \exp\Big\{ \sup_{t\in B} \chi(t) \Big\}. \tag{4} \]

Let $D$ be a non-degenerate matrix; we make no notational difference between a matrix and the corresponding linear transformation. Next, for any $S>0$, we denote by

\[ [0,S]^k = \{ t : 0\le t_i\le S,\ i=1,\dots,k;\ t_i=0,\ i=k+1,\dots,n \} \]

a cube of dimension $k$ generated by the first $k$ coordinates in $R^n$. In [2] it is proved that there exists a positive limit

\[ H_\alpha^{(k)} = \lim_{S\to\infty} S^{-k} H_\alpha([0,S]^k) > 0. \]

Definition 1. Let an $\alpha$-structure be given on $R^n$. We say that $X(t)$, $t\in T\subset R^n$, has a local $(\alpha,D_t)$-stationary structure, or $X(t)$ is locally $(\alpha,D_t)$-stationary, if for any $\varepsilon>0$ there exists a positive $\delta(\varepsilon)$ such that for any $s\in T$ one can find a non-degenerate matrix $D_s$ such that the covariance function $r(t_1,t_2)$ of $X$ satisfies

\[ 1-(1+\varepsilon)|D_s(t_1-t_2)| \le r(t_1,t_2) \le 1-(1-\varepsilon)|D_s(t_1-t_2)| \tag{10} \]

provided $\|t_1-s\|<\delta(\varepsilon)$ and $\|t_2-s\|<\delta(\varepsilon)$.
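As a sanity check of the definition (ours, not the paper's): a centered stationary field with covariance $r(t_1,t_2)=\exp\{-|D(t_1-t_2)|\}$ satisfies the sandwich (10) with the constant matrix $D_s=D$, because for $0\le x\le 2\varepsilon$ one has $1-(1+\varepsilon)x \le e^{-x} \le 1-x+x^2/2 \le 1-(1-\varepsilon)x$, so $\delta(\varepsilon)$ may be taken so small that $|D(t_1-t_2)|\le 2\varepsilon$.

```python
import math

def sandwich_ok(x, eps):
    """Check the bounds of (10) with x = |D_s(t_1 - t_2)|."""
    r = math.exp(-x)
    return 1.0 - (1.0 + eps) * x <= r <= 1.0 - (1.0 - eps) * x

eps = 0.05
delta = 2.0 * eps   # exp(-x) <= 1 - x + x**2/2 <= 1 - (1-eps)x once x <= 2*eps
xs = [i * delta / 1000.0 for i in range(1001)]
ok = all(sandwich_ok(x, eps) for x in xs)
print(ok)  # True on the whole range [0, delta(eps)]
```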

It is convenient to cite here the four theorems which we use, in forms suited to our purposes. Before that we need some notation. Let $L$ be a $k$-dimensional linear subspace of $R^n$. Fix orthogonal coordinate systems in $R^n$ and in $L$, let $(x_1,\dots,x_k)^\top$ be the coordinate presentation of a point of $L$ in the system of $L$, and $(x'_1,\dots,x'_n)^\top$ its coordinate presentation in $R^n$. Denote by $M=M(L)$ the corresponding transition matrix,

\[ (x'_1,\dots,x'_n)^\top = M (x_1,\dots,x_k)^\top, \]

that is, $M=(\partial x'_i/\partial x_j,\ i=1,\dots,n,\ j=1,\dots,k)$. Next, for an $n\times k$ matrix $G$ denote $V(G)=\sqrt{\det(G^\top G)}$, the $k$-dimensional volume of the image under $G$ of the unit cube of dimension $k$. Since the coordinate systems in $L$ and $R^n$ are orthogonal, $V(M)=1$.
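The functional $V$ can be sketched numerically (our reading of the notation: $V(G)=\sqrt{\det(G^\top G)}$, the $k$-dimensional volume distortion of $G$); for a transition matrix $M$ with orthonormal columns this gives $V(M)=1$, as stated.

```python
import numpy as np

def V(G):
    """V(G) = sqrt(det(G^T G)) for an n-by-k matrix G."""
    return float(np.sqrt(np.linalg.det(G.T @ G)))

rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.normal(size=(3, 2)))   # orthonormal basis of a plane L in R^3
print(abs(V(M) - 1.0) < 1e-12)                 # V(M) = 1 for orthonormal columns

D = np.diag([2.0, 3.0, 5.0])                   # a non-degenerate matrix D
print(V(D @ M) > 0.0)                          # quantities like V(DM(L)) are positive
```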

Theorem 1 (Theorem 7.1, [7]). Let $X(t)$, $t\in R^n$, be a homogeneous centered Gaussian field such that for some $\alpha$-structure, $0<\alpha_i\le 2$, and a non-degenerate matrix $D$ its covariance function satisfies

\[ r(t) = 1-|Dt| + o(|Dt|) \quad \text{as } t\to 0. \tag{11} \]

Then for any $k$, $0<k\le n$, every $k$-dimensional linear subspace $L$, every closed Jordan set $A\subset L$, and every function $w(u)$ with $w(u)/u=o(1)$ as $u\to\infty$,

\[ P\Big\{ \sup_{t\in A} X(t) > u+w(u) \Big\} = H_\alpha^{(k)}\, V(DM(L))\, \mathrm{mes}_L(A)\, u^{2k/\alpha}\, \Psi(u+w(u))(1+o(1)) \tag{12} \]

as $u\to\infty$, provided

\[ r(t-s)<1 \quad \text{for all } t,s\in \bar A,\ t\ne s, \tag{14} \]

with $\bar A$ the closure of $A$. Here $\Psi(u)=(2\pi)^{-1/2}\int_u^\infty e^{-v^2/2}\,dv$ is the tail of the standard normal distribution.

Theorem 2 (Theorem 1, [4]). Let $X(t)$, $t\in R^n$, be a centered Gaussian locally $(\alpha,D_t)$-stationary field, with $\alpha\in(0,2]$ and a continuous matrix function $D_t$. Let $\mathcal M\subset R^n$ be a smooth compact manifold of dimension $k$, $0<k\le n$, with $r(t,s)<1$ for all $t,s\in\mathcal M$, $t\ne s$. Then for any constant $c$,

\[ P\Big\{ \sup_{t\in \mathcal M} X(t) > u-c \Big\} = H_\alpha^{(k)}\, u^{2k/\alpha}\, \Psi(u-c) \int_{\mathcal M} V(D_t M_t)\, dt\, (1+o(1)) \tag{15} \]

as $u\to\infty$, where $M_t=M(T_t)$ with $T_t$ the tangent subspace taken to $\mathcal M$ at the point $t$, and $dt$ is an element of volume of $\mathcal M$.


Theorem 3 (The Borell--Sudakov--Tsirelson inequality). Let $X(t)$, $t\in T$, be a measurable Gaussian field on an arbitrary set $T$, and let the numbers $\sigma$, $m$, $a$ be defined by the relations

\[ \sigma^2 = \sup_{t\in T} \mathrm{Var}\,X(t) < \infty, \qquad m = \sup_{t\in T} E X(t) < \infty, \]

and

\[ P\Big\{ \sup_{t\in T} \big( X(t)-EX(t) \big) \ge a \Big\} \le \frac12. \tag{17} \]

Then for any $x$,

\[ P\Big\{ \sup_{t\in T} X(t) > x \Big\} \le 2\Psi\Big( \frac{x-m-a}{\sigma} \Big). \tag{18} \]
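A quick numerical illustration of (17)-(18) (our sketch, not part of the paper): for standard Brownian motion on $[0,1]$ one has $\sigma^2=1$, $m=0$, and $a=0.68$ satisfies (17) since $P\{\sup B>a\}=2\Psi(0.68)\approx 0.497\le 1/2$ by the reflection principle; the empirical tail of a random-walk approximation then sits below the bound (18).

```python
import numpy as np
from math import erfc, sqrt

def Psi(x):
    """Standard normal tail Psi(x) = P{N(0,1) > x}."""
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(0)
n_paths, n_steps, x = 10_000, 500, 2.0
incr = rng.normal(0.0, 1.0 / sqrt(n_steps), size=(n_paths, n_steps))
paths = np.cumsum(incr, axis=1)                 # random-walk approximation of B(t)
p_emp = float((paths.max(axis=1) > x).mean())

sigma, m, a = 1.0, 0.0, 0.68                    # 2*Psi(0.68) <= 1/2, so (17) holds
bound = 2.0 * Psi((x - m - a) / sigma)          # right-hand side of (18)
print(p_emp, 2.0 * Psi(x), bound)               # empirical <= exact <= Borell bound
```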

Theorem 4 (Slepian inequality). Let $X(t)$, $Y(t)$, $t\in T$, be separable Gaussian fields on an arbitrary set $T$, and suppose that for all $t,s\in T$,

\[ \mathrm{Var}\,X(t)=\mathrm{Var}\,Y(t), \qquad EX(t)=EY(t), \tag{19} \]

and

\[ \mathrm{Cov}(X(t),X(s)) \le \mathrm{Cov}(Y(t),Y(s)). \]

Then for all $x$,

\[ P\Big\{ \sup_{t\in T} X(t) > x \Big\} \ge P\Big\{ \sup_{t\in T} Y(t) > x \Big\}. \tag{20} \]
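The Slepian inequality can be checked on a toy example (ours, not the paper's): two centered unit-variance bivariate Gaussian pairs with ordered correlations; the pair with the larger correlation has the stochastically smaller maximum.

```python
import numpy as np

rng = np.random.default_rng(42)
n, x = 200_000, 1.0

def sup_tail(rho):
    """Monte Carlo estimate of P{max(Z1, Z2) > x} for a correlated standard pair."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    return float((np.maximum(z1, z2) > x).mean())

p_low, p_high = sup_tail(0.2), sup_tail(0.8)  # Cov_X <= Cov_Y
print(p_low, p_high)  # Slepian: P{sup X > x} >= P{sup Y > x}
```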

Theorem 5. Let $\mathcal M\subset R^n$ be a smooth $k$-dimensional compact manifold and let $X(t)$, $t\in\mathcal M$, be a Gaussian locally $(\alpha,D_t)$-stationary field, with a continuous matrix function $D_t$ and $r(t,s)<1$ for all $t,s\in\mathcal M$, $t\ne s$. Let the expectation $m(t)=EX(t)$ be continuous and attain its maximum over $\mathcal M$ at a single point $t_0$, with

\[ m(t) = m(t_0) - (t-t_0)^\top B (t-t_0) + O(\|t-t_0\|^{2+\gamma}) \quad \text{as } t\to t_0 \tag{21} \]

for some $\gamma>0$ and a positive definite matrix $B$. Then

\[ P\Big\{ \sup_{t\in\mathcal M} X(t) > u \Big\} = \frac{\pi^{k/2}}{\sqrt{\det(M^\top B M)}}\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))(1+o(1)) \tag{22} \]

as $u\to\infty$, where $M=M(T_{t_0})$ and $T_{t_0}$ is the tangent subspace to $\mathcal M$ taken at the point $t_0$.

Theorem 6. Let $\mathcal M\subset R^n$ be a smooth $k$-dimensional compact manifold, and let $X(t)$ be a Gaussian field with a.s. continuously differentiable paths, $\mathrm{Var}\,X(t)=1$ for all $t\in\mathcal M$ and $r(t,s)<1$ for all $t,s\in\mathcal M$, $t\ne s$. Let the expectation $m(t)=EX(t)$ be the same as in Theorem 5. Then

\[ P\Big\{ \sup_{t\in\mathcal M} X(t) > u \Big\} = \frac{V\big( (A_{t_0}/2)^{1/2} M \big)}{\sqrt{\det(M^\top B M)}}\, u^{k/2}\, \Psi(u-m(t_0))(1+o(1)) \tag{23} \]

as $u\to\infty$, with $M$ as in Theorem 5 and $A_{t_0}$ the covariance matrix of the orthogonal projection of the gradient $\mathrm{grad}\,X(t_0)$ onto the tangent subspace $T_{t_0}$ to $\mathcal M$ taken at the point $t_0$.

3 Proofs

Proof of Lemma 1. First, if one changes $g_u$ to $g_{u\nu(u)}$, the lemma immediately follows from Lemma 6.1, [7]. Second, observe that we can write $g_u T = g_{u\nu(u)}(I_u T)$, where $I_u$ is a linear transformation which is a superposition of homotheties with coefficients tending to one as $u\to\infty$, so that $I_u T$ tends to $T$ in the Euclidean distance. Third, note that $H_\alpha(T)$ is continuous in $T$ in the topology generated by the Euclidean distance. To prove that, observe that $\chi$ is a.s. continuous and $H_\alpha(T)\le H_\alpha(K)<\infty$ for all $T\subset K$, and use the dominated convergence theorem. These observations imply the assertion of the Lemma. $\Box$

Proof of Theorem 5. Let $\mathcal M_0$ be a neighbourhood of $t_0$ in $\mathcal M$ which is one-to-one projected onto the tangent subspace $T_{t_0}$. We denote by $P$ the corresponding one-to-one projector, so that $P\mathcal M_0$ is the image of $\mathcal M_0$. Define on $P\mathcal M_0$ a field $\tilde X(\tilde t)=X(t)$, $\tilde t=Pt$. It is clear that $E\tilde X(\tilde t)=m(t)=m(P^{-1}\tilde t)=:\tilde m(\tilde t)$. We denote by $\tilde r(\tilde t,\tilde s)=r(t,s)$ the covariance function of $\tilde X$. Due to the local stationary structure of $X$, for any $\varepsilon>0$ one can find $\delta_0=\delta_0(\varepsilon)>0$ such that for all $\tilde t_1,\tilde t_2\in T_{t_0}\cap S(\delta_0,t_0)$, where $S(\delta_0,t_0)$ is the ball of radius $\delta_0$ centered at $t_0$,

\[ \exp\big\{ -(1+\varepsilon)\|D_{t_0}(\tilde t_1-\tilde t_2)\|^\alpha \big\} \le \tilde r(\tilde t_1,\tilde t_2) \le \exp\big\{ -(1-\varepsilon)\|D_{t_0}(\tilde t_1-\tilde t_2)\|^\alpha \big\}. \tag{24} \]

We also can assume $\delta_0$ to be so small that we may let $\mathcal M_0=P^{-1}[T_{t_0}\cap S(\delta_0,t_0)]$ and think of $P$ as a diffeomorphism. Set $\mathcal M_1=\mathcal M\setminus\mathcal M_0$. Since $m(t)$ is continuous,

\[ \sup_{t\in\mathcal M_1} m(t) = m(t_0) - c_0 \]

with $c_0>0$. By Theorem 2, for $X_0(t)=X(t)-m(t)$ we have,

\[ P\Big\{ \sup_{t\in\mathcal M_1} X(t)>u \Big\} = P\Big\{ \sup_{t\in\mathcal M_1} X_0(t)+m(t)>u \Big\} \le P\Big\{ \sup_{t\in\mathcal M_1} X_0(t)>u-m(t_0)+c_0 \Big\} = H_\alpha^{(k)} u^{2k/\alpha}\, \Psi(u-m(t_0)+c_0) \int_{\mathcal M_1} V(D_t M_t)\,dt\,(1+o(1)) = o\big( \Psi(u-m(t_0)+c_1) \big) \tag{25} \]

for any $c_1$ with $0<c_1<c_0$. Thus it remains to evaluate the probability

\[ P\Big\{ \sup_{t\in\mathcal M_0} X(t)>u \Big\} = P\Big\{ \sup_{\tilde t\in P\mathcal M_0} \tilde X(\tilde t)>u \Big\}. \tag{26} \]

Introduce a homogeneous centered Gaussian field $X_H(t)$, $t\in R^n$, with covariance function

\[ r_H(t) = \exp\big\{ -(1+2\varepsilon)\|D_{t_0} t\|^\alpha \big\}. \]

By (24) and the Slepian inequality (Theorem 4),

\[ P\Big\{ \sup_{\tilde t\in P\mathcal M_0} \tilde X(\tilde t)>u \Big\} \le P\Big\{ \sup_{\tilde t\in P\mathcal M_0} X_H(\tilde t)+\tilde m(\tilde t)>u \Big\}. \tag{27} \]

It is clear that without loss of generality we may assume that the tangent plane $T_{t_0}$ is a linear subspace and $t_0=\tilde t_0=0$. From this point on we restrict ourselves to the $k$-dimensional subspace $T_{t_0}$ and drop the tilde. Let now $S=S(0,\rho)$ be the ball in $T_{t_0}$ of radius

\[ \rho=\rho(u)=u^{-1/2}\log^{1/2} u \]

centered at the origin; this choice will become clear later on. For all sufficiently large $u$ we have $S\subset P\mathcal M_0$, and there exists a positive $c_1$ such that

\[ P\Big\{ \sup_{v\in S^c\cap P\mathcal M_0} X_H(v)+\tilde m(v)>u \Big\} \le P\Big\{ \sup_{v\in S^c\cap P\mathcal M_0} X_H(v)>u-\tilde m(t_0)+c_1\rho^2(u) \Big\} \le P\Big\{ \sup_{v\in P\mathcal M_0} X_H(v)>u-\tilde m(t_0)+c_1\rho^2(u) \Big\}. \tag{28} \]

Applying Theorem 1 to the latter probability and making elementary calculations, we get

\[ P\Big\{ \sup_{v\in S^c\cap P\mathcal M_0} X_H(v)+\tilde m(v)>u \Big\} = o\big( \Psi(u-m(t_0)) \big) \quad \text{as } u\to\infty. \tag{29} \]

Turn now to the ball $S$. Let $v_1=(v_{11},\dots,v_{n1}),\dots,v_k=(v_{1k},\dots,v_{nk})$ be an orthonormal basis in $T_{t_0}$ given in the coordinates of $R^n$. In this basis, for $T>0$ consider the cubes

\[ \Delta_l = u^{-2/\alpha} \prod_{\nu=1}^{k} [\,l_\nu T,(l_\nu+1)T\,], \qquad l=(l_1,\dots,l_k)\in Z^k, \]

so that, in particular, $\Delta_0=u^{-2/\alpha}[0,T]^k$.

We have,

\[ \sum_{i\in L} P\{A_i\} - \sum_{i,j\in L_0,\ i\ne j} P\{A_iA_j\} \le P\Big\{ \sup_{v\in S} X_H(v)+\tilde m(v)>u \Big\} \le \sum_{i\in L_0} P\{A_i\}, \tag{30} \]

where $A_i=\big\{ \sup_{v\in\Delta_i} X_H(v)+\tilde m(v)>u \big\}$, $L_0$ is the set of all $i=l$ with $\Delta_i\cap S\ne\emptyset$, and $L$ is the set of all $i=l$ with $\Delta_i\subset S$.

For every $i\in L_0$,

\[ P\Big\{ \sup_{v\in\Delta_i} X_H(v)+\tilde m(v)>u \Big\} \le P\Big\{ \sup_{v\in\Delta_i} X_H(v)+m(t_0)-\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+w_1(u)>u \Big\}. \tag{31} \]

Here $uw_1(u)\to 0$ as $u\to\infty$, because of (21) and the choice of $\rho(u)$. By Lemma 1 and the equivalence

\[ t=\tilde t+O(\|\tilde t\|^2) \quad \text{as } \tilde t\to 0 \]

(recall that we have assumed $t_0=\tilde t_0=0$), there exists a function $\gamma_1(u)$, with $\gamma_1(u)\to 0$ as $u\to\infty$, such that for all sufficiently large $u$ and every $i\in L_0$,

\[ P\Big\{ \sup_{v\in\Delta_i} X_H(v)+\tilde m(v)>u \Big\} \le (1+\gamma_1(u))\, H_\alpha\big( (1+\varepsilon)^{1/\alpha} D_{t_0}[0,T]^k \big)\, \Psi\Big( u-m(t_0)+\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+w_1(u) \Big). \tag{32} \]

Using similar arguments, we get that there exists $\gamma_2(u)$ with $\gamma_2(u)\to 0$ as $u\to\infty$, such that for all sufficiently large $u$ and every $i\in L$,

\[ P\Big\{ \sup_{v\in\Delta_i} X_H(v)+\tilde m(v)>u \Big\} \ge (1-\gamma_2(u))\, H_\alpha\big( (1+\varepsilon)^{1/\alpha} D_{t_0}[0,T]^k \big)\, \Psi\Big( u-m(t_0)+\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+w_2(u) \Big), \tag{33} \]

where $uw_2(u)\to 0$ as $u\to\infty$.

Now, in accordance with (30), we sum the right-hand parts of (32) and (33) over $L_0$ and $L$, respectively. Using (7), we get for all sufficiently large $u$,

\[ \sum_{i\in L_0} \Psi\Big( u-m(t_0)+\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+w_1(u) \Big) \le (1+\delta_1(u))\, \Psi(u-m(t_0)) \sum_{i\in L_0} \exp\Big\{ -u\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+o(1/u) \Big\}, \tag{34} \]

where $\delta_1(u)\to 0$ as $u\to\infty$. Changing variables $w=\sqrt u\,t$ and using the dominated convergence theorem, we get

\[ \sum_{i\in L_0} \exp\Big\{ -u\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+o(1/u) \Big\} = T^{-k} \int_{T_{t_0}} \exp\{ -\langle Bw,w\rangle \}\,dw\; u^{2k/\alpha-k/2}\,(1+o(1)) \tag{35} \]


as $u\to\infty$. Note that $dw$ means here the element of $k$-dimensional volume in $T_{t_0}$. Similarly,

\[ \sum_{i\in L} \exp\Big\{ -u\min_{v\in\Delta_i}\|\sqrt B\,v\|^2+o(1/u) \Big\} = T^{-k} \int_{T_{t_0}} \exp\{ -\langle Bw,w\rangle \}\,dw\; u^{2k/\alpha-k/2}\,(1+o(1)) \tag{36} \]

as $u\to\infty$. In order to compute the integral $\int_{T_{t_0}} \exp\{-\langle Bw,w\rangle\}\,dw$ we note that $w=Mt$, where $t$ denotes the vector $w$ presented in the orthogonal coordinate system of $T_{t_0}$, so that

\[ \int_{T_{t_0}} \exp\{ -\langle Bw,w\rangle \}\,dw = \int_{R^k} \exp\{ -\langle BMt,Mt\rangle \}\,dt = \frac{\pi^{k/2}}{\sqrt{\det(M^\top BM)}} =: \tilde e. \tag{37} \]

Thus for all sufficiently large $u$,

\[ \sum_{i\in L_0} P\{A_i\} \le (1+\delta''_1(u))\, H_\alpha\big( (1+\varepsilon)^{1/\alpha} D_{t_0}[0,T]^k \big)\, \tilde e\, T^{-k} u^{2k/\alpha-k/2}\, \Psi(u-m(t_0)) \tag{38} \]

and

\[ \sum_{i\in L} P\{A_i\} \ge (1-\delta''_1(u))\, H_\alpha\big( (1+\varepsilon)^{1/\alpha} D_{t_0}[0,T]^k \big)\, \tilde e\, T^{-k} u^{2k/\alpha-k/2}\, \Psi(u-m(t_0)), \tag{39} \]

where $\delta''_1(u)\to 0$ as $u\to\infty$.
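The Gaussian integral appearing in (37) can be verified numerically (our sketch, with an arbitrary positive definite $B$ and a randomly chosen plane in $R^3$ playing the role of $T_{t_0}$).

```python
import numpy as np

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.normal(size=(3, 2)))   # transition matrix of a 2-plane in R^3
B = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 3.0]])                # an arbitrary positive definite B

A = M.T @ B @ M                                # the k x k matrix M^T B M (k = 2)
s = np.linspace(-6.0, 6.0, 801)
h = s[1] - s[0]
w = np.ones_like(s)
w[0] = w[-1] = 0.5                             # trapezoid weights in each direction
t1, t2 = np.meshgrid(s, s, indexing="ij")
q = A[0, 0] * t1**2 + 2.0 * A[0, 1] * t1 * t2 + A[1, 1] * t2**2
lhs = float((np.exp(-q) * np.outer(w, w)).sum() * h * h)   # integral of exp(-<BMt,Mt>)
rhs = float(np.pi / np.sqrt(np.linalg.det(A)))             # pi^{k/2}/sqrt(det(M^T B M))
print(lhs, rhs)
```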

Now we are in a position to analyze the double sum in the left-hand part of (30). We begin with the estimation of a single term

\[ P\Big\{ \sup_{t\in\Delta_1} X_H(t)+\tilde m(t)>u,\ \sup_{t\in\Delta_2} X_H(t)+\tilde m(t)>u \Big\} \]

with

\[ \Delta_1 = u^{-2/\alpha}\prod_{\nu=1}^{k}[S_{1\nu},T_{1\nu}], \qquad \Delta_2 = u^{-2/\alpha}\prod_{\nu=1}^{k}[S_{2\nu},T_{2\nu}]. \]

Set $\mu(u)=1-c(u)/u$, where $c(u)=\max\{ \sup_{t\in\Delta_1}\tilde m(t),\ \sup_{t\in\Delta_2}\tilde m(t) \}$. Then

\[ P\Big\{ \sup_{t\in\Delta_1} X_H(t)+\tilde m(t)>u,\ \sup_{t\in\Delta_2} X_H(t)+\tilde m(t)>u \Big\} \le P\Big\{ \sup_{t\in\Delta_1} X_H(t)>u\mu(u),\ \sup_{t\in\Delta_2} X_H(t)>u\mu(u) \Big\}. \tag{40} \]

Introduce the scaled field $\xi(t)=X_H\big( (1+2\varepsilon)^{-1/\alpha} D_{t_0}^{-1} g_u t \big)$. Note that

\[ P\Big\{ \sup_{t\in\Delta_1} X_H(t)>u\mu(u),\ \sup_{t\in\Delta_2} X_H(t)>u\mu(u) \Big\} = P\Big\{ \sup_{t\in K'_1} \xi(t)>u\mu(u),\ \sup_{t\in K'_2} \xi(t)>u\mu(u) \Big\}, \tag{41} \]

where $K'_i=(1+2\varepsilon)^{1/\alpha}D_{t_0}K_i$ with $K_i=g_u^{-1}\Delta_i$, $i=1,2$. For the covariance function of $\xi$ we have

\[ r_\xi(t) = 1-\|t\|^\alpha+o(\|t\|^\alpha) \quad \text{as } t\to 0. \tag{42} \]

Consider on $K'_1\times K'_2$ the Gaussian field

\[ Y(t,s)=\xi(t)+\xi(s). \tag{43} \]

Its variance is $\sigma^2_Y(t,s)=2+2r_\xi(t-s)$, hence there exists $\varepsilon_0>0$ such that for all $t\in K'_1$, $s\in K'_2$ with $\|t-s\|<\varepsilon_0$,

\[ 4-4\|t-s\|^\alpha \le \sigma^2_Y(t,s) \le 4-\|t-s\|^\alpha. \tag{44} \]

It follows that

\[ \inf_{(t,s)\in K'_1\times K'_2} \sigma^2_Y(t,s) \ge 4-4\varepsilon_0 > 2 \tag{45} \]

provided $\varepsilon_0$ is sufficiently small, and

\[ \sup_{(t,s)\in K'_1\times K'_2} \sigma^2_Y(t,s) \le 4-u^{-2}(1+2\varepsilon)\,d(K_1,K_2) =: h(u,K_1,K_2), \tag{46} \]

where $d(K_1,K_2)=\inf_{t\in K_1,\,s\in K_2}\|t-s\|^\alpha$.

2 (t s) 4 ; u ;2 (1 + 2") (K1K 2)=:h(u K 1K 2) (46)

For the standardised eld Y (t s) =Y (t s)= (t s) we have,

P

8

<

: sup

(ts)2K 0

1

P

8

<

K 0

2

: sup

(ts)2K 0

1

Y (t s) > 2u (u)

K 0

2

9

=

Y (t s) > 2u (u)h ;1=2 (u K 1K 2)

9

9

=

: (47)

Algebraic calculations give

\[ E\big( \bar Y(t,s)-\bar Y(t_1,s_1) \big)^2 \le 16\big( \|t-t_1\|^\alpha+\|s-s_1\|^\alpha \big). \tag{48} \]

Let $\eta_1(t)$, $\eta_2(t)$, $t\in R^n$, be two independent identically distributed homogeneous Gaussian fields with expectations equal to zero and covariance functions equal to

\[ r_\eta(t)=\exp\big( -32\|t\|^\alpha \big). \]

The field

\[ \eta(t,s)=\frac{1}{\sqrt 2}\big( \eta_1(t)+\eta_2(s) \big), \qquad (t,s)\in R^n\times R^n, \]

is homogeneous, and its covariance function is

\[ r_\eta(t,s)=\frac12\big( \exp(-32\|t\|^\alpha)+\exp(-32\|s\|^\alpha) \big). \tag{49} \]

Since by (48) the covariance function $r_{\bar Y}(t,s;t_1,s_1)$ of $\bar Y$ satisfies, for all $(t,s),(t_1,s_1)\in K'_1\times K'_2$,

\[ r_{\bar Y}(t,s;t_1,s_1) \ge 1-8\big( \|t-t_1\|^\alpha+\|s-s_1\|^\alpha \big) \ge r_\eta(t-t_1,s-s_1), \tag{50} \]

by the Slepian inequality,

\[ P\Big\{ \sup_{(t,s)\in K'_1\times K'_2} \bar Y(t,s)>2u\mu(u)h^{-1/2}(u,K_1,K_2) \Big\} \le P\Big\{ \sup_{(t,s)\in K'_1\times K'_2} \eta(t,s)>2u\mu(u)h^{-1/2}(u,K_1,K_2) \Big\}. \tag{51} \]

Further, for sufficiently large $u$,

\[ 4u^2\mu^2(u)\,h^{-1}(u,K_1,K_2) \ge u^2\mu^2(u)+\frac{(1+2\varepsilon)\,d(K_1,K_2)}{5}. \tag{52} \]

Using the last two relations, Lemma 1 and (7), we get

\[ P\Big\{ \sup_{t\in\Delta_1} X_H(t)+\tilde m(t_0+t)>u,\ \sup_{t\in\Delta_2} X_H(t)+\tilde m(t_0+t)>u \Big\} \le C\,\Psi(u\mu(u))\, H_\alpha\big( 16(D_{t_0}K_1\times D_{t_0}K_2) \big) \exp\Big\{ -\frac{d(K_1,K_2)}{10} \Big\} \le C_1 \prod_{\nu=1}^{k}(T_{1\nu}-S_{1\nu}) \prod_{\nu=1}^{k}(T_{2\nu}-S_{2\nu}) \exp\Big\{ -\frac{d(K_1,K_2)}{10} \Big\}\, \Psi(u\mu(u)), \tag{53} \]

which holds for all sufficiently large $u$ with a constant $C_1$ independent of the cubes. In order to estimate $H_\alpha\big( 16(D_{t_0}K_1\times D_{t_0}K_2) \big)$ we use Lemmas 6.4 and 6.2 from [6].


Now turn to the double sum $\sum_{i,j\in L_0} P(A_iA_j)$. We break it into two sums. The first one, denote it by $\Sigma_1$, is the sum over all pairs of non-neighbouring cubes, that is, pairs with at least one cube lying between them; the second one, denote it by $\Sigma_2$, is the sum over all pairs of neighbouring cubes. Denote

\[ x_i = \min_{t\in\Delta_i} \|\sqrt B\,t\|, \qquad i\in L_0. \]

Using (53) we get,

\[ P(A_iA_j) \le C_1 T^{2k} \exp\Big\{ -\frac{T^\alpha}{10}\Big( \max_{1\le\nu\le k}|i_\nu-j_\nu|-1 \Big)^{\alpha} \Big\}\, \Psi(u\mu(u)) =: \psi_{ij}, \tag{54} \]

where $\mu(u)=1-c(u)/u$, $c(u)=\max\{ \max_{t\in\Delta_i}\tilde m(t_0+t),\ \max_{t\in\Delta_j}\tilde m(t_0+t) \}$. This estimation holds for all members of $\Sigma_1$. Summing the $\psi_{ij}$ over all pairs of non-neighbouring cubes and approximating the sum by an integral, we get for all sufficiently large $u$,

\[ \Sigma_1 \le C_1 T^{k}\exp\{-T^\alpha/10\}\,\tilde e\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0)). \tag{55} \]

Turn to $\Sigma_2$. For $i\in L_0$ denote

\[ \Delta'_i = u^{-2/\alpha}\Big( [\,i_1T,\ i_1T+\sqrt T\,] \times \prod_{\nu=2}^{k} [\,i_\nu T,(i_\nu+1)T\,] \Big), \qquad \Delta''_i = \Delta_i\setminus\Delta'_i. \]

Clearly, for neighbouring $\Delta_i$, $\Delta_j$,

\[ P\{A_iA_j\} \le P\Big\{ \sup_{t\in\Delta'_i} X_H(t)>u\mu(u) \Big\} + P\Big\{ \sup_{t\in\Delta''_i} X_H(t)>u\mu(u),\ \sup_{t\in\Delta_j} X_H(t)>u\mu(u) \Big\}. \tag{56} \]

Using now Lemma 1, (56), (53) and approximating the sum by an integral, we get for all sufficiently large $u$,

\[ \Sigma_2 \le C_2 T^{k-1/2}\,\tilde e\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0)) + C_3 T^{k}\,\tilde e\, u^{2k/\alpha-k/2} \exp\Big\{ -\frac{T^\alpha}{10} \Big\}\, \Psi(u-m(t_0)). \tag{57} \]

Taking into account (38), (39), (55) and (57), we get for all positive $T$,

\[ \frac{H_\alpha\big( (1+2\varepsilon)^{1/\alpha}D_{t_0}[0,T]^k \big)}{T^k} - C_1T^k\exp\Big\{-\frac{T^\alpha}{10}\Big\} - C_2T^{-1/2} - C_3T^k\exp\Big\{-\frac{T^\alpha}{10}\Big\} \le \liminf_{u\to\infty} \frac{P\{ \sup_{t\in S} X_H(t)+\tilde m(t_0+t)>u \}}{\tilde e\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))} \le \limsup_{u\to\infty} \frac{P\{ \sup_{t\in S} X_H(t)+\tilde m(t_0+t)>u \}}{\tilde e\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))} \le \frac{H_\alpha\big( (1+2\varepsilon)^{1/\alpha}D_{t_0}[0,T]^k \big)}{T^k}. \tag{58} \]

Now, letting $T$ go to infinity and using (29), we obtain that

\[ P\Big\{ \sup_{\tilde t\in S} X_H(\tilde t)+\tilde m(\tilde t)>u \Big\} = (1+2\varepsilon)^{k/\alpha}\, \tilde e\, V(D_{t_0}M)\, H_\alpha^{(k)}\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))(1+o(1)) \tag{59} \]

as $u\to\infty$.

Let now $X_H(t)$, $t\in R^n$, be a homogeneous Gaussian field with covariance function $r_H(t)=\exp\big(-(1-2\varepsilon)\|D_{t_0}t\|^\alpha\big)$. From Theorem 4 we have,

\[ P\Big\{ \sup_{\tilde t\in S} \tilde X(\tilde t)>u \Big\} \ge P\Big\{ \sup_{\tilde t\in S} X_H(\tilde t)+\tilde m(\tilde t)>u \Big\}. \tag{60} \]

Repeating the above estimations for the latter probability, we get,

\[ P\Big\{ \sup_{\tilde t\in S} X_H(\tilde t)+\tilde m(\tilde t)>u \Big\} = (1-2\varepsilon)^{k/\alpha}\, \tilde e\, V(D_{t_0}M)\, H_\alpha^{(k)}\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))(1+o(1)) \tag{61} \]

as $u\to\infty$.

Now we collect (25), (27), (59), (60) and (61), and get

\[ (1-2\varepsilon)^{k/\alpha} \le \liminf_{u\to\infty} \frac{P\{ \sup_{t\in\mathcal M} X(t)>u \}}{\tilde e\, V(D_{t_0}M)\, H_\alpha^{(k)}\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))} \le \limsup_{u\to\infty} \frac{P\{ \sup_{t\in\mathcal M} X(t)>u \}}{\tilde e\, V(D_{t_0}M)\, H_\alpha^{(k)}\, u^{2k/\alpha-k/2}\, \Psi(u-m(t_0))} \le (1+2\varepsilon)^{k/\alpha}. \tag{62} \]

Since $\varepsilon>0$ is arbitrary, the assertion of Theorem 5 follows. $\Box$

Proof of Theorem 6. We deduce Theorem 6 from Theorem 5. Using the Taylor expansion, we get

\[ \tilde X(\tilde t)=X(t)=X(t_0)+(\mathrm{grad}\,X(t_0))^\top(t-t_0)+o(\|t-t_0\|), \qquad t\to t_0. \tag{63} \]

From here it follows that

\[ \tilde X(\tilde t)-\tilde X(\tilde t_0)=(\widetilde{\mathrm{grad}}\,X(t_0))^\top(\tilde t-\tilde t_0)+o(\|\tilde t-\tilde t_0\|), \qquad \tilde t\to\tilde t_0, \tag{64} \]

where $\widetilde{\mathrm{grad}}\,X(t_0)$ is the orthogonal projection of $\mathrm{grad}\,X(t_0)$ onto the tangent subspace $T_{t_0}$ to $\mathcal M$ at the point $t_0$. From (64), by algebraic calculations it follows that

\[ \tilde r(\tilde t-\tilde t_0)=1-\frac12(\tilde t-\tilde t_0)^\top A_{t_0}(\tilde t-\tilde t_0)+o(\|\tilde t-\tilde t_0\|^2), \qquad \tilde t\to\tilde t_0, \tag{65} \]

where $A_{t_0}$ is the covariance matrix of $\widetilde{\mathrm{grad}}\,X(t_0)$. Note that the matrix $(A_{t_0}/2)^{1/2}$ is just the matrix $D_{t_0}$ from Theorem 5. Now the assertion follows from the proof of Theorem 5. $\Box$


References

[1] Adler, Robert J., On excursion sets, tube formulae, and maxima of random fields. Manuscript, Technion, Israel, 1998.

[2] Belyaev, Yu. K., Piterbarg, V. I., Asymptotic behaviour of the mean number of A-points of excursions of Gaussian fields above a high level. In: Excursions of Random Fields, Moscow University Press (in Russian).

[3] Fatalov, V. R., Piterbarg, V. I., The Laplace method for probability measures in Banach spaces. Russian Math. Surveys, 1995, 50:6, 1151-1239.

[4] Mikhaleva, T. L., Piterbarg, V. I., On the distribution of the maximum of a Gaussian field with constant variance on a smooth manifold. Probability Theory and Appl., 1996, 41, No. 2, 438-451.

[5] Borell, C., The Brunn-Minkowski inequality in Gauss space. Invent. Math., 1976, 30, 207-216.

[6] Pickands, J., III, Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc., 1969, 145, 51-73.

[7] Piterbarg, V. I., Asymptotic Methods in the Theory of Gaussian Processes and Fields. Translations of Mathematical Monographs, Vol. 148, AMS, Providence, Rhode Island, 1996.

[8] Piterbarg, V. I., Tyurin, Yu. N., Homogeneity testing of two samples: a Gaussian field on a sphere. Math. Methods of Statistics, 1993.

[9] Piterbarg, V. I., Tyurin, Yu. N., A multidimensional rank correlation: a Gaussian field on a torus. Probability Theory and Applications, 1999, to appear.

[10] Slepian, D., The one-sided barrier problem for Gaussian noise. Bell System Tech. J., 1962, 41, 463-501.
