On maximum of Gaussian non-centered fields indexed on smooth manifolds.

Vladimir Piterbarg

Moscow Lomonosov State University

Faculty of Mechanics and Mathematics

Moscow 119899 Russia

e-mail: piter@mech.math.msu.su

Sinisha Stamatovich

University of Montenegro

Cetinjski put bb

81000 Podgorica, Yugoslavia

e-mail: sins.pmf@rc.pmf.cg.ac.yu

06 November, 1998

Key words and phrases. Gaussian fields, large excursions, maximum tail distribution, exact asymptotics.

1991 Mathematics Subject Classification. 60G15, 60G60.

Research partially supported by DFG-RFFI grant 436RUS113/467/7 and RFFI Grants 97-01-00648, 98-01-00524.



Abstract

The double sum method for evaluating probabilities of large deviations of Gaussian processes with non-zero expectations is developed. The asymptotic behavior of the tail of non-centered locally stationary Gaussian fields indexed on smooth manifolds is evaluated. In particular, smooth Gaussian fields on smooth manifolds are considered.

1 Introduction

The double-sum method is one of the main tools in studying the asymptotic behavior of the distribution of maxima of Gaussian processes and fields; see [1], [7], [3] and references therein. Until recently only centered processes had been considered. It can be seen from [7] and the present paper that the investigation of non-centered Gaussian fields can be performed with similar techniques, which, however, are far from trivial. Furthermore, there are examples in which the need for the asymptotic behaviour for non-centered fields arises. In [8], [9] statistical procedures have been introduced to test non-parametric hypotheses for multi-dimensional distributions. The asymptotic decision rules are based on tail distributions of maxima of Gaussian fields indexed on spheres or products of spheres. In order to estimate the power of these procedures, one needs the asymptotic behaviour of the tail maxima distributions for non-centered Gaussian fields.

In this paper we extend the double sum method to study Gaussian processes with non-zero expectations. We evaluate the asymptotic behavior of the tail of a non-centered locally $(\alpha, D_t)$-stationary Gaussian field indexed on a smooth manifold, as defined below. In particular, smooth Gaussian fields on smooth manifolds are considered.

2 Definitions, auxiliary results, main results

Let a collection $\alpha_1, \ldots, \alpha_k$ of positive numbers be given, as well as a collection $l_1, \ldots, l_k$ of positive integers such that $\sum_{i=1}^{k} l_i = n$. We set $l_0 = 0$. These two collections are called a structure, [7]. For any vector $t = (t_1, \ldots, t_n)^\top$ its structural module is defined by
$$
|t| = \sum_{i=1}^{k} \Biggl(\sum_{j=E(i-1)+1}^{E(i)} t_j^2\Biggr)^{\alpha_i/2}, \qquad (1)
$$
where $E(i) = \sum_{j=0}^{i} l_j$, $i = 1, \ldots, k$. The structure defines a decomposition of the space $\mathbb{R}^n$ into the direct sum $\mathbb{R}^n = \bigoplus_{i=1}^{k} \mathbb{R}^{l_i}$, such that the restriction of the structural module to each $\mathbb{R}^{l_i}$ is just the Euclidean norm raised to the power $\alpha_i$, $i = 1, \ldots, k$. For $u > 0$ denote by $G^i_u$ the homothety of the subspace $\mathbb{R}^{l_i}$ with coefficient $u^{-2/\alpha_i}$, $i = 1, \ldots, k$, and by $g_u$ the superposition of these homotheties, $g_u = \prod_{i=1}^{k} G^i_u$. It is clear that for any $t \in \mathbb{R}^n$,
$$
|g_u t| = u^{-2}\,|t|. \qquad (2)
$$
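For instance (a worked check added here, not part of the original text), take the structure $k = 2$, $l_1 = 1$, $l_2 = 2$, so that $n = 3$; then (2) can be verified directly:

```latex
% Structural module: |t| = |t_1|^{\alpha_1} + (t_2^2 + t_3^2)^{\alpha_2/2}.
% g_u scales t_1 by u^{-2/\alpha_1} and (t_2, t_3) by u^{-2/\alpha_2}, hence
|g_u t| = \bigl(u^{-2/\alpha_1}\bigr)^{\alpha_1} |t_1|^{\alpha_1}
        + \bigl(u^{-2/\alpha_2}\bigr)^{\alpha_2} \bigl(t_2^2 + t_3^2\bigr)^{\alpha_2/2}
        = u^{-2}\,|t_1|^{\alpha_1} + u^{-2}\,\bigl(t_2^2 + t_3^2\bigr)^{\alpha_2/2}
        = u^{-2}\,|t| .
```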



Let $\xi(t)$, $t \in \mathbb{R}^n$, be a Gaussian field with continuous paths whose expected value and covariance function are given by
$$
\mathbf{E}\,\xi(t) = -|t|, \qquad \operatorname{Cov}\bigl(\xi(t), \xi(s)\bigr) = |t| + |s| - |t - s|, \qquad (3)
$$
respectively. Thus $\xi(t)$ can be represented as a sum of independent multi-parameter drifted fractional Brownian motions (Lévy-Schoenberg fields) indexed on $\mathbb{R}^{l_i}$, with parameters $\alpha_i$.

To proceed, we need a generalization of the Pickands constant. Define the functional on measurable subsets of $\mathbb{R}^n$,
$$
H_\alpha(B) = \mathbf{E}\exp\Bigl(\sup_{t \in B} \xi(t)\Bigr). \qquad (4)
$$

Let $D$ be a non-degenerate $n \times n$ matrix; throughout we make no notational difference between a matrix and the corresponding linear transformation. Next, for any $S > 0$, we denote by
$$
[0, S]^k = \{t :\ 0 \le t_i \le S,\ i = 1, \ldots, k;\ t_i = 0,\ i = k+1, \ldots, n\}
$$
a cube of dimension $k$ generated by the first $k$ coordinates in $\mathbb{R}^n$. In [2] it is proved that there exists a positive limit
$$
H_\alpha^{(k)} = \lim_{S \to \infty} S^{-k} H_\alpha\bigl([0, S]^k\bigr) > 0.
$$
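Two special cases of these constants are known in closed form (a remark added for orientation; see [6] and [7]):

```latex
% One-dimensional Pickands constants:
H_1 = 1, \qquad H_2 = \frac{1}{\sqrt{\pi}} .
% In dimension k, for \alpha = 2 one has
H_2^{(k)} = \pi^{-k/2},
% the value that links Theorem 5 (specialised to \alpha = 2) with Theorem 6.
```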


Definition 1 Let an $\alpha$-structure be given on $\mathbb{R}^n$. We say that $X(t)$, $t \in T \subset \mathbb{R}^n$, has a local $(\alpha, D_t)$-stationary structure, or that $X(t)$ is locally $(\alpha, D_t)$-stationary, if for any $\varepsilon > 0$ there exists a positive $\delta(\varepsilon)$ such that for any $s \in T$ one can find a non-degenerate matrix $D_s$ such that the covariance function $r(t_1, t_2)$ of $X(t)$ satisfies
$$
1 - (1 + \varepsilon)\,|D_s(t_1 - t_2)| \le r(t_1, t_2) \le 1 - (1 - \varepsilon)\,|D_s(t_1 - t_2)|, \qquad (10)
$$
provided $\|t_1 - s\| < \delta(\varepsilon)$ and $\|t_2 - s\| < \delta(\varepsilon)$.
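As an illustration (added; not part of the original definition), take the one-block structure $k = 1$, $l_1 = n$, for which $|t| = \|t\|^{\alpha}$: any homogeneous field whose covariance satisfies the expansion (11) of Theorem 1 below is locally $(\alpha, D_t)$-stationary with the constant matrix $D_t \equiv D$.

```latex
% If r(t_1, t_2) = 1 - \|D(t_1 - t_2)\|^{\alpha} + o(\|D(t_1 - t_2)\|^{\alpha}),
% then for every \varepsilon > 0 there is \delta(\varepsilon) > 0 such that
1 - (1 + \varepsilon)\,\|D(t_1 - t_2)\|^{\alpha}
  \;\le\; r(t_1, t_2) \;\le\;
1 - (1 - \varepsilon)\,\|D(t_1 - t_2)\|^{\alpha}
% whenever \|t_1 - s\|, \|t_2 - s\| < \delta(\varepsilon): the o(\cdot) remainder
% is absorbed into the factors (1 \pm \varepsilon), which is exactly (10).
```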

It is convenient to cite here four theorems which we use, in forms suited to our purposes. Before that we need some notation. Let $L$ be a $k$-dimensional subspace of $\mathbb{R}^n$; for fixed orthogonal coordinate systems in $\mathbb{R}^n$ and in $L$, let $(x_1, \ldots, x_k)^\top$ be the coordinate representation of a point $x \in L$, and $(x'_1, \ldots, x'_n)^\top$ its coordinate representation in $\mathbb{R}^n$. Denote by $M = M(L)$ the corresponding transition matrix,
$$
(x'_1, \ldots, x'_n)^\top = M (x_1, \ldots, x_k)^\top,
$$
that is, $M = (\partial x'_i/\partial x_j,\ i = 1, \ldots, n,\ j = 1, \ldots, k)$.

Next, for a matrix $G$ of size $n \times k$ we denote by $V(G)$ the square root of the sum of squares of all minors of order $k$. This invariant transforms the volume when the dimension of vectors is changed, that is, $dt = V(G)^{-1}\, d(Gt)$. Note that since both coordinate systems in $L$ and $\mathbb{R}^n$ are orthogonal, $V(M) = 1$.
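A minimal illustration (added; the matrix is hypothetical): for $n = 2$, $k = 1$ the minors of order $1$ are simply the entries of the single column.

```latex
% G : \mathbb{R} \to \mathbb{R}^2, G = (a, b)^\top; minors of order 1: a and b.
V(G) = \sqrt{a^2 + b^2} .
% An interval of length s is mapped to a segment of length s\sqrt{a^2 + b^2},
% consistent with dt = V(G)^{-1}\, d(Gt).
```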

Theorem 1 (Theorem 7.1, [7]) Let $X(t)$, $t \in \mathbb{R}^n$, be a Gaussian homogeneous centered field such that for some $\alpha$, $0 < \alpha \le 2$, and a non-degenerate matrix $D$ its covariance function satisfies
$$
r(t) = 1 - \|Dt\|^{\alpha} + o(\|Dt\|^{\alpha}) \quad \text{as } t \to 0. \qquad (11)
$$
Then for any $k$, $0 < k \le n$, every subspace $L$ of $\mathbb{R}^n$ with $\dim L = k$, any Jordan set $A \subset L$, and every function $w(u)$ with $w(u)/u = o(1)$ as $u \to \infty$,
$$
\mathbf{P}\Bigl\{\sup_{t \in A} X(t) > u + w(u)\Bigr\}
= H_\alpha^{(k)}\, V(DM(L))\, \mathrm{mes}_L(A)\, u^{2k/\alpha}\, \Psi(u + w(u))\, (1 + o(1)) \qquad (12)
$$
as $u \to \infty$, provided
$$
r(t - s) < 1 \quad \text{for all } t, s \in \bar{A},\ t \ne s, \qquad (14)
$$
with $\bar{A}$ the closure of $A$.

Theorem 2 (Theorem 1, [4]) Let $X(t)$, $t \in \mathbb{R}^n$, be a Gaussian centered locally $(\alpha, D_t)$-stationary field, with $\alpha > 0$ and a continuous matrix function $D_t$. Let $\mathcal{M} \subset \mathbb{R}^n$ be a smooth compact of dimension $k$, $0 < k \le n$. Then for any constant $c$,
$$
\mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}} X(t) > u - c\Bigr\}
= H_\alpha^{(k)}\, u^{2k/\alpha}\, \Psi(u - c) \int_{\mathcal{M}} V(D_t M_t)\, dt\, (1 + o(1)) \qquad (15)
$$
as $u \to \infty$, where $M_t = M(T_t)$ with $T_t$ the tangent subspace to $\mathcal{M}$ at the point $t$, and $dt$ is an element of volume of $\mathcal{M}$.



Theorem 3 (The Borell-Sudakov-Tsirelson inequality) Let $X(t)$, $t \in T$, be a measurable Gaussian process indexed on an arbitrary set $T$, and let the numbers $\sigma$, $m$, $a$ be defined by the relations
$$
\sigma^2 = \sup_{t \in T} \operatorname{Var} X(t) < \infty, \qquad m = \sup_{t \in T} \mathbf{E} X(t) < \infty,
$$
and
$$
\mathbf{P}\Bigl\{\sup_{t \in T} \bigl(X(t) - \mathbf{E} X(t)\bigr) \ge a\Bigr\} \le \frac{1}{2}. \qquad (17)
$$
Then for any $x$,
$$
\mathbf{P}\Bigl\{\sup_{t \in T} X(t) > x\Bigr\} \le 2\,\Psi\Bigl(\frac{x - m - a}{\sigma}\Bigr). \qquad (18)
$$
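To indicate how this inequality is used below (an added remark; the set $T'$ and the constants are illustrative): on a part of the parameter set where the expectation is smaller by some $c_0 > 0$, inequality (18) gives a bound which is negligible with respect to $\Psi(u - m(t_0))$.

```latex
% Suppose \sigma^2 \le 1 on T' and \sup_{t \in T'} \mathbf{E}X(t) = m(t_0) - c_0, c_0 > 0.
% Then (18), applied on T' with its own m = m(t_0) - c_0, gives for large u
\mathbf{P}\Bigl\{\sup_{t \in T'} X(t) > u\Bigr\}
  \le 2\,\Psi\bigl(u - m(t_0) + c_0 - a\bigr)
  = o\bigl(\Psi(u - m(t_0))\bigr), \quad u \to \infty,
% since \Psi(x + c)/\Psi(x) \to 0 for any fixed c = c_0 - a > 0
% (assuming a < c_0).
```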

Theorem 4 (Slepian inequality) Let $X(t)$, $Y(t)$, $t \in T$, be separable Gaussian processes indexed on an arbitrary set $T$, and suppose that for all $t, s \in T$,
$$
\operatorname{Var}X(t) = \operatorname{Var}Y(t), \quad \mathbf{E}X(t) = \mathbf{E}Y(t), \quad \text{and} \quad \operatorname{Cov}(X(t), X(s)) \le \operatorname{Cov}(Y(t), Y(s)). \qquad (19)
$$
Then for all $x$,
$$
\mathbf{P}\Bigl\{\sup_{t \in T} X(t) > x\Bigr\} \ge \mathbf{P}\Bigl\{\sup_{t \in T} Y(t) > x\Bigr\}. \qquad (20)
$$

Theorem 5 Let $\mathcal{M} \subset \mathbb{R}^n$ be a smooth $k$-dimensional compact, and let $X(t)$, $t \in \mathcal{M}$, be a Gaussian locally $(\alpha, D_t)$-stationary field with $r(t, s) < 1$ for all $t, s \in \mathcal{M}$, $t \ne s$, whose expectation $m(t) = \mathbf{E}X(t)$ attains its maximum over $\mathcal{M}$ at a single point $t_0$, with
$$
m(t) = m(t_0) - (t - t_0)^\top B\,(t - t_0) + O(\|t - t_0\|^{2+\delta}) \quad \text{as } t \to t_0, \qquad (21)
$$
for some $\delta > 0$ and positive matrix $B$. Then
$$
\mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}} X(t) > u\Bigr\}
= \frac{\pi^{k/2}}{\sqrt{\det(M^\top B M)}}\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))\, (1 + o(1)) \qquad (22)
$$
as $u \to \infty$, where $M = M(T_{t_0})$ and $T_{t_0}$ is the tangent subspace to $\mathcal{M}$ taken at the point $t_0$.

Theorem 6 Let $\mathcal{M} \subset \mathbb{R}^n$ be a smooth $k$-dimensional compact, $0 < k \le n$, and let $X(t)$ be a Gaussian field with a.s. continuously differentiable paths, with $\operatorname{Var}X(t) = 1$ for all $t \in \mathcal{M}$ and $r(t, s) < 1$ for all $t, s \in \mathcal{M}$, $t \ne s$. Let the expectation $m(t) = \mathbf{E}X(t)$ be the same as in Theorem 5. Then
$$
\mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}} X(t) > u\Bigr\}
= \frac{V\bigl(\sqrt{A_{t_0}/2}\; M\bigr)}{\sqrt{\det(M^\top B M)}}\; u^{k/2}\, \Psi(u - m(t_0))\, (1 + o(1)) \qquad (23)
$$
as $u \to \infty$, with $M$ as in Theorem 5 and $A_{t_0}$ the covariance matrix of the orthogonal projection of the gradient vector of the field $X(t)$ at the point $t_0$ onto the tangent subspace to $\mathcal{M}$ taken at the point $t_0$.

3 Proofs

Proof of Lemma 1. First, observe that if one replaces $g_u$ by $g_{u\nu(u)}$, the lemma follows immediately from Lemma 6.1, [7]. Second, observe that we can write $g_u T = g_{u\nu(u)}(I_u T)$, where $I_u$ is a linear transformation of $\mathbb{R}^n$ which is also a superposition of homotheties of the $\mathbb{R}^{l_i}$ with coefficients tending to $1$ as $u \to \infty$. Thus $I_u$ tends to the identity, and $I_u T$ tends to $T$ in the Euclidean distance. Third, note that $H_\alpha(T)$ is continuous in $T$ in the topology of the space of measurable subsets of a compact, say $K$, generated by the Euclidean distance. To prove this, observe that $\xi$ is a.s. continuous and $H_\alpha(T) \le H_\alpha(K) < \infty$ for all $T \subset K$, and use the dominated convergence theorem. These observations imply the assertion of the Lemma. $\Box$

Proof of Theorem 5. Let $T_{t_0}$ be the tangent plane to $\mathcal{M}$ taken at the point $t_0$. Let $\mathcal{M}_0$ be a neighbourhood of $t_0$ in $\mathcal{M}$, so small that it can be one-to-one projected onto $T_{t_0}$. We denote by $P$ the corresponding one-to-one projector, so that $P\mathcal{M}_0$ is the image of $\mathcal{M}_0$. The field $X(t)$, $t \in \mathcal{M}$, generates on $P\mathcal{M}_0$ a field $\tilde{X}(\tilde{t}) = X(t)$, $\tilde{t} = Pt$. It is clear that $\mathbf{E}\tilde{X}(\tilde{t}) = m(t) = m(P^{-1}\tilde{t})$. We denote by $\tilde{r}(\tilde{t}, \tilde{s}) = r(t, s)$ the covariance function of $\tilde{X}(\tilde{t})$. Choose an arbitrary $\varepsilon \in (0, 1)$.

Due to the locally stationary structure, one can find $\delta_0 = \delta_0(\varepsilon) > 0$ such that for all $\tilde{t}_1, \tilde{t}_2 \in T_{t_0} \cap S(\delta_0, t_0)$, where $S(\delta_0, t_0)$ is the ball centered at $t_0$ with radius $\delta_0$, we have
$$
\exp\bigl\{-(1 + \varepsilon)\,\|D_{t_0}(\tilde{t}_1 - \tilde{t}_2)\|^{\alpha}\bigr\}
\le \tilde{r}(\tilde{t}_1, \tilde{t}_2) \le
\exp\bigl\{-(1 - \varepsilon)\,\|D_{t_0}(\tilde{t}_1 - \tilde{t}_2)\|^{\alpha}\bigr\}. \qquad (24)
$$
We can also assume $\delta_0$ to be so small that we may set $\mathcal{M}_0 = P^{-1}[T_{t_0} \cap S(\delta_0, t_0)]$ and think of $P\mathcal{M}_0$ as a ball in $T_{t_0}$ centered at $\tilde{t}_0 = t_0$, with the same radius. Denote $\mathcal{M}_1 = \mathcal{M} \setminus \mathcal{M}_0$. Since $m(t)$ is continuous,
$$
\sup_{t \in \mathcal{M}_1} m(t) = m(t_0) - c_0
$$
with $c_0 > 0$. By Theorem 2, for $X_0(t) = X(t) - m(t)$ we have
$$
\mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}_1} X(t) > u\Bigr\}
= \mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}_1} X_0(t) + m(t) > u\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}_1} X_0(t) > u - m(t_0) + c_0\Bigr\}
= o\bigl(\Psi(u - m(t_0) + c_1)\bigr) \qquad (25)
$$
for any $c_1$ with $0 < c_1 < c_0$. Further,
$$
\mathbf{P}\Bigl\{\sup_{t \in \mathcal{M}_0} X(t) > u\Bigr\}
= \mathbf{P}\Bigl\{\sup_{\tilde{t} \in P\mathcal{M}_0} \tilde{X}(\tilde{t}) > u\Bigr\}. \qquad (26)
$$

Introduce a Gaussian stationary centered field $X_H(t)$, $t \in \mathbb{R}^n$, with covariance function
$$
r_H(t) = \exp\bigl\{-(1 + 2\varepsilon)\,\|D_{t_0} t\|^{\alpha}\bigr\}.
$$
By (24) and the Slepian inequality,
$$
\mathbf{P}\Bigl\{\sup_{\tilde{t} \in P\mathcal{M}_0} \tilde{X}(\tilde{t}) > u\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{\tilde{t} \in P\mathcal{M}_0} X_H(\tilde{t}) + \tilde{m}(\tilde{t}) > u\Bigr\}. \qquad (27)
$$

Clearly, without loss of generality we can place the origin of $\mathbb{R}^n$ at the point $t_0$, so that the tangent plane $T_{t_0}$ is now a tangent subspace and $t_0 = \tilde{t}_0 = 0$. From this point on we restrict ourselves to the $k$-dimensional subspace $T_{t_0}$ and drop the tilde. Let now $S = S(0, \rho)$ be a ball in $T_{t_0}$ centered at zero with radius $\rho$, where $\rho = \rho(u) = u^{-1/2}\log^{1/2} u$; this choice will become clear later on. For all sufficiently large $u$ we have $S \subset P\mathcal{M}_0$, and there exists a positive $c_1$ such that
$$
\mathbf{P}\Bigl\{\sup_{v \in S^c \cap P\mathcal{M}_0} X_H(v) + \tilde{m}(v) > u\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{v \in S^c \cap P\mathcal{M}_0} X_H(v) > u - \tilde{m}(t_0) + c_1\rho^2(u)\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{v \in P\mathcal{M}_0} X_H(v) > u - \tilde{m}(t_0) + c_1\rho^2(u)\Bigr\}. \qquad (28)
$$
Applying Theorem 1 to the latter probability and making elementary calculations, we get
$$
\mathbf{P}\Bigl\{\sup_{v \in S^c \cap P\mathcal{M}_0} X_H(v) + \tilde{m}(v) > u\Bigr\}
= o\bigl(\Psi(u - m(t_0))\bigr) \quad \text{as } u \to \infty. \qquad (29)
$$
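The role of the radius $\rho(u) = u^{-1/2}\log^{1/2}u$ in (28)-(29) can be sketched in one line (an added remark, not part of the original argument): outside $S$ the mean is smaller by a quantity of order $u^{-1}\log u$, which translates into a polynomially small factor.

```latex
% Outside S the mean drops by at least c_1\rho^2(u) = c_1 u^{-1}\log u, so
\frac{\Psi\bigl(u - \tilde m(t_0) + c_1\rho^2(u)\bigr)}{\Psi\bigl(u - \tilde m(t_0)\bigr)}
  \sim \exp\bigl\{-c_1\, u\,\rho^2(u)\,(1 + o(1))\bigr\}
  = u^{-c_1(1 + o(1))} \longrightarrow 0 ,
% a polynomial decay in u; this is the elementary calculation behind (29),
% balanced against the polynomial factor u^{2k/\alpha} from Theorem 1.
```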

Turn now to the ball $S$. Let $v_1 = (v_{11}, \ldots, v_{n1})$, \ldots, $v_k = (v_{1k}, \ldots, v_{nk})$ be an orthonormal basis in $T_{t_0}$, given in the coordinates of $\mathbb{R}^n$. In this coordinate system, consider the cubes
$$
\Delta_0 = u^{-2/\alpha}\,[0, T]^k, \qquad
\Delta_l = u^{-2/\alpha} \prod_{\lambda=1}^{k} [\,l_\lambda T,\ (l_\lambda + 1)T\,], \qquad
l = (l_1, \ldots, l_k) \in \mathbb{Z}^k,\ T > 0.
$$


We have
$$
\sum_{i \in L} \mathbf{P}\{A_i\} - \sum_{\substack{i, j \in L' \\ i \ne j}} \mathbf{P}\{A_i A_j\}
\le \mathbf{P}\Bigl\{\sup_{v \in S} X_H(v) + \tilde{m}(v) > u\Bigr\}
\le \sum_{i \in L'} \mathbf{P}\{A_i\}, \qquad (30)
$$
where $A_i = \bigl\{\sup_{v \in \Delta_i} X_H(v) + \tilde{m}(v) > u\bigr\}$, $L'$ is the set of multi-indices $i$ with $\Delta_i \cap S \ne \emptyset$, and $L$ is the set of multi-indices $i$ with $\Delta_i \subset S$. Using (21), we have
$$
\mathbf{P}\Bigl\{\sup_{v \in \Delta_i} X_H(v) + \tilde{m}(v) > u\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{v \in \Delta_i} X_H(v) + m(t_0) - \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + w_1(u) > u\Bigr\}. \qquad (31)
$$
Here $u\,w_1(u) \to 0$ as $u \to \infty$ because of the choice of $\rho(u)$ and the remainder in (21). By Lemma 1 and the equivalence
$$
t = \tilde{t} + O(\|\tilde{t}\|^2) \quad \text{as } \tilde{t} \to 0
$$
(recall that we have assumed $t_0 = \tilde{t}_0 = 0$), there exists a function $\gamma_1(u)$, with $\gamma_1(u) \to 0$ as $u \to \infty$, such that for all sufficiently large $u$ and every $i \in L'$,
$$
\mathbf{P}\Bigl\{\sup_{v \in \Delta_i} X_H(v) + \tilde{m}(v) > u\Bigr\}
\le (1 + \gamma_1(u))\, H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)\,
\Psi\Bigl(u - m(t_0) + \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + w_1(u)\Bigr). \qquad (32)
$$

Using similar arguments, we get that there exists $\gamma_2(u)$ with $\gamma_2(u) \to 0$ as $u \to \infty$, such that for all sufficiently large $u$ and every $i \in L$,
$$
\mathbf{P}\Bigl\{\sup_{v \in \Delta_i} X_H(v) + \tilde{m}(v) > u\Bigr\}
\ge (1 - \gamma_2(u))\, H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)\,
\Psi\Bigl(u - m(t_0) + \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + w_2(u)\Bigr), \qquad (33)
$$
where $u\,w_2(u) \to 0$ as $u \to \infty$.

Now, in accordance with (30), we sum the right-hand sides of (32) and (33) over $L'$ and $L$, respectively. Using (7), we get for all sufficiently large $u$,
$$
\sum_{i \in L'} \Psi\Bigl(u - m(t_0) + \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + w_1(u)\Bigr)
\le (1 + \gamma'_1(u))\, \Psi(u - m(t_0))\, T^{-k} u^{2k/\alpha}
\sum_{i \in L'} \exp\Bigl\{-u \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + o(1/u)\Bigr\}\, T^k u^{-2k/\alpha}, \qquad (34)
$$
where $\gamma'_1(u) \to 0$ as $u \to \infty$. Changing variables $w = \sqrt{u}\,t$ and using the dominated convergence theorem, we get
$$
\sum_{i \in L'} \exp\Bigl\{-u \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + o(1/u)\Bigr\}
= T^{-k} u^{2k/\alpha - k/2} \int_{T_{t_0}} \exp\{-\langle Bw, w\rangle\}\, dw\, (1 + o(1)) \qquad (35)
$$


as $u \to \infty$. Note that $dw$ means here the $k$-dimensional volume element in $T_{t_0}$. Similarly,
$$
\sum_{i \in L} \exp\Bigl\{-u \min_{v \in \Delta_i}\|\sqrt{B}\,v\|^2 + o(1/u)\Bigr\}
= T^{-k} u^{2k/\alpha - k/2} \int_{T_{t_0}} \exp\{-\langle Bw, w\rangle\}\, dw\, (1 + o(1)) \qquad (36)
$$

as $u \to \infty$. In order to compute the integral $\int_{T_{t_0}} \exp\{-\langle Bw, w\rangle\}\, dw$ we note that $w = Mt$, where $t$ denotes the vector $w$ presented in the orthogonal coordinate system of $T_{t_0}$; recall that in this case $V(M) = 1$. Hence
$$
\int_{T_{t_0}} \exp\{-\langle Bw, w\rangle\}\, dw
= \int_{\mathbb{R}^k} \exp\{-\langle BMt, Mt\rangle\}\, dt
= \frac{\pi^{k/2}}{\sqrt{\det(M^\top B M)}} =: e. \qquad (37)
$$
Thus for all sufficiently large $u$,
$$
\sum_{i \in L'} \mathbf{P}\{A_i\}
\le (1 + \gamma''_1(u))\, H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)\, e\, T^{-k}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0)) \qquad (38)
$$
and
$$
\sum_{i \in L} \mathbf{P}\{A_i\}
\ge (1 - \gamma''_1(u))\, H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)\, e\, T^{-k}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0)), \qquad (39)
$$
where $\gamma''_1(u) \to 0$ as $u \to \infty$.
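The closed form in (37) is the standard Gaussian integral; for completeness (an added derivation):

```latex
% For a symmetric positive definite k \times k matrix A = M^\top B M,
% write A = Q^\top \Lambda Q with Q orthogonal, \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_k):
\int_{\mathbb{R}^k} e^{-t^\top A t}\, dt
  = \prod_{j=1}^{k} \int_{-\infty}^{\infty} e^{-\lambda_j s^2}\, ds
  = \prod_{j=1}^{k} \sqrt{\pi/\lambda_j}
  = \frac{\pi^{k/2}}{\sqrt{\det A}} ,
% which is the constant e of (37).
```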

Now we are in a position to analyze the double sum in the left-hand part of (30). We begin with the estimation of the probability
$$
\mathbf{P}\Bigl\{\sup_{t \in \Delta_1} X_H(t) + \tilde{m}(t) > u,\ \sup_{t \in \Delta_2} X_H(t) + \tilde{m}(t) > u\Bigr\}
$$
with
$$
\Delta_1 = u^{-2/\alpha} \prod_{\lambda=1}^{k} [S_{1\lambda}, T_{1\lambda}] = u^{-2/\alpha} K_1, \qquad
\Delta_2 = u^{-2/\alpha} \prod_{\lambda=1}^{k} [S_{2\lambda}, T_{2\lambda}] = u^{-2/\alpha} K_2.
$$
Set
$$
\nu(u) = 1 - \frac{c(u)}{u}, \qquad c(u) = \max\Bigl\{\max_{t \in \Delta_1} \tilde{m}(t),\ \max_{t \in \Delta_2} \tilde{m}(t)\Bigr\},
$$
so that $\tilde{m}(t) \le u(1 - \nu(u))$ on $\Delta_1 \cup \Delta_2$. Then
$$
\mathbf{P}\Bigl\{\sup_{t \in \Delta_1} X_H(t) + \tilde{m}(t) > u,\ \sup_{t \in \Delta_2} X_H(t) + \tilde{m}(t) > u\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{t \in \Delta_1} X_H(t) > u\nu(u),\ \sup_{t \in \Delta_2} X_H(t) > u\nu(u)\Bigr\}. \qquad (40)
$$


Introduce the scaled Gaussian homogeneous field $\xi(t) = X_H\bigl((1 + 2\varepsilon)^{-1/\alpha} D_{t_0}^{-1} t\bigr)$ and put $K'_i = (1 + 2\varepsilon)^{1/\alpha} D_{t_0} \Delta_i$, $i = 1, 2$. Note that
$$
\mathbf{P}\Bigl\{\sup_{t \in \Delta_1} X_H(t) > u\nu(u),\ \sup_{t \in \Delta_2} X_H(t) > u\nu(u)\Bigr\}
= \mathbf{P}\Bigl\{\sup_{t \in K'_1} \xi(t) > u\nu(u),\ \sup_{t \in K'_2} \xi(t) > u\nu(u)\Bigr\}.
$$
For the covariance function of $\xi$ we have
$$
r_\xi(t) = 1 - \|t\|^{\alpha} + o(\|t\|^{\alpha}) \quad \text{as } t \to 0.
$$
Consider the field $Y(t, s) = \xi(t) + \xi(s)$ on $K'_1 \times K'_2$; since
$$
\Bigl\{\sup_{t \in K'_1} \xi(t) > u\nu(u)\Bigr\} \cap \Bigl\{\sup_{s \in K'_2} \xi(s) > u\nu(u)\Bigr\}
\subset \Bigl\{\sup_{(t, s) \in K'_1 \times K'_2} Y(t, s) > 2u\nu(u)\Bigr\},
$$
it suffices to bound the supremum of $Y$. There exists $\varepsilon_0 > 0$ such that, for all sufficiently large $u$, $\|t - s\| \le \varepsilon_0$ on $K'_1 \times K'_2$. The variance of $Y$ equals $\sigma_Y^2(t, s) = 2 + 2 r_\xi(t - s)$; hence for all $t \in K'_1$, $s \in K'_2$ we have
$$
4 - 4\|t - s\|^{\alpha} \le \sigma_Y^2(t, s) \le 4 - \|t - s\|^{\alpha}. \qquad (44)
$$

It follows that
$$
\inf_{(t, s) \in K'_1 \times K'_2} \sigma_Y^2(t, s) \ge 4 - 4\varepsilon_0^{\alpha} > 2, \qquad (45)
$$
provided $\varepsilon_0$ is sufficiently small, and
$$
\sup_{(t, s) \in K'_1 \times K'_2} \sigma_Y^2(t, s) \le 4 - u^{-2}(1 + 2\varepsilon)\,\rho(K_1, K_2) =: h(u, K_1, K_2), \qquad (46)
$$
where $\rho(K_1, K_2)$ denotes $\inf_{t \in K_1,\ s \in K_2} \|D_{t_0}(t - s)\|^{\alpha}$. For the standardised field $\bar{Y}(t, s) = Y(t, s)/\sigma_Y(t, s)$ we have
$$
\mathbf{P}\Bigl\{\sup_{(t, s) \in K'_1 \times K'_2} Y(t, s) > 2u\nu(u)\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{(t, s) \in K'_1 \times K'_2} \bar{Y}(t, s) > 2u\nu(u)\, h^{-1/2}(u, K_1, K_2)\Bigr\}. \qquad (47)
$$


Algebraic calculations give
$$
\mathbf{E}\bigl(\bar{Y}(t, s) - \bar{Y}(t_1, s_1)\bigr)^2 \le 16\bigl(\|t - t_1\|^{\alpha} + \|s - s_1\|^{\alpha}\bigr). \qquad (48)
$$
Let $\eta_1(t)$, $\eta_2(t)$, $t \in \mathbb{R}^n$, be two independent identically distributed homogeneous Gaussian fields with zero expectations and covariance functions
$$
r_\eta(t) = \exp(-32\|t\|^{\alpha}).
$$
The Gaussian field
$$
\eta(t, s) = \frac{1}{\sqrt{2}}\bigl(\eta_1(t) + \eta_2(s)\bigr), \qquad (t, s) \in \mathbb{R}^n \times \mathbb{R}^n,
$$
is homogeneous; its covariance function is
$$
r_\eta(t, s) = \frac{1}{2}\bigl(\exp(-32\|t\|^{\alpha}) + \exp(-32\|s\|^{\alpha})\bigr). \qquad (49)
$$
Since for the covariance function $r_{\bar{Y}}(t, s;\, t_1, s_1)$ of the field $\bar{Y}$ we have
$$
r_{\bar{Y}}(t, s;\, t_1, s_1) \ge 1 - 8\bigl(\|t - t_1\|^{\alpha} + \|s - s_1\|^{\alpha}\bigr) \qquad (50)
$$
for all $(t, s), (t_1, s_1) \in K'_1 \times K'_2$, for these $(t, s), (t_1, s_1)$ we also have
$$
r_{\bar{Y}}(t, s;\, t_1, s_1) \ge r_\eta(t - t_1,\, s - s_1).
$$
Thus by the Slepian inequality,
$$
\mathbf{P}\Bigl\{\sup_{(t, s) \in K'_1 \times K'_2} \bar{Y}(t, s) > 2u\nu(u)\, h^{-1/2}(u, K_1, K_2)\Bigr\}
\le \mathbf{P}\Bigl\{\sup_{(t, s) \in K'_1 \times K'_2} \eta(t, s) > 2u\nu(u)\, h^{-1/2}(u, K_1, K_2)\Bigr\}. \qquad (51)
$$
Further, for sufficiently large $u$,
$$
4u^2\nu^2(u)\, h^{-1}(u, K_1, K_2) \ge u^2\nu^2(u) + \frac{\rho(K_1, K_2)}{5}. \qquad (52)
$$

Using the last two relations, Lemma 1 and (7), we get
$$
\mathbf{P}\Bigl\{\sup_{t \in \Delta_1} X_H(t) + \tilde{m}(t_0 + t) > u,\ \sup_{t \in \Delta_2} X_H(t) + \tilde{m}(t_0 + t) > u\Bigr\}
\le C\, \Psi(u\nu(u))\, H_\alpha\bigl(16(D_{t_0} K_1 \times D_{t_0} K_2)\bigr)\, \exp\Bigl\{-\frac{\rho(K_1, K_2)}{10}\Bigr\}
$$
$$
\le C_1 \prod_{\lambda=1}^{k}(T_{1\lambda} - S_{1\lambda})\, \prod_{\lambda=1}^{k}(T_{2\lambda} - S_{2\lambda})\, \exp\Bigl\{-\frac{\rho(K_1, K_2)}{10}\Bigr\}\, \Psi(u\nu(u)), \qquad (53)
$$
which holds for all sufficiently large $u$ and a constant $C_1$ independent of $u$, $K_1$, $K_2$. In order to estimate $H_\alpha(16(D_{t_0}K_1 \times D_{t_0}K_2))$ we use here Lemmas 6.4 and 6.2 from [6].


Now turn to the double sum $\sum_{i, j \in L'} \mathbf{P}(A_i A_j)$. We break it into two sums. The first, denote it by $\Sigma_1$, is the sum over all non-neighbouring pairs of cubes (that is, pairs of cubes at positive distance from each other); the second, denote it by $\Sigma_2$, is the sum over all neighbouring pairs. Denote
$$
x_i = \min_{t \in \Delta_i}\|\sqrt{B}\,t\|^2, \qquad i \in L'.
$$
Using (53) we get
$$
\mathbf{P}(A_i A_j) \le C\, T^{2k} \exp\Bigl\{-\frac{T^{\alpha}}{10}\Bigl(\max_{1 \le \lambda \le k}|i_\lambda - j_\lambda| - 1\Bigr)^{\alpha}\Bigr\}\, \Psi(u\nu(u)) =: \psi_{ij}, \qquad (54)
$$
where $\nu(u) = 1 - c(u)/u$, $c(u) = \max\{\max_{t \in \Delta_i} \tilde{m}(t_0 + t),\ \max_{t \in \Delta_j} \tilde{m}(t_0 + t)\}$. This estimate holds for all members of the first sum and all sufficiently large $u$. Using it and approximating the sum by an integral, we get for all sufficiently large $u$,
$$
\Sigma_1 \le \frac{1}{2}\sum_{i \in L'}\ \sum_{j \in L',\ j \ne i} \psi_{ij}
\le C_1 T^{k}\, e\, u^{2k/\alpha - k/2} \exp\Bigl\{-\frac{T^{\alpha}}{10}\Bigr\}\, \Psi(u - m(t_0)). \qquad (55)
$$
Turn now to $\Sigma_2$. Denote
$$
\Delta'_i = u^{-2/\alpha}\Bigl([\,i_1 T,\ i_1 T + \sqrt{T}\,] \times \prod_{\lambda=2}^{k} [\,i_\lambda T,\ (i_\lambda + 1)T\,]\Bigr), \qquad \Delta''_i = \Delta_i \setminus \Delta'_i.
$$
Clearly,
$$
\mathbf{P}\{A_i A_j\} \le \mathbf{P}\Bigl\{\sup_{t \in \Delta'_i} X_H(t) > u\nu(u)\Bigr\}
+ \mathbf{P}\Bigl\{\sup_{t \in \Delta''_i} X_H(t) > u\nu(u),\ \sup_{t \in \Delta_j} X_H(t) > u\nu(u)\Bigr\}. \qquad (56)
$$

Using now Lemma 1, (56), (53) and approximating the sum by an integral, we get for all sufficiently large $u$,
$$
\Sigma_2 \le C_2 T^{k - 1/2}\, e\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))
+ C_3 T^{k}\, e\, u^{2k/\alpha - k/2} \exp\Bigl\{-\frac{T^{\alpha}}{10}\Bigr\}\, \Psi(u - m(t_0)). \qquad (57)
$$

Taking into account (38), (39), (55) and (57), we get for all positive $T$,
$$
\frac{H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)}{T^k}
- C_1 T^{k} \exp\Bigl\{-\frac{T^{\alpha}}{10}\Bigr\} - C_2 T^{-1/2} - C_3 T^{k} \exp\Bigl\{-\frac{T^{\alpha}}{10}\Bigr\}
\le \liminf_{u \to \infty} \frac{\mathbf{P}\bigl\{\sup_{t \in S} X_H(t) + \tilde{m}(t_0 + t) > u\bigr\}}{e\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))}
$$
$$
\le \limsup_{u \to \infty} \frac{\mathbf{P}\bigl\{\sup_{t \in S} X_H(t) + \tilde{m}(t_0 + t) > u\bigr\}}{e\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))}
\le \frac{H_\alpha\bigl((1 + 2\varepsilon)^{1/\alpha} D_{t_0}[0, T]^k\bigr)}{T^k}. \qquad (58)
$$


Now, letting $T$ go to infinity and using (29), we obtain that
$$
\mathbf{P}\Bigl\{\sup_{\tilde{t} \in S} X_H(\tilde{t}) + \tilde{m}(\tilde{t}) > u\Bigr\}
= (1 + 2\varepsilon)^{k/\alpha}\, e\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))\, (1 + o(1)) \qquad (59)
$$
as $u \to \infty$.

Let now $X_H(t)$, $t \in \mathbb{R}^n$, be a homogeneous centered Gaussian field with the covariance function $r_H(t) = \exp\{-(1 - 2\varepsilon)\|D_{t_0} t\|^{\alpha}\}$. From Theorem 4 we have
$$
\mathbf{P}\Bigl\{\sup_{\tilde{t} \in S} \tilde{X}(\tilde{t}) > u\Bigr\}
\ge \mathbf{P}\Bigl\{\sup_{\tilde{t} \in S} X_H(\tilde{t}) + \tilde{m}(\tilde{t}) > u\Bigr\}. \qquad (60)
$$
Proceeding with the above estimates for the latter probability, we get
$$
\mathbf{P}\Bigl\{\sup_{\tilde{t} \in S} X_H(\tilde{t}) + \tilde{m}(\tilde{t}) > u\Bigr\}
= (1 - 2\varepsilon)^{k/\alpha}\, e\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))\, (1 + o(1)) \qquad (61)
$$
as $u \to \infty$.

Now we collect (25), (27), (59), (60) and (61), and get
$$
(1 - 2\varepsilon)^{k/\alpha}
\le \liminf_{u \to \infty} \frac{\mathbf{P}\bigl\{\sup_{t \in \mathcal{M}} X(t) > u\bigr\}}{e\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))}
\le \limsup_{u \to \infty} \frac{\mathbf{P}\bigl\{\sup_{t \in \mathcal{M}} X(t) > u\bigr\}}{e\, V(D_{t_0} M)\, H_\alpha^{(k)}\, u^{2k/\alpha - k/2}\, \Psi(u - m(t_0))}
\le (1 + 2\varepsilon)^{k/\alpha}. \qquad (62)
$$
Since $\varepsilon \in (0, 1)$ is arbitrary, the assertion of the Theorem follows. $\Box$

Proof of Theorem 6. Let $\tilde{X}(\tilde{t})$ be the field as defined in the proof of Theorem 5. Using the Taylor expansion, we get
$$
\tilde{X}(\tilde{t}) = X(t) = X(t_0) + (\operatorname{grad} X(t_0))^\top (t - t_0) + o(\|t - t_0\|), \qquad t \to t_0. \qquad (63)
$$
From here it follows that
$$
\tilde{X}(\tilde{t}) - \tilde{X}(\tilde{t}_0) = \bigl(\widetilde{\operatorname{grad}}\, X(t_0)\bigr)^\top (\tilde{t} - \tilde{t}_0) + o(\|\tilde{t} - \tilde{t}_0\|), \qquad \tilde{t} \to \tilde{t}_0, \qquad (64)
$$
where $\widetilde{\operatorname{grad}}$ is the orthogonal projection of the gradient of the field $X$ onto the tangent subspace $T_{t_0}$ to $\mathcal{M}$ at the point $t_0$. From (64), by algebraic calculations, it follows that
$$
\tilde{r}(\tilde{t} - \tilde{t}_0) = 1 - \frac{1}{2}(\tilde{t} - \tilde{t}_0)^\top A_{t_0}(\tilde{t} - \tilde{t}_0) + o(\|\tilde{t} - \tilde{t}_0\|^2), \qquad \tilde{t} \to \tilde{t}_0, \qquad (65)
$$
where $A_{t_0}$ is the covariance matrix of the vector $\widetilde{\operatorname{grad}}\, X(t_0)$. Note that the matrix $\sqrt{A_{t_0}/2}$ is just the matrix $D_{t_0}$ from Theorem 5. Now the proof repeats, in all details, the proof of Theorem 5. $\Box$
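As a consistency check between the two theorems (an added remark): for a smooth field one has $\alpha = 2$, $D_{t_0} = \sqrt{A_{t_0}/2}$, and the known value $H_2^{(k)} = \pi^{-k/2}$, so the conclusion of Theorem 5 turns into the conclusion (23) of Theorem 6:

```latex
\frac{\pi^{k/2}}{\sqrt{\det(M^\top B M)}}\,
  V\bigl(\sqrt{A_{t_0}/2}\; M\bigr)\, H_2^{(k)}\, u^{2k/2 - k/2}\,
  \Psi(u - m(t_0))
= \frac{V\bigl(\sqrt{A_{t_0}/2}\; M\bigr)}{\sqrt{\det(M^\top B M)}}\;
  u^{k/2}\, \Psi(u - m(t_0)) ,
% since \pi^{k/2}\cdot\pi^{-k/2} = 1 and 2k/2 - k/2 = k/2.
```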



References

[1] Adler, Robert J., On excursion sets, tube formulae, and maxima of random fields. Manuscript. Technion, Israel, and University of North Carolina. August 24, 1998.

[2] Belyaev, Yu., Piterbarg, V., Asymptotic behaviour of the mean number of A-points of excursions of Gaussian fields beyond a high level. In: Excursions of Random Fields, Yu. Belyaev, ed. Moscow University, 1972, 62-88 (in Russian).

[3] Fatalov, V. R., Piterbarg, V. I., The Laplace method for probability measures in Banach spaces. Russian Math. Surveys, 1995, 50:6, 1151-1239.

[4] Mikhaleva, T. L., Piterbarg, V. I., On the maximum distribution of a Gaussian field with constant variance on a smooth manifold. Probability Theory and Appl., 1996, 41, No. 2, 438-451.

[5] Borell, C., The Brunn-Minkowski inequality in Gauss space. Invent. Math., 1976, 30, 207-216.

[6] Pickands, J., III, Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc., 1969, 145, 51-73.

[7] Piterbarg, V. I., Asymptotic Methods in the Theory of Gaussian Processes and Fields. Translations of Mathematical Monographs, AMS, Providence, Rhode Island, 1996.

[8] Piterbarg, V. I., Tyurin, Yu., Homogeneity testing of two samples: a Gaussian field on a sphere. Math. Methods of Statist., 1993, 2, 147-164.

[9] Piterbarg, V. I., Tyurin, Yu., A multidimensional rank correlation: a Gaussian field on a torus. Probability Theory and Applications, 1999, to appear.

[10] Slepian, D., The one-sided barrier problem for Gaussian noise. Bell System Techn. J., 1962, 41, 463-501.
