to the study of convergence of the measures of intertemporal dependence of the time series, including the above multivariate Pearson coefficient $\varphi^2$, the relative entropy $\delta$, the divergence measures $D^{\psi}$ and the mean information for discrimination between dependence and independence $I(f_0, f; \Phi)$. We obtain the following Theorem 7.2, which deals with convergence in distribution of $m$-dimensional statistics of time series.

Let $h \colon \mathbf{R}^m \to \mathbf{R}$ be an arbitrary function of $m$ arguments, let $Y$ be a r.v. and let $\psi$ be a convex function increasing on $[1, \infty)$ and decreasing on $(-\infty, 1)$ with $\psi(1) = 0$. In what follows, $\stackrel{D}{\to}$ denotes convergence in distribution. In addition, $\{\xi^n_i\}$ and $\{\xi_t\}$ stand for independent copies of $\{X^n_i\}$ and $\{X_t\}$, that is, sequences of independent r.v.'s with the same marginal distributions as $\{X^n_i\}$ and $\{X_t\}$, respectively.
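
For orientation, the standard forms of the first three of these functionals, for a joint density $f$ of $(X_1, \ldots, X_m)$ with marginal densities $f_1, \ldots, f_m$ (stated here for reference and meant to agree with the definitions of the earlier sections; the dominating measure $\mu$ is our notation), are
\[
\varphi^2 = \int \frac{\bigl(f - \prod_{i=1}^m f_i\bigr)^2}{\prod_{i=1}^m f_i}\, d\mu, \qquad
\delta = \int f \log \frac{f}{\prod_{i=1}^m f_i}\, d\mu, \qquad
D^{\psi} = \int \psi\Bigl(\frac{f}{\prod_{i=1}^m f_i}\Bigr) \prod_{i=1}^m f_i\, d\mu,
\]
each of which vanishes when $X_1, \ldots, X_m$ are independent, so that convergence of these functionals to zero formalizes asymptotic independence.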

Theorem 7.2. For the double array $\{X^n_i\}$, $i = 1, \ldots, n$, $n = 0, 1, \ldots$, let the functionals
\[
\varphi^2_{n,n} = \varphi^2_{X^n_1, X^n_2, \ldots, X^n_n}, \quad
\delta_{n,n} = \delta_{X^n_1, X^n_2, \ldots, X^n_n}, \quad
D^{\psi}_{n,n} = D^{\psi}_{X^n_1, X^n_2, \ldots, X^n_n}, \quad
\rho^{(q)}_{n,n} = \rho^{(q)}_{X^n_1, X^n_2, \ldots, X^n_n}, \ q \in (0, 1), \quad
H_{n,n} = \bigl(\tfrac{1}{2}\rho^{(q)}_{n,n}\bigr)^{1/2},
\]
$n = 0, 1, 2, \ldots$, denote the corresponding distances. Then, as $n \to \infty$, if
\[
\sum_{i=1}^{n} \xi^n_i \stackrel{D}{\to} Y
\]
and either $\varphi^2_{n,n} \to 0$, $\delta_{n,n} \to 0$, $D^{\psi}_{n,n} \to 0$, $\rho^{(q)}_{n,n} \to 0$ or $H_{n,n} \to 0$ as $n \to \infty$, then, as $n \to \infty$,
\[
\sum_{i=1}^{n} X^n_i \stackrel{D}{\to} Y.
\]

For a time series $\{X_t\}_{t=0}^{\infty}$ let the functionals
\[
\varphi^2_t = \varphi^2_{X_t, X_{t+1}, \ldots, X_{t+m-1}}, \quad
\delta_t = \delta_{X_t, X_{t+1}, \ldots, X_{t+m-1}}, \quad
D^{\psi}_t = D^{\psi}_{X_t, X_{t+1}, \ldots, X_{t+m-1}}, \quad
\rho^{(q)}_t = \rho^{(q)}_{X_t, X_{t+1}, \ldots, X_{t+m-1}}, \ q \in (0, 1), \quad
H_t = \bigl(\tfrac{1}{2}\rho^{(q)}_t\bigr)^{1/2},
\]
$t = 0, 1, 2, \ldots$, denote the $m$-variate Pearson coefficient, the relative entropy, the multivariate divergence measure associated with the function $\psi$, the generalized Tsallis entropy and the Hellinger distance for the time series, respectively.

Then, if, as $t \to \infty$,
\[
h(\xi_t, \xi_{t+1}, \ldots, \xi_{t+m-1}) \stackrel{D}{\to} Y
\]
and either $\varphi^2_t \to 0$, $\delta_t \to 0$, $D^{\psi}_t \to 0$, $\rho^{(q)}_t \to 0$ or $H_t \to 0$ as $t \to \infty$, then, as $t \to \infty$,
\[
h(X_t, X_{t+1}, \ldots, X_{t+m-1}) \stackrel{D}{\to} Y.
\]
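
To make the double-array statement concrete, here is a minimal simulation sketch (ours, not from the paper; the helper names and the choice $\rho_n = 1/n^2$ are assumptions for illustration). We take $X^n_i = Z^n_i/\sqrt{n}$ with $(Z^n_1, \ldots, Z^n_n)$ equicorrelated standard Gaussian with correlation $\rho_n$. For such an array the relative entropy between the joint law and the product of the marginals is $\delta_{n,n} = -\frac{1}{2}\log|R_n| \to 0$, the sums $\sum_i \xi^n_i$ of the independent copies are exactly $N(0,1)$ for every $n$, and the theorem then predicts that the sums of the dependent array approach the same $N(0,1)$ limit.

```python
# Minimal sketch of Theorem 7.2 (double-array case); names and the choice
# rho_n = 1/n^2 are ours, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def dependent_sums(n, reps=20000):
    """Sums sum_i X^n_i with X^n_i = Z^n_i / sqrt(n), where the Z^n_i are
    standard Gaussian with pairwise correlation rho_n = 1/n**2."""
    rho = 1.0 / n**2
    # Equicorrelated Gaussians via a common factor:
    # Z_i = sqrt(rho) * W + sqrt(1 - rho) * E_i.
    w = rng.standard_normal((reps, 1))
    e = rng.standard_normal((reps, n))
    z = np.sqrt(rho) * w + np.sqrt(1.0 - rho) * e
    return z.sum(axis=1) / np.sqrt(n)

for n in (5, 50, 500):
    rho = 1.0 / n**2
    # Gaussian relative entropy to the product of the marginals:
    # delta_{n,n} = -(1/2) log |R_n|, |R_n| = (1-rho)^(n-1) * (1+(n-1)*rho).
    delta_nn = -0.5 * ((n - 1) * np.log1p(-rho) + np.log1p((n - 1) * rho))
    ks = stats.kstest(dependent_sums(n), "norm")  # distance to Y ~ N(0, 1)
    print(f"n = {n:4d}   delta_nn = {delta_nn:.2e}   KS = {ks.statistic:.4f}")
```

The Kolmogorov–Smirnov distance to the $N(0,1)$ limit shrinks together with $\delta_{n,n}$, in line with the theorem; the time-series statement can be probed in the same way, using windows $(X_t, \ldots, X_{t+m-1})$ whose dependence vanishes as $t \to \infty$.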

From the discussion in the beginning of the present section it follows that in the case of Gaussian processes $\{X_t\}_{t=0}^{\infty}$ with $(X_t, X_{t+1}, \ldots, X_{t+m-1}) \sim N(\mu_{t,m}, \Sigma_{t,m})$, the conditions of Theorem 7.2 are satisfied if, for example, $|R_{t,m}(2I_m - R_{t,m})| \to 1$ or $|\Sigma_{t,m}| / \prod_{i=0}^{m-1} \sigma^2_{t+i} \to 1$ as $t \to \infty$, where $R_{t,m}$ denote the correlation matrices corresponding to $\Sigma_{t,m}$ and $(\sigma^2_t, \ldots, \sigma^2_{t+m-1}) = \operatorname{diag}(\Sigma_{t,m})$.
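
As a quick check of these conditions in the bivariate case $m = 2$ (the reduction below is our worked example, not from the paper), write $r_t$ for the correlation of $(X_t, X_{t+1})$. Then
\[
R_{t,2} = \begin{pmatrix} 1 & r_t \\ r_t & 1 \end{pmatrix}, \qquad
R_{t,2}(2I_2 - R_{t,2}) = 2R_{t,2} - R_{t,2}^2 = (1 - r_t^2)\, I_2,
\]
so $|R_{t,2}(2I_2 - R_{t,2})| = (1 - r_t^2)^2$ and $|\Sigma_{t,2}|/(\sigma^2_t \sigma^2_{t+1}) = 1 - r_t^2$. Both conditions thus hold precisely when $r_t \to 0$, that is, when consecutive observations become asymptotically uncorrelated and hence, by Gaussianity, asymptotically independent.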

In the case of processes $\{X_t\}_{t=1}^{\infty}$ with distributions of the r.v.'s $X_1, \ldots, X_n$, $n \ge 1$, having generalized Eyraud–Farlie–Gumbel–Morgenstern copulas (3.3) (according to [70], this is the case for any time series of r.v.'s assuming two values), the conditions of the theorem are satisfied if, for example, $\varphi^2_t = \sum_{c=2}^{m} \sum_{i_1 <} \cdots$
