
and similarly for the imaginary part. Let us denote

$$
EY_i^k \;=\; \begin{pmatrix} E\varphi_1(s_i, k) \\ E\varphi_2(s_i, k) \\ E\varphi_3(s_i, k) \end{pmatrix}
\;=\; \begin{pmatrix} \Phi_1(s_i, k) \\ \Phi_2(s_i, k) \\ \Phi_3(s_i, k) \end{pmatrix}
$$
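As a concrete, non-authoritative illustration: judging from (29)-(30) below, $\varphi_1(s_i,k)$, $\varphi_2(s_i,k)$ and $\varphi_3(s_i,k)$ are sample statistics whose expectations are $\Phi(s_i,0;k)$, $\Phi(0,-s_i;k)$ and $\Phi(s_i,-s_i;k)$. Assuming they are the usual empirical characteristic-function averages (their exact definitions appear in earlier equations of the paper, not reproduced in this excerpt), a minimal sketch is as follows; the name `ecf_statistics` is introduced here purely for illustration.

```python
import numpy as np

def ecf_statistics(x, s, k):
    """Empirical characteristic-function statistics at lag k and argument s.

    Assumed (illustrative) definitions: averages of exp(i*s*X_{t+k}),
    exp(-i*s*X_t) and exp(i*s*(X_{t+k}-X_t)), whose expectations are
    Phi(s,0;k), Phi(0,-s;k) and Phi(s,-s;k), respectively.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - k                       # number of available (X_{t+k}, X_t) pairs
    lead, lag = x[k:k + n], x[:n]
    phi1 = np.mean(np.exp(1j * s * lead))            # estimates Phi(s, 0; k)
    phi2 = np.mean(np.exp(-1j * s * lag))            # estimates Phi(0, -s; k)
    phi3 = np.mean(np.exp(1j * s * (lead - lag)))    # estimates Phi(s, -s; k)
    return phi1, phi2, phi3
```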

Notice that $\Phi(u, v; k) = E(\exp(i(uX_{t+k} + vX_t)))$, $u, v \in \mathbb{R}$. Using the mean value theorem, we can expand the codifference function into

$$
\begin{pmatrix} \operatorname{Re} \hat\tau^*(s,k) \\ \operatorname{Im} \hat\tau^*(s,k) \end{pmatrix}
= \lambda \left\{ L_1^k + \bar L_2^k Z_N^k \right\} \qquad (28)
$$

where

$$
L_1^k = \begin{pmatrix} \operatorname{Re} L_1^k \\ \operatorname{Im} L_1^k \end{pmatrix}, \qquad
Z_N^k = \begin{pmatrix} \operatorname{Re} Z_N^k \\ \operatorname{Im} Z_N^k \end{pmatrix}
      = \begin{pmatrix} \operatorname{Re}\phi_N^k - \operatorname{Re}\psi_N^k \\ \operatorname{Im}\phi_N^k - \operatorname{Im}\psi_N^k \end{pmatrix}
$$

with

$$
\operatorname{Re} L_1^k = \begin{pmatrix} \operatorname{Re}\ln EY_1^k \\ \operatorname{Re}\ln EY_2^k \\ \vdots \\ \operatorname{Re}\ln EY_r^k \end{pmatrix}, \qquad
\operatorname{Re}\phi_N^k = \begin{pmatrix} \operatorname{Re} Y_1^k \\ \operatorname{Re} Y_2^k \\ \vdots \\ \operatorname{Re} Y_r^k \end{pmatrix}, \qquad
\operatorname{Re}\psi_N^k = \begin{pmatrix} \operatorname{Re} EY_1^k \\ \operatorname{Re} EY_2^k \\ \vdots \\ \operatorname{Re} EY_r^k \end{pmatrix}
$$
and similarly for the imaginary parts, and where $\bar L_2^k = (\bar d_{ij}^k)_{i,j=1,\dots,6}$ denotes the Jacobian of (26), evaluated at a point $c$ with $\|c - \psi_N^k\| < \|\phi_N^k - \psi_N^k\|$. From the assumption C2, we obtain

$$
\Phi_3(s_i, k) = \Phi(s_i, -s_i; k)
= \exp\!\Big(-\sum_{j=0}^{k-1} \sigma^\alpha |s_i c_j|^\alpha - \sum_{j=0}^{\infty} \sigma^\alpha |s_i (c_{j+k} - c_j)|^\alpha\Big) \qquad (29)
$$

and $\Phi_1(s_i, k) = \Phi_2(s_i, k)$, i.e.,

$$
\Phi(s_i, 0; k) = \Phi(0, -s_i; k) = \exp\!\Big(-\sum_{j=0}^{\infty} \sigma^\alpha |s_i c_j|^\alpha\Big) \qquad (30)
$$
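As a quick, hedged sanity check on (29)-(30), the closed forms can be compared against Monte Carlo estimates for a short linear filter driven by symmetric $\alpha$-stable noise. The coefficients, parameters and sample size below are illustrative assumptions, not values from the paper; SciPy's `levy_stable` with `beta = 0` generates S$\alpha$S noise whose characteristic function is $\exp(-\sigma^\alpha|u|^\alpha)$.

```python
import numpy as np
from scipy.stats import levy_stable

alpha, sigma = 1.7, 1.0                  # illustrative SaS index and scale
c = np.array([1.0, 0.5, 0.25])           # illustrative coefficients c_0, c_1, c_2 (c_j = 0 beyond)
s, k = 0.4, 1

# Closed forms (29)-(30); the infinite sums terminate because c_j = 0 for j >= len(c).
c_shift = np.concatenate([c[k:], np.zeros(k)])        # c_{j+k}, j = 0, ..., len(c)-1
phi3 = np.exp(-np.sum(np.abs(sigma * s * c[:k]) ** alpha)
              - np.sum(np.abs(sigma * s * (c_shift - c)) ** alpha))       # (29)
phi1 = np.exp(-np.sum(np.abs(sigma * s * c) ** alpha))                    # (30)

# Monte Carlo counterpart: simulate X_t = sum_j c_j eps_{t-j} and average cosines
# (the imaginary parts vanish by symmetry, cf. (31)-(33) below).
N = 100_000
eps = levy_stable.rvs(alpha, 0.0, scale=sigma, size=N + len(c) - 1, random_state=1)
x = np.convolve(eps, c, mode="valid")                 # length N
lead, lag = x[k:], x[:len(x) - k]
print(phi3, np.mean(np.cos(s * (lead - lag))))        # should agree up to MC error
print(phi1, np.mean(np.cos(s * lead)))
```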

From identities (29)-(30) and further applying the assumption C1, we obtain that the elements of $\operatorname{Re}\psi_N^k$ are always strictly greater than 0. Therefore, the elements of $\operatorname{Re}\phi_N^k$ will be less than or equal to 0 only with a probability converging to 0. Hence, without changing the limiting distribution of the estimator, we can restrict the definition of the real and the imaginary components of $\left(\frac{Y}{X}\right)$ in (26) to the right half-plane where the elements of $\operatorname{Re}(\phi_N^k)$ are positive, and set them equal to 0 otherwise (see the sketch following (34) below). Thus, we can conclude that the Jacobian matrix $\bar L_2^k$ is well defined here. By Theorem 2.1, $\bar L_2^k$ converges in probability to $L_2^k$, where

$$
L_2^k = \nabla L_1^k
$$

Here $\nabla g$ denotes the Jacobian of $g$. From (29), (30), we have the following identities

$$
\operatorname{Re}\Phi(s_i, -s_i; k) = E\cos(s_i(X_{t+k} - X_t)) = \Phi(s_i, -s_i; k) \qquad (31)
$$
$$
\operatorname{Re}\Phi(s_i, 0; k) = E\cos(s_i X_{t+k}) = \Phi(s_i, 0; k) \qquad (32)
$$
$$
\operatorname{Re}\Phi(0, -s_i; k) = E\cos(-s_i X_t) = \Phi(0, -s_i; k) \qquad (33)
$$

and $\operatorname{Im}\Phi(s_i, -s_i; k) = E\sin(s_i(X_{t+k} - X_t)) = 0$, $\operatorname{Im}\Phi(s_i, 0; k) = E\sin(s_i X_{t+k}) = 0$ and $\operatorname{Im}\Phi(0, -s_i; k) = E\sin(-s_i X_t) = 0$. Using these identities, after some algebra we directly obtain

$$
L_2^k = \begin{pmatrix} I_r d^k & 0 \\ 0 & I_r d^k \end{pmatrix} \qquad (34)
$$

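Finally, here is a hedged sketch of how the positivity restriction described before (34) might look in code. It assumes the sample codifference has the standard log-ratio form $\hat\tau^*(s,k) = \ln\varphi_3(s,k) - \ln\varphi_1(s,k) - \ln\varphi_2(s,k)$ built from empirical characteristic-function averages; the authoritative definition is (26) in the paper, which is not reproduced in this excerpt, and the function names below are illustrative only.

```python
import numpy as np

def guarded_log(z):
    """ln(z) where Re(z) > 0, and 0 otherwise.

    Mirrors the truncation discussed above: Re(phi_N^k) <= 0 only with
    probability tending to 0, so returning 0 on that event keeps the
    logarithm (and hence the Jacobian bar-L_2^k) well defined without
    changing the limiting distribution.
    """
    z = np.asarray(z, dtype=complex)
    ok = np.real(z) > 0
    return np.where(ok, np.log(np.where(ok, z, 1.0)), 0.0)

def sample_codifference(x, s, k):
    """Illustrative plug-in estimate: ln phi3 - ln phi1 - ln phi2 (assumed form)."""
    x = np.asarray(x, dtype=float)
    lead, lag = x[k:], x[:len(x) - k]                 # (X_{t+k}, X_t) pairs
    phi1 = np.mean(np.exp(1j * s * lead))             # ~ Phi(s, 0; k)
    phi2 = np.mean(np.exp(-1j * s * lag))             # ~ Phi(0, -s; k)
    phi3 = np.mean(np.exp(1j * s * (lead - lag)))     # ~ Phi(s, -s; k)
    return guarded_log(phi3) - guarded_log(phi1) - guarded_log(phi2)
```

For a symmetric linear process the imaginary part of this quantity is negligible in the limit, consistent with the identities (31)-(33) and the block structure in (34).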
