
B The limit distribution of the sample codifference function

In this part, we derive the asymptotic distribution of the sample codifference function of linear processes. The proof is given as a series of propositions; the main results are presented in Theorem B.4, and the proof of Theorem 2.2 appears at the end of this part. The proof closely follows the approach for obtaining the limiting distribution of the sample ACF in the classical case, e.g., Theorem 7.2.1 in Brockwell and Davis (1987).

For notational simplicity, instead of working with $\hat\tau(s_i, -s_i; k)$, $i = 1, \dots, r$, in the following we first consider the similar estimator $\hat\tau^*(s_i, -s_i; k)$,

$$\hat\tau^*(s_i, -s_i; k) = -\ln\phi^*(s_i, -s_i; k) + \ln\phi^*(s_i, 0; k) + \ln\phi^*(0, -s_i; k) \qquad (25)$$

where $\phi^*(u, v; k) = N^{-1}\sum_{t=1}^{N} \exp(i(uX_{t+k} + vX_t))$, $u, v \in \mathbb{R}$. The required result is presented in Theorem B.4.
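As a minimal numerical sketch of estimator (25), the empirical characteristic function $\phi^*$ and the codifference estimate $\hat\tau^*$ can be computed as below. This is an illustration, not part of the paper: the function names are our own, and we assume the sample $X_1, \dots, X_{N+k}$ is stored in a NumPy array so that the lag-$k$ pairs $(X_{t+k}, X_t)$ are available.

```python
import numpy as np

def phi_star(X, u, v, k):
    """Empirical characteristic function
    phi*(u, v; k) = N^{-1} sum_{t=1}^{N} exp(i(u X_{t+k} + v X_t)),
    computed over the N = len(X) - k available lag-k pairs."""
    N = len(X) - k
    return np.mean(np.exp(1j * (u * X[k:k + N] + v * X[:N])))

def tau_star(X, s, k):
    """Sample codifference estimator (25):
    tau*(s, -s; k) = -ln phi*(s, -s; k) + ln phi*(s, 0; k) + ln phi*(0, -s; k).
    The complex logarithm is taken on its principal branch."""
    return (-np.log(phi_star(X, s, -s, k))
            + np.log(phi_star(X, s, 0, k))
            + np.log(phi_star(X, 0, -s, k)))
```

For instance, for i.i.d. standard Gaussian data, $\ln E\,e^{isX} = -s^2/2$, so at lag $k = 0$ with $s = 1$ the estimate should be close to $-1$ with negligible imaginary part.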

Proposition B.1 Let $X_t$, $t \in \mathbb{Z}$, be the stationary linear process (1), satisfying conditions C1 and C2. Then if $p \geq 0$ and $q \geq 0$,

$$\lim_{N\to\infty} N \operatorname{cov}\!\left( \begin{pmatrix} \operatorname{Re}\hat\tau^*(s, p) \\ \operatorname{Im}\hat\tau^*(s, p) \end{pmatrix}, \begin{pmatrix} \operatorname{Re}\hat\tau^*(s, q) \\ \operatorname{Im}\hat\tau^*(s, q) \end{pmatrix} \right) = \lambda L_2^p V_{pq} L_2^q \lambda^T$$

where the matrices $\lambda$, $L_2^k$, $k = p, q$, and $V_{pq}$ are given in (27), (34) and (36) below. Here $\operatorname{cov}(X, Y)$ denotes the covariance between $X$ and $Y$.

Proof. To obtain the complete variance-covariance structure of the estimator, we consider the following representation of $\hat\tau^*(s, k)$:

$$\begin{pmatrix} \operatorname{Re}\hat\tau^*(s, k) \\ \operatorname{Im}\hat\tau^*(s, k) \end{pmatrix} = \begin{pmatrix} \operatorname{Re}\hat\tau^*(s_1, -s_1, k) \\ \operatorname{Re}\hat\tau^*(s_2, -s_2, k) \\ \vdots \\ \operatorname{Re}\hat\tau^*(s_r, -s_r, k) \\ \operatorname{Im}\hat\tau^*(s_1, -s_1, k) \\ \operatorname{Im}\hat\tau^*(s_2, -s_2, k) \\ \vdots \\ \operatorname{Im}\hat\tau^*(s_r, -s_r, k) \end{pmatrix} = \lambda \begin{pmatrix} Y \\ X \end{pmatrix} \qquad (26)$$

where

$$\lambda = \begin{pmatrix} I_r \otimes \lambda_1 & 0 \\ 0 & I_r \otimes \lambda_1 \end{pmatrix}, \qquad \lambda_1 = \begin{pmatrix} 1 & 1 & -1 \end{pmatrix} \qquad (27)$$

and

$$Y = \begin{pmatrix} \operatorname{Re}\ln Y_1^k \\ \operatorname{Re}\ln Y_2^k \\ \vdots \\ \operatorname{Re}\ln Y_r^k \end{pmatrix}, \qquad X = \begin{pmatrix} \operatorname{Im}\ln Y_1^k \\ \operatorname{Im}\ln Y_2^k \\ \vdots \\ \operatorname{Im}\ln Y_r^k \end{pmatrix}.$$

Here $I_r$ denotes the identity matrix of size $r$, and we denote

$$Y_i^k = \begin{pmatrix} \phi^*(0, -s_i; k) \\ \phi^*(s_i, 0; k) \\ \phi^*(s_i, -s_i; k) \end{pmatrix} = \begin{pmatrix} \phi_1(s_i, k) \\ \phi_2(s_i, k) \\ \phi_3(s_i, k) \end{pmatrix}$$

and the logarithm function is defined componentwise, i.e., we have

$$\operatorname{Re}\ln Y_i^k = \begin{pmatrix} \operatorname{Re}\ln\phi_1(s_i, k) \\ \operatorname{Re}\ln\phi_2(s_i, k) \\ \operatorname{Re}\ln\phi_3(s_i, k) \end{pmatrix}.$$
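The coefficient matrix $\lambda$ in (26)-(27) can be sketched concretely with a Kronecker product. This is our own illustration (the function name is invented): applying $\lambda$ to the stacked vector $(Y^T, X^T)^T$ of componentwise logarithms reproduces the signs $(1, 1, -1)$ of the three log terms in (25), once for the real parts and once for the imaginary parts.

```python
import numpy as np

def build_lambda(r):
    """Coefficient matrix lambda from (27): block-diagonal with two
    copies of I_r kron lambda_1, where lambda_1 = (1, 1, -1).
    The result is 2r x 6r, mapping the stacked vector (Y; X) of
    componentwise logs to (Re tau*; Im tau*) as in (26)."""
    lam1 = np.array([[1.0, 1.0, -1.0]])      # 1 x 3
    block = np.kron(np.eye(r), lam1)          # r x 3r
    top = np.hstack([block, np.zeros((r, 3 * r))])
    bottom = np.hstack([np.zeros((r, 3 * r)), block])
    return np.vstack([top, bottom])
```

For $r = 1$ the matrix is $2 \times 6$, and each output component is the sum of the first two entries of its block minus the third, matching the sign pattern of (25).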

