THÈSE Estimation, validation et identification des modèles ARMA ...


Chapter 3. Estimating the asymptotic variance of LSE of weak VARMA models

3.4 Explicit expressions for I and J

We now give expressions for the matrices $I$ and $J$ involved in the asymptotic variance $\Omega$ of the QMLE. In these expressions, we separate the terms depending on the parameter $\theta_0$ from the terms depending on the second-order structure of the weak noise. Let the matrix
$$M := E\left[\left(I_{d^2(p+q)} \otimes e_t'\right)^{\otimes 2}\right],$$
involving the second-order moments of $(e_t)$. For $(i_1,i_2)\in\{1,\dots,d^2(p+q)\}\times\{1,\dots,d^3(p+q)\}$, let $M_{i_1i_2}$ be the $(i_1,i_2)$-th block of $M$, of size $d^2(p+q)\times d^3(p+q)$. Note that the block matrix $M$ is not block diagonal. For $j_1\in\{1,\dots,d\}$, we have $M_{i_1i_2}\neq 0_{d^2(p+q)\times d^3(p+q)}$ if $i_2=d(i_1-1)+j_1$, and $M_{i_1i_2}=0_{d^2(p+q)\times d^3(p+q)}$ otherwise. For $(k,k')\in\{1,\dots,d^2(p+q)\}\times\{1,\dots,d^3(p+q)\}$ and $j_2\in\{1,\dots,d\}$, the $(k,k')$-th element of $M_{i_1i_2}$ is of the form $\sigma^2_{j_1j_2}$ if $k'=d(k-1)+j_2$ and zero otherwise, where $\sigma^2_{j_1j_2}=E\,e_{j_1t}e_{j_2t}=\Sigma(j_1,j_2)$. We are now able to state the following proposition, which provides a form for $J=J(\theta_0,\Sigma_{e0})$, in which the terms depending on $\theta_0$ (through the matrices $\lambda_i$) are distinguished from the terms depending on the second-order moments of $(e_t)$ (through the matrix $M$) and the terms depending on the variance of the multivariate innovation process (through the matrix $\Sigma_{e0}$).

Proposition A.2. Under Assumptions A1–A8, we have
$$\operatorname{vec} J = 2\sum_{i\geq 1} M\left\{\lambda_i' \otimes \lambda_i'\right\}\operatorname{vec}\Sigma_{e0}^{-1},$$
where the $\lambda_i$'s are defined by (3.4).
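As an informal numerical check, the block structure of $M$ described above, and the assembly of $J$ from the formula for $\operatorname{vec} J$, can be sketched with NumPy. Everything below is hypothetical: the dimensions ($d=2$, $p=1$, $q=0$) are toy values, the noise has a finite support so that all expectations are exact averages, the $\lambda_i$ are random placeholders (not the matrices defined by (3.4)), and the sum over $i\geq 1$ is truncated.

```python
import numpy as np

# Hypothetical toy dimensions: d = 2, p = 1, q = 0, so m := d^2(p+q) = 4
# and the blocks of M have size m x (d*m) = 4 x 8.
d, p, q = 2, 1, 0
m = d**2 * (p + q)

# Noise uniform on {(+-1, +-2)}: all expectations are exact finite averages.
support = [np.array([s1 * 1.0, s2 * 2.0]) for s1 in (1, -1) for s2 in (1, -1)]
Sigma = sum(np.outer(e, e) for e in support) / len(support)   # E[e_t e_t']

# M = E[(I_m (x) e_t')^{(x)2}], computed exactly over the finite support.
M = sum(np.kron(np.kron(np.eye(m), e[None, :]),
                np.kron(np.eye(m), e[None, :])) for e in support) / len(support)

# Check the block structure: M_{i1,i2} vanishes unless i2 = d*(i1-1) + j1 for
# some j1 in {1,...,d}; then its (k,k')-th entry is sigma_{j1 j2} when
# k' = d*(k-1) + j2 (1-based indices, as in the text).
for i1 in range(1, m + 1):
    for i2 in range(1, d * m + 1):
        block = M[(i1 - 1) * m:i1 * m, (i2 - 1) * d * m:i2 * d * m]
        j1 = i2 - d * (i1 - 1)
        if not 1 <= j1 <= d:
            assert np.all(block == 0)
        else:
            for k in range(1, m + 1):
                for kp in range(1, d * m + 1):
                    j2 = kp - d * (k - 1)
                    want = Sigma[j1 - 1, j2 - 1] if 1 <= j2 <= d else 0.0
                    assert block[k - 1, kp - 1] == want

# Assemble J as in the proposition, with placeholder lambda_i of the conforming
# shape d x (d*m), and the sum over i truncated to 5 terms.
vec = lambda X: X.reshape(-1, order="F")          # column-stacking vec operator
rng = np.random.default_rng(0)
lambdas = [rng.standard_normal((d, d * m)) for _ in range(5)]
vecJ = 2 * sum(M @ np.kron(lam.T, lam.T) for lam in lambdas) @ vec(np.linalg.inv(Sigma))
J = vecJ.reshape(m, m, order="F")
print("block structure verified; J has shape", J.shape)
```

The shapes confirm the bookkeeping of the proposition: $M$ is $d^4(p+q)^2 \times d^6(p+q)^2$, $\lambda_i'\otimes\lambda_i'$ maps $\operatorname{vec}\Sigma_{e0}^{-1}\in\mathbb{R}^{d^2}$ into the column space of $M$, and the product has length $d^4(p+q)^2$, i.e. the length of $\operatorname{vec} J$.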

In view of (3.3), we have
$$I = \operatorname{Var}_{as}\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\Upsilon_t = \sum_{h=-\infty}^{+\infty}\operatorname{Cov}\left(\Upsilon_t,\Upsilon_{t-h}\right), \qquad (3.5)$$
where
$$\Upsilon_t = \frac{\partial}{\partial\theta}\left\{\log\det\Sigma_e + e_t'(\theta)\Sigma_e^{-1}e_t(\theta)\right\}\Big|_{\theta=\theta_0}. \qquad (3.6)$$
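Formula (3.5) expresses $I$ as the long-run variance of the score process $(\Upsilon_t)$. As a rough illustration of that formula only (a plain truncated sum of sample autocovariances applied to a hypothetical i.i.d. score sample, not the estimator developed in this chapter), one could compute:

```python
import numpy as np

def long_run_variance(U, max_lag):
    """U: (n, k) array whose rows are Upsilon_t; returns a truncated
    estimate of I = sum_h Cov(Upsilon_t, Upsilon_{t-h})."""
    n, k = U.shape
    Uc = U - U.mean(axis=0)
    I_hat = (Uc.T @ Uc) / n                    # lag-0 autocovariance
    for h in range(1, max_lag + 1):
        Gamma_h = (Uc[h:].T @ Uc[:-h]) / n     # sample Cov(Upsilon_t, Upsilon_{t-h})
        I_hat += Gamma_h + Gamma_h.T           # lags +h and -h
    return I_hat

# Hypothetical score sample: i.i.d. standard normal, so the long-run
# variance is just Var(Upsilon_t), close to the identity matrix.
rng = np.random.default_rng(1)
U = rng.standard_normal((5000, 3))
I_hat = long_run_variance(U, max_lag=10)
print(I_hat.shape)
```

In practice the truncation point must grow with $n$ for consistency, and weighting the lags (as in HAC estimators) is preferable; this sketch only mirrors the structure of the sum in (3.5).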

The existence of the sum on the right-hand side of (3.5) is a consequence of A7 and of Davydov's (1968) inequality (see e.g. Lemma 11 in Boubacar Mainassara and Francq, 2009). As for the matrix $J$, we will decompose $I$ into terms involving the VARMA parameter $\theta_0$ and terms involving the distribution of the innovations $e_t$. Let the matrices

$$M_{ij,h} := E\left[\left(e_{t-h}' \otimes I_{d^2(p+q)} \otimes e_{t-j-h}'\right)' \otimes \left(e_t' \otimes I_{d^2(p+q)} \otimes e_{t-i}'\right)\right].$$

The terms depending on the VARMA parameter are the matrices $\lambda_i$ defined in (3.4), and let the matrices
$$\Gamma(i,j) = \sum_{h=-\infty}^{+\infty} M_{ij,h}
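To make the definition of $M_{ij,h}$ concrete, a toy Monte Carlo approximation of it, and of a lag-truncated version of $\Gamma(i,j)$, can be sketched as follows. Everything here is hypothetical and purely illustrative: i.i.d. Gaussian noise, small dimensions ($d=2$, $p=1$, $q=0$), and a finite lag window $|h|\leq 2$ (for i.i.d. noise only a few lags contribute to the sum over $h$).

```python
import numpy as np

# Hypothetical toy dimensions: d = 2, m := d^2(p+q) = 4 with p = 1, q = 0.
d, p, q = 2, 1, 0
m = d**2 * (p + q)

rng = np.random.default_rng(2)
n = 500
e = rng.standard_normal((n, d))    # hypothetical i.i.d. noise sample

def M_ij_h(i, j, h, e):
    """Monte Carlo estimate of
    M_{ij,h} = E[(e'_{t-h} (x) I_m (x) e'_{t-j-h})' (x) (e'_t (x) I_m (x) e'_{t-i})],
    averaging over all t for which every lagged index is observed."""
    n = e.shape[0]
    lo = max(0, h, j + h, i)               # lower bound so t-h, t-j-h, t-i >= 0
    hi = min(n, n + h, n + j + h)          # upper bound so every index is < n
    total = 0.0
    for t in range(lo, hi):
        F1 = np.kron(e[t - h][None, :],
                     np.kron(np.eye(m), e[t - j - h][None, :]))   # m x d^2*m
        F2 = np.kron(e[t][None, :],
                     np.kron(np.eye(m), e[t - i][None, :]))       # m x d^2*m
        total = total + np.kron(F1.T, F2)
    return total / (hi - lo)

# Lag-truncated Gamma(1,1) = sum_h M_{11,h}, restricted to |h| <= 2.
Gamma_11 = sum(M_ij_h(1, 1, h, e) for h in range(-2, 3))
print(Gamma_11.shape)
```

The factor shapes follow the definition: $e_{t-h}'\otimes I_{d^2(p+q)}\otimes e_{t-j-h}'$ is $d^2(p+q)\times d^4(p+q)$ here, so each $M_{ij,h}$ (and hence $\Gamma(i,j)$) is square of side $d^3(p+q)\cdot d^2(p+q)/d = 64$ in this toy case.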
