THÈSE Estimation, validation et identification des modèles ARMA ...


Chapter 5. Model selection of weak VARMA models

5.5.2 Another decomposition of the discrepancy

In Section 5.5.1, the minimal discrepancy (contrast) was approximated by −2E log L_n(θ̂_n), where the expectation is taken under the true model X. Studying this average discrepancy directly is difficult because of the dependence between θ̂_n and X. An alternative, slightly different but equivalent, way of arriving at the expected discrepancy E∆(θ̂_n) as a criterion for judging the quality of an approximating model is the following: suppose that θ̂_n is the QMLE of θ based on the observations X, and let Y = (Y_1, ..., Y_n) be an independent observation of a process satisfying the VARMA representation (5.1) (i.e. X and Y are independent observations of the same process). We may then be interested in approximating the distribution of (Y_t) by means of L_n(Y, θ̂_n). We therefore consider the discrepancy for the approximating model (model Y) that uses θ̂_n, and it is generally easier to search for a model that minimizes

C(θ̂_n) = −2E_Y log L_n(θ̂_n),    (5.10)

where E_Y denotes the expectation under the candidate model Y. Since θ̂_n and Y are independent, C(θ̂_n) is the same quantity as the expected discrepancy E∆(θ̂_n). A model minimizing (5.10) can be interpreted as a model that will do globally the best job on an independent copy of X, but this model may not be the best one for the data at hand.

The average discrepancy can be decomposed into

C(θ̂_n) = −2E_X log L_n(θ̂_n) + a1 + a2,

where

a1 = −2E_X log L_n(θ_0) + 2E_X log L_n(θ̂_n)

and

a2 = −2E_Y log L_n(θ̂_n) + 2E_X log L_n(θ_0).
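This decomposition can be checked numerically. The sketch below is a hypothetical illustration, not part of the thesis: it replaces the weak VARMA setting by a strong Gaussian AR(1) with a single parameter φ and unit innovation variance, so that Tr(I_{11} J_{11}^{-1}) equals the number of parameters, here 1. The helper names `simulate_ar1`, `qmle` and `neg2_loglik` are invented for the example; a2 is paired on the fresh sample Y by means of the identity E_X log L_n(θ_0) = E_Y log L_n(θ_0) used in the text below.

```python
import numpy as np

# Monte Carlo check (illustrative, simplified to a scalar Gaussian AR(1))
# of the decomposition C(theta_hat) = -2 E_X log L_n(theta_hat) + a1 + a2.
# For this correctly specified one-parameter model, Tr(I11 J11^{-1}) = 1,
# so both a1 and a2 should be close to 1 for moderately large n.

rng = np.random.default_rng(0)

def simulate_ar1(n, phi):
    """Simulate n observations of a stationary Gaussian AR(1)."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def neg2_loglik(x, phi):
    """-2 * conditional Gaussian log-likelihood (constants dropped)."""
    resid = x[1:] - phi * x[:-1]
    return np.sum(resid**2)

def qmle(x):
    """Least-squares estimate of phi (the conditional QMLE here)."""
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

phi0, n, reps = 0.5, 200, 2000
a1_draws, a2_draws = [], []
for _ in range(reps):
    x = simulate_ar1(n, phi0)      # sample used for estimation
    y = simulate_ar1(n, phi0)      # independent replication
    phi_hat = qmle(x)
    # a1: average over-fitting of the QMLE on its own sample X
    a1_draws.append(neg2_loglik(x, phi0) - neg2_loglik(x, phi_hat))
    # a2: average cost of using phi_hat instead of phi0 on the fresh
    # sample Y (the reference term is paired on Y via the identity
    # E_X log L_n(theta0) = E_Y log L_n(theta0))
    a2_draws.append(neg2_loglik(y, phi_hat) - neg2_loglik(y, phi0))

print(f"a1 ~ {np.mean(a1_draws):.3f}, a2 ~ {np.mean(a2_draws):.3f}")
```

Both averages should come out near 1, matching Tr(I_{11} J_{11}^{-1}) for this one-parameter model.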

The QMLE satisfies log L_n(θ̂_n) ≥ log L_n(θ_0) almost surely, so a1 can be interpreted as the average over-adjustment (over-fitting) of this QMLE. Now, note that E_X log L_n(θ_0) = E_Y log L_n(θ_0), so a2 can be interpreted as an average cost due to the use of the estimated parameter instead of the optimal parameter when the model is applied to an independent replication of X. We now discuss the regularity conditions needed for a1 and a2 to be equivalent to Tr(I_{11} J_{11}^{-1}) in the following proposition.

Proposition 4. Under Assumptions A1–A9, a1 and a2 are both equivalent to Tr(I_{11} J_{11}^{-1}) as n → ∞.

Proof of Proposition 4: Using a Taylor expansion of the quasi log-likelihood, we obtain

−2 log L_n(θ_0) = −2 log L_n(θ̂_n) + n(θ̂_n^{(1)} − θ_0^{(1)})′ J_{11} (θ̂_n^{(1)} − θ_0^{(1)}) + o_P(1).
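The way this expansion feeds the proposition can be sketched as follows (a sketch only, assuming the asymptotic normality of the QMLE in the form √n(θ̂_n^{(1)} − θ_0^{(1)}) → N(0, J_{11}^{-1} I_{11} J_{11}^{-1}), which is the form such results take for weak VARMA models earlier in the thesis):

```latex
% Taking expectations in the Taylor expansion, the quadratic term drives
% the equivalence: for an asymptotically centred Gaussian vector Z with
% variance V, one has E\,Z'AZ \to \operatorname{Tr}(AV), hence
\begin{align*}
a_1 &= E\, n\big(\hat\theta_n^{(1)} - \theta_0^{(1)}\big)'
       J_{11}\big(\hat\theta_n^{(1)} - \theta_0^{(1)}\big) + o(1) \\
    &\to \operatorname{Tr}\!\big(J_{11}\, J_{11}^{-1} I_{11} J_{11}^{-1}\big)
     = \operatorname{Tr}\!\big(I_{11} J_{11}^{-1}\big).
\end{align*}
```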
