THÈSE Estimation, validation et identification des modèles ARMA ...

Chapitre 5. Model selection of weak VARMA models

5.3 General multivariate linear regression model

Let $Z_t = (Z_{1t},\dots,Z_{dt})'$ be a $d$-dimensional random vector of response variables, $X_t = (X_{1t},\dots,X_{kt})'$ a $k$-dimensional vector of input variables, and $B = (\beta_1,\dots,\beta_d)$ a $k \times d$ matrix. We consider a multivariate linear model of the form $Z_{it} = X_t'\beta_i + \epsilon_{it}$, $i = 1,\dots,d$, or $Z_t' = X_t'B + \epsilon_t'$, $t = 1,\dots,n$, where the $\epsilon_t = (\epsilon_{1t},\dots,\epsilon_{dt})'$ are uncorrelated and identically distributed random vectors with variance $\Sigma = E\epsilon_t\epsilon_t'$. The $i$-th column of $B$ (i.e. $\beta_i$) is the vector of regression coefficients for the $i$-th response variable.

Given the $n$ observations $Z_1,\dots,Z_n$ and $X_1,\dots,X_n$, we define the $n \times d$ data matrix $Z = (Z_1,\dots,Z_n)'$, the $n \times k$ matrix $X = (X_1,\dots,X_n)'$ and the $n \times d$ matrix $\varepsilon = (\epsilon_1,\dots,\epsilon_n)'$. We then have the multivariate linear model $Z = XB + \varepsilon$. It is well known that the QMLE of $B$ coincides with the LSE and is therefore given by
\[
\hat{B} = (X'X)^{-1}X'Z, \quad \text{that is,} \quad \hat{\beta}_i = (X'X)^{-1}X'Z_i, \quad i = 1,\dots,d,
\]
where $Z_i = (Z_{i1},\dots,Z_{in})'$ is the $i$-th column of $Z$. We also have
\[
\hat{\varepsilon} := Z - X\hat{B} = M_X Z = \varepsilon - X(X'X)^{-1}X'\varepsilon = M_X\varepsilon,
\]
where $M_X = I_n - X(X'X)^{-1}X'$ is a projection matrix. The usual unbiased estimator of the error covariance matrix $\Sigma$ is

\[
\Sigma^* = \frac{1}{n-k}\,\hat{\varepsilon}'\hat{\varepsilon} = \frac{1}{n-k}(Z - X\hat{B})'(Z - X\hat{B}) = \frac{1}{n-k}\sum_{t=1}^{n}(Z_t - \hat{B}'X_t)(Z_t - \hat{B}'X_t)',
\]
or $\Sigma^* = (n-k)^{-1}\sum_{t=1}^{n}\hat{\epsilon}_t\hat{\epsilon}_t'$, where the $\hat{\epsilon}_t = Z_t - \hat{B}'X_t$ are the residual vectors. Note that the Gaussian quasi-likelihood is given by
\[
L_n(B,\Sigma;Z) = \prod_{t=1}^{n}\frac{1}{(2\pi)^{d/2}\sqrt{\det\Sigma}}\exp\left\{-\frac{1}{2}(Z_t - B'X_t)'\,\Sigma^{-1}(Z_t - B'X_t)\right\},
\]

whose maximization shows that the QML estimator of $B$ is equal to $\hat{B}$ and that of $\Sigma$ is $\hat{\Sigma} := n^{-1}\sum_{t=1}^{n}\hat{\epsilon}_t\hat{\epsilon}_t' = (n-k)n^{-1}\,\Sigma^*$. Because $\Sigma^*$ is an unbiased estimator of $\Sigma$, i.e. $E\{\Sigma^*\} = \Sigma$ by definition, we deduce that
\[
\frac{n}{n-k}\,E\hat{\Sigma} = \frac{1}{n-k}\,E\hat{\varepsilon}'\hat{\varepsilon} = \frac{1}{n-k}\,E\varepsilon' M_X \varepsilon = \Sigma. \tag{5.3}
\]
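The estimators above can be checked numerically. A minimal NumPy sketch (the simulated data, dimensions, and parameter values are illustrative assumptions, not taken from the text) computes $\hat{B}$, recovers the residuals through the projection matrix $M_X$, and verifies the relation $\hat{\Sigma} = (n-k)n^{-1}\Sigma^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 500, 3, 2  # hypothetical sample size and dimensions

# Simulated design, coefficients, and errors (illustrative values)
X = rng.normal(size=(n, k))
B_true = rng.normal(size=(k, d))
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
eps = rng.multivariate_normal(np.zeros(d), Sigma_true, size=n)
Z = X @ B_true + eps

# LSE/QMLE: B_hat = (X'X)^{-1} X'Z
B_hat = np.linalg.solve(X.T @ X, X.T @ Z)

# Residuals, also obtainable via the projection M_X = I_n - X(X'X)^{-1}X'
M_X = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
resid = Z - X @ B_hat
assert np.allclose(resid, M_X @ Z)

# Unbiased estimator Sigma* and QML estimator Sigma_hat
Sigma_star = resid.T @ resid / (n - k)
Sigma_hat = resid.T @ resid / n

# Relation Sigma_hat = ((n - k)/n) * Sigma*
assert np.allclose(Sigma_hat, (n - k) / n * Sigma_star)
```

With $n$ moderately large, $\Sigma^*$ (and $\hat{\Sigma}$) should be close to the true error covariance used in the simulation.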

5.4 Kullback-Leibler discrepancy

Assume that, with respect to a $\sigma$-finite measure $\mu$, the true density of the observations $X = (X_1,\dots,X_n)$ is $f_0$, and that some candidate model $m$ gives a density $f_m(\cdot,\theta_m)$
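As a concrete illustration of measuring how far a candidate density $f_m(\cdot,\theta_m)$ lies from the true density $f_0$, here is a minimal Monte Carlo sketch of the Kullback-Leibler discrepancy $E_{f_0}\{\log f_0(X) - \log f_m(X;\theta_m)\}$ for two univariate Gaussian densities; the specific densities and the closed-form Gaussian KL expression used as a cross-check are assumptions of this sketch, not taken from the text:

```python
import numpy as np

# Hypothetical example: f0 = N(0, 1) is the "true" density,
# f_m = N(1, 4) a candidate model density.
rng = np.random.default_rng(1)
mu0, s0 = 0.0, 1.0
mu1, s1 = 1.0, 2.0

def log_gauss(x, mu, s):
    # Log-density of N(mu, s^2) evaluated at x
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu) ** 2 / (2 * s**2)

# Monte Carlo estimate of E_{f0}[log f0(X) - log f_m(X)]
x = rng.normal(mu0, s0, size=200_000)  # sample from f0
kl_mc = np.mean(log_gauss(x, mu0, s0) - log_gauss(x, mu1, s1))

# Well-known closed form for two Gaussians:
# log(s1/s0) + (s0^2 + (mu0 - mu1)^2) / (2 s1^2) - 1/2  (about 0.443 here)
kl_closed = np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5
```

The Monte Carlo average converges to the closed-form value as the sample size grows, which is the sense in which the discrepancy can be estimated from observations drawn under $f_0$.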
