

At the 95% probability level, values of λν(ρ) ≤ 5% are indistinguishable from 0, which means that the Student's copula can be approximated by a Gaussian copula.

The description of a Student's copula relies on two parameters: the correlation matrix ρ, as in the Gaussian case, and in addition the number of degrees of freedom ν. The estimation of the parameter ν is rather difficult, and this has an important impact on the estimated value of the correlation matrix. As a consequence, the Student's copula is more difficult to calibrate and use than the Gaussian copula.
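As a quick numerical illustration of why small tail dependence makes the two copulas hard to tell apart, the sketch below evaluates the usual tail-dependence coefficient of a bivariate Student copula, 2 t_{ν+1}(−√((ν+1)(1−ρ)/(1+ρ))); reading λν(ρ) as this coefficient, and the function name, are our assumptions, not taken from the text.

```python
import numpy as np
from scipy.stats import t

def student_tail_dependence(rho, nu):
    """Tail-dependence coefficient of a bivariate Student copula with
    correlation rho and nu degrees of freedom; it vanishes in the
    Gaussian limit nu -> infinity."""
    arg = np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * t.cdf(-arg, df=nu + 1)

# lambda_nu(rho) decays quickly as nu grows: at rho = 0.3 it drops below
# the 5% threshold mentioned above for nu around 10, which is where the
# Student copula stops being distinguishable from a Gaussian one.
for nu in (3, 5, 10, 20):
    print(nu, student_tail_dependence(rho=0.3, nu=nu))
```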

3 Testing the Gaussian copula hypothesis

In view of the central role that the Gaussian paradigm has played and still plays, in particular in finance, it is natural to start with the simplest choice of dependence between different random variables, namely the Gaussian copula. It is also a natural first step, as the Gaussian copula imposes itself in an approach which consists in (1) performing a nonlinear transformation of the random variables into Normal random variables (for the marginals), which is always possible, and (2) invoking a maximum entropy principle (which amounts to adding the least additional information in the Shannon sense) to construct the multivariate distribution of these Gaussianized random variables (Sornette et al. 2000a, Sornette et al. 2000b, Andersen and Sornette 2001).
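A minimal sketch of this two-step construction follows; it is not the authors' implementation, and the rank-based estimate of the marginals Fi as well as the helper names are assumptions made for the illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    """Step (1): map each coordinate to a standard Normal marginal,
    y_i = Phi^{-1}(F_i(x_i)), with F_i estimated here by rescaled ranks."""
    T = x.shape[0]
    u = np.apply_along_axis(rankdata, 0, x) / (T + 1.0)  # values in (0, 1)
    return norm.ppf(u)

def gaussian_copula_correlation(x):
    """Step (2): among all joint laws of the Gaussianized variables with a
    given covariance, the multivariate Normal maximizes the Shannon entropy,
    so the dependence model reduces to the correlation matrix of the y_i's."""
    return np.corrcoef(gaussianize(x), rowvar=False)

# Example on synthetic data with heavy-tailed marginals.
x = np.random.standard_t(df=4, size=(1000, 3))
print(gaussian_copula_correlation(x))
```

Under this construction, fitting the dependence structure reduces to estimating a single correlation matrix, which is what makes the Gaussian copula so convenient compared with the Student's copula discussed above.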

In the sequel, we will denote by H0 the null hypothesis according to which the dependence between two (or more) random variables X and Y can be described by the Gaussian copula.

3.1 Test Statistics

We now derive the test statistics that will allow us to reject or not reject our null hypothesis H0, and state the following proposition:

PROPOSITION 1

Assuming that the N-dimensional random vector x = (x1, · · · , xN) with distribution function F and marginals Fi satisfies the null hypothesis H0, then the variable

z^2 = \sum_{i,j=1}^{N} \Phi^{-1}(F_i(x_i)) \, (\rho^{-1})_{ij} \, \Phi^{-1}(F_j(x_j)),   (20)

where the matrix ρ is

\rho_{ij} = \mathrm{Cov}\!\left[ \Phi^{-1}(F_i(x_i)), \, \Phi^{-1}(F_j(x_j)) \right],   (21)

follows a χ²-distribution with N degrees of freedom.

To prove the proposition above, first consider an N-dimensional random vector x = (x1, · · · , xN). Let us denote by F its distribution function and by Fi the marginal distribution of each xi. Let us now assume that the distribution function F satisfies H0, so that F has a Gaussian copula with correlation matrix ρ while the Fi's can be any distribution functions. According to Theorem 1, the distribution F can be represented as

F(x_1, \cdots, x_N) = \Phi_{\rho,N}\!\left( \Phi^{-1}(F_1(x_1)), \cdots, \Phi^{-1}(F_N(x_N)) \right).   (22)

Let us now transform the xi's into Normal random variables yi's:

y_i = \Phi^{-1}(F_i(x_i)).   (23)
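The proposition can also be checked numerically; below is a minimal sketch, assuming the marginals Fi are replaced by rank-based empirical estimates (this estimation choice and the function name are ours, not from the text).

```python
import numpy as np
from scipy.stats import norm, chi2, rankdata

def z2_statistic(x):
    """Compute z^2 of Eq. (20) for each observation: Gaussianize the
    marginals as in Eq. (23), estimate rho as in Eq. (21), then form the
    quadratic form of y in the inverse of rho."""
    T = x.shape[0]
    u = np.apply_along_axis(rankdata, 0, x) / (T + 1.0)
    y = norm.ppf(u)                               # y_i = Phi^{-1}(F_i(x_i))
    rho = np.cov(y, rowvar=False)                 # rho_ij = Cov[y_i, y_j]
    return np.einsum('ti,ij,tj->t', y, np.linalg.inv(rho), y)

# Under H0, z^2 follows a chi^2 law with N degrees of freedom, so its
# probability integral transform should be close to uniform on [0, 1].
x = np.random.multivariate_normal(np.zeros(3), 0.4 + 0.6 * np.eye(3), size=2000)
z2 = z2_statistic(x)
print(np.quantile(chi2.cdf(z2, df=x.shape[1]), [0.25, 0.5, 0.75]))
```

Comparing the empirical distribution of z² with the χ²-distribution with N degrees of freedom then provides a practical way of testing H0.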

