statistique, théorie et gestion de portefeuille - Docs at ISFA

8. Tests of the Gaussian copula

The sample covariance matrix $\hat\rho$ is estimated by the expression:

$$
\hat\rho = \frac{1}{T} \sum_{i=1}^{T} \hat y(i) \cdot \hat y(i)^t \,, \qquad (31)
$$

which allows us to calculate the variable

$$
\hat z^2(k) = \sum_{i,j=1}^{2} \hat y_i(k) \, (\hat\rho^{-1})_{ij} \, \hat y_j(k) \,, \qquad (32)
$$

as defined in (27) for $k \in \{1, \cdots, T\}$, which should be asymptotically distributed according to a $\chi^2$-distribution if the Gaussian copula hypothesis is correct.
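As a rough numerical sketch (not the authors' code), equations (31)-(32) can be implemented as follows; the synthetic Gaussian sample `y` is an assumption standing in for the transformed variables $\hat y(k)$ of the text:

```python
import numpy as np

# Illustrative assumption: T observations of n = 2 Gaussian-transformed
# variables, standing in for the hat{y}(k) of the text.
rng = np.random.default_rng(0)
T, n = 1000, 2
y = rng.standard_normal((T, n))

# Eq. (31): sample covariance matrix rho_hat = (1/T) * sum_i y(i) y(i)^t
rho_hat = (y.T @ y) / T

# Eq. (32): z2(k) = sum_{i,j} y_i(k) (rho_hat^{-1})_{ij} y_j(k)
rho_inv = np.linalg.inv(rho_hat)
z2 = np.einsum('ki,ij,kj->k', y, rho_inv, y)

# Under the Gaussian copula hypothesis, z2 is asymptotically chi2-distributed
# with n degrees of freedom; by construction its sample mean equals n exactly,
# since (1/T) sum_k y(k)^t rho_inv y(k) = trace(rho_inv rho_hat) = n.
```

Note that the quadratic form is computed for all $k$ at once via `einsum`, avoiding an explicit loop over observations.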

The usual way of comparing an empirical with a theoretical distribution is to measure the distance between these two distributions and to perform the Kolmogorov test or the Anderson-Darling (Anderson and Darling 1952) test (for a better accuracy in the tails of the distribution). The Kolmogorov distance is the maximum local distance along the quantile, which most often occurs in the bulk of the distribution, while the Anderson-Darling distance puts the emphasis on the tails of the two distributions by a suitable normalization. We propose to complement these two distances with two additional measures, which are defined as averages of the Kolmogorov distance and of the Anderson-Darling distance respectively:

$$
\text{Kolmogorov:} \qquad d_1 = \max_z \left| F_{z^2}(z^2) - F_{\chi^2}(z^2) \right| \qquad (33)
$$
$$
\text{average Kolmogorov:} \qquad d_2 = \int \left| F_{z^2}(z^2) - F_{\chi^2}(z^2) \right| \, dF_{\chi^2}(z^2) \qquad (34)
$$
$$
\text{Anderson-Darling:} \qquad d_3 = \max_z \frac{\left| F_{z^2}(z^2) - F_{\chi^2}(z^2) \right|}{\sqrt{F_{\chi^2}(z^2) \left[ 1 - F_{\chi^2}(z^2) \right]}} \qquad (35)
$$
$$
\text{average Anderson-Darling:} \qquad d_4 = \int \frac{\left| F_{z^2}(z^2) - F_{\chi^2}(z^2) \right|}{\sqrt{F_{\chi^2}(z^2) \left[ 1 - F_{\chi^2}(z^2) \right]}} \, dF_{\chi^2}(z^2) \qquad (36)
$$

The Kolmogorov distance $d_1$ and its average $d_2$ are more sensitive to the deviations occurring in the bulk of the distributions. In contrast, the Anderson-Darling distance $d_3$ and its average $d_4$ are more accurate in the tails of the distributions. We present our statistical tests for these four distances in order to be as complete as possible with respect to the different sensitivities of the tests.
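The four distances (33)-(36) can be sketched numerically as follows. The synthetic sample and the closed-form $\chi^2$ CDF with 2 degrees of freedom, $F_{\chi^2}(x) = 1 - e^{-x/2}$, are illustrative assumptions, and the integrals over $dF_{\chi^2}$ are approximated by the trapezoidal rule:

```python
import numpy as np

# Illustrative assumption: a synthetic chi2(2) sample standing in for the
# z2(k) of eq. (32). For 2 degrees of freedom, F_chi2(x) = 1 - exp(-x/2),
# so a chi2(2) variate is -2*log(U) with U uniform on (0, 1).
rng = np.random.default_rng(1)
T = 2000
z2 = -2.0 * np.log(rng.uniform(size=T))

z2_sorted = np.sort(z2)
F_emp = np.arange(1, T + 1) / T            # empirical CDF F_{z^2}
F_th = 1.0 - np.exp(-z2_sorted / 2.0)      # theoretical CDF F_{chi^2}

diff = np.abs(F_emp - F_th)
d1 = diff.max()                            # Kolmogorov distance, eq. (33)

# Averages over dF_chi2 via the trapezoidal rule along the theoretical CDF
dF = np.diff(F_th)
d2 = np.sum(0.5 * (diff[1:] + diff[:-1]) * dF)   # average Kolmogorov, eq. (34)

ad = diff / np.sqrt(F_th * (1.0 - F_th))   # Anderson-Darling normalization
d3 = ad.max()                              # Anderson-Darling, eq. (35)
d4 = np.sum(0.5 * (ad[1:] + ad[:-1]) * dF) # average Anderson-Darling, eq. (36)
```

Because the normalization $\sqrt{F(1-F)} \le 1/2$, the Anderson-Darling quantities are always at least twice their Kolmogorov counterparts pointwise, which makes the contrast between bulk- and tail-sensitivity visible directly in the computed values.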

The distances $d_2$ and $d_4$ are not in common use in statistics, so let us justify our choice. One usually uses distances similar to $d_2$ and $d_4$ but which differ by taking the square instead of the modulus of $F_{z^2}(z^2) - F_{\chi^2}(z^2)$; these lead respectively to the $\omega$-test and the $\Omega$-test, whose statistics are theoretically known. The main advantage of the distances $d_2$ and $d_4$ with respect to the more usual distances $\omega$ and $\Omega$ is that they are simply equal to the averages of $d_1$ and $d_3$. This averaging is very interesting and provides important information. Indeed, the distances $d_1$ and $d_3$ are mainly controlled by the point that maximizes the argument within the $\max(\cdot)$ function. They are thus sensitive to the presence of an outlier. By averaging, $d_2$ and $d_4$ become less sensitive to outliers, since the weight of such points is only of order $1/T$ (where $T$ is the size of the sample), while it equals one for $d_1$ and $d_3$. Of course, the distances $\omega$ and $\Omega$ also perform a smoothing, since they are averaged quantities too. But they are the averages of the square of $d_1$ and $d_3$, which leads to an undesired overweighting of the largest events. In fact, this weight function is chosen as a convenient analytical form that allows one to derive explicitly the theoretical asymptotic statistics for the $\omega$- and $\Omega$-tests. In contrast, using the modulus of $F_{z^2}(z^2) - F_{\chi^2}(z^2)$ instead of its square in the expressions of $d_2$ and $d_4$, no theoretical test statistics can be derived analytically. In other

