
empirical analog F_N(x), estimated from a sample of N realizations, is evaluated as follows:

$$
\mathrm{ADS} = N \int_{-\infty}^{+\infty} \frac{\left[F_N(x) - F(x)\right]^2}{F(x)\,\left(1 - F(x)\right)}\, dF(x) \tag{28}
$$

$$
\mathrm{ADS} = -N - 2 \sum_{k=1}^{N} \left\{ w_k \log F(y_k) + (1 - w_k)\log\left(1 - F(y_k)\right) \right\}, \tag{29}
$$

where w_k = 2k/(2N + 1), k = 1, ..., N, and y_1 ≤ ... ≤ y_N is the ordered sample. If the sample is drawn from a population with distribution function F(x), the Anderson-Darling statistic (ADS) has a standard AD-distribution free of the theoretical df F(x) (Anderson and Darling 1952), similarly to the χ²-distribution for the χ²-statistic or the Kolmogorov distribution for the Kolmogorov statistic. It should be noted that the ADS weights the squared difference in eq. (28) by 1/[F(x)(1 − F(x))], which is nothing but the inverse of the variance of the difference in square brackets. The AD distance thus emphasizes the tails of the distribution more than, say, the Kolmogorov distance, which is determined by the maximum absolute deviation of F_N(x) from F(x), or the mean-squared error, which is mostly controlled by the middle of the range of the distribution. Since we have to insert the estimated parameters into the ADS, this statistic no longer obeys the standard AD-distribution: the ADS decreases because the use of the fitted parameters ensures a better fit to the sample distribution. However, we can still use the standard quantiles of the AD-distribution as upper bounds for the ADS. If the observed ADS is larger than the standard quantile at a high significance level (1 − ε), we can conclude that the null hypothesis F(x) is rejected at a significance level larger than (1 − ε). If we wish to estimate the actual significance level of the ADS when it does not exceed the standard quantile at a high significance level, we must resort to some other method of estimating the significance level of the ADS, such as the bootstrap method.
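As a minimal sketch (not part of the original paper), the following Python code evaluates eq. (29) for a sample against a candidate CDF and estimates the significance level of the ADS by a parametric bootstrap when the parameters have been fitted; the `fit(sample)` interface and the function names are illustrative assumptions.

```python
import numpy as np

def anderson_darling_stat(sample, cdf):
    """Anderson-Darling statistic of eq. (29) between the empirical CDF of
    `sample` and the theoretical CDF `cdf`, with weights w_k = 2k/(2N+1)."""
    y = np.sort(np.asarray(sample))            # ordered sample y_1 <= ... <= y_N
    n = len(y)
    k = np.arange(1, n + 1)
    w = 2.0 * k / (2.0 * n + 1.0)              # w_k = 2k/(2N+1)
    F = np.clip(cdf(y), 1e-12, 1.0 - 1e-12)    # guard against log(0)
    return -n - 2.0 * np.sum(w * np.log(F) + (1.0 - w) * np.log(1.0 - F))

def bootstrap_pvalue(sample, fit, n_boot=999, rng=None):
    """Parametric-bootstrap significance level of the observed ADS when the
    parameters are fitted (so the standard AD-distribution no longer applies).
    `fit(sample)` is assumed to return (cdf, sampler): the fitted CDF and a
    generator of synthetic samples, sampler(size, rng)."""
    rng = np.random.default_rng(rng)
    cdf, sampler = fit(sample)
    ads_obs = anderson_darling_stat(sample, cdf)
    count = 0
    for _ in range(n_boot):
        synth = sampler(len(sample), rng)
        cdf_b, _ = fit(synth)                  # re-fit on each synthetic sample
        if anderson_darling_stat(synth, cdf_b) >= ads_obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```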

In the following, the estimates minimizing the Anderson-Darling distance will be referred to as AD-estimates. The maximum likelihood estimates (ML-estimates) are asymptotically more efficient than AD-estimates for independent data and under the condition that the null hypothesis (given by one of the four distributions (22-25), for instance) corresponds to the true data-generating model. When this is not the case, the AD-estimates provide a better practical tool than the ML-estimates for approximating sample distributions.
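For concreteness, here is a sketch of how AD-estimates could be obtained for the Stretched-Exponential model by numerically minimizing the AD distance, reusing `anderson_darling_stat` from the sketch above; the parametrization F(x) = 1 − exp(−(x/d)^c) and the starting values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def stretched_exponential_cdf(x, c, d):
    """Stretched-Exponential (Weibull) CDF, one of the candidate models."""
    return 1.0 - np.exp(-(x / d) ** c)

def ad_estimate_se(sample):
    """AD-estimates: parameters (c, d) chosen to minimize the Anderson-Darling
    distance of eq. (29), rather than to maximize the likelihood."""
    x = np.asarray(sample)

    def objective(log_params):
        c, d = np.exp(log_params)              # log-space keeps c, d > 0
        return anderson_darling_stat(x, lambda t: stretched_exponential_cdf(t, c, d))

    start = np.log([1.0, np.mean(x)])          # rough starting point, for illustration
    res = minimize(objective, start, method="Nelder-Mead")
    c_hat, d_hat = np.exp(res.x)
    return c_hat, d_hat, res.fun               # fitted exponent, scale, minimal ADS
```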

We have determined the AD-estimates for 18 standard significance levels q_1, ..., q_18 given in table 6. The sample quantiles, or thresholds u_1, ..., u_18, corresponding to these significance levels for our samples are also shown in table 6. Although the thresholds u_k vary from sample to sample, they always correspond to the same fixed set of significance levels q_k throughout the paper, which allows us to compare the goodness-of-fit for samples of different sizes.
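A brief sketch of how the thresholds u_k can be read off a sample for a fixed set of significance levels q_k, assuming each u_k is simply the empirical quantile of the sample at level q_k; the levels listed below are placeholders, not the actual q_1, ..., q_18 of table 6.

```python
import numpy as np

def thresholds_for_levels(sample, levels):
    """Empirical quantiles u_k of `sample` at the significance levels q_k,
    so that fits on samples of different sizes remain comparable."""
    return np.quantile(np.asarray(sample), levels)

# illustrative levels only -- NOT the actual q_1 ... q_18 of table 6
q = [0.0, 0.50, 0.90, 0.95, 0.99]
# u = thresholds_for_levels(returns, q)   # `returns` is a hypothetical data array
```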

4.3 Empirical results

The Anderson-Darling statistics (ADS) for five parametric distributions (Weibull or Stretched-Exponential, Generalized Pareto, Gamma, Exponential and Pareto) are shown in table 7 for two quantile ranges: the top half of the table corresponds to the 90% lowest thresholds, while the bottom half corresponds to the 10% highest ones. For the lowest thresholds, the ADS rejects all distributions except the Stretched-Exponential for the Nasdaq. Thus, none of the considered distributions is really adequate to model the data over such large ranges. For the 10% highest quantiles, only the exponential model is rejected at the 95% confidence level. The Stretched-Exponential distribution performs best, just ahead of the Pareto distribution and the Incomplete Gamma, which cannot be rejected either. We now present an analysis of each case in more detail.
