
48 Actuarial Modelling of Claim Counts

In a seminal paper, Simar (1976) gave a detailed description of the nonparametric maximum likelihood estimator of F̃, as well as an algorithm for its computation. The nonparametric maximum likelihood estimator has a discrete distribution, and Simar (1976) obtained an upper bound for the size of its support.

Walhin & Paris (1999) showed that, although the nonparametric maximum likelihood estimator is powerful for the evaluation of functionals of claim counts, it is not suitable for ratemaking, because it is purely discrete. For this reason, Denuit & Lambert (2001) proposed a smoothed version of the nonparametric maximum likelihood estimator. This approach is somewhat similar to the route followed by Carrière (1993b), who proposed to smooth the Tucker-Lindsay moment estimator with a LogNormal kernel.

Young (1997) applied nonparametric density estimation techniques to estimate F̃. Because the actuary only observes claim numbers and not the conditional mean, an estimate of the underlying risk parameter relating to the ith policy of the portfolio is the average claim number x_i (i.e. the total number of claims generated by this policy divided by the length of the exposure period). Therefore, given a kernel K, Young (1997) suggested estimating dF̃ by

$$
d\hat{\tilde{F}}(t) = \sum_{i=1}^{n} \frac{w_i}{h_i}\, K\!\left(\frac{t - x_i}{h_i}\right)
$$

in which h_i is a positive parameter called the bandwidth and w_i is a weight (taken to be the number of years the ith policy is in force divided by the total number of policy-years for the collective). Young (1997) suggested using the Epanechnikov kernel and determined the h_i's in order to minimize the mean integrated squared error (by reference to a Normal prior).
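The weighted kernel estimator above can be sketched as follows. This is a minimal illustration, not Young's (1997) implementation: the policy data, the equal weights, and the common bandwidth are invented for the example, and the bandwidths are fixed rather than chosen by minimizing the mean integrated squared error.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: (3/4)(1 - u^2) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def mixing_density_estimate(t, x, weights, bandwidths):
    """Evaluate the kernel estimate of dF~ at the points t.

    x[i]          : average annual claim number of policy i
    weights[i]    : exposure of policy i divided by total exposure (sums to 1)
    bandwidths[i] : bandwidth h_i for policy i
    """
    t = np.asarray(t, dtype=float).reshape(-1, 1)   # shape (m, 1)
    u = (t - x) / bandwidths                        # shape (m, n) by broadcasting
    # sum over policies of (w_i / h_i) K((t - x_i) / h_i)
    return (weights / bandwidths * epanechnikov(u)).sum(axis=1)

# Hypothetical portfolio of 5 policies with equal exposures
x = np.array([0.0, 0.1, 0.2, 0.0, 0.5])   # average claim numbers
w = np.full(5, 0.2)                        # equal exposure weights
h = np.full(5, 0.3)                        # common bandwidth, chosen arbitrarily

grid = np.linspace(-1.0, 1.5, 501)
density = mixing_density_estimate(grid, x, w, h)
```

Because the weights sum to one and the kernel integrates to one, the resulting estimate integrates to one over its support; being built from a continuous kernel, it avoids the purely discrete character that makes the nonparametric maximum likelihood estimator unsuitable for ratemaking.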
