
contracts that belong to a (more or less) heterogeneous portfolio, where there is limited or irregular claim experience for each contract but ample claim experience for the portfolio. Credibility theory can be seen as a set of quantitative tools that allows insurers to perform experience rating, that is, to adjust future premiums based on past experience. In many cases, a compromise estimator is derived as a convex combination of a prior mean and the mean of the current observations. The weight given to the observed mean is called the credibility factor (since it fixes the extent to which the actuary may be confident in the data).
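To fix ideas, the compromise estimator can be written generically (the symbols m, \(\bar{X}\) and z below are introduced purely for illustration) as

\[
\widehat{\pi} \;=\; z\,\bar{X} + (1 - z)\,m, \qquad 0 \le z \le 1,
\]

where m is the prior (portfolio) mean, \(\bar{X}\) is the mean of the policyholder's own observations and z is the credibility factor: z = 1 corresponds to rating the policy purely on its own experience, z = 0 to charging the prior mean.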

3.1.3 Limited Fluctuation Theory

There are different types of credibility mechanisms: limited fluctuation credibility and greatest accuracy credibility. Limited fluctuation credibility theory was developed in the early part of the 20th century in connection with workers' compensation insurance by Mowbray (1914). It provides a mechanism for assigning full or partial credibility to a policyholder's experience. In the former case, the policy is rated on the basis of its own claims history, whereas in the latter case, a weighted average of past experience and the grand mean is used by the insurer. Although the limited fluctuation approach provides simple solutions to the problem, it suffers from a lack of theoretical justification. We will not consider this approach in this book. Instead, we will consider the greatest accuracy credibility theory formalized by Bühlmann (1967, 1970).
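Although limited fluctuation credibility is not pursued in this book, a classical illustration (stated here only to convey the flavour of the approach, not taken from the text) is the full credibility standard for Poisson claim counts: requiring the observed count to lie within r = 5% of its mean with probability 90% and invoking a Normal approximation leads to

\[
\lambda_{\text{full}} \;=\; \left(\frac{z_{0.95}}{r}\right)^{2} \;=\; \left(\frac{1.645}{0.05}\right)^{2} \;\approx\; 1082
\]

expected claims; below this standard, the partial credibility weight is often taken as \(\sqrt{n/\lambda_{\text{full}}}\), where n is the expected number of claims for the risk at hand.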

3.1.4 Greatest Accuracy Credibility

The idea behind greatest accuracy credibility theory can be summarized as follows: Tariff cells include policyholders with similar underwriting characteristics; each of them is viewed as homogeneous with respect to the underwriting characteristics used by the insurance company. Of course, the risks in the cell are not truly homogeneous: there still remains some heterogeneity in each of the tariff cells, as explained in the preceding chapters. To reflect this heterogeneity, the relative risk level of each policyholder in the rating cell is characterized by a risk parameter Θ, but the value of Θ varies by policyholder. If Θ = 50% then the expected number of claims reported by this policyholder is half of the claim frequency corresponding to the rating cell, whereas if Θ = 300% then the expected number of claims for this individual is three times the claim frequency of the rating cell. Of course, even if assuming the existence of such a Θ is reasonable, it is not observable and the actuary can never know its true value for a given policyholder.
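In symbols, writing λ for the annual expected claim frequency of the rating cell (a notation used here only for illustration), the interpretation above amounts to

\[
\mathbb{E}[N \mid \Theta = \theta] \;=\; \theta\,\lambda,
\]

so that θ = 0.5 halves, and θ = 3 triples, the expected annual number of claims N of the cell.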

Because Θ varies by policyholder, there is a distribution function F_Θ giving the proportion of policyholders in the portfolio with relative risk level less than or equal to a certain threshold. Stated another way, F_Θ(θ) represents the probability that a policyholder picked at random from the portfolio has a risk parameter Θ that is less than or equal to θ. The connection with the random effect introduced in the statistical models of Chapter 2 to account for the residual heterogeneity is now clear: this random effect becomes the random risk parameter Θ for a policyholder picked at random from the portfolio (the distribution function of Θ is F_Θ). Even if the risk parameter Θ remains unknown, the distribution function F_Θ can be estimated from data, as explained in Chapter 2. Once estimated, the heterogeneity model can be used to perform prediction on longitudinal data and allows for experience rating in motor insurance.
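As an illustration of how such a heterogeneity model yields experience-rated predictions, suppose (purely as an example, with notation chosen here for convenience) that, given Θ = θ, the annual claim numbers N_1, ..., N_T of a policyholder are independent Poisson distributed with mean λθ, and that Θ follows a Gamma distribution with unit mean and variance 1/a. The a posteriori expected relative risk level is then

\[
\mathbb{E}[\Theta \mid N_1,\ldots,N_T]
\;=\; \frac{a + \sum_{t=1}^{T} N_t}{a + \lambda T}
\;=\; z\,\frac{\bar{N}}{\lambda} + (1 - z)\times 1,
\qquad z = \frac{\lambda T}{a + \lambda T},
\]

again a convex combination of the policyholder's observed relative claim frequency \(\bar{N}/\lambda\) and the prior value 1, with a credibility factor z that increases with the length T of the observation period.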
