Credibility theory

Credibility theory is a branch of actuarial science used to quantify how much an individual outcome differs from an outcome deemed typical. It was originally developed as a method of calculating the risk premium by combining the individual risk experience with the class risk experience. Credibility can be calculated using two popular approaches: Bayesian and Bühlmann.

Types of Credibility

In Bayesian credibility, we separate the population into classes (B) and assign each a prior probability (Probability of B). Then we find how likely our experience (A) is within each class (Probability of A given B). Next, we find how likely our experience was over all classes (Probability of A). Finally, we can find the probability of each class given our experience (Probability of B given A). Returning to each class, we then weight each class statistic by the probability of that class given the experience.
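
As a minimal sketch of this weighting, the steps above can be written out in Python. The classes, prior probabilities, likelihoods, and class means here are hypothetical illustrations, not data from any real portfolio:

    # Prior probability of each class B: P(B)
    priors = {"low_risk": 0.7, "high_risk": 0.3}

    # How likely the observed experience A is within each class: P(A given B)
    likelihoods = {"low_risk": 0.1, "high_risk": 0.6}

    # The statistic of interest (e.g. mean claims) for each class
    class_means = {"low_risk": 100.0, "high_risk": 400.0}

    # How likely the experience is over all classes: P(A) = sum of P(A given B) * P(B)
    p_a = sum(likelihoods[b] * priors[b] for b in priors)

    # Probability of each class given the experience: P(B given A)
    posteriors = {b: likelihoods[b] * priors[b] / p_a for b in priors}

    # Weight each class statistic by the probability of that class given the experience
    estimate = sum(class_means[b] * posteriors[b] for b in posteriors)
    print(posteriors, estimate)  # approximately: low_risk 0.28, high_risk 0.72, estimate 316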

Bühlmann credibility works by looking at the variance across the population. More specifically, it asks how much of the total variance is attributed to the variance of the expected values of each class (the Variance of the Hypothetical Means), and how much is attributed to the expected variance within classes (the Expected Value of the Process Variance). Say we have a basketball team with a high number of points per game: sometimes they score 128, other times 130, but always somewhere in between. Compared with all basketball teams this is a relatively low variance, meaning they will contribute very little to the Expected Value of the Process Variance. At the same time, their unusually high point totals greatly increase the variance of the population, meaning that if the league booted them out, each remaining team's point total would be much more predictable (lower variance). So this team is definitely unique: they contribute greatly to the Variance of the Hypothetical Means. We can therefore rate this team's experience with fairly high credibility. They often/always score a lot (low Expected Value of the Process Variance), and not many teams score as much as they do (high Variance of the Hypothetical Means).
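
A rough numerical sketch of this decomposition, with made-up points-per-game data: compute each class's mean, the variance across those means (Variance of the Hypothetical Means, VHM), and the average within-class variance (Expected Value of the Process Variance, EPV), then combine them into the Bühlmann credibility factor Z = n / (n + EPV/VHM). Note this uses the naive variance of the sample means as the VHM; textbook Bühlmann estimation also corrects that estimate for sampling noise:

    import statistics

    # Hypothetical points-per-game samples for three teams (classes)
    teams = {
        "high_scorers": [128, 129, 130, 128, 130],  # always 128-130: low process variance
        "team_b": [95, 102, 88, 110, 90],
        "team_c": [101, 97, 104, 99, 93],
    }

    # Hypothetical mean of each class
    means = [statistics.mean(g) for g in teams.values()]

    # Variance of the Hypothetical Means (VHM): variance across the class means
    vhm = statistics.pvariance(means)

    # Expected Value of the Process Variance (EPV): average within-class variance
    epv = statistics.mean(statistics.pvariance(g) for g in teams.values())

    # Buhlmann credibility factor for n observations per class
    n = 5
    z = n / (n + epv / vhm)
    print(f"VHM={vhm:.1f}, EPV={epv:.1f}, Z={z:.2f}")  # high VHM, low EPV -> Z near 1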

A non-technical example

Say we have a large box of identical cars. We roll ten of them down a ramp, and one after another they all turn left as they go down, so we expect the next car to go left as well. We then roll car number 11 and it goes right; car number 11 has now earned some credibility, or credible experience. We might decide this car is broken or special, but it is definitely worth noticing. Note, however, that we only tried 11 cars. What if we had rolled 100 or 10,000 cars before we found the odd one? The occurrence would seem even stranger.

The person rolling the cars would likely then try that car again, just to make sure it actually is broken or special. So we take our special car and decide to find out how unique it really is by rolling it once more down the ramp. If the car goes left this time, we might decide the first roll was a one-time fluke and there is nothing special about this car. If it goes right again, we might convince ourselves we have found a genuine deviation from the norm. It is advantageous for insurers to look for such deviations.

Credibility theory supplies the statistical tools for quantifying the difference between what we expect to see, given the similarities, and what we actually observe, given the uniqueness.

Actuarial credibility

Actuarial credibility describes an approach used by actuaries to improve statistical estimates. Although the approach can be formulated in either a frequentist or Bayesian statistical setting, the latter is often preferred because of the ease of recognizing more than one source of randomness through both "sampling" and "prior" information. In a typical application, the actuary has an estimate X based on a small set of data, and an estimate M based on a larger but less relevant set of data. The credibility estimate is ZX + (1-Z)M,[1] where Z is a number between 0 and 1 (called the "credibility weight" or "credibility factor") calculated to balance the sampling error of X against the possible lack of relevance (and therefore modeling error) of M.
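
In code, this estimate is a simple blend; here is a minimal sketch (the function name and validity check are ours, not a standard library API), applied to the shoe-factory figures from the worked example later in this article:

    def credibility_estimate(x, m, z):
        """Blend individual estimate x with collective estimate m: Z*X + (1 - Z)*M."""
        if not 0.0 <= z <= 1.0:
            raise ValueError("credibility weight z must lie in [0, 1]")
        return z * x + (1.0 - z) * m

    print(credibility_estimate(3.1, 7.4, 0.3))  # approximately 6.11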

When an insurance company calculates the premium it will charge, it divides the policyholders into groups. For example, it might divide motorists by age, sex, and type of car; a young man driving a fast car is considered a high risk, while an old woman driving a small car is considered a low risk. The division is made by balancing the two requirements that the risks in each group be sufficiently similar and the group sufficiently large that a meaningful statistical analysis of the claims experience can be done to calculate the premium. This compromise means that none of the groups contains only identical risks. The problem is then to devise a way of combining the experience of the group with the experience of the individual risk to calculate the premium more accurately. Credibility theory provides a solution to this problem.

For actuaries, credibility theory is important for calculating the premium of a group of insurance contracts. The goal is to set up an experience rating system to determine next year's premium, taking into account not only the individual experience of the group, but also the collective experience.

There are two extreme positions. One is to charge everyone the same premium, estimated by the overall mean \overline{X} of the data. This makes sense only if the portfolio is homogeneous, meaning that all risk cells have identical mean claims. If the portfolio is heterogeneous, however, it is not a good idea to charge a premium in this way (overcharging "good" risks and undercharging "bad" risks), since the "good" risks will take their business elsewhere, leaving the insurer with only "bad" risks. This is an example of adverse selection.

The other extreme is to charge group j its own average claims, \overline{X_j}, as the premium charged to the insured. This is appropriate if the portfolio is heterogeneous, provided the claims experience is fairly large. To compromise between these two extreme positions, we take a weighted average of the two:

C = z_j\overline{X_j} + (1 - z_j)\overline{X}

z_j has the following intuitive meaning: it expresses how "credible" the individual experience of cell j is. If it is high, a larger weight is attached to \overline{X_j}. Here z_j is called a credibility factor, and the premium so charged is called a credibility premium.

If the portfolio were completely homogeneous then it would be reasonable to set z_j = 0, while if it were completely heterogeneous then it would be reasonable to set z_j = 1. Using intermediate values is reasonable to the extent that both individual and group history are useful in inferring future individual behavior.
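
As a small illustration of the credibility premium formula, the sketch below uses hypothetical claims data for two risk cells and an assumed value of z_j (in practice z_j would itself be estimated, for example by the Bühlmann approach above):

    import statistics

    # Hypothetical claims experience for two risk cells
    claims = {
        "cell_1": [120, 135, 110, 150],
        "cell_2": [310, 280, 295, 330],
    }
    z_j = 0.6  # assumed credibility factor

    cell_means = {j: statistics.mean(x) for j, x in claims.items()}        # individual means
    overall_mean = statistics.mean(v for x in claims.values() for v in x)  # collective mean

    # Credibility premium per cell: C = z_j * mean_j + (1 - z_j) * overall mean
    premiums = {j: z_j * cell_means[j] + (1 - z_j) * overall_mean for j in claims}
    print(premiums)  # each premium lies between its cell mean and the overall mean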

For example, an actuary has accident and payroll history data for a shoe factory suggesting a rate of 3.1 accidents per million dollars of payroll. She also has industry statistics (based on all shoe factories) suggesting that the rate is 7.4 accidents per million. With a credibility factor Z of 30%, she would estimate the rate for the factory as 30%(3.1) + 70%(7.4) = 6.1 accidents per million.
