18.1 Introduction

In this chapter and in Chapter 19, we consider a model-based approach to the solution of the credibility problem. This approach, referred to as greatest accuracy credibility theory, is the outgrowth of a classic 1967 paper by Bühlmann [17]. Many of the ideas are also found in Whitney [120] and Bailey [8].

We return to the basic problem. For a particular policyholder, we have observed $n$ exposure units of past claims $\mathbf{X} = (X_1, \ldots, X_n)^T$. We have a manual rate $\mu$ (we no longer use $M$ for the manual rate) applicable to this policyholder, but the past experience indicates that it may not be appropriate [$\bar{X} = n^{-1}(X_1 + \cdots + X_n)$, as well as $E(\bar{X})$, could be quite different from $\mu$]. This difference raises the question of whether next year's net premium (per exposure unit) should be based on $\mu$, on $\bar{X}$, or on a combination of the two.
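As a brief reminder of the linear form used in the preceding chapters (a sketch only, with $\mu$ now written in place of the manual rate $M$ used there), the "combination of the two" is a credibility-weighted premium
\[
  P_c \;=\; Z\,\bar{X} \;+\; (1 - Z)\,\mu, \qquad 0 \le Z \le 1,
\]
where $Z$ is the credibility factor attached to the policyholder's own experience $\bar{X}$. How $Z$ should be chosen is the subject of the model-based development that follows.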

The insurer needs to consider the following question: Is the policyholder really different from what has been assumed in the calculation of $\mu$, or is it just random chance that is responsible for the differences between $\mu$ and $\bar{X}$?

While it is difficult to definitively answer that question, it is clear that no underwriting system is perfect. The manual rate $\mu$ has presumably ...
