Definition: Frequency-severity model
📐 A frequency-severity model is an actuarial framework that separately estimates how often claims occur (frequency) and how large each claim tends to be (severity), then combines the two components to project total expected losses for a book of business. By decomposing loss experience into these two distinct dimensions, the model gives insurers a far more nuanced view than aggregate loss totals alone, revealing whether a deteriorating loss ratio stems from more claims, costlier claims, or both.
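One standard way to formalize this combination is the collective risk model. Assuming the claim count $N$ is independent of the i.i.d. claim sizes $X_i$, aggregate loss is the compound sum

$$
S = \sum_{i=1}^{N} X_i, \qquad
\mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X], \qquad
\mathrm{Var}(S) = \mathbb{E}[N]\,\mathrm{Var}(X) + \mathrm{Var}(N)\,\mathbb{E}[X]^2,
$$

which makes explicit how expected losses and their volatility each decompose into a frequency piece and a severity piece.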
⚙️ In practice, actuaries fit statistical distributions to each component independently. Frequency is often modeled using Poisson or negative binomial distributions, reflecting the count nature of claim occurrences, while severity draws on lognormal, Pareto, or gamma distributions to capture the right-skewed nature of claim sizes. Once parameterized, the two distributions are convolved — often through simulation — to produce an aggregate loss distribution. This output drives ratemaking, reserve setting, and reinsurance purchasing decisions. For instance, an excess-of-loss treaty attaching at $1 million per occurrence is priced primarily off the severity tail, while a quota share arrangement responds proportionally to both components.
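A minimal Monte Carlo sketch of this convolution, assuming Poisson frequency and lognormal severity with illustrative (not calibrated) parameters; it also estimates the expected cost of a hypothetical per-occurrence excess-of-loss layer attaching at $1 million:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# --- Illustrative parameters: assumptions for the sketch, not real calibration ---
n_sims = 50_000           # simulated policy years
lam = 120                 # Poisson frequency: expected claim count per year
mu, sigma = 9.0, 1.8      # lognormal severity: mean and sd of log claim size
attachment = 1_000_000    # per-occurrence excess-of-loss attachment point

total_losses = np.zeros(n_sims)
layer_recoveries = np.zeros(n_sims)

for i in range(n_sims):
    n_claims = rng.poisson(lam)                      # draw this year's claim count
    severities = rng.lognormal(mu, sigma, n_claims)  # draw each claim's size
    total_losses[i] = severities.sum()               # aggregate ground-up loss
    # Per-occurrence XoL: the layer pays the part of each claim above attachment
    layer_recoveries[i] = np.maximum(severities - attachment, 0).sum()

print(f"Expected aggregate loss:           {total_losses.mean():,.0f}")
print(f"99th percentile aggregate loss:    {np.quantile(total_losses, 0.99):,.0f}")
print(f"Expected XoL recovery (att. $1M):  {layer_recoveries.mean():,.0f}")
```

The simulated aggregate distribution feeds ratemaking and reserving, while the layer mean is driven almost entirely by the severity tail, consistent with how the excess-of-loss treaty described above is priced.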
🎯 The model's real power lies in its ability to isolate and respond to trends. If auto claim frequency drops due to advanced driver-assistance systems but severity climbs because of higher repair costs for sensor-laden vehicles, a carrier using a frequency-severity model can adjust pricing for each component accordingly, rather than applying a blunt across-the-board change. Insurtech platforms and modern predictive analytics tools have made it easier to update frequency and severity assumptions in near real time, feeding granular data from telematics, IoT sensors, and claims systems directly into model recalibration. This granularity supports more responsive underwriting and more stable portfolio performance over time.
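As a toy illustration with made-up numbers, trending each component separately and recombining them shows how offsetting movements net out in the pure premium:

```python
# Hypothetical annual trends, applied separately to each component
freq_trend = -0.04        # frequency down 4% (e.g., ADAS reducing accident counts)
sev_trend = 0.07          # severity up 7% (e.g., costlier sensor-laden repairs)

current_frequency = 0.05  # assumed claims per car-year
current_severity = 8_000  # assumed average cost per claim

pure_premium_now = current_frequency * current_severity
pure_premium_next = (current_frequency * (1 + freq_trend)) * \
                    (current_severity * (1 + sev_trend))

net_trend = pure_premium_next / pure_premium_now - 1
print(f"Pure premium: {pure_premium_now:.2f} -> {pure_premium_next:.2f} "
      f"({net_trend:+.1%})")  # trends combine multiplicatively: about +2.7%
```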
Related concepts