Definition: Severity distribution

📊 Severity distribution is a statistical model that describes the probability of different loss amounts — how large individual claims are likely to be — within an insurance portfolio. While frequency distribution addresses how often losses occur, severity distribution focuses on the magnitude of each loss event, making the two complementary building blocks of virtually every actuarial pricing and reserving exercise. Common distributional forms used in insurance include the lognormal, Pareto, Weibull, and gamma distributions, each chosen for its ability to fit observed claims data in particular lines of business — heavy-tailed distributions like the Pareto, for example, are often applied to catastrophe and liability lines where extreme losses, though rare, dominate total portfolio cost.
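The heavy-tail point above can be made concrete with a small simulation. The sketch below (stdlib only, with illustrative parameters that are not calibrated to any real line of business) samples losses from a lognormal and a Pareto severity and measures what share of total loss comes from the largest 1% of claims:

```python
import math
import random

random.seed(42)

def sample_lognormal(mu, sigma):
    """Lognormal severity: exponentiate a normal draw."""
    return math.exp(random.gauss(mu, sigma))

def sample_pareto(alpha, x_min):
    """Pareto severity via inverse-transform: F(x) = 1 - (x_min / x)**alpha."""
    u = random.random()
    return x_min / u ** (1.0 / alpha)

n = 100_000
lognormal_losses = [sample_lognormal(mu=9.0, sigma=1.0) for _ in range(n)]
pareto_losses = [sample_pareto(alpha=1.5, x_min=5_000.0) for _ in range(n)]

def tail_share(losses, top_fraction=0.01):
    """Share of total portfolio loss contributed by the largest claims."""
    ordered = sorted(losses, reverse=True)
    k = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

print(f"lognormal top-1% share: {tail_share(lognormal_losses):.1%}")
print(f"pareto    top-1% share: {tail_share(pareto_losses):.1%}")
```

Under these assumed parameters, the Pareto's top 1% of claims carries a far larger share of total loss than the lognormal's, which is exactly why the choice of tail behavior matters so much in catastrophe and liability lines.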

⚙️ Fitting a severity distribution to empirical data typically involves collecting historical loss amounts, adjusting them for inflation and development, and then using statistical techniques such as maximum likelihood estimation or method-of-moments to parameterize a candidate distribution. Actuaries test the goodness of fit through tools like Q-Q plots, the Kolmogorov-Smirnov test, and the Anderson-Darling statistic, often comparing several candidate distributions before selecting the one that best captures both the body and tail of the data. Once calibrated, the severity distribution feeds into aggregate loss models — frequently via Monte Carlo simulation — that combine it with a frequency distribution to produce the full loss distribution used for ratemaking, reserve estimation, reinsurance pricing, and economic capital calculation under frameworks such as Solvency II and the RBC system.
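The workflow above can be sketched end to end: fit a lognormal severity by maximum likelihood (which has a closed form on the log-losses), then combine it with a Poisson frequency in a Monte Carlo aggregate-loss model. The loss data and all parameters below are synthetic and purely illustrative, and the losses are assumed to be already trended and developed:

```python
import math
import random
import statistics

random.seed(7)

# Synthetic "historical" loss amounts (in a real exercise these would be
# inflation-adjusted and developed claim amounts).
observed_losses = [math.exp(random.gauss(8.5, 1.2)) for _ in range(5_000)]

# MLE for the lognormal: mu-hat and sigma-hat are the sample mean and
# (population) standard deviation of the log-losses.
logs = [math.log(x) for x in observed_losses]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.pstdev(logs)

def poisson_sample(lam):
    """Poisson draw via Knuth's algorithm; adequate for small lambda."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_aggregate(lam, mu, sigma, n_sims=20_000):
    """Monte Carlo aggregate loss: Poisson counts x lognormal severities."""
    totals = []
    for _ in range(n_sims):
        n_claims = poisson_sample(lam)
        totals.append(sum(math.exp(random.gauss(mu, sigma))
                          for _ in range(n_claims)))
    return totals

totals = sorted(simulate_aggregate(lam=3.0, mu=mu_hat, sigma=sigma_hat))
var_99_5 = totals[int(0.995 * len(totals))]  # Solvency II-style 99.5% quantile

print(f"mu-hat = {mu_hat:.3f}, sigma-hat = {sigma_hat:.3f}")
print(f"mean aggregate loss: {statistics.fmean(totals):,.0f}")
print(f"99.5% quantile:      {var_99_5:,.0f}")
```

In practice the fitted distribution would also be checked against the data (Q-Q plots, Kolmogorov-Smirnov, Anderson-Darling) before being used, and several candidate families would be compared; this sketch shows only the calibration and simulation steps.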

💡 Getting the severity distribution right is one of the highest-leverage decisions an actuary makes, because even small misspecifications in the tail can translate into enormous pricing or reserving errors — particularly in long-tail lines like D&O, medical malpractice, and excess-of-loss reinsurance where a single outsized claim can dwarf hundreds of smaller ones. Regulatory and rating-agency scrutiny has intensified around tail-risk modeling: internal models submitted to supervisors under Solvency II, for instance, must demonstrate that their severity assumptions are well-supported by data and expert judgment. As the industry accumulates richer data through insurtech platforms and telematics, actuaries are increasingly exploring non-parametric and machine-learning approaches that relax traditional distributional assumptions, though parametric severity models remain the lingua franca of insurance pricing worldwide.

Related concepts: