Definition: Frequency and severity
📉 Frequency and severity is the paired analytical framework that forms the backbone of loss modeling, ratemaking, and reserving in the insurance industry, decomposing total incurred losses into two components: how often claims occur (frequency) and how much each claim costs (severity). Rather than analyzing aggregate losses as a single figure, this decomposition allows actuaries and underwriters to isolate the distinct drivers behind loss trends — a capability that is essential because the factors influencing claim count often differ fundamentally from those affecting claim size. Virtually every insurer, regardless of line of business or regulatory jurisdiction, organizes its loss analysis around this frequency-severity split.
🔧 The mechanics begin with segregating historical claims data into counts and amounts, then fitting statistical models to each component separately. Frequency is typically modeled using count distributions — Poisson, negative binomial, or their zero-inflated variants — while severity is modeled with continuous distributions such as lognormal, gamma, Pareto, or Weibull, depending on the tail behavior of the loss distribution. The expected total loss for a portfolio is then the product of expected frequency and expected severity (E[S] = E[N] × E[X], the collective risk model), a relationship that holds at the portfolio level when claim counts and claim sizes are independent, and can be further segmented by rating class, peril, geography, or policy year. This separation proves especially valuable when trends diverge: in motor insurance, for example, improving vehicle safety technology may reduce frequency while rising medical costs and litigation trends push severity higher, producing a net loss trajectory that only the frequency-severity lens can properly diagnose. Solvency II in Europe, IFRS 17 globally, and U.S. statutory reserving standards all implicitly or explicitly rely on frequency-severity decomposition in their approaches to calculating technical provisions and risk margins.
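A minimal sketch of this two-stage fit is shown below, using synthetic data and scipy; the distribution choices (Poisson frequency, lognormal severity) and all variable names are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic portfolio: annual claim counts per policy and individual
# claim amounts. Illustrative data, not real experience.
claim_counts = rng.poisson(lam=0.08, size=10_000)        # claims per policy-year
claim_amounts = rng.lognormal(mean=8.0, sigma=1.2,
                              size=claim_counts.sum())   # cost per claim

# Frequency: the Poisson MLE is simply the sample mean of the counts.
freq_hat = claim_counts.mean()

# Severity: fit a lognormal to claim amounts (location fixed at 0).
shape, loc, scale = stats.lognorm.fit(claim_amounts, floc=0)
sev_hat = stats.lognorm.mean(shape, loc=loc, scale=scale)

# Under independence of counts and amounts, expected loss per
# policy-year is expected frequency times expected severity.
pure_premium = freq_hat * sev_hat
print(f"frequency: {freq_hat:.4f}  severity: {sev_hat:,.0f}  "
      f"pure premium: {pure_premium:,.0f}")
```

In practice the two fitted distributions would also feed a Monte Carlo or Panjer-style aggregation to recover the full aggregate loss distribution, not just its mean.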
💡 Beyond its actuarial mechanics, the frequency-severity framework underpins strategic decision-making across the insurance value chain. Reinsurance purchasing decisions hinge on whether a portfolio's risk profile is driven by high-frequency, low-severity attritional losses or by low-frequency, high-severity catastrophic events — the former suggesting quota share structures and the latter pointing toward excess of loss protection. Claims departments use the framework to allocate investigative resources, focusing fraud detection efforts on lines where frequency anomalies appear and deploying specialized large-loss adjusters where severity is the dominant concern. For insurtech companies building predictive models, frequency and severity serve as the target variables around which feature engineering and algorithm selection revolve. In short, this deceptively simple two-part framework remains one of the most powerful analytical lenses the insurance industry possesses for understanding, pricing, and managing risk.
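For the predictive-modeling use case, one common pattern is a two-part GLM: a Poisson frequency model fit on all policies and a gamma severity model fit on claiming policies, with their predictions multiplied to give a pure premium. The sketch below is a hypothetical illustration assuming a recent statsmodels release and synthetic rating factors; the features and coefficients are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Toy rating factors: intercept, scaled driver age, urban indicator.
X = np.column_stack([np.ones(n),
                     rng.uniform(18, 80, n) / 100,
                     rng.integers(0, 2, n)])

# Synthetic targets: claim counts per policy, and average claim cost
# for policies that actually had claims.
counts = rng.poisson(np.exp(-2.5 + 0.5 * X[:, 1] + 0.3 * X[:, 2]))
has_claim = counts > 0
avg_cost = rng.gamma(shape=2.0,
                     scale=np.exp(7.5 + 0.4 * X[has_claim, 2]) / 2.0)

# Frequency model: Poisson GLM with log link on all policies.
freq_glm = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Severity model: gamma GLM with log link on claiming policies,
# weighted by claim count since severity is an average per claim.
sev_glm = sm.GLM(avg_cost, X[has_claim],
                 family=sm.families.Gamma(link=sm.families.links.Log()),
                 var_weights=counts[has_claim]).fit()

# Pure premium per policy: predicted frequency times predicted severity.
pure_premium = freq_glm.predict(X) * sev_glm.predict(X)
print(pure_premium[:5].round(2))
```

Fitting the two parts separately lets each model use its own link function, error structure, and feature set, which is exactly the flexibility the frequency-severity split is meant to provide.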
Related concepts: