<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
	<id>https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AShapley_value</id>
	<title>Definition:Shapley value - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AShapley_value"/>
	<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Shapley_value&amp;action=history"/>
	<updated>2026-05-13T09:10:59Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://www.insurerbrain.com/w/index.php?title=Definition:Shapley_value&amp;diff=22122&amp;oldid=prev</id>
		<title>PlumBot: Bot: Creating new article from JSON</title>
		<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Shapley_value&amp;diff=22122&amp;oldid=prev"/>
		<updated>2026-03-27T06:15:31Z</updated>

		<summary type="html">&lt;p&gt;Bot: Creating new article from JSON&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;🧮 &amp;#039;&amp;#039;&amp;#039;Shapley value&amp;#039;&amp;#039;&amp;#039; is a concept from cooperative game theory that has found growing application in insurance and [[Definition:Insurtech | insurtech]] as a method for fairly distributing a total outcome — such as a model&amp;#039;s prediction, a shared cost, or a risk contribution — among multiple participating factors. Originally developed by mathematician Lloyd Shapley, the technique calculates each participant&amp;#039;s marginal contribution across all possible orderings of participants, producing a unique allocation that satisfies several desirable fairness properties. In insurance, the Shapley value has become especially relevant in two domains: explaining the outputs of complex [[Definition:Machine learning | machine learning]] models used in [[Definition:Underwriting | underwriting]] and [[Definition:Claims management | claims management]], and allocating [[Definition:Capital requirement | capital requirements]] or [[Definition:Reinsurance | reinsurance]] costs across business units or lines of business.&lt;br /&gt;
&lt;br /&gt;
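The averaging over orderings described above is equivalent to the standard closed-form Shapley expression (textbook notation; the symbols are not taken from the article itself):

```latex
% Shapley value of player i in a cooperative game (N, v), with n = |N|.
% Each coalition S not containing i is weighted by the fraction of
% orderings in which exactly the members of S precede i.
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|! \, (n - |S| - 1)!}{n!}
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

Summing the weights over all coalitions S gives 1 for each player, which is why the attributions add up exactly to the grand-coalition value v(N).&lt;br /&gt;
&lt;br /&gt;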
⚙️ When applied to model explainability — most commonly through the SHAP (SHapley Additive exPlanations) framework — the Shapley value decomposes an individual prediction into the contribution of each input feature. For an insurer using a [[Definition:Predictive model | predictive model]] to set [[Definition:Premium | premiums]] for motor or homeowners&amp;#039; coverage, SHAP values can reveal exactly how much a policyholder&amp;#039;s age, claims history, credit-related variables, or geographic location pushed the predicted [[Definition:Loss cost | loss cost]] up or down relative to the portfolio average. This granularity matters enormously in jurisdictions where regulators require insurers to explain rating decisions — a demand that is intensifying under [[Definition:Fair lending | fair lending]] scrutiny in the United States, the EU&amp;#039;s AI Act provisions, and emerging guidelines from regulators in Singapore and Hong Kong on responsible use of [[Definition:Artificial intelligence | artificial intelligence]]. Beyond explainability, Shapley values also appear in [[Definition:Capital allocation | capital allocation]] exercises where a [[Definition:Chief risk officer (CRO) | chief risk officer]] needs to distribute an aggregate [[Definition:Economic capital | economic capital]] figure across correlated business lines in a way that reflects each line&amp;#039;s true contribution to diversified group risk.&lt;br /&gt;
&lt;br /&gt;
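As a concrete sketch of the capital-allocation use just described, the following computes exact Shapley values for a hypothetical three-line game by averaging each line&amp;#039;s marginal contribution over all orderings. The coalition capital figures are invented purely for illustration, not real data:

```python
# Exact Shapley values for a toy 3-line capital-allocation game.
# All capital figures below are hypothetical illustrations.
from itertools import permutations

# Characteristic function: diversified capital required by each coalition,
# keyed by frozenset of line names. Chosen so the grand coalition needs
# less than the sum of standalone capitals (a diversification benefit).
capital = {
    frozenset(): 0.0,
    frozenset({"motor"}): 100.0,
    frozenset({"home"}): 80.0,
    frozenset({"liability"}): 60.0,
    frozenset({"motor", "home"}): 160.0,
    frozenset({"motor", "liability"}): 140.0,
    frozenset({"home", "liability"}): 120.0,
    frozenset({"motor", "home", "liability"}): 200.0,
}

def shapley(players, v):
    """Average each player's marginal contribution over all orderings."""
    totals = dict.fromkeys(players, 0.0)
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v[with_p] - v[coalition]
            coalition = with_p
    n = len(orderings)
    return {p: totals[p] / n for p in players}

alloc = shapley(["motor", "home", "liability"], capital)
```

Because the allocations average marginal contributions over every ordering, they sum exactly to the grand-coalition capital of 200. Note that exact evaluation enumerates n! orderings, so it is only practical for small games; SHAP implementations for feature attribution rely on sampling or model-specific shortcuts instead.&lt;br /&gt;
&lt;br /&gt;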
💡 The growing adoption of Shapley values reflects a broader tension in the insurance industry between the power of sophisticated analytics and the obligation to remain transparent and fair. [[Definition:Actuarial science | Actuaries]] and data scientists appreciate that Shapley-based methods provide theoretically grounded, consistent attributions — unlike simpler heuristics that can produce contradictory or misleading explanations. For [[Definition:Regulatory compliance | regulatory compliance]], the ability to present a clear narrative of why a particular [[Definition:Risk classification | risk classification]] decision was made helps insurers defend their models against challenges of unfair [[Definition:Discrimination | discrimination]]. As [[Definition:Insurtech | insurtech]] firms push deeper into [[Definition:Deep learning | deep learning]] and ensemble methods whose inner workings are otherwise opaque, Shapley values offer one of the most rigorous bridges between predictive accuracy and the accountability that policyholders, regulators, and courts increasingly demand.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Related concepts:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
* [[Definition:Machine learning]]&lt;br /&gt;
* [[Definition:Predictive model]]&lt;br /&gt;
* [[Definition:Capital allocation]]&lt;br /&gt;
* [[Definition:Model explainability]]&lt;br /&gt;
* [[Definition:Risk classification]]&lt;br /&gt;
* [[Definition:Artificial intelligence]]&lt;br /&gt;
{{Div col end}}&lt;/div&gt;</summary>
		<author><name>PlumBot</name></author>
	</entry>
</feed>