<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
	<id>https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AProxy_discrimination</id>
	<title>Definition:Proxy discrimination - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AProxy_discrimination"/>
	<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Proxy_discrimination&amp;action=history"/>
	<updated>2026-04-29T22:30:11Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://www.insurerbrain.com/w/index.php?title=Definition:Proxy_discrimination&amp;diff=8109&amp;oldid=prev</id>
		<title>PlumBot: Bot: Creating new article from JSON</title>
		<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Proxy_discrimination&amp;diff=8109&amp;oldid=prev"/>
		<updated>2026-03-10T13:43:20Z</updated>

		<summary type="html">&lt;p&gt;Bot: Creating new article from JSON&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;🔎 &amp;#039;&amp;#039;&amp;#039;Proxy discrimination&amp;#039;&amp;#039;&amp;#039; occurs when an [[Definition:Insurance carrier | insurance carrier]] or [[Definition:Insurtech | insurtech]] company uses ostensibly neutral rating factors or data variables that serve as stand-ins — proxies — for protected characteristics such as race, gender, ethnicity, or religion, resulting in unfairly disparate treatment in [[Definition:Underwriting | underwriting]], [[Definition:Rating | rating]], or [[Definition:Claims | claims]] decisions. In an era of increasingly sophisticated [[Definition:Predictive analytics | predictive analytics]] and [[Definition:Machine learning | machine learning]], the insurance industry faces growing scrutiny over whether algorithmic models inadvertently encode biases through correlated variables, even when the protected characteristic itself is never directly used. A zip code, for example, might correlate closely with racial demographics, and using it as a [[Definition:Rating factor | rating factor]] can produce outcomes that mirror explicit racial discrimination.&lt;br /&gt;
&lt;br /&gt;
🧮 The mechanism is subtle and often unintentional. When [[Definition:Actuarial analysis | actuarial]] or data science teams build pricing or risk selection models, they feed in hundreds of variables — credit scores, occupation codes, purchasing behavior, geographic indicators, and more — seeking correlations with loss frequency or severity. A variable that passes traditional [[Definition:Actuarial standard | actuarial standards]] for statistical significance may nonetheless function as a proxy for a protected class if its predictive power derives substantially from its correlation with that class rather than from an independent causal relationship with risk. Detecting proxy effects requires more than reviewing individual variables in isolation; it demands [[Definition:Algorithmic audit | algorithmic auditing]] techniques such as disparate impact testing, counterfactual analysis, and fairness-aware modeling. States such as Colorado have enacted legislation specifically requiring insurers to test their [[Definition:Algorithm | algorithms]] for proxy discrimination and to demonstrate that their models do not produce unfairly discriminatory outcomes.&lt;br /&gt;
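The disparate impact test mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration: the two groups, the sample decision data, and the 0.8 ("four-fifths") threshold are illustrative assumptions, not the text of any statute or regulation.

```python
# Hypothetical sketch of a disparate impact ("four-fifths rule") check
# on binary model decisions. Groups, data, and the 0.8 threshold are
# illustrative assumptions only.
def selection_rate(decisions):
    """Fraction of applicants receiving a favorable outcome (1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

# Hypothetical favorable-decision flags for two demographic groups.
group_a = [1, 1, 0, 1, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # selection rate 0.375
ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3))    # 0.5
print(ratio >= 0.8)       # False: ratio below 0.8 flags a potential proxy effect
```

A ratio below the conventional four-fifths threshold does not by itself prove proxy discrimination; it flags a variable or model for the deeper counterfactual and fairness-aware review the paragraph above describes.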
&lt;br /&gt;
🛡️ The stakes for the insurance industry are considerable — both reputationally and legally. Regulatory enforcement is intensifying, with the [[Definition:National Association of Insurance Commissioners (NAIC) | NAIC]] developing model frameworks for algorithmic accountability and individual state departments of insurance issuing guidance on acceptable uses of [[Definition:Big data | big data]] and [[Definition:Artificial intelligence (AI) | AI]] in insurance. Companies found to engage in proxy discrimination, even inadvertently, face regulatory penalties, class-action exposure, and erosion of consumer trust. Beyond compliance, addressing proxy discrimination aligns with the industry&amp;#039;s broader commitment to [[Definition:Fair underwriting | fair underwriting]] and equitable access to coverage. Insurers and insurtechs that invest in transparent, auditable models and proactive bias testing position themselves not only to avoid regulatory action but also to build more robust and defensible pricing frameworks.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Related concepts&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
* [[Definition:Predictive analytics]]&lt;br /&gt;
* [[Definition:Algorithmic bias]]&lt;br /&gt;
* [[Definition:Unfair discrimination]]&lt;br /&gt;
* [[Definition:Rating factor]]&lt;br /&gt;
* [[Definition:Artificial intelligence (AI)]]&lt;br /&gt;
* [[Definition:Regulatory compliance]]&lt;br /&gt;
{{Div col end}}&lt;/div&gt;</summary>
		<author><name>PlumBot</name></author>
	</entry>
</feed>