<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
	<id>https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AProtected_characteristics</id>
	<title>Definition:Protected characteristics - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3AProtected_characteristics"/>
	<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Protected_characteristics&amp;action=history"/>
	<updated>2026-05-15T18:35:15Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://www.insurerbrain.com/w/index.php?title=Definition:Protected_characteristics&amp;diff=22324&amp;oldid=prev</id>
		<title>PlumBot: Bot: Creating definition</title>
		<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Protected_characteristics&amp;diff=22324&amp;oldid=prev"/>
		<updated>2026-03-30T05:39:25Z</updated>

		<summary type="html">&lt;p&gt;Bot: Creating definition&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;🛡️ &amp;#039;&amp;#039;&amp;#039;Protected characteristics&amp;#039;&amp;#039;&amp;#039; are personal attributes — such as race, gender, age, disability, religion, sexual orientation, and ethnicity — that are legally shielded from discrimination. Their treatment poses uniquely complex challenges in the [[Definition:Insurance|insurance]] industry because insurers inherently engage in risk differentiation based on personal data. Unlike most sectors, where differential treatment based on these attributes is straightforwardly prohibited, insurance occupies a regulatory gray zone: the business model depends on distinguishing among individuals by characteristics that correlate with [[Definition:Risk|risk]], yet some of those correlations overlap with, or serve as proxies for, protected attributes. The result is an evolving, jurisdiction-specific patchwork of rules governing which characteristics insurers may and may not use in [[Definition:Underwriting|underwriting]] and [[Definition:Pricing|pricing]].&lt;br /&gt;
&lt;br /&gt;
📋 The practical operation of these rules varies significantly across markets. In the European Union, the landmark 2011 &amp;#039;&amp;#039;Test-Achats&amp;#039;&amp;#039; ruling by the Court of Justice of the European Union prohibited gender-based pricing differentials in insurance from December 2012, a decision that reshaped [[Definition:Motor insurance|motor]] and [[Definition:Life insurance|life insurance]] pricing across the continent. In the United States, regulation is fragmented by state: most states permit age and gender as rating factors in certain lines but prohibit race-based distinctions, and a growing number of jurisdictions restrict the use of [[Definition:Credit score|credit-based insurance scores]] over concerns about disparate impact along racial lines. In the UK, the [[Definition:Equality Act 2010|Equality Act 2010]] permits insurers to use certain protected characteristics only where the distinction is actuarially justified by relevant and reliable data. Meanwhile, markets in Asia — including Japan, Hong Kong, and Singapore — maintain their own frameworks, with varying degrees of permissiveness around age and gender in [[Definition:Health insurance|health]] and life products. The rise of [[Definition:Pricing AI|pricing AI]] and [[Definition:Machine learning|machine learning]] has intensified regulatory scrutiny, because complex models can inadvertently rely on proxy variables — such as postal code, browsing behavior, or occupation — that correlate strongly with protected attributes even when those attributes are excluded as direct inputs.&lt;br /&gt;
&lt;br /&gt;
🔍 For insurers, getting this right carries both ethical and commercial stakes. Regulatory enforcement is accelerating: the [[Definition:National Association of Insurance Commissioners (NAIC)|NAIC]] has developed model bulletins on algorithmic bias, the [[Definition:Financial Conduct Authority (FCA)|FCA]] has embedded fair treatment of customers into its Consumer Duty framework, and [[Definition:European Insurance and Occupational Pensions Authority (EIOPA)|EIOPA]] has issued guidance on the ethical use of data in insurance pricing. Carriers found to be engaging in unfair discrimination — even unintentionally, through opaque algorithmic processes — face regulatory sanctions, reputational damage, and potential litigation. Beyond compliance, fair and inclusive pricing is increasingly recognized as a way to expand addressable markets and build long-term customer trust. Insurers are accordingly investing in [[Definition:Model risk management|model governance]] frameworks, bias testing protocols, and explainability tools so that their [[Definition:Underwriting|underwriting]] and rating practices can withstand both regulatory review and public scrutiny.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Related concepts:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
* [[Definition:Pricing AI]]&lt;br /&gt;
* [[Definition:Algorithmic bias]]&lt;br /&gt;
* [[Definition:Underwriting]]&lt;br /&gt;
* [[Definition:Fair discrimination]]&lt;br /&gt;
* [[Definition:Regulatory compliance]]&lt;br /&gt;
* [[Definition:Credit-based insurance score]]&lt;br /&gt;
{{Div col end}}&lt;/div&gt;</summary>
		<author><name>PlumBot</name></author>
	</entry>
</feed>