<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
	<id>https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3ADirected_acyclic_graph_%28DAG%29</id>
	<title>Definition:Directed acyclic graph (DAG) - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.insurerbrain.com/w/index.php?action=history&amp;feed=atom&amp;title=Definition%3ADirected_acyclic_graph_%28DAG%29"/>
	<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Directed_acyclic_graph_(DAG)&amp;action=history"/>
	<updated>2026-05-13T09:17:38Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://www.insurerbrain.com/w/index.php?title=Definition:Directed_acyclic_graph_(DAG)&amp;diff=22013&amp;oldid=prev</id>
		<title>PlumBot: Bot: Creating new article from JSON</title>
		<link rel="alternate" type="text/html" href="https://www.insurerbrain.com/w/index.php?title=Definition:Directed_acyclic_graph_(DAG)&amp;diff=22013&amp;oldid=prev"/>
		<updated>2026-03-27T06:01:17Z</updated>

		<summary type="html">&lt;p&gt;Bot: Creating new article from JSON&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;📋 &amp;#039;&amp;#039;&amp;#039;Directed acyclic graph (DAG)&amp;#039;&amp;#039;&amp;#039; is a visual and mathematical tool for representing causal assumptions about how variables relate to one another, consisting of nodes (variables) connected by directed edges (arrows) that indicate the assumed direction of causation, with no directed path that returns to its starting node. In insurance analytics, DAGs serve as blueprints for causal reasoning: they make explicit the assumptions an [[Definition:Actuarial science | actuary]] or data scientist holds about how [[Definition:Rating factor | rating factors]], policyholder behaviors, market conditions, and [[Definition:Claims | claims]] outcomes are connected. By laying out these assumptions transparently, a DAG reveals which confounders must be adjusted for, which variables should not be conditioned on, and whether a particular causal question is even answerable with the available data.&lt;br /&gt;
&lt;br /&gt;
⚙️ Constructing a DAG requires domain knowledge rather than statistical estimation: the analyst draws arrows based on expert understanding of the data-generating process. Once the graph is in place, formal rules — most notably [[Definition:Do-calculus | do-calculus]] and the backdoor criterion — determine the minimal set of variables that must be controlled for to obtain an unbiased causal estimate. Consider an insurer examining whether a new [[Definition:Telematics | telematics]] program causally reduces [[Definition:Claims frequency | claims frequency]]. The DAG might include nodes for policyholder age, driving experience, program enrollment, driving behavior (a mediator), and claims outcome, with arrows reflecting the analyst&amp;#039;s beliefs about which variables influence which. The graph then reveals, for instance, that conditioning on driving behavior would block the very causal pathway the insurer wants to measure, while failing to adjust for age and driving experience would leave confounding bias intact. This kind of clarity prevents common analytical mistakes that purely data-driven approaches can miss.&lt;br /&gt;
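The telematics example above can be sketched in plain Python. This is a minimal illustration, not a standard library or any insurer&amp;#039;s actual model: the node names and edge list are assumptions taken from the example, and the backdoor check is simplified to direct confounders only (a full implementation would enumerate all backdoor paths and apply d-separation).&lt;br /&gt;

```python
# Hypothetical telematics DAG from the example above; node names are
# illustrative. Edges point from assumed cause to assumed effect.
edges = [
    ("age", "enrollment"), ("age", "claims"),
    ("experience", "enrollment"), ("experience", "claims"),
    ("enrollment", "behavior"),   # treatment acts through the mediator
    ("behavior", "claims"),       # mediator carries the causal effect
]

children = {}
for u, v in edges:
    children.setdefault(u, []).append(v)

def is_acyclic(children):
    """Depth-first search; reaching an in-progress node means a cycle."""
    state = {}  # node: "visiting" while on the DFS stack, then "done"
    def visit(n):
        if state.get(n) == "visiting":
            return False  # back edge: a directed path loops back
        if state.get(n) == "done":
            return True
        state[n] = "visiting"
        ok = all(visit(c) for c in children.get(n, []))
        state[n] = "done"
        return ok
    return all(visit(n) for n in list(children))

print(is_acyclic(children))  # True: the graph qualifies as a DAG

# Simplified backdoor check: a direct confounder is a parent of the
# treatment that also points at the outcome. Each one opens a backdoor
# path that must be blocked by adjustment; the mediator ("behavior")
# is correctly NOT flagged, since conditioning on it would block the
# causal pathway itself.
treatment, outcome = "enrollment", "claims"
parents = [u for u, v in edges if v == treatment]
confounders = [p for p in parents if (p, outcome) in edges]
print(confounders)  # ['age', 'experience']
```

Under these assumptions the check recovers exactly the conclusion in the paragraph above: adjust for age and driving experience, leave driving behavior unconditioned.&lt;br /&gt;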
&lt;br /&gt;
💡 As [[Definition:Predictive analytics | predictive modeling]] and [[Definition:Machine learning | machine learning]] become embedded in insurance operations — from [[Definition:Underwriting | underwriting]] to [[Definition:Fraud detection | fraud detection]] — DAGs offer a much-needed bridge between statistical sophistication and interpretability. Regulators across jurisdictions, including [[Definition:Solvency II | Solvency II]] supervisors, the UK&amp;#039;s Financial Conduct Authority, and U.S. state insurance departments, increasingly expect insurers to articulate not just what their models predict but why variables are included and how they relate to the risk being priced. A well-constructed DAG provides exactly this rationale in a form that technical and non-technical stakeholders alike can scrutinize. It also disciplines model-building by preventing analysts from inadvertently introducing [[Definition:Collider bias | collider bias]] or other structural errors that can distort conclusions about [[Definition:Loss ratio | loss ratios]], [[Definition:Adverse selection | adverse selection]], or intervention effectiveness.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Related concepts:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
* [[Definition:Do-calculus]]&lt;br /&gt;
* [[Definition:Counterfactual]]&lt;br /&gt;
* [[Definition:Direct effect]]&lt;br /&gt;
* [[Definition:Controlled direct effect (CDE)]]&lt;br /&gt;
* [[Definition:Covariate balance]]&lt;br /&gt;
* [[Definition:Algorithmic fairness]]&lt;br /&gt;
{{Div col end}}&lt;/div&gt;</summary>
		<author><name>PlumBot</name></author>
	</entry>
</feed>