See It Work.
A bank's data blamed support calls for churn. The causal model revealed the opposite. Walk through all three rungs of Pearl's Ladder.
The Pattern That Fooled a Bank
A bank's analytics team noticed something troubling: customers who called support were three times more likely to close their accounts. The correlation was strong and consistent.
The obvious conclusion: support calls drive churn. Maybe the support experience is bad. Maybe calling is a sign of problems. Either way, perhaps the bank should make it harder to reach support.
But something didn't sit right. The support team pushed back. They knew they were helping customers. So the analysts dug deeper.
When they separated satisfied customers from dissatisfied customers, the pattern reversed:
- Among satisfied customers: those who called churned less (3% vs 5%)
- Among dissatisfied customers: those who called churned less (50% vs 70%)
In every segment, calling support reduced churn. But in the aggregate data, callers churned more. How is that possible?
The answer: dissatisfied customers are more likely to call support and more likely to churn. Dissatisfaction was the hidden cause driving both behaviors. The aggregate data blamed support calls for something they didn't cause.
This is Simpson's Paradox, and it's far more common than most analysts realize. The causal model revealed it; correlation alone would have led to exactly the wrong decision.
The correct action: encourage support calls, especially for at-risk customers. What looked like a problem was actually the solution.
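The reversal is easy to reproduce. Here is a minimal sketch using the segment churn rates from the example above; the population counts and call frequencies are invented for illustration:

```python
# Hypothetical counts: (segment, called) -> (n_customers, churn_rate).
# Churn rates per cell come from the example; the counts are assumptions.
cells = {
    ("satisfied",    True):  (80,  0.03),  # satisfied customers rarely call
    ("satisfied",    False): (720, 0.05),
    ("dissatisfied", True):  (140, 0.50),  # dissatisfied customers call often
    ("dissatisfied", False): (60,  0.70),
}

def churn_rate(predicate):
    """Aggregate churn rate over all cells matching the predicate."""
    n = sum(c for (seg, called), (c, r) in cells.items() if predicate(seg, called))
    churned = sum(c * r for (seg, called), (c, r) in cells.items() if predicate(seg, called))
    return churned / n

agg_callers     = churn_rate(lambda s, c: c)      # ~32.9%
agg_non_callers = churn_rate(lambda s, c: not c)  # ~10.0%

# Within every segment, callers churn LESS...
for seg in ("satisfied", "dissatisfied"):
    assert churn_rate(lambda s, c, seg=seg: s == seg and c) < \
           churn_rate(lambda s, c, seg=seg: s == seg and not c)

# ...yet in the aggregate, callers churn MORE: Simpson's Paradox.
assert agg_callers > agg_non_callers
```

Because dissatisfied customers both call more and churn more, pooling the segments makes calling look harmful even though it helps within every segment.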
Climb the Ladder
Here's how the same data looks at each rung of Pearl's Ladder — from correlation to intervention to counterfactual.

Baseline
With no filters applied, the overall churn rate is 20.2%. This is our baseline — the starting point before we ask any causal questions.

Observation
On Rung 1, we're just looking for associations. Filtering for callers (Calls = 100% True) shows 38.3% churn — nearly double the baseline. But this is Simpson's Paradox: Dissatisfaction confounds the relationship. The aggregate trend will reverse when we intervene.

Intervention
On Rung 2, we intervene: do(Calls=True). Churn drops to 17.1% — less than the 20.2% baseline. The paradox resolves: calls don't cause churn, they prevent it. Correlation misled us; causation corrects it.
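Under the assumption that dissatisfaction is the only confounder, the intervention can be computed with the backdoor adjustment formula: P(Churn | do(Calls)) = Σ_d P(Churn | Calls, D=d) P(D=d). A sketch with illustrative parameters (the 20% dissatisfaction prior and call probabilities are assumptions; the churn table reuses the segment rates from the example, so the exact percentages differ from the demo's):

```python
# Illustrative model parameters (assumptions, not the bank's real numbers).
p_dissat = 0.20                                       # P(D = dissatisfied)
p_call   = {True: 0.70, False: 0.10}                  # P(Calls=True | D)
p_churn  = {(True, True): 0.50, (True, False): 0.70,  # P(Churn=True | D, Calls)
            (False, True): 0.03, (False, False): 0.05}

def p_d(d):  # prior over the confounder
    return p_dissat if d else 1 - p_dissat

# Rung 1 -- observational: P(Churn | Calls=True), confounded by D.
p_calls_true = sum(p_d(d) * p_call[d] for d in (True, False))
observational = sum(p_d(d) * p_call[d] * p_churn[(d, True)]
                    for d in (True, False)) / p_calls_true

# Rung 2 -- interventional: backdoor adjustment; do(Calls) cuts the D -> Calls edge.
def do_calls(c):
    return sum(p_d(d) * p_churn[(d, c)] for d in (True, False))

# Overall baseline churn rate under this model.
baseline = sum(p_d(d) * pc * p_churn[(d, c)]
               for d in (True, False)
               for c, pc in ((True, p_call[d]), (False, 1 - p_call[d])))

# Observation says calling is bad; intervention says it helps.
assert observational > baseline          # callers churn more than average...
assert do_calls(True) < do_calls(False)  # ...but calling causally reduces churn
```

The adjustment averages over the confounder's prior instead of its skewed distribution among callers, which is exactly what resolves the paradox.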

Abduction
Structural Causal Models add U variables — all of a person's unobserved traits aggregated into a single variable that stays fixed when we imagine alternatives.*
Observing Churn=True and Calls=False shifts the priors on the U variables, via Bayesian updating, to 54.7% and 39.6%. We've identified this person: someone who churned without calling. Now we can ask: what if they had called?
*U denotes "Unobserved."
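Abduction is just Bayes' rule applied to the exogenous variables. A sketch with illustrative priors and the segment churn rates from the example (the parameters are assumptions, so the posterior differs from the demo's 54.7%):

```python
# Illustrative priors (assumptions for this sketch, not the demo's exact model).
p_dissat = 0.20                                  # prior P(D = dissatisfied)
p_call   = {True: 0.70, False: 0.10}             # P(Calls=True | D)
p_churn  = {(True, True): 0.50, (True, False): 0.70,
            (False, True): 0.03, (False, False): 0.05}

def posterior_dissat(churned, called):
    """P(D=dissatisfied | evidence) by Bayes' rule over the two values of D."""
    def likelihood(d):
        prior = p_dissat if d else 1 - p_dissat
        pc = p_call[d] if called else 1 - p_call[d]
        pch = p_churn[(d, called)] if churned else 1 - p_churn[(d, called)]
        return prior * pc * pch
    return likelihood(True) / (likelihood(True) + likelihood(False))

post = posterior_dissat(churned=True, called=False)
# Evidence "churned without calling" sharply raises belief in dissatisfaction.
assert post > p_dissat
```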

Model the Individual
U_D and U_C encode individual traits. We fix U_D=100% True and U_C=100% False — crystallizing the most likely profile of someone who churned without calling. With identity fixed, we can change actions without changing who the person is.

Ask the Counterfactual
The U variables hold identity constant. Now we rewrite history: do(Calls=True). We clear Churn because we're asking what would happen, not what did happen. Same person, different action — different outcome?

Prediction
Churn drops to 50%. Same person, different action, different outcome. Rung 1 misled us (38% churn). Rung 2 corrected direction (17%). Rung 3 identifies the individual (50% — this person was saveable).
We can now find customers like this before they churn and proactively reach out.
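The three counterfactual steps — abduction, action, prediction — can be strung together. A sketch on the same illustrative model: abduct the most likely value of the exogenous dissatisfaction variable from the evidence (mirroring the demo's "fix U_D=100% True" step), hold it fixed, intervene with do(Calls=True), and read off the churn probability. All parameters are assumptions for illustration:

```python
# Illustrative parameters (assumptions, not the demo's exact model).
p_dissat = 0.20
p_call   = {True: 0.70, False: 0.10}
p_churn  = {(True, True): 0.50, (True, False): 0.70,
            (False, True): 0.03, (False, False): 0.05}

def counterfactual_churn(churned, called, new_call):
    # Step 1 -- abduction: score each value of D against the observed evidence.
    def score(d):
        prior = p_dissat if d else 1 - p_dissat
        pc = p_call[d] if called else 1 - p_call[d]
        pch = p_churn[(d, called)] if churned else 1 - p_churn[(d, called)]
        return prior * pc * pch

    # Crystallize the most likely profile, as in "fix U_D = 100% True".
    d_map = max((True, False), key=score)

    # Step 2 -- action: do(Calls=new_call), severing D's influence on Calls.
    # Step 3 -- prediction: churn probability for the same person, new action.
    return p_churn[(d_map, new_call)]

# "Would this customer, who churned without calling, have churned had they called?"
cf = counterfactual_churn(churned=True, called=False, new_call=True)
assert cf < 1.0  # they did churn; the counterfactual gives them a real chance of staying
```

With these assumed numbers the abducted profile is "dissatisfied", so the counterfactual churn probability is the dissatisfied-caller rate of 50%, echoing the walkthrough's result.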
Make It Queryable
The technical interface is powerful, but not everyone needs to use it directly. An LLM layer lets anyone on your team ask questions in plain English and get answers grounded in the causal model.
A Week in the Life
What does it actually look like when your team has causal modeling capability? A composite week drawn from real engagements:
The marketing director wants to know if last month's email campaign actually drove conversions or just correlated with a seasonal uptick. An analyst queries the causal model, separates the campaign effect from seasonality, and reports back by lunch: "The campaign drove a 4.2% lift, controlling for seasonal effects."
Legal needs to explain to regulators why a loan application was declined. An analyst pulls up the causal model and shows exactly which factors contributed, by how much, and why they're causally relevant. Twenty minutes instead of two days.
The executive team is evaluating an acquisition. The analyst runs three scenarios through the causal model: optimistic, neutral, and pessimistic. Each shows expected churn rates with uncertainty bands.
A process change last quarter was supposed to reduce costs but didn't. The counterfactual analysis reveals the change did reduce costs — but supplier price increases masked the effect. Without the change, costs would have risen 8%.
A senior underwriter is retiring next month. She works with an analyst to encode her reasoning into a causal model. Next year's new hires will query that model and get answers reflecting decades of expertise.
A Medical Counterfactual, Step by Step
"Would this patient have avoided hospitalization if we had prescribed statins?" A full walkthrough of the counterfactual pipeline — from global model to patient-specific answer. See the walkthrough →
Ready to see what this looks like for your data?
A short conversation is the fastest way to find out whether causal networks fit your problem.
Book a Call or email: [email protected]