Risk ≠ Likelihood × Impact — Why the Universal Formula Is Fundamentally Wrong
Causal Risk | First Principles

Risk ≠ Likelihood × Impact

The formula everyone learns first is the formula that makes risk management useless. Here's why — and what to do instead.

  • L × I = a number, not a model
  • No mechanism, no reasoning
  • Counterfactual = real risk AI

The Bottom Line

  • The Misconception: "Risk = Likelihood × Impact" is the foundation of risk management. Virtually every framework teaches it. Virtually every framework is wrong.
  • The Reality: L × I produces a label, not a model. It cannot answer the only question that matters: "What would have happened if we had acted differently?" Without counterfactual reasoning, risk management sits somewhere between misleading and useless.
  • The Fix: Replace the formula with a causal model — one that encodes mechanisms, not scores. A model that can reason about what would have happened is the minimum standard for anything that deserves the name "risk AI."
1. The Formula
Everyone learns it. Almost no one questions it.

Open any risk management textbook, any certification syllabus, any enterprise risk framework. Within the first chapter you'll find it:

Risk = Likelihood × Impact
The most widely taught formula in risk management — and the most misleading.

ISO 31000 teaches it. NIST references it. COSO embeds it. FAIR improves on it but keeps the multiplicative frame. Every GRC platform defaults to it. Every risk register is built around it. It is, without exaggeration, the conceptual foundation of modern risk management.

And it is categorically wrong — not imprecise, not oversimplified, but wrong in a way that makes the entire discipline less useful than it could be. The formula doesn't need refinement. It needs replacement.

The problem isn't the multiplication. The problem is what's missing. L × I produces a number. A number is not a model. And without a model — a representation of how things cause other things — you cannot answer any question that actually matters for risk management.

2. What It Actually Computes
A label disguised as a measurement.

Let's be precise about what L × I does and does not give you.

What it computes: a score. Likelihood (1–5) × Impact (1–5) = a number between 1 and 25. That number ranks risks against each other. It tells you that a 20 is "worse" than a 10. It produces a heatmap. It fills a risk register.

What it does not compute: anything you can act on.

| Question | Can L × I answer it? | Why not |
|---|---|---|
| "How much should we spend to mitigate this risk?" | No | No dollar value. "Score = 15" has no unit of currency. |
| "Would this loss have occurred if we'd implemented the control?" | No | No causal mechanism. The formula doesn't model how anything happens. |
| "Which of these two controls gives better ROI?" | No | No counterfactual comparison. You can't simulate "what if" with a product of ordinals. |
| "Is this risk acceptable?" | No | No threshold in meaningful units. "Acceptable below 8" is arbitrary. |
| "What caused this loss?" | No | No causal structure. L × I is a snapshot, not a mechanism. |
| "Is this risk ranked higher than that one?" | Yes | This is the only question L × I can answer. |

One question out of six. And it's the least useful one. You can rank risks all day — the question is whether you can do anything with the ranking. Without a causal model underneath, the answer is no.

The Ordinal Trap

Ordinal scales don't support multiplication. "Medium" × "High" is not a valid mathematical operation — it's a convention that produces the appearance of rigour. A Likelihood of 3 and an Impact of 4 gives you 12. A Likelihood of 4 and Impact of 3 also gives you 12. Are these the same risk? The formula says yes. Any risk professional knows the answer is "it depends on everything the formula ignores."
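The collision is easy to demonstrate in a few lines of Python; the scoring function below is simply the textbook formula:

```python
def lxi_score(likelihood: int, impact: int) -> int:
    """The classic ordinal product: Likelihood (1-5) x Impact (1-5)."""
    return likelihood * impact

# Two risks with swapped ratings collapse to the same score, even though
# their real-world loss profiles may differ wildly.
risk_a = lxi_score(likelihood=3, impact=4)   # moderately likely, severe
risk_b = lxi_score(likelihood=4, impact=3)   # more likely, milder
print(risk_a, risk_b)  # the formula cannot tell them apart
```

Everything that distinguishes the two risks lives outside the two inputs, which is precisely the problem.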

3. The Missing Question
"What would have happened if we had acted differently?"

Every consequential risk decision reduces to a counterfactual:

Mitigation. "If we had deployed this control, would the breach have been prevented?" Not "did breaches go down" — that's a correlation. Not "do controls reduce breaches on average" — that's a population statistic. The question is about this breach, this control, this environment. L × I cannot touch this question. It has no mechanism to model the relationship between control and outcome, no way to hold "everything else equal," no way to reason about a specific incident.

Attribution. "Was this loss caused by the vendor failure, or was it already inevitable?" Insurance adjusters, regulators, and courts ask this question constantly. The answer requires a model of how things cause other things — a directed graph of causal relationships, structural equations that encode mechanisms, and exogenous variables that capture what makes this situation unique. L × I offers a score. The court wants a causal chain.

Allocation. "If we move $200K from phishing training to endpoint detection, what happens to our expected loss?" This requires simulating an intervention — changing one variable and propagating the effect through a causal model. L × I has no variables, no propagation, no simulation. It has two numbers and a multiplication sign.

Accountability. "Was the risk management decision defensible?" A board, a regulator, or a plaintiff's attorney asks this after a loss. The answer depends on whether the risk team could have reasoned about what would have happened under different decisions. If the "risk model" was a heatmap of ordinal products, the answer is: there was no model. There was a colouring exercise.
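What answering the allocation question requires can be sketched in a few lines. The response curves and every coefficient below are invented for illustration; a real model would estimate them from evidence:

```python
# The Allocation question in code: move $200K between two controls and
# propagate the effect through the model's mechanisms.
def expected_loss(phishing_budget: float, endpoint_budget: float) -> float:
    # Diminishing returns: each control's failure probability shrinks
    # as its budget grows (coefficients are made up for this sketch).
    p_phish = 0.40 / (1 + phishing_budget / 100_000)
    p_uncontained = 0.80 / (1 + endpoint_budget / 50_000)
    return p_phish * p_uncontained * 10_000_000  # assumed $10M max exposure

before = expected_loss(phishing_budget=300_000, endpoint_budget=100_000)
after = expected_loss(phishing_budget=100_000, endpoint_budget=300_000)
print(f"expected loss moves from ${before:,.0f} to ${after:,.0f}")
```

The answer comes out in dollars because the model has variables and propagation. L × I has neither, so the question cannot even be posed.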

The Counterfactual Standard

A risk model that cannot answer "what would have happened if X had been different?" is not, in any meaningful sense, a model. It is a description. Descriptions tell you what is. Models tell you what would be. The entire purpose of risk management — preventing loss, allocating resources, defending decisions — requires the subjunctive tense. L × I is stuck in the indicative.

4. The Ladder
L × I is stuck on the first rung. Risk management needs the third.

Judea Pearl's Ladder of Causation provides the formal framework for understanding exactly what's wrong with L × I — and exactly what's required to fix it.

| Rung | Question | L × I | Causal Model |
|---|---|---|---|
| 1. Seeing | "What patterns do we see?" | This is all L × I does | Conditional probabilities, P(y \| x) |
| 2. Doing | "What happens if we intervene?" | No mechanism to model interventions | do-calculus |
| 3. Imagining | "What would have happened?" | No counterfactual capacity | SCM with exogenous variables |

L × I doesn't fail because it's imprecise. It fails because it is mathematically incapable of crossing from Rung 1 to Rung 2, let alone to Rung 3. Pearl proved this as a theorem: no amount of observational data — no matter how large, how clean, how sophisticated the statistical model — can answer interventional or counterfactual questions without causal assumptions.[1]

This isn't a limitation that better data fixes. It's a hard boundary. Adding more likelihood estimates and finer impact scales is like adding more rungs to a ladder that's leaning against the wrong wall.

What Each Rung Gives a Risk Manager

Rung 1 tells you: "Risks that scored 20 historically produced larger losses than risks that scored 8." This is a correlation. It tells you where to look. It does not tell you where to invest.

Rung 2 tells you: "If we deploy endpoint detection across the portfolio, expected breach costs decrease by $1.2M/year on average." This is an interventional estimate. It tells you what to do — for the average case.

Rung 3 tells you: "If we had deployed endpoint detection in this environment, this breach would not have occurred — given the specific attack vector, the existing controls, and the network topology." This is a counterfactual. It tells you what would have happened — for this specific case. It's the answer the board, the regulator, and the insurer actually need.
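The gap between Rung 2 and Rung 3 can be made concrete with a toy structural equation. The cost function, the 0.4 multiplier, and the exogenous severity term are illustrative assumptions, not a calibrated model:

```python
import random

# Toy structural equation: breach cost depends on one control and one
# exogenous severity term U (everything the model doesn't observe).
def breach_cost(endpoint_detection: bool, u_severity: float) -> float:
    base = 2_000_000 * u_severity            # severity set by exogenous factors
    return base * (0.4 if endpoint_detection else 1.0)

# Rung 2 (doing): average effect of the control across many incidents.
random.seed(0)
population = [random.random() for _ in range(10_000)]
avg_without = sum(breach_cost(False, u) for u in population) / len(population)
avg_with = sum(breach_cost(True, u) for u in population) / len(population)
print(f"portfolio average saving: ${avg_without - avg_with:,.0f}")

# Rung 3 (imagining): THIS incident, holding its exogenous term fixed.
u_this_incident = 0.85                       # abduced from the observed breach
factual = breach_cost(False, u_this_incident)
had_we_deployed = breach_cost(True, u_this_incident)
print(f"this incident: ${factual:,.0f} -> ${had_we_deployed:,.0f}")
```

The Rung 2 number is a population average; the Rung 3 number holds this incident's U fixed and changes only the control. Same equations, different questions.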

[1] Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press. Chapter 1 establishes the formal hierarchy; Chapter 7 proves the impossibility of crossing rungs without structural assumptions.

5. What a Real Risk Model Does
Mechanisms, not scores. Reasoning, not ranking.

A risk model — as opposed to a risk score — has three components that L × I entirely lacks:

1. Causal structure. A directed acyclic graph that specifies which variables cause which other variables. Not "likelihood and impact are related to risk" — but "phishing success depends on training frequency, email filtering, and user awareness; breach severity depends on data sensitivity, detection time, and response capability; financial loss depends on breach severity, regulatory jurisdiction, and notification costs." Each arrow is a testable claim about mechanism.

2. Structural equations. For each variable in the graph, a function that maps its parents (causes) plus an error term (the unique, unobserved factors) to its value. These equations are not statistical fits — they are representations of how the world works. They say: "if you change this input, here is how the output changes, and here is why."

3. Exogenous variables. The U terms — everything the model doesn't observe but the outcome depends on. In risk: the specific attacker's skill level, the particular configuration of this network, the response team's availability that day. These are what make each incident unique. When you fix U to match a specific case and then change an intervention variable, you get a counterfactual: what would have happened to this incident if the control had been different.
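A minimal sketch of these three components in Python, reusing the phishing example. The variable names, coefficients, and U values are invented for illustration; the point is the shape of the reasoning: fix U to the observed incident (abduction), change the control (action), recompute the outcome (prediction).

```python
# Structural equations: one per endogenous variable, mapping its causes
# plus an exogenous U term to its value. Coefficients are illustrative.
def phishing_success(training: float, filtering: float, u_attacker: float) -> float:
    # Controls reduce success; U captures this attacker's skill.
    return max(0.0, u_attacker - 0.3 * training - 0.4 * filtering)

def financial_loss(success: float, u_environment: float) -> float:
    # Loss in dollars, scaled by this environment's quirks.
    return success * 1_000_000 * (1 + u_environment)

# Abduction: fix the U terms to match the incident we actually observed.
u_attacker, u_environment = 0.9, 0.2
# Factual world: the controls as they actually were.
factual = financial_loss(phishing_success(0.5, 0.2, u_attacker), u_environment)
# Counterfactual world: same U terms, but filtering fully deployed.
counterfactual = financial_loss(phishing_success(0.5, 1.0, u_attacker), u_environment)

print(f"factual loss:      ${factual:,.0f}")
print(f"with the control:  ${counterfactual:,.0f}")
```

Because the U terms are held fixed, the difference between the two runs is attributable to the control and nothing else. That is a counterfactual about this incident, not a population average.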

L × I

  • No causal structure
  • No structural equations
  • No exogenous variables
  • No intervention reasoning
  • No counterfactual reasoning
  • Produces: a score

Structural Causal Model

  • Directed graph of mechanisms
  • Equations that encode causation
  • U variables for individual reasoning
  • do-calculus for interventions
  • Abduction → Action → Prediction
  • Produces: answers in dollars, with uncertainty

The difference is not one of degree. It is not that an SCM is "more accurate" than L × I the way a calculator is more accurate than mental arithmetic. The difference is categorical: they answer fundamentally different classes of questions. L × I answers "which risks look bigger?" An SCM answers "what should we do, and what would have happened if we'd done something else?"

6. Consequences
What happens when you manage risk with a label instead of a model.

Organizations that rely on L × I as their risk model — rather than recognizing it as a display convention — produce predictable failure patterns:

Misallocation is invisible. Two risks score 12. One has expected annual loss of $2.4M. The other: $180K. The risk register shows them as equal priority. The budget splits evenly. $1.1M goes to the wrong place — and nobody notices, because the framework provides no basis for noticing. The heatmap looks balanced. The portfolio is not.
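The same two risks, rebuilt in units that make the misallocation visible. The expected annual losses match the figures above; the probability-times-loss decomposition behind them is an assumed example:

```python
# Two risks the heatmap treats as identical (score 12), with very
# different expected annual losses.
risks = {
    "risk A": {"likelihood": 3, "impact": 4, "annual_prob": 0.30, "loss_usd": 8_000_000},
    "risk B": {"likelihood": 4, "impact": 3, "annual_prob": 0.60, "loss_usd": 300_000},
}
for name, r in risks.items():
    score = r["likelihood"] * r["impact"]            # what the register shows
    eal = r["annual_prob"] * r["loss_usd"]           # what the budget should see
    print(f"{name}: score={score}, expected annual loss=${eal:,.0f}")
```

Identical scores, a thirteenfold difference in expected loss. Nothing in the register surfaces it.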

Controls are unjustifiable. "We spent $400K on this control because the risk was scored High." A regulator, a board member, or an insurer asks: "High compared to what? In what units? What's the expected return?" The risk team cannot answer — not because they're incompetent, but because the framework they were given doesn't produce answers in units that support justification.

Post-incident analysis is theatre. After a loss, the team updates the risk register: likelihood moves from 2 to 4, impact stays at 5, new score is 20. What has been learned? Nothing causal. The team knows the loss happened. They do not know why it happened, which controls would have prevented it, or whether the updated score will predict anything useful. The register is updated. The understanding is not.

AI is impossible. An organization that wants to deploy risk AI — automated reasoning about risk decisions — discovers that its entire risk framework is a spreadsheet of ordinal products. There is nothing for an AI system to reason over. No causal graph to traverse. No structural equations to evaluate. No exogenous variables to condition on. The "model" is a lookup table. You cannot build intelligence on top of a lookup table. You can only build a faster lookup.

Between Misleading and Useless

L × I is misleading when it creates confidence without justification — when the heatmap's colours produce the feeling that risks are understood and managed. It is useless when a decision-maker needs a dollar figure, a causal explanation, or a counterfactual scenario and the framework cannot provide one. Most of the time, it is both: it gives the appearance of rigour while preventing the substance of it.

7. What To Do
Three steps from labels to reasoning.

Step 1: Acknowledge the limitation. L × I is a display format, not a model. It is useful the way a table of contents is useful — it tells you what topics exist and roughly how important they are. It does not tell you what to do about them. Continue using it for communication if it helps stakeholders orient themselves. Stop using it for decisions.

Step 2: Build causal structure. For each risk in your top 10, draw the causal graph. What causes the risk event? What determines the severity? Which controls affect which pathways? This is not a statistical exercise — it requires domain expertise. The graph encodes how your risk professionals believe the world works. That knowledge is already in their heads. It needs to be in a model.
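Step 2 can start as something as simple as an adjacency mapping. Here is the phishing graph described earlier, written as "variable → the variables it directly causes"; each edge is a claim your experts should be able to defend or refute:

```python
# The causal graph from the phishing example, as plain data.
causal_graph = {
    "training_frequency": ["phishing_success"],
    "email_filtering": ["phishing_success"],
    "user_awareness": ["phishing_success"],
    "data_sensitivity": ["breach_severity"],
    "detection_time": ["breach_severity"],
    "response_capability": ["breach_severity"],
    "breach_severity": ["financial_loss"],
    "regulatory_jurisdiction": ["financial_loss"],
    "notification_costs": ["financial_loss"],
}

def parents(graph: dict, node: str) -> list:
    """Which variables directly cause `node`?"""
    return sorted(cause for cause, effects in graph.items() if node in effects)

print(parents(causal_graph, "breach_severity"))
```

Even before any equations are attached, this structure supports questions a heatmap cannot: which controls sit upstream of which losses, and through which pathways.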

Step 3: Enable counterfactual reasoning. With causal structure in place, you can answer: "What would this loss have been if we had implemented control X?" "Would this decision have been different if variable Y had been different?" "What is the expected return on this mitigation investment for this specific risk?" These are the questions your board is asking. These are the questions regulators will ask. A causal model can answer them. A heatmap cannot.
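And Step 3's budget question, answered in dollars rather than colours. The probabilities below are hypothetical inputs that a causal model would supply after simulating the intervention:

```python
# Expected net return of a mitigation: saving from the intervention,
# minus what the control costs. All figures are illustrative.
def mitigation_net_return(p_loss: float, loss_usd: float,
                          p_loss_with_control: float, control_cost: float) -> float:
    expected_saving = (p_loss - p_loss_with_control) * loss_usd
    return expected_saving - control_cost

net = mitigation_net_return(p_loss=0.30, loss_usd=5_000_000,
                            p_loss_with_control=0.10, control_cost=400_000)
print(f"expected net return: ${net:,.0f}")
```

A number with a currency unit can be defended to a board. "Score moved from 15 to 8" cannot.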

8. Reading
The foundational literature.

On the Failure of Risk Matrices

  • Cox, L. A. (2008). "What's Wrong with Risk Matrices?" Risk Analysis, 28(2), 497–512.
  • Hubbard, D. W. (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. Wiley.
  • Thomas, P., Bratvold, R. B., & Bickel, J. E. (2014). "The Risk of Using Risk Matrices." SPE Economics & Management, 6(2), 56–66.

On Causal Reasoning in Risk

  • Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press.
  • Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
  • Bareinboim, E. et al. (2022). On Pearl's Hierarchy and the Foundations of Causal Inference. Technical Report R-60, CausalAI Lab.

On Quantified Risk

  • Freund, J. & Jones, J. (2015). Measuring and Managing Information Risk: A FAIR Approach. Butterworth-Heinemann.
  • Hubbard, D. W. & Seiersen, R. (2016). How to Measure Anything in Cybersecurity Risk. Wiley.