See.
Do.
Imagine.
A Causal FAIR Network
Counterfactual use cases for Factor Analysis of Information Risk, organized by Pearl's Ladder of Causation.
From passive observation to interventional policy to full counterfactual reasoning — 11 use cases across the three rungs.
1. The FAIR Model
FAIR (Factor Analysis of Information Risk) is a quantitative methodology that measures cyber risk in financial terms. Developed by Jack Jones and commercialized through RiskLens, FAIR breaks risk down into measurable components: how often threats occur, how likely they are to succeed, and what losses result.¹
Risk Decomposition
Total Primary Loss (TPL)
Direct, immediate losses from security incidents: emergency response, system restoration, forensic investigation, hardware/software replacement.
Total Secondary Loss (TSL)
Cascading, indirect costs that follow: reputation damage, regulatory fines, customer churn, long-term productivity loss.
Key Components
| Component | Formula | Example Value | Meaning |
|---|---|---|---|
| Contact Frequency (CF) | CF | 4.02 ± 1.9 | How often threat agents make contact with assets |
| Probability of Action (PoA) | PoA | 0.617 ± 0.21 | Given contact, probability threat agent takes action |
| Threat Event Frequency (TEF) | CF × PoA | 3.88 ± 3.1 | Actual threat events per period |
| Threat Capability (TC) | TC | 0.606 ± 0.21 | Attacker skill and resources (0–1) |
| Resistance Strength (RS) | RS | 0.635 ± 0.21 | Defensive capability (0–1) |
| Vulnerability (Vul) | P(TC > RS) | 43.2% | Probability of successful exploitation |
| Loss Event Frequency (LEF) | TEF × Vul | 1.27 ± 1.5 | Actual loss events per period |
| Risk | LEF × LM | 372 ± 290 | Annual risk exposure ($K) |
Traditional FAIR uses point estimates or simple ranges. FAIR+BN extends this with Bayesian networks that model uncertainty explicitly, update beliefs as new evidence arrives, and capture dependencies between factors. Instead of a single risk estimate, you get probability distributions with confidence intervals — and a model that automatically refines itself with each observed incident.
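The chain in the table above can be sketched with a small Monte Carlo simulation in plain Python. The distribution shapes and parameters below are assumptions chosen to roughly echo the example values in the table, not calibrated estimates:

```python
import random
import statistics

random.seed(42)
N = 50_000

def clamp01(x):
    """Keep probability-like factors in [0, 1]."""
    return min(max(x, 0.0), 1.0)

risks, vuls = [], []
for _ in range(N):
    cf  = max(random.gauss(4.02, 1.9), 0.0)    # Contact Frequency (per year)
    poa = clamp01(random.gauss(0.617, 0.21))   # Probability of Action
    tef = cf * poa                             # Threat Event Frequency
    tc  = clamp01(random.gauss(0.606, 0.21))   # Threat Capability
    rs  = clamp01(random.gauss(0.635, 0.21))   # Resistance Strength
    vul = 1.0 if tc > rs else 0.0              # Vulnerability: did TC exceed RS?
    lef = tef * vul                            # Loss Event Frequency
    lm  = random.lognormvariate(5.5, 0.6)      # Loss Magnitude ($K), assumed prior
    vuls.append(vul)
    risks.append(lef * lm)                     # annual risk exposure ($K)

mean_risk = statistics.mean(risks)
p_vul = statistics.mean(vuls)
print(f"Vulnerability ~ {p_vul:.1%}, mean annual risk ~ ${mean_risk:.0f}K")
```

Because every factor is a distribution, the result is a spread over `risks` (mean, percentiles, tail probabilities) rather than a single number, which is exactly what FAIR+BN adds over point-estimate FAIR.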
2. Network Structure
The FAIR+BN model represents risk factors as a Bayesian network — a directed acyclic graph where nodes are risk factors and edges are causal relationships.
The Bayesian FAIR network: nodes represent risk factors, edges show causal relationships.
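As a sketch, the network's structure can be captured as a plain adjacency mapping and sanity-checked for acyclicity with Python's standard library; node names follow the component table in Section 1:

```python
from graphlib import TopologicalSorter

# FAIR factors as a DAG: each child maps to its set of causal parents,
# mirroring the decomposition in the component table.
FAIR_DAG = {
    "TEF":  {"CF", "PoA"},   # Threat Event Frequency = f(Contact Freq, Prob. of Action)
    "Vul":  {"TC", "RS"},    # Vulnerability = f(Threat Capability, Resistance Strength)
    "LEF":  {"TEF", "Vul"},  # Loss Event Frequency
    "Risk": {"LEF", "LM"},   # Risk = f(Loss Event Frequency, Loss Magnitude)
}

# TopologicalSorter raises CycleError if the graph is not acyclic,
# so this doubles as a structural sanity check for the network.
order = list(TopologicalSorter(FAIR_DAG).static_order())
print(order)   # parents always precede children; "Risk" comes last
```

A real implementation would attach a conditional probability distribution to each node (e.g., in Bayes Server, GeNIe, or PyMC), but the DAG itself is this small.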
Cascade Effects
Models how TEF flows through vulnerability to create losses. See the complete causal chain from threat to financial impact.
Uncertainty at Every Step
Every parameter has a probability distribution, not just a point estimate. Know how confident you are.
Probabilistic Queries
Ask questions like: "What's P(Risk > $500K | observed breach)?" Get instant answers.
3. Association — Seeing
Pure observational queries. You set evidence and read updated beliefs using Bayes' rule over the existing joint distribution. No causal manipulation — just conditioning.
Example: "When I see Resistance Strength = Low, how does my belief about Risk change?"
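This "seeing" query can be approximated by rejection sampling: draw from the joint distribution, keep only the draws consistent with the evidence, and read off the updated belief. The priors and the "RS = Low" threshold of 0.4 below are assumptions of this sketch:

```python
import random

random.seed(0)

def clamp01(x):
    return min(max(x, 0.0), 1.0)

def sample():
    """One draw of (RS, Risk) from the assumed joint FAIR distribution."""
    cf  = max(random.gauss(4.02, 1.9), 0.0)
    poa = clamp01(random.gauss(0.617, 0.21))
    tc  = clamp01(random.gauss(0.606, 0.21))
    rs  = clamp01(random.gauss(0.635, 0.21))
    lef = cf * poa * (1.0 if tc > rs else 0.0)
    lm  = random.lognormvariate(5.5, 0.6)        # Loss Magnitude ($K)
    return rs, lef * lm

draws = [sample() for _ in range(100_000)]
risks_all    = [r for _, r in draws]
risks_low_rs = [r for rs, r in draws if rs < 0.4]   # "seeing" RS = Low

p_all = sum(r > 500 for r in risks_all) / len(risks_all)
p_low = sum(r > 500 for r in risks_low_rs) / len(risks_low_rs)
print(f"P(Risk > $500K) = {p_all:.1%}  vs  P(Risk > $500K | RS low) = {p_low:.1%}")
```

Observing low Resistance Strength raises the tail probability; this is plain Bayes-rule conditioning, with no edges severed.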
4. Intervention — Doing
Beyond observation. You're asking what happens if you force a variable to a particular state — Pearl's do-operator — severing incoming edges to the intervened node. This is the domain of policy decisions and deliberate action.
Example: "If we spend $2M on endpoint detection — do(RS = High) — does P(Risk = High) drop below 20%?"
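A minimal simulation of that intervention, under assumed priors: do(RS = High) replaces the prior draw of RS with a fixed value (0.9 here, a stand-in for "High"), which is exactly the edge-severing the do-operator describes:

```python
import random

random.seed(1)

def clamp01(x):
    return min(max(x, 0.0), 1.0)

def simulate(do_rs=None, n=100_000):
    """Estimate P(Risk > $500K) under assumed FAIR priors.
    If do_rs is set, Resistance Strength is forced to that value
    (Pearl's do-operator: RS no longer follows its prior)."""
    hits = 0
    for _ in range(n):
        cf  = max(random.gauss(4.02, 1.9), 0.0)
        poa = clamp01(random.gauss(0.617, 0.21))
        tc  = clamp01(random.gauss(0.606, 0.21))
        rs  = do_rs if do_rs is not None else clamp01(random.gauss(0.635, 0.21))
        lef = cf * poa * (1.0 if tc > rs else 0.0)
        lm  = random.lognormvariate(5.5, 0.6)    # Loss Magnitude ($K)
        if lef * lm > 500:
            hits += 1
    return hits / n

baseline = simulate()
after    = simulate(do_rs=0.9)   # assumed "High" RS after the investment
print(f"P(Risk > $500K): baseline {baseline:.1%} -> do(RS=High) {after:.1%}")
```

Under these assumed priors the intervention answers the policy question directly: compare `after` against the 20% tolerance before committing the spend.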
But where exactly does the $2M go? Resistance Strength isn't a single lever — it decomposes into multiple control dimensions², each of which can be independently intervened on:
Extended vulnerability model: attacker types, multiple attack vectors, and control dimensions.
Each control has three intervention surfaces: Design Effectiveness (is the control well-designed?), Extent of Deployment (does it cover all assets?), and Operational Effectiveness (is it maintained?). Running do() on each dimension separately reveals which has the highest ROI — e.g., improving Control A's deployment extent may reduce risk more than redesigning Control B.
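A sketch of that per-dimension comparison, under a toy assumption that the three dimensions combine multiplicatively into effective Resistance Strength; the aggregation rule and posture values are illustrative, not part of the FAIR standard:

```python
import random

random.seed(2)

def clamp01(x):
    return min(max(x, 0.0), 1.0)

def effective_rs(design, extent, ops):
    """Toy aggregation: a control only resists attacks where it is
    well-designed, fully deployed, AND well-operated (a multiplicative
    assumption of this sketch)."""
    return clamp01(design * extent * ops)

def p_compromise(design, extent, ops, n=50_000):
    """P(TC > effective RS) under the assumed Threat Capability prior."""
    rs = effective_rs(design, extent, ops)
    return sum(clamp01(random.gauss(0.606, 0.21)) > rs for _ in range(n)) / n

base = dict(design=0.8, extent=0.6, ops=0.7)           # assumed current posture
baseline = p_compromise(**base)
results = {dim: p_compromise(**{**base, dim: 0.95})    # do() on one dimension
           for dim in ("design", "extent", "ops")}

print(f"baseline compromise probability: {baseline:.1%}")
for dim, p in results.items():
    print(f"do({dim}=0.95): {p:.1%}")
```

With this assumed posture the deployment-extent upgrade helps most, because extent is the weakest dimension; different starting values change the ranking, which is precisely why you run do() per dimension before allocating budget.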
² Wang, J., Neil, M., & Fenton, N. (2020). A Bayesian Network Approach for Cybersecurity Risk Assessment Implementing and Extending the FAIR Model. *Computers & Security*, 89.
5. Counterfactual — Imagining
The most powerful queries. Condition on what actually happened, then ask what would have happened in an alternate world. Requires the three-step process: abduction → action → prediction.
Example: "We were breached with RS = Low. Would better controls have prevented it — or was the attacker too strong regardless?"
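The three-step recipe can be sketched by making the exogenous noise explicit in the structural equation for Threat Capability. The "Low"/"High" RS levels and the priors below are assumptions of this sketch:

```python
import random

random.seed(3)

def clamp01(x):
    return min(max(x, 0.0), 1.0)

RS_LOW, RS_HIGH = 0.3, 0.9     # assumed "Low"/"High" control levels

# Step 1 -- Abduction: keep only the exogenous noise draws (u) consistent
# with the evidence: the breach succeeded against RS = Low.
posterior_u = []
while len(posterior_u) < 20_000:
    u = random.gauss(0.0, 1.0)
    tc = clamp01(0.606 + 0.21 * u)   # structural equation for Threat Capability
    if tc > RS_LOW:                  # evidence: the attack succeeded
        posterior_u.append(u)

# Step 2 -- Action: do(RS = High), swapping in the upgraded controls.
# Step 3 -- Prediction: replay the SAME attackers against the new controls.
still_breached = sum(clamp01(0.606 + 0.21 * u) > RS_HIGH for u in posterior_u)
p_cf = still_breached / len(posterior_u)
print(f"P(breach anyway | do(RS=High), actual breach) = {p_cf:.1%}")
```

A low result says better controls would likely have prevented this particular breach; a high one says the attacker was probably too strong regardless. The key difference from a plain intervention is that Step 1 pins down who actually attacked us before Step 3 replays the alternate world.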
6. Summary
All eleven use cases mapped across the three rungs of causal reasoning.
| Rung | Operation | FAIR Use Case | Key Question |
|---|---|---|---|
| 1 · Seeing | P(Y \| X) | Post-breach forensics | What caused this? |
| 1 · Seeing | P(Y \| X) | Tornado sensitivity | What's most correlated with risk? |
| 1 · Seeing | P(Y \| X) | Threat profiling | What should we expect? |
| 2 · Doing | P(Y \| do(X)) | Control investment | What happens if we upgrade? |
| 2 · Doing | P(Y \| do(X)) | Regulatory change | What if fines increase? |
| 2 · Doing | P(Y \| do(X)) | Threat scenario modeling | Which profile causes more risk? |
| 2 · Doing | P(Y \| do(X)) | Secondary loss amplification | How does context change risk? |
| 3 · Imagining | P(Y'ₓ \| X, Y) | Post-breach counterfactual | Would better controls have prevented it? |
| 3 · Imagining | P(Y'ₓ \| X, Y) | Missed investment | Did our controls actually save us? |
| 3 · Imagining | P(Y'ₓ \| X, Y) | Cascading failure analysis | Was it frequency or severity? |
| 3 · Imagining | P(Y'ₓ \| X, Y) | Risk appetite boundary | What single change puts us in tolerance? |
7. Implementation
FAIR vs. FAIR+BN: Which Approach?
| Aspect | Traditional FAIR | FAIR+BN |
|---|---|---|
| Uncertainty | Point estimates or simple ranges | Full probability distributions with confidence intervals |
| Updates | Manual recalculation | Automatic Bayesian updating with new evidence |
| Dependencies | Limited modeling of factor interactions | Explicit causal relationships in Bayesian network |
| Complexity | Simpler, more accessible | More sophisticated, requires probabilistic expertise |
| Scenario Analysis | Recalculate entire model | Instant probabilistic queries and what-if testing |
Start with basic FAIR to build expertise. Upgrade to FAIR+BN once your organization has mature data practices, in-house statistical capability, and complex risk scenarios that require sophisticated modeling.
Software & Tools
- Bayesian Network Software: Bayes Server, GeNIe, Hugin (commercial), or PyMC3/Stan (open-source)
- Programming Environment: Python or R for data preprocessing and model integration
- Spreadsheet Tools: Excel/Google Sheets for basic FAIR calculations
- Visualization: Tableau, Power BI, or Python libraries for presenting results
Data Requirements
| Level | Data Needed | Purpose |
|---|---|---|
| Minimum Viable | 10–20 historical incidents | Estimate initial distributions |
| Recommended | 50+ incidents | Robust probability estimates |
| Ideal | Continuous SIEM/scanner data | Real-time updating |
| External | VERIS, Advisen, threat intel | Industry benchmarks |
Team Expertise
Essential
- Risk analyst familiar with FAIR methodology
- Basic statistics knowledge (distributions, probability)
Recommended
- Data scientist or statistician for Bayesian model development
Helpful
- Security operations for threat validation
- Finance for loss magnitude estimates
Timeline & Budget
| Item | Basic FAIR | FAIR+BN |
|---|---|---|
| Implementation | 2–4 weeks | 2–3 months |
| Ongoing Maintenance | 4–8 hours/month | 4–8 hours/month |
| Software | $0–5K/year | $5K–50K/year (or $0 open-source) |
| Training | $2K–5K | $5K–10K |
| Consulting (optional) | $10K–30K | $15K–100K |
Implementation Roadmap
Month 1–2: Foundation
Complete FAIR Fundamentals certification. Conduct first FAIR analysis on a single, well-understood risk scenario. Present results to leadership.
Month 3–4: Scale
Systematize data collection. Expand to 5–10 key risk scenarios. Refine estimates as you gather more data.
Month 5–6: Advance
Assess readiness for FAIR+BN. Build Bayesian capability. Pilot on one critical system, then expand.
8. Glossary