Work With Me
Training, advisory, and model development for teams building causal AI capability internally.
Who I Am
Marc Vandenplas: 25 years of international experience, at program and director level, in Causal AI, quantitative financial and operational risk modeling, information security, infrastructure analytics, operations research, business engineering, and change management.
Credentials: Stanford AI · Johns Hopkins Data Science · MBA
Selected clients: UCSF · Zuckerberg General Hospital · NVIDIA · Amazon · PG&E · Merrill Lynch · Chevron
My Approach
Most consultants deliver a model and disappear. Six months later, no one remembers how it works. I do the opposite: every engagement includes hands-on training so your team can own, explain, and evolve the model independently.
Differentiators
| Capability | Distinction |
|---|---|
| Your team owns the model | Knowledge transfer built into every engagement |
| Tool-agnostic guidance | Recommendations based on requirements, not vendor relationships |
| Executive + technical fluency | I bridge executive strategy and technical implementation |
You Might Be a Good Fit If…
- Your decisions need to be explainable. Regulators, boards, or stakeholders ask "why?" and you need better answers than "the model said so."
- You're tired of analytics that describe but don't prescribe. You have plenty of dashboards; what you need is guidance on what to do.
- You're worried about losing institutional knowledge. Key experts are approaching retirement, or knowledge is siloed in ways that create risk.
- You've been burned by black-box AI. You invested in sophisticated models that no one trusts or uses because no one understands them.
- You need to test scenarios before committing. The cost of being wrong is high, so you want to simulate interventions before you make them.
I work best with mid-market and enterprise organizations (500–10,000 employees) in regulated industries: financial services, healthcare, insurance, and critical infrastructure.
Engagements
Training & Workshops
Half-day to multi-day programs teaching Causal Networks, causal inference, and decision modeling.
On-site or virtual. Hands-on exercises with real tools.
Advisory & Consulting
Strategic guidance on implementing Causal Networks in your organization.
I help you scope projects, select tools, and build internal capability.
Model Development
I build custom Causal Networks for your domain — risk, customer analytics, operations, compliance.
Knowledge transfer to your team included.
Engagement Tiers
| Tier | Scope | Best For |
|---|---|---|
| Workshop | 1–3 days of focused training | Teams learning Causal Network fundamentals |
| Project | Scoped engagement to build a specific model | Organizations with a defined use case |
| Fractional | Ongoing part-time support (4–16 hrs/month) | Organizations building long-term capability |
Common Concerns
If you're skeptical, that's reasonable. Most "AI" promises have underdelivered. Here are the questions I hear most often:
**How is this different from the AI we already have?**
Most enterprise AI is built to find patterns — which is useful for some things, but not for decisions. Pattern-matching tells you what happened; causal models tell you what to do. Different tool, different purpose.
**Isn't the math too advanced for my team?**
If your team can draw a flowchart showing how your business works — "this causes that, which affects this other thing" — they can learn to build causal models. The underlying math is handled by software. The hard part is knowing which arrows to draw, and that's domain expertise, not programming.
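That flowchart is, quite literally, the starting artifact of a causal model. A minimal sketch, using invented variable names and Python's standard `graphlib`, showing "this causes that" arrows captured as an edge list and checked for the one structural rule a causal network demands (no cycles):

```python
from graphlib import TopologicalSorter

# A hypothetical business flowchart as "this causes that" arrows.
# Variable names are illustrative only, not from any real engagement.
causes = {
    "marketing_spend": ["web_traffic"],
    "web_traffic":     ["signups"],
    "pricing":         ["signups", "churn"],
    "signups":         ["revenue"],
    "churn":           ["revenue"],
}

# A causal network requires these arrows to form a DAG (no cycles);
# TopologicalSorter raises CycleError on a cyclic graph.
ts = TopologicalSorter()
for cause, effects in causes.items():
    for effect in effects:
        ts.add(effect, cause)   # each effect depends on its cause

# Causes come out before their effects, confirming the graph is acyclic.
order = list(ts.static_order())
print(order)
```

If drawing the arrows sparks an argument about whether pricing really drives churn, that argument is the point: the disagreement was already there, invisible.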
**How is this different from our existing analytics?**
Traditional analytics answers "what happened." Predictive models answer "what might happen next." Causal models answer "what happens if we act" and "what would have happened if we'd acted differently." The difference is intervention.
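"What happens if we act" is exactly where observing and intervening come apart. A minimal sketch with invented numbers: one hypothetical confounder Z drives both the action X and the outcome Y, so conditioning on X (seeing) and intervening on X (doing) give different answers:

```python
# Tiny confounded system: Z -> X, Z -> Y, and X -> Y.
# All probabilities are made up for illustration.
p_z = {0: 0.5, 1: 0.5}                      # P(Z=z), the confounder
p_x_given_z = {0: 0.2, 1: 0.8}              # P(X=1 | Z=z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,   # P(Y=1 | X=x, Z=z)
                (1, 0): 0.3, (1, 1): 0.7}

# "Seeing": P(Y=1 | X=1). Among units observed with X=1,
# Z is skewed toward 1, which inflates the estimate.
p_x1 = sum(p_z[z] * p_x_given_z[z] for z in (0, 1))
p_y_seeing = sum(p_z[z] * p_x_given_z[z] * p_y_given_xz[(1, z)]
                 for z in (0, 1)) / p_x1

# "Doing": P(Y=1 | do(X=1)). Cut the Z -> X arrow and keep P(Z)
# as-is (back-door adjustment over the confounder Z).
p_y_doing = sum(p_z[z] * p_y_given_xz[(1, z)] for z in (0, 1))

print(f"P(Y=1 | X=1)     = {p_y_seeing:.3f}")   # -> 0.620
print(f"P(Y=1 | do(X=1)) = {p_y_doing:.3f}")    # -> 0.500
```

The gap between 0.620 and 0.500 is pure confounding: a dashboard reports the first number, but a decision about X needs the second.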
**Doesn't this require a lot of upfront effort?**
It does — and that's a feature, not a bug. The process of building a causal model forces your team to articulate their assumptions explicitly. Often, this surfaces hidden disagreements. You can start small: a model with 5–10 variables can still deliver real insight.
**What if we get the model wrong?**
You will, at first — and that's okay. Causal models are meant to be revised. The model makes your assumptions visible, so you can see where they break down. A wrong-but-visible assumption is much easier to fix than a wrong-and-hidden one.
**Will we need to hire specialists?**
Your current analysts can learn this. The concepts aren't harder than regression or A/B testing — they're just different. A typical team gets productive within a few weeks of training.
Toolkit
These are the tools I use and recommend:
- Enterprise-grade Causal Networks with full causal ladder support
- Accessible, visual network building for research and education
- BayesFusion's graphical modeling environment with qualitative extensions
- Cloud-based Bayesian network modeling with decision support
- Free Bayesian statistics for analysts who want more than p-values
- Visual data science without code
- The complete R package for structure learning and inference
- Decision analysis software for strategic planning
- Excel add-in for decision trees
The right tool depends on your use case, team skills, and integration requirements. I help you evaluate options and build capability with whichever platform fits your needs.
Let's Talk
We start with an initial conversation to assess fit: a focused discussion of whether Causal Networks address your specific requirements. From there, we scope a workshop, project, or fractional engagement. If it's not a fit, I'll recommend a more suitable approach.
Begin the Conversation
Whether you're evaluating Causal Networks for the first time or ready to develop production models, I welcome your inquiry.
Book a Call or email: [email protected]