Leading vs lagging indicators
The difference between spotting a problem early and explaining a failure after the fact often comes down to which signals you're watching.
By Karen Frith
Last updated: January 2026
Two kinds of signal
Every program generates signals. Some tell you what has already happened — these are lagging indicators. Others suggest what might be forming — these are leading indicators.
Most program reporting focuses heavily on lagging indicators: milestones hit, budget spent, deliverables completed. These are concrete, measurable, and easy to report. They're also historical. By the time a lagging indicator turns negative, the problem it reflects is already well established.
The limitation of lagging indicators
Lagging indicators are useful for accountability and retrospective analysis. They confirm outcomes. But they offer limited value for prevention.
A program that tracks only lagging indicators is like driving while looking only in the rearview mirror. It can explain what happened, but it struggles to anticipate what's coming.
This is why programs often feel blindsided by problems that, in retrospect, were forming for months. The signals were there — they just weren't the signals being watched.
What leading indicators look like
Leading indicators are patterns that suggest trajectory, not just position. They're often subtler and harder to quantify:
- Decision velocity: How quickly are decisions being made? Slowing decision cycles often precede delivery delays.
- Dependency accumulation: Are teams waiting on each other more frequently? Growing dependencies create fragility.
- Escalation patterns: Are issues being escalated appropriately? Under-escalation suggests problems are being absorbed rather than addressed.
- Stakeholder engagement: Is engagement increasing or decreasing? Withdrawal often signals growing concern or frustration.
- Assumption validity: Are the assumptions the program was built on still holding? Invalidated assumptions create hidden risk.
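To make one of these concrete, here is a minimal sketch of how declining decision velocity might be quantified. The data, the metric (days to close each decision), and the interpretation threshold are all hypothetical assumptions for illustration, not prescriptions from this article: any simple trend measure over recent decision cycle times can serve the same purpose.

```python
from statistics import mean

def decision_velocity_trend(cycle_days: list[float]) -> float:
    """Least-squares slope of recent decision cycle times.

    A positive slope means decisions are taking progressively longer,
    i.e. decision velocity is slowing. This is a prompt for a
    conversation, not a verdict.
    """
    n = len(cycle_days)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(cycle_days)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, cycle_days))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Hypothetical data: days taken to close each of the last six decisions.
recent = [2.0, 2.5, 3.0, 3.5, 4.5, 5.5]
slope = decision_velocity_trend(recent)
if slope > 0:
    print(f"Decision cycles lengthening by ~{slope:.1f} days per decision")
```

Note that the output is probabilistic in exactly the sense described above: a rising slope suggests risk, it does not prove a problem, which is why it belongs in a conversation rather than a RAG cell.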
Why leading indicators are underused
Leading indicators require interpretation. They don't fit neatly into status reports or RAG dashboards. They're probabilistic rather than definitive — they suggest risk, not certainty.
This makes them uncomfortable to report. It's easier to say "we hit the milestone" than "decision velocity is declining and that concerns me." The former is a fact; the latter is a judgment.
But this discomfort is precisely why leading indicators are valuable. They create space for early conversation, before facts become failures.
Shifting the balance
Effective program oversight requires both types of signal. Lagging indicators provide accountability. Leading indicators provide anticipation.
The goal is not to abandon lagging metrics, but to complement them with forward-looking observation. To watch not just where the program is, but where it appears to be heading.
This shift — from reactive to anticipatory — is the foundation of early warning intelligence.