Designing Indicators That Actually Measure Impact
- Donfelix Ochieng
- Feb 11
- 4 min read
There’s a moment in almost every project where the room goes quiet.
Someone scrolls through the logframe or results framework and asks, carefully, “Do these indicators really tell us whether this work is making a difference?”
The silence that follows is usually the answer.
Most professionals working in research, development, or evidence-driven programmes have inherited indicators rather than designed them. They arrive pre-packaged in proposals, donor templates, or legacy systems, and once they exist, they are rarely questioned. Data is collected. Reports are written. And yet, a nagging doubt remains: Are we measuring impact, or just activity?

Why Indicators So Often Miss the Point
On paper, many indicators look reasonable. They are technically sound. They follow familiar frameworks. They tick the SMART boxes.
In practice, they often fail to do the one thing that matters most: help people make better decisions.
I’ve seen teams track dozens of indicators without being able to answer simple questions like:
What changed because of this work?
For whom did it change?
What should we do differently next time?
The issue isn’t a lack of effort or expertise. It’s that indicator design is often treated as a compliance exercise rather than a thinking process. Once indicators become something you report on instead of something you work with, they lose their power.
Most Indicators Are Designed Too Early
One of the most common mistakes happens right at the start.
Indicators are written before there is real clarity about:
what success would actually look like,
what kind of change is realistic,
or how decisions will be made along the way.
The pressure to “finalize indicators” early, often before implementation realities are understood, leads to rigid measures that don’t age well. As contexts shift (and they always do), the indicators remain frozen.
In theory, indicators should guide implementation. In reality, implementation often outgrows them.
Practical implication: Indicator design should be iterative. Early indicators can be provisional, refined as understanding deepens. Locking them in too soon trades flexibility for false certainty.
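To make the idea of provisional indicators concrete, here is a minimal sketch in Python, assuming a simple (entirely hypothetical) record structure in which each indicator carries a status and a visible trace of its revisions:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for a provisional indicator. Field names are
# illustrative, not a standard M&E schema.
@dataclass
class Indicator:
    name: str
    definition: str
    status: str = "provisional"          # stays provisional until field-tested
    next_review: date | None = None
    revision_notes: list[str] = field(default_factory=list)

    def refine(self, new_definition: str, note: str) -> None:
        """Update the definition and keep a record of why it changed."""
        self.revision_notes.append(
            f"{self.definition} -> {new_definition}: {note}"
        )
        self.definition = new_definition

# Usage: an early indicator is written down, then refined once the team
# learns that attendance says little about application of skills.
uptake = Indicator(
    name="training_uptake",
    definition="Number of participants attending workshops",
    next_review=date(2025, 6, 30),
)
uptake.refine(
    "Share of participants applying at least one new practice within 3 months",
    "Attendance alone did not reflect behaviour change",
)
```

The point is not the code itself but the habit it encodes: definitions are expected to change, and the reasons for change stay visible.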
Activity Is Easier to Measure Than Change, and That’s the Trap
Counting activities is comfortable. Measuring change is uncomfortable.
It’s far easier to track:
number of workshops held,
reports produced,
people reached.
These figures are clean, defensible, and rarely controversial. But they tell us very little about whether the work actually mattered.
Impact indicators, on the other hand, force harder conversations:
Did people apply what they learned?
Did behaviour, decisions, or outcomes shift?
Were assumptions wrong?
These questions introduce uncertainty, and uncertainty makes organizations nervous.
The reality: If an indicator doesn’t risk revealing failure, it probably isn’t measuring impact.
SMART Is Useful, But Only When Used Thoughtfully
SMART indicators are often taught as a checklist. That’s where they lose their value.
When used mechanically, SMART becomes a box-ticking exercise that produces indicators that are technically correct but practically weak. When used well, it sharpens thinking.
Here’s what that looks like in practice:
Specific means the indicator is unambiguous enough that two different people would interpret it the same way.
Measurable means there is a credible plan for collecting the data, not just a hope that it will be available.
Achievable means grounded in real capacity, not ideal scenarios.
Relevant means directly linked to decisions someone is actually responsible for making.
Time-bound means change can be observed and reflected on, not vaguely implied.
The most overlooked element is relevance. Many indicators are measurable but meaningless because no decision depends on them.
A simple test I’ve learned to use: If this indicator changes significantly, who needs to act and how?
If there’s no clear answer, the indicator is probably unnecessary.
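One way to operationalize that test is to record, for each indicator, the decision it informs and who owns that decision, then flag anything unowned. A rough sketch, with hypothetical field names:

```python
# The structure is assumed: each indicator names the decision it informs
# and the role responsible for acting on it.
indicators = [
    {"name": "workshops_held", "decision": None, "owner": None},
    {
        "name": "practices_adopted",
        "decision": "Adjust curriculum for the next cohort",
        "owner": "programme_lead",
    },
]

def flag_orphans(indicators):
    """Return indicators that no decision depends on: candidates to drop."""
    return [
        i["name"]
        for i in indicators
        if not i.get("decision") or not i.get("owner")
    ]

print(flag_orphans(indicators))  # ['workshops_held']
```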
Good Indicators Reflect How Work Actually Happens
Indicators often fail because they reflect how work should happen, not how it does happen.
They assume linear progress, stable contexts, and predictable responses. Anyone who has worked in complex systems knows that reality is messier.
This doesn’t mean indicators should be vague. It means they should be honest.
Some of the strongest indicators I’ve seen:
allow for partial progress,
capture direction of change rather than absolute targets,
and acknowledge external influences.
They don’t pretend complexity doesn’t exist; they work within it.
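For instance, a direction-of-change indicator can be computed from any repeated measure without committing to an absolute target. A hedged sketch, with an illustrative tolerance and made-up survey scores:

```python
# Instead of asking "did we hit the target?", this asks "is the trend
# moving the right way?" The tolerance and data below are illustrative.
def direction_of_change(values, tolerance=0.05):
    """Classify a series as 'improving', 'declining', or 'stable'
    based on relative change between first and last observations."""
    if len(values) < 2 or values[0] == 0:
        return "insufficient data"
    relative = (values[-1] - values[0]) / abs(values[0])
    if relative > tolerance:
        return "improving"
    if relative < -tolerance:
        return "declining"
    return "stable"

# Quarterly scores from a hypothetical outcome survey: partial progress
# still registers as movement, even if an absolute target was missed.
print(direction_of_change([42, 45, 44, 49]))  # 'improving'
```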
Fewer Indicators, Taken Seriously, Beat Many Taken Lightly
There’s a temptation to cover every angle “just in case.”
The result is often:
bloated data collection tools,
overburdened teams,
and reports that no one fully reads.
In contrast, systems with fewer, well-chosen indicators tend to generate better conversations and stronger learning. When teams know why an indicator exists and how it will be used, data quality improves almost automatically.
Focus creates accountability. Excess creates distance.
Practical Takeaways
Design indicators as tools for thinking, not just reporting.
Revisit and refine indicators as understanding evolves.
Prioritize indicators that inform real decisions.
Be honest about what can and cannot be measured well.
Accept that some of the most important changes are harder to capture, but still worth attempting to understand.
There is no perfect indicator set, only one that is fit for purpose.
Closing Reflection
Indicators quietly shape how work is understood, valued, and judged. They influence what gets attention and what gets ignored.
When indicators are poorly designed, they don’t just fail to measure impact. They distort it.
When they are designed with care, humility, and realism, they do something far more valuable: they help people see their work more clearly.
And clarity, in complex systems, is a powerful thing.




