Common MEL Mistakes and How to Avoid Them
- Donfelix Ochieng
- Feb 11
- 3 min read
If Monitoring, Evaluation, and Learning (MEL) were only about tools and templates, most development organizations would be doing exceptionally well.
They have log frames. They have indicators. They have dashboards, frameworks, and reports stacked neatly in shared folders.
Yet, despite all this structure, the same questions keep coming back: Why didn’t this programme achieve what we expected? Why does the data look good, but the reality feel different? Why do we keep repeating the same mistakes?
After years of working with research and development teams, I’ve come to realise something uncomfortable but important:
Most MEL problems are not technical. They are thinking problems.

1. When Indicators Impress but Don’t Inform
One of the earliest cracks in many MEL systems appears at the indicator level.
Indicators are often designed to sound strategic:
“Improved resilience”
“Enhanced livelihoods”
“Increased awareness”
They read well in proposals and reports. But when it’s time to collect data, teams struggle to explain what exactly they’re measuring.
The result? Numbers that look polished but don’t guide decisions.
What works better: Good indicators are not poetic; they are practical.
Before approving an indicator, I’ve learned to ask one simple question: What would we do differently if this number went up or down?
If there’s no clear answer, the indicator is probably decoration, not evidence.
2. Baselines Treated as an Inconvenience
Baselines are often the first thing sacrificed when timelines tighten or funding arrives late.
“Let’s just start implementation.” “We’ll figure it out later.” “We know the situation already.”
Months later, during evaluation, everyone is guessing.
Guidance repeatedly emphasizes that without a baseline, claims of change become assumptions dressed as findings.
What works better: A baseline doesn’t need to be perfect. It needs to be intentional.
Even rough baseline data, clearly documented with its limitations, gives meaning to endline results. Without it, comparison becomes storytelling.
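The point about documenting limitations can be made concrete with a small sketch in Python. Everything here is illustrative: the indicator, the figures, and the `describe_change` helper are hypothetical, not a real MEL tool.

```python
# Illustrative sketch: even a rough, well-documented baseline lets an
# endline value be read as change rather than as a standalone number.
# All names and figures are hypothetical.

baseline = {
    "value": 42.0,            # % of households with year-round water access
    "collected": "2021-03",
    "limitations": "convenience sample, 3 of 5 target districts only",
}

endline_value = 57.5


def describe_change(baseline: dict, endline: float) -> str:
    """Report the endline against the baseline, carrying the caveats along."""
    delta = endline - baseline["value"]
    return (
        f"Change: {delta:+.1f} percentage points "
        f"(baseline {baseline['value']:.1f} in {baseline['collected']}; "
        f"caveat: {baseline['limitations']})"
    )


print(describe_change(baseline, endline_value))
```

The design choice worth noting is that the limitations travel with the number: whoever reads the endline comparison also sees how shaky the baseline was, instead of inheriting a bare figure.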
3. Confusing Tools with Strategy
I’ve seen highly sophisticated data platforms used to collect information that no one ever analyses.
The assumption is often: If we use the right tool, the insight will appear.
It doesn’t.
Tools don’t fix unclear questions. They only amplify them.
What works better: Strong MEL systems start with curiosity, not software.
Before selecting a tool, pause and ask:
What decisions need to be made?
Who needs the answers?
How often will this data be reviewed?
Only then should tools enter the conversation.
4. Collecting Everything and Understanding Nothing
There’s a quiet fear in many programmes: What if we miss something important?
So, questionnaires grow longer. Indicators multiply. Field teams are stretched thin.
The outcome is predictable:
Enumerator fatigue
Poor-quality data
Overwhelmed analysts
Decision-makers who stop reading reports
More data rarely equals more insight.
What works better: Focused data beats exhaustive data.
The most effective teams I’ve worked with are ruthless about prioritization. They collect only what they are genuinely prepared to analyze and act on.
If data has no pathway to decision-making, it shouldn’t be collected.
5. Treating MEL as a Reporting Exercise
Perhaps the most damaging mistake of all is when MEL exists mainly to satisfy external accountability.
Reports are written. Recommendations are listed. And then nothing changes.
Research shows that learning only happens when reflection is built into routine work, not added as an afterthought.
What works better: Learning must be designed.
That means:
Regular reflection moments during implementation
Honest conversations about what isn’t working
Clear ownership of follow-up actions
Learning is not automatic. It is intentional.
A Final Reflection
Strong MEL systems are not about doing everything right.
They are about:
Asking better questions
Accepting uncertainty
Being willing to adjust course
When MEL shifts from compliance to curiosity, something changes.
Reports stop being an endpoint. They become a starting point.
And that’s when evidence begins to shape real-world decisions, not just fill pages.