


Turning Research into Policy: Why Evidence Often Stalls
Evidence doesn't speak for itself—it speaks through relationships. In my work across KEMRI and Kenya's Ministry of Health, I've seen brilliant research dismissed not for technical flaws, but because researchers lacked follow-through or seemed driven by external agendas. Policy actors rarely reject findings on merit; they reject them based on trust. Building credibility means showing up consistently, acknowledging limitations, and staying engaged when implementation gets messy.
Donfelix Ochieng
Feb 24 · 5 min read


How Data Can Strengthen Accountability and Trust
Here's where trust is genuinely built or destroyed. Every research organization makes mistakes. Mislabeled samples. Coding errors in statistical analysis. Protocol deviations during recruitment. The question isn't whether errors occur—it's what happens when they're discovered.
Donfelix Ochieng
Feb 24 · 5 min read


Designing Indicators That Actually Measure Impact
Many professionals collect indicator data without being confident it actually reflects impact. This article draws on real-world experience to explore why indicators often miss the point, how well-intentioned frameworks can fail in practice, and what it takes to design indicators that genuinely support learning, decision-making, and meaningful change.
Donfelix Ochieng
Feb 11 · 4 min read


Common MEL Mistakes and How to Avoid Them
If Monitoring, Evaluation, and Learning (MEL) were only about tools and templates, most development organizations would be doing exceptionally well. They have log frames. They have indicators. They have dashboards, frameworks, and reports stacked neatly in shared folders. Yet, despite all this structure, the same questions keep coming back: Why didn't this programme achieve what we expected? Why does the data look good, but the reality feel different? Why do we keep repeating the same mistakes?
Donfelix Ochieng
Feb 11 · 3 min read


Monitoring vs. Evaluation vs. Learning: What's the Difference?
Many research organizations collect vast amounts of data but struggle to turn it into better decisions. Understanding the difference between monitoring, evaluation, and learning (MEL) is essential for improving research quality, relevance, and impact. This article breaks down how each function works in practice, why confusing them weakens research outcomes, and how researchers can use evidence not just to report results but to learn, adapt, and improve future studies.
Donfelix Ochieng
Jan 28 · 3 min read
