Turning Research into Policy: Why Evidence Often Stalls

  • Writer: Donfelix Ochieng
  • Feb 24
  • 5 min read

The Policy Brief That Collected Dust

I remember sitting in a Ministry of Health meeting in Nairobi, watching a senior official flip through a meticulously prepared policy brief: months of KEMRI research distilled into actionable recommendations. The data was solid. The analysis was rigorous. The implications were clear. And I knew, even as he nodded appreciatively, that nothing would happen.

Six months later, the brief was still sitting in a filing cabinet. The problem it addressed, antimicrobial resistance patterns in county hospitals, had worsened. The researchers were frustrated. The Ministry staff felt misunderstood. Everyone had done their job, yet nothing had moved.

This isn't a story of incompetence or bad faith. It's a story of systemic friction that I've witnessed repeatedly across Kenya's health research landscape. And it's far more common than the success stories we prefer to tell.

The Translation Gap Nobody Wants to Name

There's a persistent myth in global health research: produce quality evidence, get it to the right people, and policy change follows naturally. This linear model, from research to dissemination to policy, looks sensible on paper. In practice, it rarely operates this cleanly.

Working between KEMRI and the Ministry of Health, I've seen how evidence gets filtered through layers of institutional reality. A county health director might agree completely with your findings but face budget cycles that won't accommodate new interventions for eighteen months. A national program manager may support your recommendation politically but know that implementation capacity doesn't exist at facility level. A cabinet secretary might prioritize your issue, then get reshuffled before action materializes.

The research community often interprets these delays as resistance to evidence. Sometimes that's accurate. More often, it's a mismatch between research timelines and policy rhythms, between academic incentives and bureaucratic constraints. Understanding this distinction matters enormously if you want your work to actually influence decisions.


Where Evidence Actually Gets Stuck

1. The Timing Problem

Research operates on discovery timelines. Policy operates on political and budgetary calendars. These rarely align.

I once worked on a KEMRI study on maternal health financing that concluded just after Kenya's national budget had been finalized. The findings were directly relevant to county-level resource allocation, but counties had already set their priorities for the fiscal year. We had strong evidence and willing partners, but we were six months too late to influence the relevant decisions.

The lesson isn't to move faster; research integrity requires appropriate timeframes. The lesson is to engage policy processes before final analysis, embedding research within ongoing policy conversations rather than presenting it as a finished product. This means investing in relationships that let you anticipate decision windows and position evidence accordingly.

This approach demands humility. You're no longer the expert delivering truth to decision-makers; you're a participant in messy, iterative processes where evidence is one input among many.


2. The Relevance Gap

Academic researchers are trained to advance knowledge. Policy actors need solutions to immediate problems. These orientations produce different questions, different methodologies, and importantly, different definitions of good evidence.

I've watched researchers present findings on disease burden that were methodologically sophisticated but practically unhelpful: wrong geographic granularity, wrong time horizon, wrong framing of trade-offs. Ministry staff listened politely, then continued with existing plans because the research didn't actually address their operational constraints.

Effective policy engagement requires asking different questions early in the research process. Not "what's the most rigorous design?" but "what decision does this person face, and what information would actually change their calculus?" Sometimes this means sacrificing methodological purity for practical relevance. That's a trade-off many researchers resist, but it's essential if policy influence is genuinely the goal.


3. The Trust Deficit

Evidence doesn't speak for itself. It speaks through relationships, and those relationships take time to build.

In my experience working across KEMRI and Ministry structures, policy actors rarely reject evidence on technical grounds. They reject it because they don't trust the source, don't understand the motivations behind it, or have been burned before by research that overpromised and underdelivered.

I've seen Ministry officials dismiss technically sound studies because the researchers had no track record of follow-through: no willingness to help with implementation challenges, no presence when things got difficult. I've seen suspicion of externally funded research that seemed designed to advance donor priorities rather than local needs. These trust deficits aren't irrational; they're learned caution from experience.

Building credibility means showing up consistently, not just when you need access. It means acknowledging limitations in your findings. It means staying engaged after publication, when implementation gets messy and the easy part, generating evidence, is behind you.


4. The Capacity Asymmetry

Here's something underappreciated: producing policy-relevant research and using research to inform policy are distinct skills, and most institutions are weak in one or both.

KEMRI has extraordinary scientific capacity. The Ministry of Health has deep operational knowledge. But the interface between them (translating research into policy-ready formats, understanding decision-making contexts, building sustained institutional relationships) is often underdeveloped on both sides.

I've seen brilliant researchers struggle to communicate findings in formats that policy actors can actually use. I've seen capable Ministry staff unable to evaluate evidence quality or distinguish between robust and questionable studies. These capacity gaps aren't individual failings; they're systemic features of institutions designed for different primary purposes.

Addressing them requires intentional investment in boundary-spanning roles: people who understand both research and policy cultures and can navigate between them. These roles are often undervalued in academic career structures and underfunded in government systems, which helps explain why they remain scarce.


What Actually Moves Evidence into Action

If you're doing research with policy aspirations in Kenya's health system, or similar contexts, here's what I've learned matters:


  • Start with policy actors, not with research questions. Understand what decisions are pending, what constraints decision-makers face, what information would actually be useful. Design your study to fill specific gaps in actionable knowledge, not just to advance the literature.


  • Invest in relationships before you need them. The official who trusts you will engage seriously with inconvenient findings. The official who doesn't know you will find reasons to discount them.


  • Plan for the long arc of policy change. Most significant policy shifts I've observed involved multiple studies over years, persistent engagement through political transitions, and researchers who stayed involved through implementation rather than moving to the next project.


  • Get comfortable with partial influence. Your evidence might shape one element of a complex policy, or inform debate without determining the outcome. This isn't failure; it's a realistic understanding of how policy actually develops.


The Harder Truth

After years in this space, I've become less optimistic about individual studies changing policy and more convinced that sustained institutional engagement matters most. Single research projects rarely transform policy landscapes. Persistent presence of credible research institutions, building trust and demonstrating commitment over time, gradually shifts how policy actors think and decide.

This is less satisfying than the myth of the definitive study that changes everything. But it's more accurate to how I've seen evidence actually influence health policy in Kenya. And it suggests a different model for researchers who want real-world impact: not as external experts delivering truth, but as embedded participants in ongoing processes of institutional learning and adaptation.


The policy brief in that filing cabinet wasn't wasted effort. It was one contribution to a longer conversation that eventually, through multiple channels and iterations, helped shift antimicrobial stewardship practices. But that took years, not months. And the researchers who stayed engaged through that process, not just the ones who produced the initial evidence, were the ones who ultimately made the difference.

© 2026 TechMedMind. All rights reserved.