The “Research–Practice Gap” in Implementation Science: A Problem We May Have Created Ourselves
For decades, implementation science has tried to answer a simple but stubborn question: Why does it take so long for research to improve real-world practice? In healthcare, education, and social services, researchers often produce powerful evidence—but practitioners struggle to apply it consistently.
A new commentary published in Global Implementation Research and Applications argues something surprising: the gap between research and practice may be partly a myth created by the way we talk about implementation science itself.
Rather than treating research and practice as separate worlds that need to be bridged, the authors suggest a more radical idea: they are two sides of the same coin. If that’s true, the real challenge isn’t translating research into practice—it’s rethinking how we design research, train professionals, and define evidence in the first place.
Let’s unpack why this matters and what it could mean for public health, social services, and other applied fields.
The Myth of the Research–Practice Gap
One of the most common claims in implementation science is that it takes 17 years for research evidence to reach routine practice. You’ve probably heard this statistic before. It appears in hundreds of articles and presentations about evidence-based practice.
But according to the commentary, the origin of that number is surprisingly shaky.
The widely cited estimate comes from a 2000 paper that attempted to calculate how long it took certain medical practices—like flu vaccinations and cancer screenings—to reach widespread use. However, the analysis relied on a small and somewhat arbitrary set of procedures, and the calculation itself involved several assumptions about adoption rates. In other words:
- The famous “17-year gap” isn’t a universal law.
- It’s based on a narrow set of examples.
- Yet it’s been applied across nearly every field—from public health to education.
When a statistic becomes so widely repeated without scrutiny, it can shape how an entire discipline sees its mission. And that’s exactly what happened here.
When Language Creates a False Divide
The authors argue that the very phrase “research–practice gap” carries a hidden assumption:
That knowledge originates in academia, and practitioners are simply slow to adopt it. But in real-world settings, knowledge flows in both directions. Practitioners constantly generate insights through:
- program delivery
- community engagement
- policy implementation
- local experimentation
Yet these forms of knowledge often receive less recognition because they don’t fit traditional academic definitions of evidence. By framing the problem as research failing to reach practice, we unintentionally reinforce a hierarchy where:
Researchers → produce knowledge
Practitioners → apply knowledge
In reality, both groups are constantly shaping evidence.
The Problem with the “Gold Standard”
Another major theme in the commentary is the overemphasis on randomized controlled trials (RCTs).
RCTs are often called the “gold standard” of scientific evidence, particularly in medicine. But the authors point out that this label can be misleading. Randomized trials are extremely valuable for certain questions—especially testing drugs or medical devices. But many real-world problems don’t fit neatly into controlled experimental designs. Consider issues like:
- homelessness prevention
- community violence
- mental health services
- child welfare systems
These are complex social systems, not laboratory environments. In such contexts, RCTs may struggle to answer the most important questions, such as:
- Is this program feasible in a real-world setting?
- Do communities find it acceptable?
- Does it adapt well to local contexts?
When evaluation focuses too narrowly on RCT evidence, other valuable forms of knowledge—such as program evaluation, quality improvement, and observational studies—can be overlooked. Ironically, this can slow the very implementation we’re trying to accelerate.
The Explosion of Implementation Frameworks
Implementation science has grown rapidly over the past two decades. Today, researchers have identified more than 140 different implementation frameworks and models designed to guide the adoption of evidence-based practices. Some well-known examples include:
- RE-AIM
- CFIR (Consolidated Framework for Implementation Research)
- Behaviour Change Wheel
- Active Implementation Frameworks
While each offers useful insights, the sheer number of frameworks can create confusion for practitioners trying to implement programs in real time. The commentary compares this phenomenon to the “toothbrush problem.”
Everyone develops their own theory, but no one wants to use someone else's.
The result? A proliferation of models that may compete rather than converge.
Why Context Matters: “Wicked Problems”
Another key insight from the article is the difference between technical problems and “wicked problems.” Technical problems—like building a bridge—have clear definitions and solutions. But many public health and social service challenges are “wicked problems,” meaning:
- stakeholders disagree on the nature of the problem
- solutions are uncertain
- outcomes depend heavily on context
Examples include:
- reducing substance use
- improving child welfare outcomes
- addressing structural inequality
In these situations, rigid evidence hierarchies or standardized interventions may not translate easily into practice. Implementation becomes less about replicating a program and more about collaboratively adapting solutions.
Rethinking Workforce Training
One of the most practical insights from the commentary concerns how professionals are trained. Many graduate programs in social work, psychology, and public health claim to prepare students to use evidence-based practices. But surveys of program administrators suggest the reality is different:
- Many practitioners graduate without strong training in evidence-based practice.
- Academic faculty and field practitioners rarely collaborate on curriculum design.
- Field placements may not actually implement evidence-based interventions.
This disconnect means that the research–practice gap may start during professional training itself. To address this, the authors argue for deeper integration of:
- academic coursework
- field training
- real-world implementation projects
A Simpler Way Forward
Despite these challenges, the authors remain optimistic about the future of implementation science. Their central recommendation is surprisingly simple: Focus less on models and more on principles. Across different frameworks and disciplines, many implementation efforts share common elements, such as:
- stakeholder engagement
- feedback loops
- adaptation to context
- continuous learning
Rather than developing new frameworks, researchers could focus on identifying core principles of change that apply across settings. This approach would align with the classic design principle: KISS — “Keep It Simple, Stupid.” Simplifying the field could make implementation science more usable for practitioners on the ground.
The Future of Implementation Science
Ultimately, the article calls for a shift in perspective. Instead of thinking about translating research into practice, we should think about building evidence through practice. That means:
- engaging practitioners and communities from the start of research projects
- valuing multiple forms of evidence
- focusing on impact rather than academic metrics
- integrating implementation into professional education
When researchers, practitioners, and communities co-create solutions together, the supposed “research–practice gap” may begin to disappear. Not because we finally bridged it—but because we realized it was never as wide as we thought.