17 Years: Why Communities Still Struggle to Put Research into Action
By Jon Scaccia

Across the U.S., health departments spend millions training providers, distributing toolkits, and launching new prevention programs. Yet, years later, adoption often lags. For example, clinics may struggle to integrate smoking cessation interventions despite robust evidence that they save lives. Why? Because programs don’t just need to work in theory—they must be delivered efficiently, affordably, and at scale in messy, real-world settings.

A recent article in Implementation Science argues that public health can address this challenge by drawing on a framework called intervention optimization. Led by Dr. Kate Guastaferro and colleagues at NYU, the study shows how combining implementation science with optimization methods can create smarter, leaner strategies that work for both providers and communities.

The Problem: Evidence Without Impact

Over the past two decades, researchers have developed thousands of evidence-based interventions (EBIs)—from diabetes prevention to mental health supports. But as Guastaferro and colleagues note, many “fall short of achieving their intended public health impact.”

The bottleneck isn’t that interventions don’t work. It’s that implementation strategies—the trainings, reminders, redesigns, and supports meant to help providers adopt programs—are often delivered as one-size-fits-all “bundles.” These packages are tested in randomized controlled trials (RCTs), but the trials rarely reveal which pieces matter most, how they interact, or whether they are affordable in practice.

A New Approach: MOST

Here’s where the Multiphase Optimization Strategy (MOST) comes in. Originally developed in behavioral science, MOST helps researchers systematically test multiple components of an intervention to identify the most effective and efficient package. Think of it as stress-testing each piece of a strategy to see what combination gives you the “biggest bang for the buck.”

The process unfolds in three stages:

  1. Preparation: Identify potential strategies (e.g., training, guides, supervision) and define what “success” looks like given local constraints.
  2. Optimization: Use experimental designs—often factorial trials—to test different combinations of strategies, showing not only whether they work but how they work together (a minimal sketch of a factorial layout follows this list).
  3. Evaluation: Compare the optimized package to the usual practice in real-world trials.
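
To make the factorial idea concrete, here is a minimal Python sketch (our own illustration, not code from the study) of how four two-level strategy components cross into a full set of experimental conditions. The component names anticipate the smoking cessation example below; everything else is assumed for illustration.

    from itertools import product

    # Four candidate strategy components, each with two levels
    # (names mirror the hypothetical smoking cessation example).
    factors = {
        "training": ["basic", "enhanced + booster"],
        "treatment_guide": ["no", "yes"],
        "workflow_redesign": ["no", "yes"],
        "supervision_sessions": [1, 3],
    }

    # Crossing every level of every factor yields 2 x 2 x 2 x 2 = 16
    # experimental conditions, the cells of a full factorial trial.
    conditions = [dict(zip(factors, levels))
                  for levels in product(*factors.values())]

    print(len(conditions))  # 16
    for i, condition in enumerate(conditions, start=1):
        print(f"Condition {i}: {condition}")

Assigning clinics across these cells is what lets a factorial trial estimate each component's individual effect and the interactions between components, rather than delivering a single all-or-nothing bundle comparison.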

A Smoking Cessation Example

To illustrate, the authors describe a hypothetical study aimed at improving clinic adoption of smoking cessation counseling.

  • Candidate strategies included:
    1. Training (basic vs. enhanced with booster)
    2. A treatment guide (yes/no)
    3. Workflow redesign (yes/no)
    4. Supportive supervision (one vs. three sessions)
  • Optimization trial: Clinics were randomized across the 16 combinations of these strategies (2 × 2 × 2 × 2 = 16, a full factorial of the four two-level components). Adoption rates ranged dramatically—from as low as 13% to as high as 100%, depending on the mix.
  • Decision-making tool (DAIVE): Using decision analysis, the team could weigh adoption outcomes against resource use (like provider time). For instance, one package delivered nearly full adoption but required triple the supervision hours, raising tough but practical questions about sustainability (a toy illustration follows this list).
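
DAIVE does this weighing formally; the sketch below is only a toy version of the underlying logic, and every number in it (adoption rates, hours, costs, budget) is assumed for illustration rather than taken from the study.

    # Toy decision-analysis sketch: weigh each candidate package's
    # adoption rate against its resource cost. All figures below are
    # hypothetical; this shows the intuition behind DAIVE, not the tool.

    # (package label, adoption rate, supervision hours per clinic)
    packages = [
        ("full bundle",      1.00, 9.0),  # near-total adoption, triple supervision
        ("lean bundle",      0.85, 3.0),
        ("training only",    0.40, 0.0),
        ("guide + redesign", 0.13, 0.0),
    ]

    HOURLY_COST = 75.0         # assumed cost of one supervision hour
    BUDGET_PER_CLINIC = 400.0  # assumed ceiling an agency can fund

    for label, adoption, hours in packages:
        cost = hours * HOURLY_COST
        affordable = cost <= BUDGET_PER_CLINIC
        # Simple value metric: adoption per dollar (+1 avoids divide-by-zero).
        value = adoption / (cost + 1.0)
        print(f"{label:18s} adoption={adoption:.0%} "
              f"cost=${cost:6.2f} affordable={affordable} value={value:.4f}")

Under these assumed numbers, the near-perfect package busts the budget while the leaner package keeps most of the adoption at a fraction of the cost, which is exactly the sustainability trade-off the authors want decision-makers to see before scale-up.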

This approach demonstrates how implementation can transition from trial-and-error to evidence-informed, efficient design.

Why This Matters for Public Health Practice

For local health agencies, the implications are clear:

  • Do more with less. MOST helps identify which strategies are truly necessary, saving staff time and resources.
  • Adapt to context. What works in a large hospital may not fit a rural clinic. Optimization enables tailoring to constraints such as workforce capacity or funding.
  • Build equity in. By engaging stakeholders early, decision-makers can ensure strategies reflect community priorities and remove structural barriers.
  • Accelerate translation. On average, it takes 17 years for research to be fully incorporated into practice. Optimization shortens this timeline by focusing on efficiency and scalability from the start.

Barriers and Open Questions

Of course, adopting this model isn’t simple. Challenges include:

  • Complex design: Factorial trials and multi-level optimization studies require advanced planning and statistical expertise.
  • Resource needs: While optimization may save resources in the long run, it requires upfront investment in experimental design.
  • Decision rules: Researchers still debate when an optimized strategy is ready to “graduate” to real-world rollout versus further testing.

What’s Next

The authors argue that this approach is especially relevant in settings with tight resource constraints—like community clinics, behavioral health providers, or programs in low- and middle-income countries. It could also be applied beyond clinical care, for example in optimizing public health campaigns, school-based programs, or policy rollouts.

For funders and policymakers, investing in optimization may mean higher initial costs but better long-term sustainability. For practitioners, it provides a means to ensure that strategies align with their reality—not just academic theory.

Join the Conversation

As public health leaders look for new ways to bridge the gap between research and real-world impact, integrating optimization into implementation science could be transformative.

Questions for reflection:

  • How could your agency use optimization to refine current programs?
  • What constraints (time, cost, workforce) most shape your ability to adopt EBIs?
  • Could optimization help identify strategies that improve equity in your community?
