Fast Isn’t Always Better: What the D&I Field Still Gets Wrong and Where We Go From Here
By Jon Scaccia

Each year, the Annual Conference on the Science of Dissemination and Implementation in Health (D&I) offers a front-row seat to the cutting edge of how evidence is translated into action. The 17th conference, held in Arlington, VA, focused on a deceptively simple theme: “Moving Fast and Slow: Optimizing the Pace of Implementation.” While some studies advanced the field with innovative tools and frameworks, others highlighted persistent gaps, particularly in equity, sustainability, and the practical applicability of rapid implementation models in the real world.

This blog explores the key themes, standout abstracts, and glaring absences that deserve our collective attention as we rethink the future of implementation science.

Key Themes: What We’re Talking About in D&I Science

1. The Tempo of Change: Fast vs. Thoughtful Implementation: The conference theme itself underscored a growing tension in the field—should we be racing to scale, or taking our time to ensure adaptation, sustainability, and trust-building? Multiple abstracts explored both “sprint” and “marathon” models, offering important lessons from contexts like COVID-19, primary care, and behavioral health systems.

2. Behavioral Health and Substance Use: From Evidence to Action: A large share of abstracts tackled substance use, especially the implementation of Screening, Brief Intervention, and Referral to Treatment (SBIRT) and Medication for Alcohol Use Disorder (MAUD). These studies showed promise but also illustrated persistent issues around fidelity, workforce readiness, and EHR integration.

3. Equity and Measurement-Based Care (MBC) in Safety Net Systems: Several papers examined the rollout of MBC in safety-net settings. While technically sound, these studies revealed systemic inequities in digital access, language barriers, and cultural fit, raising questions about whether our tools are equitable by design or only equitable in theory.

4. AI and Digital Tools: A New Frontier: Papers exploring AI-based training and feedback tools (e.g., for motivational interviewing or insomnia treatment) hinted at the growing role of digital solutions in accelerating implementation. Yet, many required additional support to ensure quality and equity, suggesting that AI cannot fully replace human-centered strategies.

The Final Frontier, or the Next Frontier

Three Promising Abstracts to Watch

STUN Trial (Daniel Jonas et al.) – S1/S7

The STUN Trial demonstrated that practice facilitation in primary care significantly increased alcohol screening and brief intervention delivery, raising screening rates from 20% to 50% and brief counseling from 0% to 40%. This was especially impressive given that implementation occurred during the pandemic.

Why It Matters: This finding demonstrates that modest investments in facilitation and EHR support can lead to substantial behavioral changes in busy primary care settings. It also suggests that with the right structure, even small-to-medium-sized practices can integrate evidence-based approaches at scale.

AI vs. Human Feedback Tools for Motivational Interviewing (Ironside et al.) – S4

This study compared provider engagement with AI-generated vs. expert human feedback. Notably, provider engagement with AI was influenced by individual attitudes, while human feedback engagement was shaped by clinic climate.

Why It Matters: This is one of the few studies trying to empirically unpack how AI performs in real implementation settings. As health systems increasingly turn to tech for scale, understanding the nuanced drivers of engagement is crucial for designing effective hybrid solutions.

Team Science Community Toolkit (Hartstein et al.) – S16

This toolkit provided over 50 interactive and practical resources to support community-academic partnerships. The feedback was overwhelmingly positive, showing strengthened relationships, increased trust, and improved equity in research dissemination.

Why It Matters: It addresses one of the deepest-rooted problems in implementation science: how to authentically and sustainably involve communities. The toolkit’s integration of co-design and shared decision-making is exactly what “slow but steady” implementation should look like.

Gaps in the Agenda: What Was Missing?

Despite its depth, the conference agenda exposed several blind spots:

Limited Global South Representation: Although a few presenters from low- and middle-income countries (LMICs) participated, there was minimal representation in the published abstracts. D&I science still risks being a predominantly Global North endeavor, with less attention to structural inequities in resource-limited settings. This is really worth calling out, and it reminds me of the outstanding work of the Global Implementation Society. This has become a major passion of mine: implementation IS NOT LIMITED TO WEIRD COUNTRIES. (WEIRD is an acronym for “Western, Educated, Industrialized, Rich, and Democratic.” So much of scientific research relies on WEIRD participants in WEIRD settings.)

Overemphasis on Health Systems, Underfocus on Policy Levers: Few abstracts explored the interface between implementation science and public policy. With Medicaid cuts looming and prevention programs at risk, we need stronger alignment between D&I science and policy advocacy. Reflecting on my ongoing work as a community psychologist, we in the field tend to lean too heavily on programs over policy. This is where the work of Trestlelink is so critical. Policymakers must be able to understand and engage with evidence to shape policies that achieve desired outcomes.

Insufficient Attention to Youth and Education Settings: The educational system remains a critical frontier for implementation science, particularly for behavioral and mental health interventions. Yet few abstracts tackled schools or pediatric populations in any meaningful way. So, you want money to research this? The William T. Grant Foundation has been funding this EXACT domain for years. Apply here!

Where Do We Go From Here? A Future-Facing Agenda

If the field wants to remain relevant, the next phase of D&I science must:

1. Embrace Pace as Contextual, Not Universal. Rather than framing implementation speed as inherently good or bad, we should measure pace by its appropriateness to the context, equity implications, and complexity of the intervention.

2. Embed Equity Upstream. Equity should be more than a reporting category. It must shape intervention design, evaluation metrics, workforce planning, and even reimbursement mechanisms.

3. Close the Loop Between Policy, Research, and Practice. D&I science should not just support health systems but inform policy change. This includes creating policy-sensitive models and tools for public health departments, community organizations, and advocacy groups.

4. Normalize Failure and Re-Design. As one abstract put it, not all implementation failures are bad; some reflect misapplied frameworks. But we need better mechanisms to study and learn from “failed” implementations so we can refine our theories. As my good friend Dr. Soma Saha, who says a lot of other things worth paying attention to, puts it: “We need to learn to fail forward.”

5. Integrate Human + AI Approaches. Digital tools can’t solve complex implementation problems alone. The most promising models blend AI scalability with human wisdom, especially in training, feedback, and adaptation.

How PubTrawlr and This Week in Public Health Are Filling the Gaps

While the implementation science field continues to grapple with pace, equity, and accessibility, PubTrawlr and This Week in Public Health are already putting many of these principles into practice, not just studying them. We’re building tools and platforms that don’t just analyze evidence; they accelerate its understanding, democratize access to it, and amplify its impact across audiences too often left out of the research conversation.

1. Turning Research into Readable Action: We transform peer-reviewed research into plain-language summaries tailored for real-world users, including public health professionals, community coalitions, and parents and patients. Our AI-driven pipeline scans, synthesizes, and translates implementation studies in near-real time. That means the science doesn’t wait for the next grant cycle to reach the people who need it most.
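As a back-of-the-envelope illustration, the scan-synthesize-translate flow described above might look something like this. To be clear, this is a toy sketch, not PubTrawlr's actual code: the keyword filter, first-sentence summary, and jargon glossary are hypothetical stand-ins for the real AI components.

```python
# Hypothetical sketch of a scan -> synthesize -> translate pipeline.
# All names and the glossary below are illustrative, not a real API.

JARGON = {
    "randomized controlled trial": "a study where people are randomly assigned to groups",
    "implementation fidelity": "how closely a program follows its original design",
}

def scan(abstracts, keywords):
    """Keep only abstracts that mention at least one keyword (the 'scan' step)."""
    return [a for a in abstracts if any(k in a.lower() for k in keywords)]

def synthesize(abstract, n_sentences=2):
    """Naive synthesis: keep the first few sentences as a short summary."""
    sentences = [s.strip() for s in abstract.split(".") if s.strip()]
    return ". ".join(sentences[:n_sentences]) + "."

def translate(summary):
    """Swap technical jargon for plain-language phrasing."""
    for term, plain in JARGON.items():
        summary = summary.replace(term, plain)
    return summary

abstracts = [
    "We tested SBIRT in primary care with a randomized controlled trial. "
    "Clinics varied in implementation fidelity. Results were mixed.",
    "A study of crop rotation on soybean yields.",
]
for a in scan(abstracts, ["sbirt", "screening"]):
    print(translate(synthesize(a)))
```

The real system would replace the naive summary and glossary lookup with AI-driven synthesis and translation, but the three-stage shape of the pipeline is the same.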

2. Supporting Implementation in Under-Resourced Settings: Through projects with local health departments, rural coalitions, and frontline community-based organizations, we’re actively addressing one of the conference’s most notable gaps: the need for tailored, sustainable D&I tools that work outside of academic institutions. Whether it’s designing dashboards to support local opioid response or building curated knowledge libraries for CHWs, we’re showing what it looks like to decentralize D&I capacity.

3. Bridging the Research–Practice–Policy Divide: Our work doesn’t stop at dissemination. We support decision-makers—at the clinic, community, and policy levels—by delivering evidence in formats that fit their workflow: newsletters, implementation guides, RAG-based AI copilots, and curated article feeds that elevate emerging priorities. As federal and state budgets tighten, actionable intelligence must be faster, smarter, and more affordable. That’s the niche we fill.

4. Embedding Equity from Day One: From platform design to content curation, equity is not an afterthought. We prioritize literature that addresses racial, geographic, and structural disparities and elevate research from underrepresented scholars and LMICs. We also involve community partners in co-creating our tools and sharing power—not just data.

5. Building a Living Laboratory for Implementation Science: Unlike traditional academic outlets, This Week in Public Health is an iterative and responsive platform. We track what readers engage with, how information spreads, and what gaps still exist in the knowledge-to-practice pipeline. In essence, we’re building an open-source ecosystem for applied D&I research—one that tests, learns, and adapts in real time.

Final Thought

Implementation science is maturing, but maturity requires humility. We can’t assume that faster is always better, or that technology is always the answer. The 17th Annual Conference offered many breakthroughs, but also a mirror. If we want to optimize the pace of implementation, we must first ask: who are we moving for, and what are we leaving behind?

Let’s ensure the next phase of D&I science leaves no one behind. Doing something well means doing it right.
