How AI Is Changing the Fight Against Health Misinformation — Lessons from APHA 2025
By Jon Scaccia

The American Public Health Association’s 2025 conference made one thing clear: the misinformation crisis is now one of the defining public health challenges of our time. From vaccine hesitancy and fluoride myths to viral conspiracy theories about reproductive health, misinformation can become a social determinant of health. But this year, something new emerged from the noise: a growing confidence that artificial intelligence (AI) can help turn the tide.

Across sessions led by researchers, health leaders, and communication experts, a clear narrative formed. AI is becoming a frontline tool in the battle for trust, truth, and equity.

1. Infodemiology Goes Mainstream

The de Beaumont Foundation’s team made waves by introducing their infodemiology initiative: essentially, epidemiology for information. By tracking the spread of false claims across digital platforms, infodemiology provides public health with a new surveillance system for misinformation. This discipline, now integrated into several state health departments and academic programs, uses AI to detect early warning signs of false narratives before they spiral into full-blown crises.

Their Misinformation Working Group, which brings together nonprofits, health agencies, and academic institutions, uses AI-powered data-synthesis tools to identify where misinformation is emerging and how best to respond collaboratively. The message was clear: just as we track infectious diseases, we must now track infectious information.
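
To make the surveillance analogy concrete, here is a minimal sketch of what an early-warning signal for an emerging false narrative could look like, assuming you already have daily counts of posts repeating a specific claim. The counts, window, and threshold below are purely illustrative, not the working group’s actual methodology.

```python
# Toy "infodemiology" signal: flag days when mentions of a claim rise well
# above a rolling baseline. All numbers here are hypothetical.
from statistics import mean, stdev

# Hypothetical daily counts of posts repeating a specific false claim
daily_mentions = [12, 9, 14, 11, 10, 13, 12, 15, 11, 48, 95, 160]

def flag_spikes(counts, window=7, z_threshold=3.0):
    """Return indices of days whose count exceeds the rolling baseline mean
    by more than z_threshold standard deviations."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

print(flag_spikes(daily_mentions))  # [9, 10, 11]: the last three days stand out
```

In practice, the hard part isn’t the statistics; it’s deciding which claims to count and who responds when the alert fires, which is precisely what a collaborative structure like the working group is meant to answer.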

2. The Rise of Real-Time AI Monitoring Systems

One of the most compelling presentations came from Innov8AI, which unveiled its Counter Threat Response (CTR) platform: a real-time AI system capable of scanning and classifying social media content. Using natural language processing and big data analytics, CTR can distinguish accurate from misleading information with over 90% accuracy. Its results were even cross-validated against CDC-confirmed measles outbreaks, showing that digital misinformation patterns closely mirror disease spread.
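
For readers curious what “classifying social media content” looks like under the hood, here is the general shape of a text classifier in Python. The toy training examples and model choice are assumptions for illustration only and have no connection to Innov8AI’s actual CTR pipeline or data.

```python
# Minimal text-classification sketch (TF-IDF features + logistic regression).
# The handful of labeled posts below are invented for illustration; a real
# system would train on large, carefully labeled and audited datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Measles vaccine causes autism, doctors hide the truth",
    "MMR vaccination is safe and prevents measles outbreaks",
    "Fluoride in water is a government mind-control plot",
    "Community water fluoridation reduces tooth decay",
]
train_labels = ["misleading", "accurate", "misleading", "accurate"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_posts, train_labels)

print(classifier.predict(["New post claims the MMR shot is a cover-up"]))
```

A real system would also need multilingual coverage, continual retraining as narratives mutate, and human review of borderline calls.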

Innov8AI’s approach, which pairs AI detection with localized, culturally responsive messaging, represents a new paradigm: precision prevention for the information age. As misinformation narratives mutate faster than any pathogen, such systems offer an early-warning infrastructure for digital health threats.

3. Chatbots That Inform and Empower

A study presented on an AI-enabled sexual and reproductive health chatbot demonstrated how AI can improve access to trustworthy health information while reducing disparities. The chatbot, designed for underserved populations and tested among English- and Spanish-speaking users, scored high on usability and trustworthiness. Participants praised its privacy, empathy, and accessibility, qualities too often missing from public health websites.

While not perfect (users wanted more personalization and depth), the results showed how AI, when designed ethically and inclusively, can extend public health’s reach to those who distrust traditional institutions.
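
One plausible design for this kind of tool is to route a user’s question, in either language, to vetted answers rather than generating free-form text. The sketch below illustrates that pattern only; the intents, keyword lists, and canned responses are hypothetical and far simpler than the study’s chatbot.

```python
# Toy bilingual routing sketch: match a question to a vetted answer in the
# user's language, and hand off to a human when nothing matches. Everything
# below is illustrative, not the study's actual system.
VETTED_ANSWERS = {
    ("contraception", "en"): "Several contraceptive methods are safe and effective; a clinician can help you choose.",
    ("contraception", "es"): "Varios métodos anticonceptivos son seguros y eficaces; un profesional de salud puede ayudarle a elegir.",
}

KEYWORDS = {
    "contraception": {"birth control", "contraception", "anticonceptivo", "anticonceptivos"},
}

SPANISH_HINTS = {"anticonceptivo", "anticonceptivos", "qué", "cómo", "dónde"}

def respond(message: str) -> str:
    text = message.lower()
    lang = "es" if any(hint in text for hint in SPANISH_HINTS) else "en"
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return VETTED_ANSWERS[(intent, lang)]
    # Fall back rather than guess: hand off to a human or a vetted resource list.
    return ("Let me connect you with a health educator."
            if lang == "en" else "Le conectaré con un educador de salud.")

print(respond("¿Qué anticonceptivos son seguros?"))
```

Falling back to a human handoff instead of guessing is one way to preserve the trustworthiness participants valued.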

4. Pre-Bunking and Digital Literacy: Human + AI Synergy

Several sessions emphasized that AI alone can’t “fix” misinformation; it has to work alongside human expertise. For example, a randomized controlled trial on HPV vaccine messaging found that pre-bunking (offering strong, accurate messages before exposure to misinformation) significantly increased vaccination intent. Researchers suggested integrating these psychological insights into AI systems that tailor messaging for different audiences.
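
What that tailoring could look like in code: below is a hypothetical sketch that picks a pre-bunking message from a library of vetted statements based on a user’s stated concern. The audience segments and wording are illustrative, not the trial’s actual messages.

```python
# Hypothetical audience-tailored pre-bunking: choose a vetted, pre-emptive
# message keyed to a stated concern, before misinformation exposure.
# Segments and message text are illustrative only.
PREBUNK_LIBRARY = {
    "safety": "HPV vaccines have been monitored in millions of people; serious side effects are rare.",
    "necessity": "HPV causes most cervical cancers, and protection is strongest when vaccination happens before exposure.",
    "default": "HPV vaccination is routinely recommended and prevents several cancers.",
}

def select_prebunk(stated_concern: str) -> str:
    concern = stated_concern.lower()
    if "side effect" in concern or "safe" in concern:
        return PREBUNK_LIBRARY["safety"]
    if "need" in concern or "necessary" in concern:
        return PREBUNK_LIBRARY["necessity"]
    return PREBUNK_LIBRARY["default"]

print(select_prebunk("Is the HPV shot safe for my 12-year-old?"))  # safety message
```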

Other presenters focused on the intersection of AI and digital literacy. As one panel noted, the “infodemic” is about the erosion of critical thinking and media skills. New frameworks, like an expanded eHealth Literacy Lily Model, are incorporating AI tools that teach users how to question sources, identify bias, and engage responsibly with digital content.

5. Trust, Transparency, and Ethical Leadership

Dr. Boris Lushniak, former Acting U.S. Surgeon General, reminded attendees that AI tools can’t substitute for ethical leadership. Public trust, he argued, begins with transparency, empathy, and consistent communication. The role of public health professionals is to model the behaviors they hope to inspire by speaking credibly, acknowledging uncertainty, and addressing misinformation with both humility and authority.

AI can amplify these efforts, but only if it’s guided by values. The best systems are transparent about their data, accountable in their methods, and grounded in the lived experiences of the communities they serve.

6. Community Partnerships: The Human Firewall

Even the most advanced algorithms can’t replace human connection. Projects using boot camp translation, a rapid, participatory method in which community health workers (CHWs) translate scientific information into culturally relevant messages, demonstrated the vital role of local expertise. In one case, CHWs co-created videos that debunked health myths in both English and Spanish, making complex science accessible and trustworthy.

These hybrid models, which fuse AI-powered monitoring with human-led translation and storytelling, represent the future of misinformation response: adaptive, equitable, and community-centered.

7. Toward an Equitable Information Ecosystem

A recurring theme throughout APHA 2025 was that misinformation doesn’t just distort facts; it widens health equity gaps. It preys on distrust born of historical harm, exclusion, and cultural marginalization. AI, when designed thoughtfully, presents an opportunity to bridge these divides.

By building inclusive data sets, co-designing with communities, and ensuring transparency in algorithmic decisions, AI can help public health rebuild credibility. But technology alone isn’t the solution; it’s a multiplier for human integrity, collaboration, and empathy.

The Bottom Line

At APHA 2025, misinformation was recognized as a systemic threat requiring data science, ethics, and empathy in equal measure. The message was both sobering and hopeful: the same tools that fuel misinformation can be redirected to combat it.

AI can’t replace human judgment, but it can accelerate the spread of truth. And in an era where trust is public health’s most endangered resource, that may be the most powerful innovation of all.
