The AI-Driven Infodemic Threatening Public Health
By Jon Scaccia

A few years ago, “publish or perish” was an academic cliché. Today, it’s being supercharged by machines. Since ChatGPT burst onto the scene in late 2022, artificial intelligence has infiltrated nearly every corner of science—writing, reviewing, and even summarizing research papers. The result? A flood of AI-generated studies that could overwhelm even the most diligent public health professional.

A new perspective article published in Frontiers in Public Health by Guglielmo Arzilli and colleagues warns that we may be heading toward an AI-driven infodemic—a deluge of scientific content that looks authoritative but may be riddled with errors, bias, or outright fabrication.

From “Publish or Perish” to “Prompt and Publish”

Before AI, scientists already produced staggering volumes of research—some 46,000 academic journals worldwide and counting. But large language models like GPT now make it possible to churn out papers, peer reviews, and meta-analyses in a fraction of the time.

That speed is seductive. With a few prompts, researchers can polish text, generate summaries, or even draft entire manuscripts. Yet this acceleration comes with a dark side: quantity over quality. Under pressure to meet productivity metrics or secure funding, scientists may rely too heavily on AI tools, leading to an avalanche of repetitive, shallow, or even fraudulent work.

As the authors note, “LLMs become catalysts of the ‘publish or perish’ culture.” The faster we write, the less time we spend thinking critically about what we write.

The Rise of the AI-Generated Hoax

AI’s ability to fabricate convincing but false information isn’t theoretical—it’s already happening. In one infamous case, a predatory journal published an entirely AI-generated paper falsely attributed to a well-known academic, complete with fake data and references.

These examples reveal how easily credibility can be manufactured in a world where journal logos, digital object identifiers (DOIs), and professional formatting signal trust. When a fraudulent article can look indistinguishable from a legitimate one, the consequences ripple outward—from bad science to bad health policy to bad patient care.

Who Gets Hurt the Most

Not all health professionals have equal ability to navigate this new landscape. Clinicians in lower-income countries—already struggling with limited access to high-quality journals—may be particularly vulnerable to misinformation. Many rely on open-access articles, which can include both rigorous research and low-quality or predatory publications.

AI could widen this divide. Well-resourced researchers might harness AI responsibly to speed up discovery, while those without proper training or oversight could unknowingly amplify false information. Worse, overreliance on machine-generated summaries could “deskill” future professionals, making them less adept at critical appraisal and independent thinking.

The end result? A growing gap between those who can discern truth from noise—and those who can’t.

An Infodemic With Real-World Consequences

We’ve already seen how misinformation can cost lives. During the COVID-19 pandemic, false claims about hydroxychloroquine spread widely, influencing public behavior and even treatment guidelines before being debunked. Now imagine similar distortions—multiplied by AI speed—emerging around new vaccines, climate impacts, or emerging pathogens.

The “AI-driven infodemic,” as Arzilli et al. put it, isn’t just about fake news—it’s about fake evidence. And when evidence itself becomes unreliable, the foundation of public health decision-making starts to crumble.

What This Means in Practice

To prevent an AI-fueled collapse in trust, the authors call for a multipronged response—part regulation, part education. Their message: the future is now, and it’s time to act.

Key actions for the public health community:

  • Establish clear AI-use guidelines. Journals should require disclosure of AI assistance and hold authors accountable for accuracy.
  • Train for digital literacy. Health professionals must learn to spot AI-generated content, assess data quality, and question what looks “too polished.”
  • Promote ethical AI adoption. Use AI as a support tool—not as a substitute for human reasoning, peer judgment, or moral responsibility.
  • Equalize access. Make training on ethical AI and scientific integrity freely available, especially for professionals in low-resource settings.

These steps echo a broader ethical imperative: ensuring that the same technologies accelerating science don’t erode its credibility.

The Balancing Act Ahead

AI is here to stay, and that’s not inherently a bad thing. When used wisely, generative tools can democratize access to knowledge, speed up systematic reviews, and help non-native English speakers publish in top journals. But as the Frontiers article warns, “regulation alone cannot be the only way to manage such a disruptive phenomenon.”

What’s needed is a cultural recalibration—a renewed commitment to curiosity, skepticism, and reflection. In short, slowing down just enough to ask: Is this good science—or just fast science?

What’s Next?

The public health world faces a paradox: the same tools that could democratize science may also drown it in noise. Navigating this tension will require vigilance, transparency, and new norms of trust.

Questions for reflection:

  • How might your organization verify the credibility of AI-generated evidence?
  • What training or safeguards does your team need to stay ahead of misinformation?
  • And most importantly—how can we use AI to strengthen, not erode, the integrity of public health science?
