Most COVID-19 Tweets Linked to Low-Quality Info
March 2020: the world was locking down, and millions of people opened Twitter for answers. What they found was a storm—data dashboards, rumors, home remedies, and official updates jumbled together. Amid that digital noise, how many sources were actually credible?
A new study from Concordia University’s Rozita Haghighi and Mohsen Farhadloo answers that question with hard numbers, and a wake-up call for public health leaders. Their analysis of the 100 most retweeted COVID-19 websites on Twitter (now X) found that roughly nine out of ten failed to meet basic quality and transparency benchmarks. Only 11% qualified as “high quality” under the DISCERN instrument, a rating tool widely used in medical communication research.
The Study in Brief
The researchers focused on March 2020—the month the World Health Organization declared COVID-19 a pandemic. Using a dataset of 28 million English-language tweets, they identified the most-shared URLs and evaluated each website linked to those tweets.
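Identifying the most-shared links in a tweet corpus is, at its core, a frequency count. The sketch below illustrates the idea with a handful of made-up tweet records standing in for the study's 28-million-tweet dataset; the field names and domains are illustrative assumptions, not the authors' actual data format.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical tweet records; the study used ~28M English-language tweets
# from March 2020. Field names here are assumptions for illustration.
tweets = [
    {"text": "Stay informed", "urls": ["https://who.int/covid"]},
    {"text": "New dashboard", "urls": ["https://example-news.com/story"]},
    {"text": "Official guidance", "urls": ["https://who.int/covid"]},
]

def top_shared_domains(tweets, n=100):
    """Count how often each linked domain appears; return the n most shared."""
    counts = Counter(
        urlparse(url).netloc
        for tweet in tweets
        for url in tweet["urls"]
    )
    return counts.most_common(n)

print(top_shared_domains(tweets, n=2))
```

In practice the study evaluated full websites rather than bare domains, but the ranking step follows the same count-and-sort pattern.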
Two independent reviewers scored the sites using two gold-standard frameworks:
- JAMA Benchmarks — checks for authorship, attribution, disclosure of conflicts, and currency of information.
- DISCERN Instrument — rates the reliability and clarity of treatment information on a scale of 16 (low) to 80 (high).
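The two frameworks above can be pictured as a simple scoring helper. This is a minimal sketch, not the reviewers' actual rubric: the JAMA criterion names follow the list above, and DISCERN is modeled as 16 items each rated 1 to 5, which yields the 16-to-80 range the instrument reports.

```python
from dataclasses import dataclass

# The four JAMA benchmarks named in the text.
JAMA_CRITERIA = ["authorship", "attribution", "disclosure", "currency"]

@dataclass
class SiteReview:
    name: str
    jama: dict           # criterion name -> met (True/False)
    discern_items: list  # 16 per-item ratings, each 1-5

    def jama_score(self) -> int:
        """Number of JAMA benchmarks met (0-4)."""
        return sum(bool(self.jama.get(c)) for c in JAMA_CRITERIA)

    def discern_total(self) -> int:
        """Sum of 16 DISCERN items rated 1-5, giving the 16-80 range."""
        assert len(self.discern_items) == 16
        return sum(self.discern_items)

# A hypothetical site meeting only two of the four JAMA benchmarks,
# with middling DISCERN ratings across the board.
site = SiteReview(
    name="example.org",
    jama={"authorship": True, "attribution": True,
          "disclosure": False, "currency": False},
    discern_items=[3] * 16,
)
print(site.jama_score(), site.discern_total())  # 2 of 4 benchmarks; DISCERN 48
```

The example site mirrors the study's typical finding: two of four JAMA criteria met, and a DISCERN total well short of the high-quality band.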
The results were sobering:
- 95 of 100 websites met only two of the four JAMA criteria.
- 81 of 100 websites scored low on DISCERN.
- Government and medical center sites performed only moderately better than news or commercial sites.
The Problem: Trust Lost in Translation
At first glance, social media seemed like a public health lifeline—fast, accessible, and global. But speed came at a cost. The study shows that most of the widely shared COVID websites failed to identify their authors or sources. Almost none disclosed funding or conflicts of interest. Even official sites often omitted the date of their last update.
This matters because during a crisis, the public rarely distinguishes between a peer-reviewed source and a blog masquerading as science. A link retweeted thousands of times can look legitimate even when its content isn’t. As Haghighi and Farhadloo note, “The absence of authorship and disclosure creates significant gaps in transparency and quality.” In an infodemic, those gaps become chasms of mistrust.
Beyond Clickbait: What Drove Low Quality?
The worst performers weren’t the fringe conspiracy blogs you might expect. They were mainstream news and human-interest sites, accounting for nearly 70% of the most retweeted sources. Many of their stories focused on individual experiences rather than verified medical facts—powerful for empathy, but weak for evidence. Even university-affiliated pages faltered on clarity and relevance.
One reason, the authors suggest, is that Twitter’s virality rewarded emotion and speed over accuracy. As posts raced to be first, citations and updates fell by the wayside. That dynamic has not disappeared; if anything, algorithmic amplification makes it harder today for trustworthy sources to compete with sensational ones.
What This Means in Practice 🧩
For Public Health Departments
- Audit your digital footprint. Run JAMA/DISCERN-style checks on your own web content before a crisis hits.
- Time-stamp updates. Show citizens when guidance was last reviewed and by whom.
- Collaborate with platforms. Negotiate priority placement for verified public health accounts during emergencies.
For Journalists and Content Creators
- Cite sources and authors prominently. Transparency builds trust and search credibility.
- Resist over-simplifying. Balance narrative appeal with data accuracy—humans need both.
For Social Media Platforms
- Algorithmic accountability. Prioritize verified health content and reduce amplification of unverified claims.
- Invest in “infodemic watch.” Automated tools and human reviewers should flag low-quality health content before it trends.
For Policy Makers
- Mandate disclosure and authorship for digital health content, just as we do for clinical trials.
- Support public education in media and digital health literacy at community levels.
Why It Still Matters Now
The authors remind readers that COVID-19 was a stress test for our digital health ecosystem. The failures they document aren’t unique to that moment—they are systemic weaknesses in how health information is produced and shared. As future outbreaks, climate-related disasters, and AI-generated content reshape the information landscape, those weaknesses could become public-health threats in their own right.
Infodemic management is now a core competency recognized by the World Health Organization. Agencies from Germany to Nigeria are building “social listening systems” to track misinformation in real time. Haghighi and Farhadloo argue that evaluating content quality must be a routine part of that work—not an after-action report.
What’s Next & Barriers to Action
Funding and capacity remain major hurdles. Many local health departments lack staff to manage social media quality reviews or coordinate with platform moderators. Policy mandates for digital authorship and disclosure face pushback over free speech concerns. And while AI tools can spot patterns of misinformation, they still struggle to evaluate nuance and context.
Yet the authors see hope in cross-sector collaboration: public agencies working with newsrooms, academic partners, and tech firms to create shared quality standards. They also urge training communication teams in health literacy and false-information detection, skills now as essential to outbreak response as contact tracing once was.
Conversation Starters 💬
- How does your organization verify the accuracy of what it shares online in real time?
- Would a “health-content transparency label” help restore public trust?
- Could AI-assisted fact-checking be integrated into routine public-health communications?