
Who Controls What AI Knows? The New Gatekeepers of Information
In the age of generative AI, not all information is created equal — or equally visible. A new analysis from Fractl reveals that a handful of publishers now dominate the “knowledge base” behind AI assistants like ChatGPT, Gemini, and Copilot. These partnerships between AI companies and major media outlets are reshaping who and what gets seen when people ask questions about health, science, or nearly anything else.
The Rise of AI Media Partnerships
Large language models (LLMs) don’t just absorb data from the open web; they’re increasingly trained and grounded using licensed content from select publishers. When OpenAI or Microsoft signs a content deal with a major news group, that partnership acts like gravity. It pulls certain outlets and the brands or ideas they represent closer to the center of AI-generated answers.
According to Fractl’s data, five publishers dominate AI visibility across platforms:
- WebMD (~1.2 million AI citations)
- BBC (~490,000)
- Forbes (~468,000)
- Business Insider (~398,000)
- People (~345,000)
These names may look familiar — and that’s the point. They’re not just shaping what we read; they’re shaping what AI believes to be true.
What This Means for Public Health Communication
For the public health field, this concentration of “AI attention” presents both opportunity and risk. On one hand, WebMD’s dominance means medically vetted information stays prominent in AI outputs. On the other, smaller community-based organizations, open-access journals, and public health advocates may find their voices fading to the edges of the digital conversation.
In effect, AI assistants now have editorial biases baked in: not intentionally malicious, but structurally inevitable. They reflect where training data comes from, what’s legally licensed, and which outlets the algorithms treat as trustworthy.
That means if your public health message isn’t represented in these central “partner webs,” you’ll work harder (and often pay more) to be found, quoted, or summarized accurately by AI systems.
The New Strategy: Model-Aware Publishing
Fractl’s key recommendation is clear: it’s not enough to have a media strategy anymore — you need a model strategy. That means:
- Publishing original, verifiable data and open-access reports that models can trust.
- Partnering with or syndicating content through high-authority outlets favored by AI assistants.
- Ensuring that your organization’s research and press releases live on stable, citable URLs.
- Framing health data and findings in the clear, structured formats that models weigh most heavily (datasets, reproducible methods, transparent authorship).
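One practical way to act on the “structured formats” and “stable, citable URLs” points is to embed machine-readable metadata alongside published reports. Below is a minimal sketch in Python that builds a schema.org `Dataset` block as JSON-LD, the kind of markup crawlers and model pipelines can parse reliably. All names, URLs, and the example dataset here are hypothetical placeholders, not recommendations from the Fractl analysis:

```python
import json

def dataset_jsonld(name, description, url, publisher, license_url):
    """Build a schema.org Dataset JSON-LD string for embedding in a page's <head>."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": name,
        "description": description,
        "url": url,  # should be a stable, citable URL
        "publisher": {"@type": "Organization", "name": publisher},
        "license": license_url,
    }, indent=2)

# Hypothetical example: a county-level vaccination coverage report
block = dataset_jsonld(
    name="County Vaccination Coverage, 2024",
    description="Quarterly vaccination coverage rates by county.",
    url="https://example.org/data/vaccination-2024",
    publisher="Example Public Health Department",
    license_url="https://creativecommons.org/licenses/by/4.0/",
)
print(f'<script type="application/ld+json">\n{block}\n</script>')
```

The same approach extends to schema.org types like `ScholarlyArticle` or `MedicalWebPage`; the design point is simply that explicit, structured, openly licensed metadata gives automated systems something unambiguous to cite.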
Why It Matters
AI is quickly becoming the first touchpoint for health information. Whether people are searching for “heart disease prevention,” “local vaccine data,” or “community health programs,” the assistants they consult will draw from this limited set of sources.
For public health professionals, the takeaway is urgent: visibility now depends on where AI looks. The organizations that learn to publish for both humans and machines, prioritizing trustworthy, open, and verifiable content, will define the next generation of health communication.