AI Is Not Just ChatGPT: How We’re Actually Using AI in Public Health (and Philanthropy)
If you work in public health right now, you’ve probably heard some version of this:
“We should use AI.”
But when people say “AI,” what they often mean is: a chatbot that writes things.
Large language models (LLMs) like ChatGPT have dominated the conversation. They draft emails, summarize documents, and generate reports. They’re flashy. They’re visible. They feel futuristic.
But here’s the truth: AI is much bigger than generative text.
And in public health, the most meaningful uses of AI often have nothing to do with writing paragraphs. At This Week in Public Health, we’ve been building and deploying AI systems for two very different use cases:
- Translating and curating the scientific literature (via Retrieval-Augmented Generation, or RAG)
- Powering philanthropic intelligence through FindGrant.ai
Neither of these is “just a chatbot.” Both represent applied AI in the service of decision-making.
Let’s unpack what that actually means.
First: What AI Actually Is (and Isn’t)
Artificial Intelligence is an umbrella term for systems that can:
- Detect patterns in data
- Make predictions
- Classify information
- Retrieve relevant knowledge
- Optimize decisions
- Generate text or images
Generative AI (like GPT-style systems) is one branch of this ecosystem. But public health has been using AI-adjacent methods for decades:
- Predictive modeling for disease spread
- Risk scoring for health outcomes
- Geographic clustering
- Resource allocation optimization
The difference now isn’t that AI suddenly exists. It’s that the tools have become more accessible and more powerful. The risk is that we confuse the most visible form of AI (text generation) with the most useful form of AI (decision support).
How We Use AI at This Week in Public Health
Let’s start with our core mission: Make scientific knowledge usable.
That’s harder than it sounds. Every week, thousands of peer-reviewed public health articles are published. Practitioners don’t have time to sift through them. Even when they do, articles are dense, technical, and often disconnected from policy realities.
We don’t solve that problem with a chatbot. We solve it with Retrieval-Augmented Generation (RAG).
What Is Retrieval-Augmented Generation (RAG)?
RAG is an AI architecture that combines:
- A retrieval system (search + indexing)
- A large language model
- A curated knowledge base
Instead of asking a model to “make something up,” RAG forces the system to:
- Retrieve relevant documents from a vetted database
- Ground its responses in those documents
- Generate output anchored to real sources
In our case, that database includes:
- Peer-reviewed journal articles
- Public health reports
- Policy documents
- Curated research feeds
This changes everything.
Instead of:
“Write something about social vulnerability.”
We ask:
“Retrieve the most relevant, recent, peer-reviewed research on the Social Vulnerability Index (SVI) and health outcomes. Then summarize it for practitioners in plain language.”
The difference is profound. RAG reduces hallucinations. It increases traceability. It keeps the model anchored to real evidence. It’s not replacing expertise. It’s accelerating synthesis.
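To make the retrieve-then-ground pattern concrete, here is a minimal sketch in Python. It is not our production system: the documents, the term-overlap scoring (standing in for a real vector index), and the prompt wording are all illustrative. The point is the shape of RAG: rank vetted sources against the query, then force the model to answer from those sources only, with citations.

```python
# Minimal RAG sketch: retrieve top documents by term overlap,
# then assemble a grounded prompt that cites real sources.
# Documents and scoring are illustrative placeholders.
from collections import Counter

DOCS = {
    "smith_2024": "Secure firearm storage laws are associated with lower youth injury rates.",
    "lee_2023": "School-based nutrition programs improve dietary intake among students.",
    "diaz_2024": "County-level social vulnerability predicts medical debt burden.",
}

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def retrieve(query, docs, k=2):
    """Rank documents by shared-term count with the query (a stand-in for a vector index)."""
    q = Counter(tokenize(query))
    scored = [(sum((q & Counter(tokenize(t))).values()), name) for name, t in docs.items()]
    return [name for score, name in sorted(scored, reverse=True)[:k] if score > 0]

def build_grounded_prompt(query, docs):
    """Assemble a prompt that instructs the model to answer from retrieved sources only."""
    context = "\n".join(f"[{name}] {docs[name]}" for name in retrieve(query, docs))
    return (
        "Answer using ONLY the sources below, citing them by id.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

prompt = build_grounded_prompt("What does firearm storage evidence show?", DOCS)
print(prompt)
```

Notice what the prompt does *not* contain: the unrelated nutrition and medical-debt papers never reach the model, so it cannot cite them. That is the traceability gain in miniature.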
Why This Matters for Public Health
Public health decision-making is time-sensitive. Agencies need to know:
- What does the evidence say about firearm storage policies?
- What interventions reduce belief in conspiracy theories?
- What works in school-based nutrition programs?
- What predicts medical debt burden across counties?
RAG allows us to:
- Continuously ingest new research
- Match emerging news trends with peer-reviewed findings
- Generate practitioner-friendly summaries
- Maintain an evidence trail
This is AI as knowledge infrastructure, not entertainment. It’s closer to a continuously updating literature review engine than a chatbot. And that distinction matters.
Beyond Literature: AI for Philanthropic Intelligence (FindGrant.ai)
Now let’s pivot to something very different. FindGrant.ai is built on similar AI principles—but applied to funding ecosystems. Instead of ingesting PubMed feeds, we ingest:
- IRS Form 990 data
- Foundation grant histories
- Peer organization funding flows
- Donor-advised fund distributions
This is structured financial data, not text prompts. We use AI to:
- Identify which foundations fund organizations like yours
- Detect patterns in award sizes and frequencies
- Analyze geographic funding concentrations
- Surface alignment opportunities
This is not generative fluff. It’s data-informed (not data-driven!) inference. When a nonprofit leader asks:
“Who is actually likely to fund us?”
We don’t generate a guess. We analyze real historical funding flows and match patterns. That’s AI as pattern recognition and decision support.
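A toy sketch of that pattern-matching idea, in Python. Every record and weight here is invented for illustration; real IRS 990 analysis involves far richer features (award sizes, recency, peer-network flows). The core move is the same: score funders against an organization's profile using their actual grant histories, not a language model's guess.

```python
# Illustrative funder-matching sketch: rank funders by how well their
# historical grants match an org's focus area and geography.
# All records and weights below are made up for demonstration.

GRANTS = [
    {"funder": "Health Forward Fund", "focus": "public health", "state": "MO", "amount": 75_000},
    {"funder": "Health Forward Fund", "focus": "public health", "state": "KS", "amount": 50_000},
    {"funder": "Bright Futures Trust", "focus": "education", "state": "MO", "amount": 120_000},
    {"funder": "Civic Data Foundation", "focus": "public health", "state": "CA", "amount": 40_000},
]

def match_funders(org_focus, org_state, grants):
    """Score each funder: +2 per past grant in the org's focus area, +1 per grant in its state."""
    scores = {}
    for g in grants:
        score = (g["focus"] == org_focus) * 2 + (g["state"] == org_state)
        scores[g["funder"]] = scores.get(g["funder"], 0) + score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = match_funders("public health", "MO", GRANTS)
print(ranked)  # Health Forward Fund ranks first: repeated focus-area grants, one in-state
```

The output is an evidence trail, not a generated answer: every point in a funder's score traces back to a specific historical grant.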
Two Use Cases, One Underlying Principle
Whether we’re working with scientific literature (This Week in Public Health) or philanthropic ecosystems (FindGrant.ai), the underlying AI philosophy is the same:
- Ground the system in real data.
- Use AI to synthesize and surface patterns.
- Keep humans in the loop.
This is fundamentally different from treating AI as a magic writing machine.
AI in Public Health: The Bigger Landscape
Let’s zoom out. AI in public health can include:
Predictive Modeling
- Forecasting outbreaks
- Identifying high-risk populations
- Predicting hospitalization rates
Risk Stratification
- Identifying communities with overlapping vulnerabilities
- Targeting interventions more precisely
Optimization
- Allocating limited public health resources
- Modeling workforce deployment
Knowledge Synthesis (Our Lane)
- Continuous literature review
- Evidence translation
- Research-to-practice acceleration
Philanthropic and Systems Mapping (FindGrant.ai)
- Funding network analysis
- Capital flow transparency
- Strategic positioning insights
When we reduce AI to “LLMs that write,” we shrink the field’s strategic imagination.
The Governance Question
Public health cannot adopt AI casually. Before every deployment, we must ask:
- Is the system grounded in validated data?
- Are we transparent about sources?
- Is there human oversight?
- Have we tested for bias?
- Can we explain how decisions are influenced?
RAG reduces hallucination risk, but it does not eliminate responsibility. Predictive funding analysis can surface patterns, but humans still interpret and decide. AI should augment professional judgment, NOT override it.
Why This Conversation Matters Now
We are entering a moment where:
- Agencies feel pressure to “adopt AI”
- Funders are asking about AI strategies
- Policymakers are drafting AI governance frameworks
If public health leaders equate AI with chatbots, we risk two bad outcomes:
- Overhype and disillusionment
- Underutilization of powerful analytical tools
AI is not a novelty feature. It is an infrastructure layer. At This Week in Public Health, we are not trying to build the loudest AI system. We are trying to build the most grounded one. And at FindGrant.ai, we are not guessing who might fund you. We are mapping real capital flows. That’s the difference between generative spectacle and applied intelligence.
Final Thought: AI as Public Health Infrastructure
AI is not a replacement for epidemiologists. It is not a substitute for community engagement. It is not a shortcut to equity. But when grounded in real data, transparently deployed, and thoughtfully governed, AI can:
- Reduce information overload
- Improve funding strategy
- Accelerate evidence translation
- Increase strategic clarity
In other words, AI is a toolkit. When used responsibly, it can help public health professionals do what they’ve always done, only faster and with better visibility into the evidence.