A Conversation with Andrea King
By Jon Scaccia

At a time when public health agencies are being asked to do more with less, Andrea King is helping them harness the power of artificial intelligence and modern data tools without losing sight of ethics, equity, or real-world impact. Trained as an epidemiologist and data scientist, Andrea leads the PubHealthAI Collaborative Network, a national community of practice that supports the responsible adoption of AI across government public health. From overdose surveillance to workforce planning, she’s focused on translating innovation into actionable solutions that strengthen systems and improve lives.

We’re thrilled to speak with Andrea about how AI is reshaping public health practice, what responsible adoption looks like, and where she sees the biggest opportunities for impact in the years ahead.

You’ve described your work as helping health departments ‘turn innovation into impact.’ What are some of the biggest barriers you see when public health agencies try to adopt AI or modern data tools?

Barriers generally come in two forms: technical and cultural. Often, government IT teams are not positioned to take on cutting-edge technologies like generative AI, because of a lack of technical expertise, siloed data systems, long review processes for new tools, and procurement challenges. Organizational culture can also be a barrier, including hesitancy to learn new tools, concerns over being replaced by AI, and a lack of understanding among leadership about the best ways to achieve and measure performance and value. Organizations that recognize these challenges and rise to meet them are able to modernize and adopt new technologies more efficiently than those that don’t.

The PubHealthAI Collaborative Network emphasizes the responsible use of AI in government settings. How do you define ‘responsible’ in practice, and what does it look like when an agency gets it right?

To us, responsible AI means deploying tools in ways that make building trust and minimizing harm integral to implementation. Tools should be thoroughly reviewed, thoughtfully deployed for use cases that are a good fit, and evaluated for bias in their performance. We aim to consider both the technology itself and its broader impact on communities and the environment.

Agencies that are getting it right consider the unique context that only they know: their available resources (human, financial, and technical), the problems they’re trying to solve, and the needs of their community, and then develop a governance, implementation, and evaluation plan that suits that context.

You’ve said PubHealthAI stays vendor-neutral and focused on public-sector priorities, not market hype. How can health leaders cut through the noise and tell which AI tools actually add value?

These days, AI is being baked into everything. It’s certainly hard to tell what’s worth an agency’s limited funding dollars. While the value of tools depends to an extent on the use cases, we recommend prioritizing multi-purpose technology (tools and platforms) that protects privacy, has configurable security settings, can integrate with other platforms and be updated/expanded in-house, supports equity through accessibility and multi-language capability, and provides robust training and support at a price that the agency can sustain over time. Tools that check these boxes will give organizations a good chance of future-proofing their AI investments.

AI is advancing fast, but many public health teams are under-resourced. What are practical steps smaller or rural departments can take to start building AI readiness without major budgets?

Learning to work with AI tools can be done at little or no cost. The PubHealthAI Collaborative Network’s YouTube channel features a variety of talks and demos tailored for a public health audience. Many of the larger companies, such as Google and OpenAI, offer extensive free training content. Google Skills offers a variety of free courses that give users hands-on experience with real tools. Deeplearning.ai has great technical explainers and guided courses as well. For those with technical proficiency, exploring locally installed open-source LLMs is a good way to level up applied AI, as is using open or public data to learn how to work with models through a publicly available interface, such as ChatGPT. Just don’t send anything confidential over the open internet!

What’s one emerging use case or ethical challenge you’re most excited or concerned about as we look toward the next phase of AI in public health?

I’m really excited about the huge potential of AI to revolutionize disease surveillance and epidemiology. It’s entirely possible that in the near future, we will have tools built on models like Google’s Earth AI that integrate data on climate, infrastructure, population dynamics, and historical disease patterns to create planet-scale models for understanding current situations and more accurately modeling future scenarios. When you layer in an organization’s own data, you gain deeper insight into your local area, for, say, deploying emergency response resources or mitigating emerging threats. The future of integrated data for AI-driven insights is bright.

Learn more

The PubHealthAI Collaborative Network has a full YouTube channel to check out.
