When Bernie Sanders warned in an April 2 Wall Street Journal op-ed that artificial intelligence is “a threat to everything the American people hold dear,” he gave voice to a real and growing unease. Americans are worried about jobs, power, misinformation and what all this means for how we live and relate to one another. But framing AI primarily as a threat doesn’t just reflect public sentiment. It reinforces a kind of paralysis at exactly the moment when engagement matters most.
Because here’s the contradiction: Americans are already using AI, even as they say they don’t trust it. More than half report using it for research, and many are using it for writing, work and analysis — yet only about one in five say they trust AI-generated information most of the time. That’s not rejection. It’s adoption with hesitation. And left unaddressed, hesitation tends to harden into disengagement.
You can see that dynamic clearly in the public health field. This is a profession that has every reason to be careful — high stakes, sensitive data, real-world consequences. But caution has a way of blurring into avoidance. While public health professionals debate AI in broad, abstract terms, other sectors are already building it into how decisions are made and how information is delivered. If public health leaders wait for certainty, they won’t be shaping those systems. They will be inheriting them.
The question isn’t whether AI poses risks. It’s whether we are prepared to use it well. That’s a different conversation, and it’s a more practical one. In real-world settings, AI is already doing work that the public health field often struggles to do at scale: translating complex guidance into plain language, adapting messages for different audiences, generating drafts during fast-moving situations and identifying patterns in public feedback that would otherwise be missed. None of that replaces expertise. It extends it. And in a field that is chronically under-resourced, extension matters.
To be fair, not every institution is standing still. The Centers for Disease Control and Prevention’s recent guidance on AI didn’t make headlines, but it sent a clear signal: this is something to use, not just study. The emphasis was on guardrails — human oversight, privacy, scientific integrity — but the underlying message was forward-looking. Start where you can. Use it responsibly. Learn as you go. For a federal agency, that’s a notable shift.
That distinction — between guardrails and avoidance — is where much of this debate gets stuck. Senator Sanders is right to raise concerns about bias, accountability, and the concentration of power. Those are real risks. But building guardrails is not the same as building walls. Guardrails define how a technology can be used safely. Walls delay engagement until someone else has already defined the terms. Public health should be leading on the former, not defaulting to the latter.
The same clarity is needed when it comes to jobs. Seven in ten Americans believe AI will reduce job opportunities. That concern is real, and it’s growing. But we’ve been through versions of this before. New tools don’t just eliminate work — they reshape it. The more immediate issue is whether the field is investing in its own ability to adapt. Are agencies training staff to use these tools effectively? Are leaders creating space to experiment? Or are they signaling — implicitly — that it’s safer not to engage at all?
What’s striking about this moment isn’t the presence of fear. It’s how much of the conversation stops there. Fear is understandable. It’s also incomplete. In a field defined by outcomes, not intentions, it’s natural that AI makes us uncomfortable. The question is whether we’re willing to work through that discomfort to shape how it’s used.
The choice for the public health profession isn’t whether to accept or reject AI. It’s whether to help define how it’s used — or to adjust to it after the fact.
This article was originally published on Forbes.com.