AI in healthcare: A powerful tool for safety—if we keep people at the center
For all the progress we've made in patient safety, some harms remain stubbornly entrenched—not for lack of effort, but because the complexity of care continues to outpace the tools we rely on. The acceleration of artificial intelligence (AI) offers a chance to close those remaining safety gaps, helping us work smarter, move faster, and keep patients safer.
AI is rapidly reshaping healthcare. But its true promise lies not in replacing people, but in restoring their connection to purpose and improving their effectiveness. While AI can reduce the burden on clinicians, improve documentation, predict potential harm, and make safety reporting easier, it can also introduce risks—like bias, over-reliance, and inequity. Managing these risks effectively will be critical to protecting patients and caregivers.
We’re not here to hype AI. We’re here to help you use it wisely. Start small. Build the right guardrails. And always keep people at the center of your decision-making—and behind each decision.
As AI tools proliferate in healthcare, they must be designed and deployed in a manner that is ethical, equitable, and responsible. Transparency—for both physicians and patients—is essential to building trust and ensuring safe, effective use.
Why AI matters for patient safety
Despite decades of investment in systems and solutions, some challenges in healthcare remain stubborn—missed follow-ups, documentation overload, and siloed data, for example. Yes, these can be considered workflow issues. But, more critically, they are safety risks.
AI is a practical tool that can reduce cognitive load, surface hidden insights, and streamline care. When used well, it can make safety more proactive, not reactive.
We’ve seen how AI can help:
- Pull together relevant data so clinicians don’t have to dig
- Draft documentation in real time
- Prioritize inbox messages and suggest empathetic, accurate responses
- Flag gaps in care (like missed test results and referrals)
- Predict risks like falls or infections
- Help with safety event classification and prioritization (as seen in hospitals using Press Ganey’s High Reliability Platform and PSO)
- Double the identification of at-risk patients through AI-powered predictive rounding
These ideas are already being tested and deployed by—and making a difference at—leading healthcare systems.
And sometimes the biggest “wins” come from solving everyday problems. Not the moonshots, but the inbox. The notes. The safety events. The follow-ups. When it comes to AI, that’s where healthcare teams should start.
3 risks you can’t afford to overlook
Every new technology brings unintended consequences. AI is no different. The stakes are high, and the margin for error is slim. Before scaling, you need to ask: Is it safe? Is it fair? Is it being used the way it was designed?
Despite our best intentions and our best efforts, things can go wrong.
1. Design flaws
AI tools are only as strong as the data behind them. If that data is biased, incomplete, or poorly structured, the output will reflect it. At that point, it’s not just a technical issue, but a safety issue.
Take bias, for example. If an algorithm is trained primarily on data from one population, it can systematically fail others. That can lead to missed diagnoses, inappropriate recommendations, and unequal treatment. And because AI often operates behind the scenes, these biases can persist unchecked, reinforcing the very inequities and gaps we aim to close.
Then there are hallucinations—outputs that appear plausible but are actually inaccurate. Large language models (LLMs)—the kind often used to support AI systems—are particularly susceptible to this issue. They can fabricate citations, misinterpret clinical context, or suggest actions that don’t align with best practices.
Finally, “drift” describes how ongoing changes in the world—such as evolving events, shifting data patterns, or new user behaviors—can gradually impact an AI system’s accuracy. AI tools that once delivered reliable results may become less effective if not routinely re-evaluated and updated to reflect current realities.
2. Usage risks
Even the best-designed AI can cause harm if used carelessly. Human intervention is essential to ensure AI is deployed responsibly—and to keep people safe.
One growing concern is overreliance. If clinicians start trusting AI outputs in lieu of their own judgment, they risk losing the critical thinking that keeps care safe. Imagine, for example, a model that says a patient is stable, but the clinician’s gut says otherwise. What happens when the gut is right?
"Deskilling” is another related concern. If AI handles documentation, decision support, and risk prediction, clinicians may, gradually, lose the ability to do those tasks themselves. Some skill loss may be acceptable—but it raises another question: What are the new skills needed to provide proper oversight of AI?
There’s also the risk of depersonalization. Automation can make care feel transactional, especially if patients don’t know AI is involved. This can lead to broken trust. And as we know, in healthcare, trust is everything.
Physicians are rightly concerned about potential liability when AI underperforms or contributes to harm. As AI takes on roles in documentation, decision-making, and treatment recommendations, responsibility becomes shared—between clinician, technology, and patient. If AI-generated notes aren’t properly reviewed, who’s accountable? The clinician? The vendor? The platform? Without clear policies, accountability blurs, and patients may suffer the consequences. While the legal system will ultimately define liability, healthcare leaders must establish guardrails rooted in safety, transparency, and trust.
3. Equity risks (and benefits)
AI systems require large, representative datasets to train and operate effectively. Smaller or skewed datasets are more likely to produce biased data, and biased data leads to biased results, so ensuring data quality is critical. AI also has the potential to widen inequities in care if not used judiciously and intentionally, particularly when it comes to access and inclusion.
Well-resourced systems are more likely to adopt AI early, train their teams, and integrate it into workflows. That’s great, but where does that leave smaller or rural providers? If they’re left behind, we risk creating a two-tiered system where some patients benefit from AI and others don’t.
Designed the right way, though, AI can also narrow inequities and mitigate bias that already exists.
Human oversight is the safety net AI can’t replace
The “A” in AI should really stand for “augmented” when referring to clinical care. AI can draft notes, suggest actions, and flag risks. But it requires guardrails. AI can’t (and shouldn’t) make final decisions. That’s still up to clinicians. And if they’re not trained to oversee AI outputs, they may miss errors or trust the tool too much.
Effective training isn’t just about how to use the technology; it’s about how to question it, too. How to spot hallucinations. How to recognize bias. How to know when AI is just plain wrong. Critical thinking is imperative to AI’s successful implementation and use.
In my next article, I’ll discuss how healthcare leaders can avoid these risks, and effectively use AI to build trust and advance safety. To learn how Press Ganey is helping organizations implement AI thoughtfully and safely, reach out to a member of our team.