Using AI (the right way) to advance patient safety

Artificial intelligence is already shaping the future of healthcare. But how we use it will determine whether that future is safer, fairer, and more human.

AI holds real promise. It can surface risks before harm occurs, reduce the burden on clinicians, and streamline safety reporting. But like any powerful tool, its impact depends on how well we govern, monitor, and integrate it. As healthcare leaders, we should neither resist AI nor rush to adopt it. Instead, we must implement it the right way: with guardrails, with intention, and with people always at the center.

I previously wrote about the risks associated with AI in healthcare—design, usage, and equity concerns that can undermine safety, widen disparities, and erode trust if not addressed head-on. These risks are already appearing in early deployments, and they demand proactive leadership.

These risks aren’t reasons to avoid AI, but reasons to use it wisely. That means building governance structures that monitor design, implementation, and use. It means training clinicians to oversee AI rather than use it haphazardly, with the executive team prepared for the day something goes wrong (asking for forgiveness instead of permission rarely translates well in healthcare). And it means keeping patients informed, engaged, and at the center of every decision.

AI can help us deliver safer care, but only if we treat it for what it is: another tool in our toolkit. The goal isn’t to automate safety. It’s to support the people who make safety possible, every day. And, like any new tool, its value lies in how well it strengthens the human relationships at the heart of care.

Governance is the foundation

Before you roll out any AI tools in your organization, you need clarity—where it’s being used, who’s responsible for it, and how it’s being monitored. That’s governance.

In some systems, individual departments are already experimenting with AI, and sometimes without informing leadership. This creates blind spots. If you don’t know where AI is embedded, you can’t assess its impact, manage its risks, or support its users.

Strong governance starts with a centralized structure that oversees AI adoption across the organization. It requires clear policies for evaluation, approval, and ongoing monitoring. And it ensures every AI tool aligns with your organization’s values and goals, as well as its safety standards.

AI must fit in with the way care is delivered, not the other way around

AI can illuminate critical insights—but insight alone doesn’t keep patients safe.

If a prediction model flags a patient as high-risk, what happens next? Is there a protocol? Or does the alert just sit there?

To get value from AI, you need to tie insight to action. That requires collaboration across all teams—clinical, operational, and technical. It also requires listening to front-line users, because they’re the ones who actually know what works and what doesn’t.

Measurement keeps it honest

AI will never be static. We are still in the early phase of integrating it into care delivery and decision-making, and it will undoubtedly continue to evolve throughout our lifetimes. We will learn as we go, sometimes through success, sometimes through unintended consequences. That makes ongoing evaluation essential: you need to track performance over time. Is AI improving outcomes? Reducing errors? Saving time? Or is it creating new problems?

Measurement should be built into every organization's governance framework. It should include both quantitative metrics and qualitative feedback. And it should be used to refine, retrain, or retire tools as needed.

Regulations are coming. Get ahead of them now.

AI in healthcare is still a moving target, but regulations are on the horizon. Federal agencies are already developing frameworks for safe, equitable AI use. Organizations that take proactive steps now will be far better positioned to adapt.

That means staying informed, participating in industry coalitions, and aligning your internal policies with emerging best practices. Just as critical, it demands transparency with your teams, your patients, and your partners.

Bottom line: AI needs structure to succeed

There’s a growing belief in healthcare that not using AI could become the bigger risk. When the tools are available, and the benefits are evident, choosing not to use them may mean missing opportunities to prevent harm, reduce burden, and improve care.

We’re long past the pilot phase. The tools are here, the use cases are real, and the potential is growing. So the question now isn’t “Can we use AI?” but “How do we use AI well?”

That doesn’t mean rushing in, though. It means moving forward with intention, clarity, and discretion. Because when AI is used properly, it makes healthcare smarter—and safer.

To discuss how Press Ganey can help bring these strategies to life at your organization, get in touch with one of our safety and high reliability experts.  

About the author

As Chief Safety and Transformation Officer, Dr. Gandhi, MPH, CPPS, is responsible for improving patient and workforce safety and developing innovative healthcare transformation strategies. She leads the Zero Harm movement and helps healthcare organizations recognize inequity as a type of harm for both patients and the workforce. Dr. Gandhi also leads the Press Ganey Equity Partnership, a collaborative initiative dedicated to addressing healthcare disparities and the impact of racial inequities on patients and caregivers. Before joining Press Ganey, Dr. Gandhi served as Chief Clinical and Safety Officer at the Institute for Healthcare Improvement (IHI), where she led IHI programs focused on improving patient and workforce safety.