COVER STORY

October 2025

AI Can Diagnose, Prescribe, and Decide. Is it Time to Replace Clinicians?

AI is no longer taking notes, it’s taking charge. As hospitals race to harness agentic AI, the question isn’t how fast technology can evolve, but how much control healthcare leaders are willing to give up.

— By Eric Wicklund, Associate Content Manager, Innovation and Technology, ewicklund@healthleadersmedia.com

TAKEAWAYS

  • Hospitals are letting agentic AI tools diagnose, prescribe, and even plan treatment steps, forcing leaders to ask where human judgment ends and algorithmic control begins.
  • Some say AI is finally easing burnout and decision fatigue; others fear it’s eroding the art, and autonomy, of medicine.
  • As AI grows more “empathetic,” executives are confronting an unsettling paradox: what happens when patients feel closer to their algorithms than their doctors?


Ambient AI is so yesterday. Agentic AI is all the rage now.

Across hospitals and health systems, the next generation of automation—agentic AI—is stepping beyond transcription and summarization to independently analyze, plan, and act. These intelligent systems can sift through years of clinical data, suggest diagnoses, and even guide treatment decisions. But that power comes with a question that’s dividing medicine: when does AI stop assisting clinicians and start replacing them?

For healthcare leaders, this is no longer a thought experiment. It’s a reckoning. Hospitals are investing millions in AI tools that promise to reduce burnout, standardize care, and improve accuracy. Yet many executives and clinicians are beginning to wonder whether, in the pursuit of efficiency, they are risking something irreplaceable—the clinician’s judgment, intuition, and humanity.

At Seattle Children’s, AI can now analyze thousands of clinical pathways and recommend next steps in real time. At other systems, it’s already screening patients, suggesting medications, and managing follow-up care. The results are remarkable—but also unsettling. 

The debate has split hospital corridors and C-suites alike: some see AI as the savior of an exhausted workforce; others fear it’s the first step toward a healthcare system where humans take orders from machines.

How far should health systems let AI go, and what happens when the line between human and algorithm begins to blur?

I, Robot

The concern ties into the age-old idea that AI will replace doctors and nurses (some health systems and hospitals are even slowing their AI strategies to avoid alarming nurses’ unions). And while some feel that’s an inevitable conclusion for a growing industry with a declining labor base, healthcare leaders have to lay the groundwork for what AI can do and what it can’t do—namely, what clinicians should always do.

“Clearly, what everyone agrees on is that there will always be a need for humans in the loop,” says James Blum, MD, CDH-E, Chief Health Information Officer at University of Iowa Health Care. “But we clearly haven’t been able to figure that out yet.”

Agentic AI is loosely defined as an autonomous solution capable of independently planning, analyzing and executing complex, multi-step tasks and workflows (a rough sketch of that plan-act-observe loop follows the list below). Within healthcare, development has been centered on four use cases:

1. Note creation and transcription
2. Data mining (such as through the EHR)
3. Clinical decision support
4. Patient-facing functions like screening and scheduling
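
To make that definition concrete, here is a rough sketch, in Python, of the plan-act-observe loop most agentic systems share. Everything in it is hypothetical and invented for illustration: the tool names, the placeholder model call, and the stopping rule are not any vendor’s actual product or API.

```python
# A minimal, hypothetical sketch of an agentic loop: the model plans a step,
# invokes a tool, observes the result, and repeats until it decides it is done.
# All names here (call_llm, TOOLS) are invented placeholders, not a real API.

from typing import Callable

# Tools the agent may invoke, loosely mirroring the four use cases above.
TOOLS: dict[str, Callable[[str], str]] = {
    "draft_note": lambda text: f"[draft note: {text[:40]}]",
    "search_ehr": lambda query: f"[records matching '{query}']",
    "schedule": lambda request: f"[appointment booked: {request}]",
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call that returns the next planned step."""
    return "done: summary for clinician review"

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        plan = call_llm("\n".join(history))            # 1. plan the next step
        if plan.startswith("done:"):                   # 2. stop when finished
            return plan.removeprefix("done:").strip()
        tool, _, arg = plan.partition(" ")             # 3. act via a tool...
        result = TOOLS.get(tool, lambda a: "unknown tool")(arg)
        history.append(f"ACTION: {plan}\nRESULT: {result}")  # 4. ...and observe
    return "stopped: step limit reached, human takes over"
```

The step limit at the end is the design point executives keep returning to: the loop always needs a place where a human takes over.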

One recent survey found that 27% of organizations are already using agentic AI for automation, with another 39% planning to adopt it within the next year; 67% of business leaders see AI as a game-changer, set to shake up traditional practices and open new doors for an innovative AI workplace.

• 94% of healthcare organizations view AI as core to their operations

• 86% of healthcare organizations are using AI extensively

Source: Blue Prism’s Global Enterprise AI Survey 2025

Evidence of clinical ROI is growing. Some health systems say the bots have cut down on administrative stress and burnout among doctors and nurses, reducing turnover and time away from work. 

The ‘Ask Jeeves of Healthcare’

Zafar Chaudry, SVP, Chief Digital Officer and Chief AI and Information Officer at Seattle Children’s Hospital, sees agentic AI as a critical tool for helping clinicians organize their thoughts on patient care and choose the right care pathway.

Chaudry says Seattle Children’s worked with Google to create an interactive clinical tool, called Pathway Assistant, that draws from “thousands of thousands of pages” of clinical pathways developed by the hospital over the past 30 years. 

“So now as a doc, you can say, ‘I have a patient [who is] five years old, has a fever, cough, vomiting, all the symptoms presenting to the clinic or to the emergency room, what should I do?’” Chaudry explains. “And what [Pathway Assistant] will do is trawl through all of that knowledge, all validated by our clinicians. And it will say, ‘Hey, thanks for telling me that; this is what you should do next. Go and run this test, go and do this X-ray or scan or whatever, and then come back and tell me what the results are.’”


“Then the physician comes back and tells the agent the results,” he continues. “The agent then goes through the pathways again and says, ‘Well, here is what this could potentially be. Now this is what you need to do next, or you need to prescribe this medication. So tell me what the weight of the patient is.’ You put the weight of the patient in and then it will say, ‘Based on this patient's weight, this is the correct dose of this medication that you should start the patient on.’”


“If you're a seasoned professional, it allows you to validate what you may already know, but you may have forgotten the dosage,” Chaudry concludes. “But this way you make less errors and you are building your practice on standardized pathways that hundreds of clinicians have built over the years.”

Chaudry points out that the tool is built on data collected and stored by Seattle Children’s, rather than by an outside vendor. And it’s designed to support the clinician, not be a crutch. Clinicians are responsible for all care decisions, and they have to review and sign off on any AI-developed data points.
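
Based on Chaudry’s description, the interaction boils down to two operations over a validated pathway library: match symptoms to a pathway and recommend the next step, then compute a weight-based dose once the clinician supplies the weight. The sketch below is a guess at that shape only; the pathway, drug name, and per-kilogram dose are invented placeholders, not clinical guidance and not Seattle Children’s actual Pathway Assistant.

```python
# Hypothetical sketch of a pathway-assistant turn. The pathway content, drug
# name, and dosing numbers are invented for illustration, NOT clinical guidance.

PATHWAYS = {
    # symptom set -> (next test, working diagnosis, drug, dose in mg per kg)
    frozenset({"fever", "cough", "vomiting"}): (
        "chest X-ray", "suspected pneumonia", "drug_x", 10.0),
}

def next_step(symptoms: set[str]) -> str:
    for needed, (test, _, _, _) in PATHWAYS.items():
        if needed <= symptoms:                 # pathway covers these symptoms
            return f"Run this test next: {test}"
    return "No matching pathway; rely on clinician judgment."

def weight_based_dose(symptoms: set[str], weight_kg: float) -> str:
    for needed, (_, diagnosis, drug, mg_per_kg) in PATHWAYS.items():
        if needed <= symptoms:
            total = mg_per_kg * weight_kg      # clinician reviews and signs off
            return f"{diagnosis}: start {drug} at {total:.0f} mg ({mg_per_kg} mg/kg)"
    return "No matching pathway."

print(next_step({"fever", "cough", "vomiting"}))              # -> chest X-ray
print(weight_based_dose({"fever", "cough", "vomiting"}, 18))  # -> 180 mg
```

Note that every output is a recommendation for the clinician to confirm, matching the sign-off requirement Chaudry describes.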

During the recent HealthLeaders AI in Clinical Care Mastermind forum in Deer Valley, Utah, some executives questioned whether clinicians might rely too much on agentic AI to do their work. Doesn’t this, after all, feed into fears that AI will replace clinicians?

“Where’s the critical thinking?” asked Michael Fiorina, MD, CMO of the Independence Health System. 

Andy Crowder, CHCIO, CDH-E, SVP and Chief Digital Officer at Advocate Health, told HealthLeaders earlier this year that healthcare organizations have to guard carefully against relying too much on AI. Clinicians can’t accept AI without validation.

“It is so difficult for a clinician to practice medicine today without the EHR, and now I'm going to give them a generative tool set,” he says. “I'm worried about what happens when it's not available or [there’s a] cognitive decline for critical decision-making.”

“I think bias and model drift and all of those things will get better and intelligence on top of the intelligence, but, you know, you do have to have an off button for the stuff, too.”

One key may be in how clinicians are taught to use AI. Some have suggested that medical schools introduce AI to the curriculum gradually, after students demonstrate that they know how to be doctors and nurses. Then, AI should be introduced as a reference tool, used for supporting or confirming care decisions but not making them.

“Physicians, nurses and other clinicians are more likely to be more thoughtful caregivers, because they can focus on the patient in front of them and allow AI to do more of the work behind them,” CommonSpirit Health CIO Daniel Barchi said in a recent interview. “And I've seen clinicians be very open-minded [about] what AI can provide them, whether it's data or insight, because they know at the end of the day, their overarching objective is improving the health of the patient in front of them.”


The ‘Her’ Paradox

For the consumer/patient, chatbots are becoming an effective tool to gather information, schedule medical appointments and get test results. Some providers are experimenting with chatbots to help with primary care coordination, identifying through questions what care a patient needs and routing them to the right provider.
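
The routing step those providers describe is, at its simplest, a question-driven decision tree. Here is a minimal sketch, with invented screening questions and destinations; no real triage protocol is implied.

```python
# Hypothetical sketch of a question-driven intake bot that routes a patient.
# The screening questions and destinations are invented for illustration.

QUESTIONS = [
    ("Are you having chest pain or trouble breathing?", "emergency department"),
    ("Is this about a new or worsening skin problem?", "dermatology"),
    ("Do you need a refill or your test results?", "primary care desk"),
]

def route_patient(answers: list[bool]) -> str:
    """Send the patient to the first destination whose question was answered yes."""
    for (_question, destination), said_yes in zip(QUESTIONS, answers):
        if said_yes:
            return destination
    return "primary care (default)"

# A patient answering no, yes, no is routed to dermatology.
print(route_patient([False, True, False]))
```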

But here’s where an interesting conflict surfaces. Studies have found that while consumers fear talking to an AI bot instead of a real person when seeking healthcare services, an AI bot can also come across as more sympathetic than a live person.

“Although AI doesn’t experience emotions, AI-powered tools, like chatbots, can offer empathetic-like responses, at all hours of the day,” Elizabeth Grill, Psy.D., an associate professor of Psychology at Weill Medical College at Cornell University, wrote on the Psychology Today website. “Many patients appreciate the chance to ask questions or get medication reminders without waiting for office hours. In fact, some studies even show that, in certain cases, patients perceive these AI-driven interactions as more empathetic than conversations with rushed human providers.”



That can be especially helpful when screening for sensitive topics like substance abuse, sexually transmitted diseases and violence. Patients might feel more comfortable disclosing the truth to an AI bot, such as how many drinks they’ve had, whether they’ve used drugs or whether they’ve had sex.

Patients are sometimes “willing to tell a more complete story of their health,” says Crowder, at Advocate Health. “They don’t feel judged.”

The question then is whether bots can be ‘too human,’ or whether patients can rely too much on what AI gives them – much like Joaquin Phoenix’s character in the 2013 film Her, who falls in love with an AI bot. (And if you think that’s just a movie, a recent survey by Vantage Point Counseling Services found that more than one in every four adults say they’ve had an intimate or romantic relationship with AI, and more than half claim to have had some kind of relationship with an AI system, whether as a friend, colleague or confidant.) Some states, like Illinois, are strictly regulating or even banning AI bots in mental healthcare because of concerns that they could harm patients who rely on them too much.

Healthcare executives praise the potential of AI to help patients become more informed about their health and wellness and to arrive at doctor’s appointments with better questions, resulting in more meaningful conversations. But they also worry that patients will come into the doctor’s office with bucketloads of data that doctors don’t want, or with questions that muddy the care management plan or make it even more complex.

“It’s all about navigating the delivery of care,” Scott Eshowsky, Chief Medical Information Officer for the Beacon Health System, an 11-hospital health system spanning parts of Indiana and Michigan, told HealthLeaders during September’s Oracle Health and Life Science Summit in Orlando. “I would love it if there were an AI-generated set of instructions for patients.”

Some say AI can be an important companion for patients overwhelmed by the healthcare experience. Bots could be used to help them understand medical jargon, as well as the complexities of care management (imagine a bot helping an elderly patient keep track of daily medications, or guiding a young person living with diabetes on how to test blood sugar levels, administer insulin, and understand how diet affects the disease).
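
The elderly-patient example in that scenario reduces to a schedule check a bot can own end to end. A minimal sketch follows, with invented medication names and times; it is illustrative only, not medical advice.

```python
# Hypothetical sketch of a daily medication-reminder check. The medication
# list and times are invented placeholders, not medical advice.

from datetime import datetime, time

MEDS = [
    ("med_a", time(8, 0)),    # morning dose
    ("med_b", time(20, 0)),   # evening dose
]

def due_reminders(now: datetime, taken: set[str]) -> list[str]:
    """Reminders for doses that are due and not yet marked as taken."""
    return [
        f"Time to take {name} (scheduled {at.strftime('%H:%M')})"
        for name, at in MEDS
        if now.time() >= at and name not in taken
    ]

for message in due_reminders(datetime(2025, 10, 1, 8, 5), taken=set()):
    print(message)   # -> reminder for med_a
```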

ChatGPT vs. a Physician

For nearly 80% of answers, ChatGPT was considered better than the physicians.

• Good or very good quality answers: ChatGPT received these ratings for 78% of responses, while physicians did so for only 22%.

• Empathetic or very empathetic answers: ChatGPT scored 45% and physicians 4.6%.


Blum notes that AI bots can also help providers cut back on the 15-20 phone calls they’d be making every day just to follow up with patients. 

“Humans are human, and being trained to speak to the least common denominator isn't necessarily something that we're capable of,” he points out. “Nor do we want that.”

What executives do want, he says, is an AI tool that can communicate with patients in a clear and pleasant manner, exchanging data with them and giving clinicians what they need to manage and improve care. When it comes time to deliver that care, however, the clinician steps in.

“Our job is to do what's best for patients,” he says. 

Crowder agrees. Agentic AI tools can do the heavy lifting on administrative tasks and data crunching, conduct initial interviews with patients, direct them to the right providers, and connect the dots and fill in the gaps. But they can’t deliver healthcare.

Chaudry, of Seattle Children’s, says he’s not convinced that AI bots can or should replace humans when communicating with patients. Empathy may be one thing, but talking to a real person—especially in healthcare—matters more.

“I can get a clinician to a [patient] within 70 seconds or less, and they can talk to a real person [and] get their problem solved 90% of the time without using AI,” he says. “And when I cashed it out, it’s cheaper than building an AI tool.” 

Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
