How The Pitt’s AI Drama Is Playing Out in Real Hospitals

On Thursday’s episode of The Pitt, the conflict surrounding the use of artificial intelligence at the Pittsburgh Trauma Medical Center finally came to a head.

The second season of the award-winning medical drama introduces Dr. Baran Al-Hashimi (Sepideh Moafi), a new doctor focused on making the hospital run more smoothly. She believes that implementing new AI technology will significantly reduce the time staff spend on paperwork – by as much as 80% – giving them more time with patients and for their personal lives, though her team is initially doubtful.

However, in the sixth episode, the doctors discover that the AI program has fabricated information in a patient’s record, confusing urology with neurology.

According to Al-Hashimi, even with a 2% error rate, the AI is more accurate than a person taking dictation, though its output still requires review. Dr. Campbell, an internal medicine physician, strongly disagrees: she needs the information in patient records to be completely accurate, and she has no interest in using AI if it compromises that.

This plotline reflects a real debate happening in hospitals nationwide. A 2025 survey by the American Medical Association found that about two-thirds of doctors use artificial intelligence in their work. While some doctors find AI genuinely helpful for providing patient care and staving off burnout, others worry it is being rolled out too quickly and makes too many mistakes for a field where accuracy is critical.

AI as a medical sounding board

In The Pitt, artificial intelligence is initially presented as a solution to help doctors with charting – the process of recording patient interactions. Charting is a major source of frustration for doctors, who often stay late to finish it. Recently, real hospitals have begun using AI-powered scribes that listen to doctor-patient conversations and automatically draft summaries for medical records.

Dr. Murali Doraiswamy, a professor at Duke University School of Medicine, says AI note-taking tools let doctors concentrate on patients rather than on typing during appointments. But, he points out, the tools don’t save much time overall – perhaps a minute or two per visit – because, just as on The Pitt, doctors still need to review and edit the AI-generated notes. He concedes that the tools don’t dramatically reduce doctors’ after-hours work, but believes they are still a positive step and will likely improve over time.

Some AI tools go even further. Last year, Presbyterian Healthcare Services in New Mexico piloted an AI assistant from the company RhythmX AI. The assistant gives doctors a quick overview of a patient’s medical history, sparing them the time they would otherwise spend combing through lengthy charts and lab results before appointments.

According to Lori Walker, Chief Medical Information Officer at Presbyterian, the RhythmX tool can help crack difficult patient cases. She recalls a recent patient who came in with an infected wound but was allergic to many common antibiotics. In the past, the doctor would have consulted an infectious disease specialist, which could take a day or two; this time, the doctor asked the chatbot and got a suitable prescription on the spot.

Sudheesha Perera, a doctor at Yale School of Medicine, explains that he and his coworkers frequently use OpenEvidence, an AI chatbot specifically trained on reliable medical information. He says it’s a quick way to get answers – for example, if a patient has an infection, he might ask the chatbot for alternative medications instead of searching through textbooks or Google.

Dr. Perera is working with Yale to develop training for doctors on how to best use AI in their work. He also uses AI tools like Claude Code and Gemini in Yale’s Cardiovascular Data Science Lab to help him write code for analyzing data. He explains that he can simply describe what he needs in everyday language – telling the AI what his data looks like and what he wants to achieve – which significantly speeds up his work.

Mistakes and risks

There are significant concerns and potential dangers, however. Echoing the storyline on The Pitt, AI tools have already made errors in real healthcare settings. Michelle Gutierrez Vo, a nurse and a leader with the California Nurses Association and National Nurses Organizing Committee, recounts that three years ago her hospital tried a new AI tool to automate decisions made by case managers. During testing, the tool made several mistakes, including recommending that a cancer patient in the middle of a month-long course of chemotherapy be discharged after only two or three days.

Time and again, she says, AI has turned out to be less effective and more costly than the organizations deploying it expected. A 2024 survey likewise found that two-thirds of union nurses believe AI degrades their work and puts patients at risk.

Gutierrez Vo is concerned that artificial intelligence is being adopted primarily to cut costs and boost profits, which could mean heavier workloads for already stretched hospital staff. Dr. Robby (Noah Wyle) voices the same concern on The Pitt: “AI will improve efficiency, but hospitals will likely expect us to see more patients without receiving additional compensation.”

Another big worry is that leaning too heavily on AI could weaken doctors’ skills and judgment just when they need them most. That scenario plays out in this week’s episode of The Pitt, in which a cyberattack forces the hospital to operate without its technology, leaving staff to rely entirely on their own expertise and training.

Dr. Perera agrees that in critical situations, quick thinking is essential. “When a patient’s condition is rapidly deteriorating, you need readily available knowledge. An AI tool simply isn’t fast enough,” he says. “It’s important to remember that we need to be able to practice medicine effectively without relying on these tools.”

Dr. Perera worries that if future doctors depend too much on AI without first developing essential skills, it could seriously damage healthcare. He explains that just as students might use ChatGPT to write essays without learning to write themselves, doctors could rely on tools like OpenEvidence instead of developing their own critical thinking and planning abilities. He emphasizes the importance of teaching medical residents how to use these tools effectively and at the appropriate stage of their training.

Doraiswamy envisions AI tools as aids to doctors, helping them make better decisions instead of replacing their expertise. He believes the ideal AI wouldn’t simply provide answers, but would prompt doctors to consider the right questions and think critically. Ultimately, he wants AI to encourage deeper thought, not just offer quick solutions.
