Opinion: What's wrong with letting apps and AI run the ER?


My resident describes our next patient in the emergency room: a 32-year-old woman with severe cramping mid-abdominal pain, vomiting, and occasional loose stools. The symptoms have been present for almost a week and there is tenderness on both sides of the upper abdomen. It could be a gallbladder problem, says the resident, hepatitis, pancreatitis, diverticulitis or atypical appendicitis. She proposes routine blood tests along with an ultrasound and an abdominal CT scan.

This is the traditional approach to an undifferentiated patient complaint: generate a list of possible diagnoses, decide which ones represent a “reasonable” concern, and use the results of additional tests to conclude what is happening. However, the second phase of this process (evaluating which diagnoses represent a reasonable concern) is receiving less and less attention. It's the heavy lifting of any patient encounter: weighing the probabilities of illness and searching for details. It's often easier and faster to cast a wide net, click through the standard order for blood tests and images, and wait for the results to appear.

The “doctor busy ordering too many tests” problem has plagued medicine for decades. Now, as hospitals inject algorithms and technology into their workflow, it's much worse. Medicine is moving inexorably away from the deductive arts, becoming more dependent on technology and evidence and less patient-centered.

Go to an emergency room today and you will likely be greeted within minutes by a doctor whose only job is to perform a “rapid medical evaluation.” The provider asks a few questions, checks boxes on a computer screen, and, shazam, you're in line for the most likely series of tests and scans, all based on an encounter that typically lasts less than 60 seconds.

This strategy seems obvious. When exams begin as soon as the patient arrives, wait times decrease, patient satisfaction increases, and fewer patients leave frustrated before even being seen. These are the metrics that make administrators smile and give hospitals high marks in national surveys.

But is it good medicine? Without the luxury of time, these front-line providers often group patients into broad, generic categories: the middle-aged person with chest pain, the asthmatic with difficulty breathing, the pregnant patient who is vomiting, the septuagenarian with a cough and fever. The diagnosis is then reverse-engineered, with tests ordered to cover every possible base for that particular ailment.

In essence, this is flipping the script on traditional medicine while encouraging doctors to use testing as a substitute for critical thinking, simplifying the practice of medicine, and pouring gasoline on the problem of overtesting.

Since rapid screening became the norm, the use of laboratory, CT, and ultrasound services at my hospital has increased nearly 20%. Just the other day, a pregnant woman in my emergency room underwent an entire battery of time-consuming, expensive, and invasive tests, even though she had had them all done at another hospital the day before. As far as I can tell, the only reason we did it was because that's what an algorithm told us to do.

This has real effects on patients. Contrary to popular perception, more testing may not provide more answers. That is because the accuracy of any test depends on the probability, before the test is performed, that the patient actually has the disease in question. Tests performed without proper indication or context can produce incidental or even false results that send a doctor looking in entirely the wrong direction.

The basic problem with hospitals' growing obsession with efficiency is this: Algorithmic systems treat all patients alike, expecting precise, uniform answers to every question, with just the right amount of detail. Except every patient is unique. And patients tend to deliver their stories at their own pace, in intermittent, non-linear spurts, sometimes mixing truth and fiction in ways that can be counterproductive and frustrating, but also uniquely human. I often think of Jack Webb on the old television series “Dragnet,” imploring a witness to offer “just the facts, ma'am, just the facts.” In real life, whether because of situational stress, self-deception, superstition, health illiteracy, mental illness, drugs or alcohol, my patients' initial version of their complaint is rarely “just the facts” or the last word on the subject.

A colleague recently described her role in a clinical encounter as nine parts translator and one part doctor. One question leads to another, and then another, and another, until she has translated the patient's lived experience into a language that modern medicine and its algorithms can begin to understand. My experience is similar. With the right choreography, the doctor-patient interaction becomes a pas de deux: two people, in sync, trying to solve a puzzle together, each sharing her own perspective and experience. As we shift toward streamlined, algorithm-driven care, I worry that health decisions will be made with information that may be incomplete or, at times, completely unreliable.

Algorithmic medicine also seems tailor-made for an AI takeover. The logic is obvious: use “big data” to help doctors and nurses struggling to keep up with the demands of modern medicine. AI can ensure a consistent level of care and avoid errors of omission by considering a deliberately broad list of diagnostic possibilities. In an ideal world, a synergy of human and machine intelligence would amplify the doctor-patient encounter. More likely, AI will lead doctors to abdicate judgment and responsibility to an automated machine response.

So I congratulated my resident on her list of concerns but suggested we spend a little more time with the patient; the history of her symptoms did not seem complete. I recommended she pull up a chair and simply ask the patient about her life. What emerged was the chaotic picture of an exhausted part-time student by day, working double shifts as a waitress at night and surviving on pizza, pasta and energy drinks. She had always had a “fragile stomach.”

Our list of reasonable diagnoses shifted, the original candidates giving way to irritable bowel syndrome, food intolerances and intestinal motility issues, all layered on top of a stressed-out person barely holding it together. The blood tests, ultrasound and CT scan initially proposed now seemed beside the point.

The result: the patient left the hospital faster. She received helpful suggestions on stress reduction, diet and sleep habits. She got an appointment with a primary care doctor and avoided thousands of dollars in testing. Had we simply relied on testing instead of asking a few more questions, we very likely would have missed the best approach to her problem entirely.

Waiting rooms and emergency rooms are crowded, and streamlining care has never felt more essential. But this is not an excuse for doctors to renounce their humanity or their “method.” We should modify the process: give doctors more time to get the story right, do fewer tests until we've weighed the risks and rewards, prioritize asking questions rather than simply looking for answers.

Sociologists coined the term “pre-automation” to describe the transitional phase in which humans lay the groundwork for automation, often by acting in increasingly machine-like ways. As providers, we should not comply.

Put another way, now that AI is poised to take on a substantial role in how doctors deliver care, we must remind ourselves: if we behave like machines, we certainly won't be missed when machines replace us.

Eric Snoey is an emergency physician at Alameda Health System-Highland Hospital in Oakland.
