Contributor: AI could democratize medicine, but better regulation comes first


Last month, a group of researchers was able to manipulate an AI-powered drug prescription service into tripling a dose of opioids and labeling methamphetamine as safe. Days later, New York lawmakers introduced sweeping legislation that compares clinical AI to a doctor practicing medicine without a license, making it potentially illegal for AI to provide even basic medical guidance. California has opted for a middle ground, enacting legislation earlier this year that requires disclosure to patients when AI is involved in their care.

As states continue to send conflicting signals about how best to regulate AI in medicine, millions of Americans are not waiting for a consensus. Data shows that one in three Americans is turning to AI chatbots to diagnose symptoms and direct care, a figure that has doubled in just one year. In short, AI is already practicing medicine.

I have worked as an emergency physician at academic medical centers, a safety net hospital, and a community emergency room. What defines my experience, across institutions, is the staggering burden of unmet medical need: patients left without an essential medication because they can't get refills; a diabetic who hasn't seen his endocrinologist in months because appointments are scarce; a UTI that progresses to a kidney infection without timely treatment. Every day, our system transforms manageable conditions into major crises and turns emergency rooms into a substitute for all the care Americans can't access. The human cost is immense.

Artificial intelligence can change this reality, and the possibilities are not radical or experimental. Women should be able to refill birth control without scheduling an appointment. Patients with cold sores or yeast infections should not have to wait days for a call back; in many parts of the world, that care is available without a prescription. AI can provide equivalent access to American patients, with appropriate safety standards built in.

In fact, the most ambitious model of this vision is further along than most people realize: the federal government is currently requesting proposals from the private sector to develop AI that will independently manage heart failure events, a disease for which only 1% of patients receive the recommended medication regimen and five-year mortality rates now exceed 50%.

The potential for AI to radically expand access to medicine is a good thing, perhaps even revolutionary. Most Americans don't choose between AI and their trusted family doctor. Barriers like cost and doctor shortages mean Americans are choosing between AI and nothing. Those patients deserve better, and AI is the first development in decades that promises tangible help at scale.

That's why, in addition to my clinical practice and research, I recently joined a company that uses AI to democratize access to medicine. I didn't make that decision lightly. There are legitimate reasons to be wary of technology as powerful as AI reaching vulnerable patients without adequate safeguards. But the answer is not the approach New York is considering. Neither doctors nor policymakers can afford to sit on the sidelines while patients fill the many gaps in our healthcare system with AI. We need regulation that is serious, enforceable, and designed for the speed at which this technology advances.

The federal government has already begun to shape this rapidly changing field. In January, the Food and Drug Administration updated its software guidance to allow AI tools to operate with less oversight when assisting doctors. Under the new rubric, software that allows a doctor to independently review the basis of an AI recommendation falls outside the agency's medical device regulation. A textbook example would be software that warns a doctor about dangerous drug interactions before they sign a prescription.

But this exception covers AI only when a doctor is in the loop. There is no comparable exemption for AI that speaks directly to patients without a doctor in the room, or that makes recommendations in time-critical situations. That technology presumably remains subject to full FDA oversight, although the agency has not yet intervened. Building federal guardrails around rapidly advancing technology is genuinely difficult, and the FDA's caution is understandable. But the result is perverse: the clinical AI that operates most autonomously is, ironically, the least regulated.

In this vacuum, states have moved quickly and in different directions. Some, including Utah, Arizona, and Texas, are creating frameworks to speed up implementation. Others, including New York and California, are taking steps to limit AI in medicine. In many ways, these are the laboratories of democracy functioning as intended, allowing federal policy to find its footing through experimentation and evidence gathering at the state level. But 50 competing standards cannot be the answer for a technology with so many consequences. Patients deserve basic protections when using clinical AI no matter where they live, and companies creating these tools must adhere to uniform standards that prioritize patient safety.

The framework we need is an extension of what the FDA already knows how to do: require evidence of safety and effectiveness from independent third parties before implementing a clinical AI system; require adverse safety testing as part of the approval process; and impose a uniform federal standard, with room for states to go above but not fall below it. Finally, when AI harms a patient, there must be a clear path to accountability. Medical malpractice has governed physician liability for decades. It can be adapted here.

Many assume that regulation slows down transformative technology, but history suggests otherwise. Federal deposit insurance made people trust banks enough to use them. Federal safety regulations made commercial aviation the safest form of mass transportation.

Clinical AI needs the same foundation, and the need to act is urgent: this technology is already in the hands of patients and advancing faster than any we have attempted to govern. The patients who have the most to gain are the same ones who have the most to lose if we don't get it right.

Hashem Zikry is an assistant professor at UCLA and medical director of research and policy at Counsel Health.
