Policymakers trying to catch up with the rise of AI in healthcare

Lawmakers and regulators in Washington are starting to debate how to regulate artificial intelligence in healthcare, and the AI industry believes there's a good chance they'll screw it up.

“It's an incredibly daunting problem,” said Dr. Robert Wachter, chair of the Department of Medicine at UC San Francisco. “There is a risk that we will come in with guns blazing and regulate excessively.”

The impact of AI on healthcare is already widespread. The Food and Drug Administration has approved 692 AI products. Algorithms are helping to schedule patients, determine staffing levels in emergency rooms, and even transcribe and summarize clinical visits to save doctors time. They are starting to help radiologists read MRIs and X-rays. Wachter said he sometimes informally consults a version of GPT-4, a large language model from OpenAI, for complex cases.

The scope of AI's impact (and the potential for future changes) means the government is already playing catch-up.

“Policymakers are woefully behind,” Michael Yang, senior managing partner at OMERS Ventures, a venture capital firm, said in an email. Yang's peers have bet heavily on the sector: Rock Health, a venture capital firm, says financiers have invested nearly $28 billion in digital health companies specializing in artificial intelligence.

One problem regulators face, Wachter said, is that unlike drugs, which will have the same chemistry five years from now that they have today, AI changes over time. But governance is taking shape, with the White House and multiple health-focused agencies developing rules to ensure transparency and privacy. Congress is also showing interest; the Senate Finance Committee held a hearing last week on AI in healthcare.

Along with regulation and legislation comes an increase in lobbying. CNBC reported a 185% increase in the number of organizations disclosing AI lobbying activities in 2023. Trade group TechNet launched a $25 million initiative, including the purchase of television ads, to educate viewers about the benefits of artificial intelligence.

“It is very difficult to know how to intelligently regulate AI, since we are so early in the technology's invention phase,” Bob Kocher, a partner at venture capital firm Venrock who previously worked in the Obama administration, said in an email.

Kocher has spoken to senators about AI regulation. He highlights some of the difficulties the healthcare system will face in adopting these products. Physicians, facing malpractice risks, may be wary of using technology they do not understand to make clinical decisions.

An analysis of January Census Bureau data by the consulting firm Capital Economics found that 6.1% of healthcare companies planned to use AI within the next six months, placing the industry roughly in the middle of the 14 sectors surveyed.

Like any medical product, AI systems can pose risks to patients, sometimes in novel ways. One example: they can make things up.

Wachter recalled a colleague who, on a trial basis, assigned OpenAI's GPT-3 the task of writing a prior authorization letter to an insurer for an intentionally “outlandish” prescription: a blood thinner to treat a patient's insomnia.

But the AI “wrote a beautiful note,” he said. The system cited “recent literature” so convincingly that Wachter's colleague briefly wondered whether she had missed a new line of research. It turned out that the chatbot had made the citation up.

There is a risk that AI will amplify bias already present in the healthcare system. Historically, people of color have received less care than white patients. Studies show, for example, that Black patients with fractures are less likely to receive pain medication than white patients. This bias could be set in stone if artificial intelligence is trained on that data and then acts on it.

Research into AI deployed by large insurers has confirmed that this has happened. But the problem is more widespread. Wachter said UCSF tested a product to predict missed clinical appointments. Patients deemed unlikely to show up for a visit are more likely to be double-booked.

The test showed that people of color were more likely not to show up. Whether or not the finding was accurate, “the ethical response is to ask why and if there is something that can be done,” Wachter said.

Hype aside, those risks are likely to keep attracting attention over time. Artificial intelligence experts and FDA officials have emphasized the need for transparent algorithms that are monitored over the long term by humans, including outside regulators and researchers. AI products adapt and change as new data is added, and scientists will keep developing new products.

Policymakers will need to invest in new systems to track AI over time, said University of Chicago Provost Katherine Baicker, who testified at the Senate Finance Committee hearing. “The biggest breakthrough is something we haven't thought about yet,” she said in an interview.

KFF Health News, formerly known as Kaiser Health News, is a national newsroom that produces in-depth journalism about health issues.
