Will policy, regulation issues stifle AI’s advances in healthcare?

By Eric Wicklund

As healthcare organizations scramble to understand how and where AI can best fit in, policymakers and regulators are racing to catch up as well.

During the recent AIMed Global Summit in San Diego, Alya Sulaiman, a partner in the McDermott Will & Emery law firm who focuses on digital health, described an active landscape in which federal agencies like the Health and Human Services Department’s Office of the National Coordinator for Health IT (ONC), the US Food and Drug Administration, and the Federal Trade Commission are competing with the likes of state attorneys general to regulate the technology.

The ONC, for instance, recently floated a proposal to create new transparency and risk management expectations for artificial intelligence and machine learning technology that aids in decision-making in healthcare, including any technology that integrates with EHRs.

Meanwhile, she said, there are dozens of pieces of legislation on the state level that would place guidelines on such technology as chatbots, AI that helps nurses, and AI that aids in behavioral health treatment.

“There’s an increasing [number] of very specific health AI [bills],” she said, that would add regulations and chains of approval to any health system using the technology within a specific state’s borders.

Sulaiman also noted that AI may soon be referenced in lawsuits in which a health system might be held liable if it doesn’t use available AI technology.

“That’s a real example that we’re starting to see in [potential] litigation,” she said. “AI is being interjected into the standard of care.”

In this fast-moving landscape, the three-day conference offered an opportunity to highlight how the healthcare industry is approaching AI—sometimes called augmented intelligence, rather than artificial intelligence, to focus on the idea of technology assisting clinicians and other healthcare staff rather than replacing them or acting on their behalf.

The conference featured a number of keynotes and panel discussions on the challenges and benefits of using AI in healthcare, which is still very much in its infancy. It included a ‘Shark Tank’-style main stage event in which several start-ups in search of investment funding pitched their business plans to a board of investors. The start-ups encompassed a wide range of AI-in-healthcare ideas, including wound care analytics, drug discovery trials, consumer-facing search engines, identifying and addressing a patient’s risk of falling in a hospital, heart health, oxygen therapy, and identifying and addressing mental health issues in high school students.

David Higginson, executive vice president and chief innovation officer at Phoenix Children’s Hospital and a participant in more than one panel at the event, said healthcare organizations are moving slowly but steadily forward with AI. They’re launching small programs that address care gaps or “low-hanging fruit” to score easy wins, then scaling up and out to tackle bigger issues.

“It’s good to know we’re getting there,” he said. “We have to take those chances.”

At the same time, healthcare leaders need to be aware of the shifting policy and regulatory landscape.

On a state level, 23 attorneys general have submitted a letter to the National Telecommunications and Information Administration (NTIA), a part of the US Department of Commerce, calling for transparency and accountability with AI technology. They also argued that AGs “should have concurrent enforcement authority in any Federal regulatory regime governing AI.”

“AI is increasingly a part of our lives, influencing transactions and decisions big and small,” California Attorney General Rob Bonta said in a June 14 press release announcing that he’d joined the coalition. “We need policies governing this technology that prioritize transparency, audits, and accountability, and that put consumer protection front and center.”

At the same time, the American Medical Association (whose president-elect, Jesse Ehrenfeld, MD, MPH, gave a keynote at the AIMed conference) addressed AI during its recent Annual Meeting. The organization’s House of Delegates announced plans to “develop principles and recommendations on the benefits and unforeseen consequences of relying on AI-generated medical advice and content that may or may not be validated, accurate, or appropriate.”

“AI holds the promise of transforming medicine,” AMA Trustee Alexander Ding, MD, MS, MBA, a practicing physician and assistant professor at the University of Louisville School of Medicine, said in a press release issued by the AMA. “We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation.”

“We’re trying to look around the corner for our patients to understand the promise and limitations of AI,” he added. “There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.”

The AMA also adopted a policy regarding the use of AI in one of the more controversial topics in healthcare: prior authorizations. This follows a ProPublica report claiming that Cigna denied more than 300,000 claims over two months through an AI-assisted process, with its doctors spending an average of 1.2 seconds on each claim.

“The use of AI in prior authorization can be a positive step toward reducing the use of valuable practice resources to conduct these manual, time-consuming processes,” AMA Board Member and Pennsylvania physician Marilyn Heine, MD, said in an AMA press release. “But AI is not a silver bullet. As health insurance companies increasingly rely on AI as a more economical way to conduct prior authorization reviews, the sheer volume of prior authorization requirements continues to be a massive burden for physicians and creates significant barriers to care for patients. The bottom line remains the same: we must reduce the number of things that are subject to prior authorization.”

Regardless of the challenges around regulation and policy, the mood at the AIMed conference was that healthcare stands in a good position to benefit from the technology, as long as researchers and providers move slowly and steadily and don’t rush forward expecting to solve all of healthcare’s problems within a few months, or even years.

Healthcare is “a complex system,” Robert Groves, MD, executive vice president and chief medical officer for Banner | Aetna, said in a keynote on the last day of the event. “There are just so many boxes to select, so many things to do … [but] complexity is the nature of advancement.”

The key, said Groves and several others, is to understand that AI can help as long as it’s used as a tool and not a replacement. In the end, Groves said, it’s important to “value caring over curing.”

Eric Wicklund is the Innovation and Technology Editor for HealthLeaders.