California’s New Legislation on AI in Healthcare

California is implementing a groundbreaking law that regulates insurers' use of artificial intelligence (AI) algorithms in prior authorization and coverage decisions. Prompted by concerns over bias and accountability, the legislation takes effect on January 1, 2025, and imposes strict limits on how algorithmic determinations can influence healthcare decisions.

The law, known as the Physicians Make Decisions Act (SB 1120), mandates that any use of AI in medical decisions be grounded in a patient’s individual medical history and clinical circumstances. It bars decisions based solely on group datasets and reinforces that a human physician must approve any algorithm-based decision. The legislation was notably supported by physician organizations, medical groups, and the California Hospital Association, though it faced opposition from insurance industry representatives.

Human Oversight Remains Paramount

State Senator Josh Becker, who sponsored the legislation, emphasized the importance of keeping humans involved in healthcare decisions. Citing the complexity of individual medical histories, Becker warned that algorithms can produce erroneous or biased decisions when qualified professionals are not overseeing them.

Dr. Tanya W. Spirtos, President of the California Medical Association, stressed the value of AI as a supportive tool rather than a replacement for physicians’ decision-making capabilities.

Establishing Guardrails and Preventing Bias

The law establishes clear guardrails around AI applications in healthcare, demanding that algorithms be applied fairly and equitably. Dr. Sara Murray, Chief Health AI Officer at UCSF Health, raised concerns about inherent biases in AI systems, referencing research that demonstrated disparities in care driven by algorithmic decisions. As she noted, the accuracy of AI tools depends heavily on the data used to train them, which makes transparency about those datasets essential to ensuring they reflect the populations being served.

Importantly, the law gives state regulators new accountability measures, granting them the authority to scrutinize AI implementations in healthcare, mitigate the risk of bias, and ensure ethical practice.

A Growing Need for Regulation

This legislative move reflects a broader trend among state lawmakers to impose more stringent regulations on AI, particularly in the absence of federal oversight. As legal experts have noted, AI-based algorithms have been a blind spot in existing healthcare laws, a gap the new law aims to close. Moreover, while certain AI applications fall under FDA scrutiny, algorithms like those targeted by SB 1120 remain largely unregulated, underscoring the need for state-level action.

The American Medical Association (AMA) has similarly voiced concerns about the need for oversight, advocating for comprehensive guidelines to ensure AI serves the interests of patients without undermining the human element in medical care.

Future Implications for AI Use in Healthcare

While the new law asserts the importance of human oversight, experts recognize the efficiency gains AI can bring to prior authorization. Properly trained AI systems could streamline decision-making and reduce administrative burdens, improving the experience for patients and healthcare providers alike. However, there is also a risk of “review creep,” in which growing reliance on AI triggers ever more intensive layers of oversight and review.

A companion law, AB 3030, requires providers to inform patients when communications have been generated by AI unless a licensed healthcare provider has reviewed them beforehand. This transparency is crucial to building trust and ensuring patients understand how their care is being managed.

Increasing Legislative Momentum

As California sets a precedent, it is anticipated that other states will follow suit in the coming years, propelled by growing concerns regarding the use of AI in healthcare. The AMA has indicated that further legislative activity regarding AI is expected in 2025, driven by an increase in reports about the misuse of AI in denying healthcare claims.

In light of ongoing lawsuits against insurers over their use of AI algorithms, this legislation marks a significant step toward ensuring ethical standards in healthcare AI and protecting patient rights. The emphasis on human oversight and accountability is essential to the ethical and effective integration of AI into the healthcare sector.