The rapid advancement of artificial intelligence (AI) is making the technology integral to more and more aspects of daily life. Yet these advances come with well-documented dangers and flaws. In the United States, the absence of substantial federal regulation has left it to states like California to craft balanced rules that preserve the benefits of AI while ensuring public safety.

A critical component of such regulation is liability: ensuring that AI companies can be held accountable when their products cause harm. In other industries, legal accountability is well established; its absence in AI raises the concern that companies will prioritize profits over public safety.

In September, Governor Gavin Newsom vetoed Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which sought to create legal liability for high-risk AI systems. The bill was narrowly scoped, holding companies accountable only for mass-casualty events or damages exceeding five hundred million dollars, yet it won strong legislative support and sparked fierce public debate. Newsom himself remarked that it generated enough contention to create its own “weather system.”

In his veto letter, Newsom called instead for a California-specific approach grounded in empirical evidence and science, and proposed a working group to study the issue further. That move is reminiscent of tactics the tobacco and fossil-fuel industries have used to stall legislative action.

Now that the report has been issued, it explores many aspects of AI policy but conspicuously omits any substantive discussion of AI liability. Because it arrived during the 2025 legislative cycle, its timing suggests that significant progress on liability legislation may not come until 2026. While Newsom positions himself against Trump’s proposed moratorium on state AI legislation, his actions are hindering the very legislative efforts that would enforce accountability in AI.

Because the report offers no clear guidance on liability legislation, lawmakers may conclude that future bills will meet the same fate as SB 1047. Meanwhile, Californians remain vulnerable to AI-related harms such as scams, deepfakes, and discrimination, making the need for regulation all the more pressing.

One potential avenue for accountability is a statewide ballot measure, echoing the path California took to establish its privacy laws. The longer the Legislature delays, the more likely other actors are to step in, risking ad hoc rules that fail to address the core issues.

Alternatively, leaving the courts to settle AI liability case by case would mean years of delay, during which citizens remain exposed to irreversible harms posed by AI technologies.

Despite the report’s silence on liability, there is still room for legislation informed by the principles it endorses. Assemblymember Buffy Wicks’s AB 853, for instance, seeks to enhance transparency by requiring social media platforms to label AI-generated content, while Assemblymember Rebecca Bauer-Kahan’s AB 1018 would mandate third-party risk assessments of AI decision-making systems.

Moreover, Senator Scott Wiener’s SB 53 aims to revive key provisions of SB 1047, including whistleblower protections for employees of AI labs. Although the report does not expressly endorse any specific legislation, it can be read as implicit support for these initiatives. Still, the need for a more comprehensive regulatory framework remains.

While the Governor stated that the report’s aim was to provide a framework for safely deploying generative AI, its recommendations ultimately fall short of establishing the necessary guardrails. The mounting evidence of AI-related harms, from scams targeting vulnerable populations to discrimination by automated decision-making systems, demands urgent regulatory attention. The critical question is: how much harm will it take to trigger decisive action?

Effective regulation would give the AI industry clarity and companies direction. It is time for California’s citizens and lawmakers to demand accountability and enact robust laws that protect society from the real and emerging risks of AI technologies already in use.