Key point: The California legislature is considering several bills that, if passed, would add to the nation's emerging legal patchwork governing the use of artificial intelligence.

In mid-May, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA) into law, making Colorado the first state to enact legislation governing the use of high-risk artificial intelligence systems. Earlier this year, Utah enacted SB 149, which creates limited obligations for private sector companies deploying generative artificial intelligence, including disclosing its use.

The California legislature is currently considering seven AI-related bills that, if passed, would add to the growing patchwork of state AI laws. All of these bills have passed their chamber of origin and are currently being considered by the opposite chamber. While many state legislatures have already adjourned for the year, California's legislative session does not end until August 31, 2024, meaning there is still time for California to pass one or more of these bills.

Below, we briefly summarize these bills (as they are currently drafted) and identify their current status. We previously discussed four of these bills in our April 25 AI Legislation Update.


AB2930 seeks to minimize algorithmic discrimination in a manner similar to Colorado's recently passed CAIA. The bill broadly prohibits deployers from using, and developers from making available, automated decision tools that result in algorithmic discrimination. The bill defines "algorithmic discrimination" as "unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex […] or other classification protected by state law."

Under the bill, a deployer would have to perform an annual impact assessment describing the intended benefits and potential adverse impacts of its tool. The bill would also require deployers to provide notice to individuals subject to the deployer's automated decision tool and, when feasible, accommodate a person's request not to be subject to such a tool. Lastly, the bill authorizes government agencies to bring civil actions against deployers for violations of the law but, notably, does not provide a private right of action to individuals (a change from prior versions of the bill).

The bill was introduced by Assembly Member Bauer-Kahan on February 15 and passed the Assembly by a 50-14 vote on May 21. It was re-referred to the Senate Judiciary Committee and last amended on June 3.


SB1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Act," is designed to implement clear standards for the largest and most powerful AI models. Among other requirements, developers of AI models trained with large quantities of data would be required to: (1) implement numerous safety and security measures before training the model (including shutdown capabilities); (2) perform capability testing of the model upon completion of the training and submit a certification to the newly established "Frontier Model Division" of the Department of Technology; (3) implement reasonable safeguards on the model before initiating its commercial, public, or widespread use; (4) periodically reevaluate the model; (5) submit annual certifications to the Frontier Model Division; and (6) report each AI safety incident affecting the model to the Frontier Model Division. Note that the bill provides a "limited duty exemption" from some of these obligations for nonderivative models that "reasonably exclude the possibility" of having "a hazardous capability or . . . com[ing] close to possessing a hazardous capability."

The bill was introduced by Senator Wiener on February 7 and passed the Senate on May 21 by a 32-1 vote. It is currently under consideration by the Assembly Judiciary Committee and Committee on Privacy and Consumer Protection.  


AB3211 would require generative AI system providers to: (1) place watermarks containing provenance data into their AI-generated content; (2) provide public tools or services that can determine whether a piece of content was created by the provider's generative AI system; (3) conduct regular "red-teaming exercises" to test whether watermarks can be easily removed or fabricated; (4) publicly disclose the discovery of any vulnerability or failure in their generative AI system; and (5) make certain disclosures to users of the provider's "conversational AI systems."

The bill would also require a "large online platform" to: (1) disclose provenance data found in content distributed to its users; (2) detect and label synthetic content missing watermarks and text-based inauthentic content uploaded by its users; (3) obligate its users to disclose whether the content they upload or distribute is synthetic; and (4) provide a verification process for its users to apply digital signatures to content created by a human being. Additionally, the bill would require that digital cameras and recording devices sold in California be manufactured, or updated via firmware, with an option to place watermarks in the content produced by the device.

The bill passed the Assembly on May 22 by a 62-0 vote. It was assigned to the Senate Committee on Rules on May 23. The bill’s primary sponsor is Assembly Member Wicks.


AB2013 would require developers of AI systems or services to post on their website a high-level summary of the datasets used in the development of the system or service, with certain details described in the bill. The bill would also require developers to disclose whether the system uses "synthetic data generation," which the bill defines as "a process in which seed data are used to create artificial data that have some of the statistical characteristics of the seed data." Note that there is a narrow exemption from these disclosure requirements for AI systems or services whose sole purpose is to "help ensure security and integrity" as defined by pre-existing statute.

The bill was introduced by Assembly Member Irwin on January 31 and passed the Assembly on May 20 by a 56-8 vote. It is currently with the Senate Judiciary Committee.


AB2885 would incorporate a new definition of "artificial intelligence" into existing California law. The latest version of the bill would define AI as "an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

The bill was introduced by Assembly Member Bauer-Kahan on February 15 and passed the Assembly on May 16 by a vote of 71-0. It is currently with the Senate Judiciary Committee.


AB1791 would require social media platforms to redact personal provenance data from content uploaded by a user but prohibit platforms from redacting system provenance data from such content (with exceptions). Violations of the bill's requirements would constitute an unfair business practice under existing California law.

The bill was introduced by Assembly Member Weber on January 4 and passed the Assembly on May 1 by a 50-10 vote. It is currently with the Senate Judiciary Committee.


SB942, the "California AI Transparency Act," would require a covered provider (a business that provides a generative AI system with one million monthly users on average) to create an AI detection tool that a person could use to determine whether text, image, video, audio, or multimedia content was created by the provider's generative AI system. The bill would require that such detection tools be publicly accessible on the internet and protect the personal information of individuals who use them. Additionally, a covered provider would be required to include a visible disclosure that content is AI-generated in any image, text, video, or multimedia content created by its system. Finally, a covered provider would also be required to implement reasonable procedures to prevent downstream use of its generative AI system without such disclosures.

The bill was introduced by Senator Becker on January 17 and passed the Senate on May 21 by a 32-1 vote. It is currently with the Assembly Judiciary Committee and Committee on Privacy and Consumer Protection. 

In addition to the above bills, the legislature had been considering the following bills, which failed to pass their chamber of origin before the May 24 deadline and, consequently, are now dead.

SB970 would have required any person or entity that sells or provides access to any AI technology designed to create synthetic content to provide a consumer warning that misuse of the technology may result in liability for the user. The Department of Consumer Affairs would have been responsible for specifying the content of the consumer warning.

AB3204 would have required a “data digester,” defined as a “covered entity that designs, codes, or produces an [AI] system or service […] by training the system or service on the personal data of 1,000 or more individuals or households,” to register with the California Privacy Protection Agency (Agency) and provide the Agency with certain information about the personal data used to train its AI. The Agency would have been required to create a public internet page where such registration information could be accessed and would have been responsible for the creation of a “Data Digester Registry Fund.”

AB3050 would have required any entity that produces "AI-generated materials" to include watermarks that satisfy future regulations issued by the state's Department of Technology. The bill would have also imposed liability on any such entity that creates deepfakes without permission from the person being depicted.