Key point: The California Legislature has passed SB 243, a bill that would impose new requirements on the development, deployment, and use of artificial intelligence (AI) companion chatbots, with a focus on transparency, accountability, and risk mitigation.

California’s Push for AI Regulation

So far this year, 61 AI-related bills have been introduced in the California Legislature. To date, California has opted to legislate in a more piecemeal manner than other jurisdictions. Rather than adopting one comprehensive bill, it has created a patchwork of legislation, with each bill aimed at a different sector or issue.

One of the proposed California bills, SB 243, aims to address growing concerns about the risks and impacts of AI technologies. This bill, which would regulate AI companion chatbots to protect minors and vulnerable users, has passed both the State Assembly and Senate, with bipartisan support, and now heads to Governor Gavin Newsom’s desk. If enacted, it would establish new legal obligations for organizations that create or deploy AI systems, particularly those with the potential to significantly affect safety, privacy, and civil rights.

Subject to some exceptions, key obligations include:
  • Mandatory Impact Assessments: Operators of consumer-facing AI systems, including companies that develop, deploy, or otherwise control access to “companion chatbots” (Operators), must conduct and document assessments to evaluate potential risks and benefits.
  • Transparency Requirements: Operators must disclose the use of AI in products and services, including clear labeling and public reporting of specific metrics (for example, the number of detected instances of suicidal ideation or self-harm).
  • Risk Mitigation Measures: Operators must take proactive steps to minimize foreseeable risks, such as risks of bias, discrimination, and unintended consequences.
  • Oversight and Enforcement: The bill proposes the creation of a state-level AI oversight body with authority to investigate, audit, and enforce compliance. It also provides a private right of action for individuals, as well as enforcement and oversight by state agencies.

What the Bill Means for Operators

If signed into law, SB 243 would represent one of the most stringent legal frameworks for AI in the United States. Key implications include:

  • Operational Changes: Entities that operate companion chatbot or virtual assistant platforms would need to implement new governance structures, document how the chatbot is used and what it generates, and adopt technical safeguards, including reminders that users are interacting with AI, to comply with the law.
  • Legal Exposure: Failure to meet the bill’s requirements could result in significant penalties, enforcement actions, and reputational harm.
  • Innovation vs. Regulation: While the bill seeks to protect consumers and the broader public, some stakeholders warn that its broad and prescriptive rules could stifle innovation and investment in AI.

How this Bill Compares to Other Current AI Laws

Colorado AI Act (2024)

  • Scope: Applies to developers and deployers of “high-risk” AI systems that make consequential decisions about consumers (e.g., employment, housing, credit).
  • Key Requirements: Like SB 243, the Colorado AI Act requires “reasonable care” to avoid algorithmic discrimination, annual impact assessments for high-risk systems, risk mitigation measures, and consumer disclosures.
  • How It’s Different from CA SB 243: The Colorado law is comprehensive, cross-sector, and risk-based. It defines “high-risk” systems and “algorithmic discrimination,” creates an AI Impact Task Force, and provides for independent enforcement by the Colorado Attorney General.

EU AI Act (2024)

  • Scope: Applies to providers, users, and importers of AI systems in the EU, regardless of where the provider is established.
  • Key Requirements:
    • Risk-Based Approach: Classifies AI systems as unacceptable, high, limited, or minimal risk.
    • High-Risk Systems: Subject to strict requirements—risk management, data governance, transparency, human oversight, and conformity assessments.
    • Transparency: Users must be informed when interacting with AI (e.g., chatbots, deepfakes).
    • Prohibited Practices: Bans certain uses of AI (e.g., social scoring, real-time biometric surveillance).
    • Enforcement: Significant penalties for non-compliance; oversight by national authorities and the European Commission.
  • How It’s Different from CA SB 243: The EU AI Act is more comprehensive and stricter than SB 243. It applies extraterritorially to any AI system affecting EU residents, setting a global precedent for risk-based AI regulation.

Practical Implications and Next Steps

Organizations operating in California—or those offering AI-enabled products and services to California residents—should begin preparing for potential compliance obligations. Recommended actions include:

  • Review AI System Inventories: Identify which systems offering companion chatbots or virtual assistants may be subject to CA SB 243’s requirements.
  • Update Policies and Procedures: Develop or revise internal policies to address transparency, risk assessment, and mitigation.
  • Engage with Policymakers: Monitor legislative developments and participate in public consultations to help shape implementation and future AI legislation.

Governor Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it will take effect January 1, 2026, making California the first state to require AI chatbot operators to implement certain protocols.

Heidi Salow

Heidi counsels clients on a wide range of privacy, cybersecurity, and artificial intelligence laws, regulations, and standards, including the CCPA, FERPA, EU AI Act, EU and U.K. GDPR, HIPAA, FCRA, GLBA, and NIST frameworks, as well as various U.S. state laws and regulations touching on healthcare and financial privacy, artificial intelligence, biometrics, and information security. She draws on a notable background as one of the first U.S. attorneys focused on data privacy and cybersecurity, as well as experience as a corporate executive. Heidi previously held executive roles at two large multinational corporations, Thomson Reuters and Leidos.

Shannon Kapadia

Formerly in-house at a major technology company, Shannon advises clients on data privacy, technology transactions, and cloud services contracting.

After growing up observing the realities of business ownership, Shannon brings a business mindset to legal challenges and serves as a strategic partner for clients navigating digital transformation, data privacy, and commercial contracting. She most often represents organizations as they negotiate contracts with large technology companies, including those involving AI governance. Shannon’s practice is rooted in data privacy matters, and she is also deeply familiar with the intellectual property issues that often arise in tech agreements.