Key point: Two notable state legislative developments occurred this week: the Massachusetts Senate unanimously sent Senate Bill 2608, a comprehensive data privacy bill that would add to the national patchwork of state laws, to the state House, and California enacted the nation’s second substantive state law regulating AI development.
On September 25, 2025, the Massachusetts Senate unanimously passed SB 2608, the Massachusetts Data Privacy Act (MDPA), which would establish comprehensive new requirements for data controllers. Specifically, SB 2608 would grant consumers rights to confirm, access, correct, delete, and port their personal data, and in some respects could go beyond the requirements of the California Consumer Privacy Act. Critically for businesses, the MDPA would ban the sale of sensitive data and of all personal data of minors, limit data transfers, create opt-out rights for targeted advertising, and block such ads entirely for minors. Notably, SB 2608 would extend the ban on the sale of geolocation data not only to Massachusetts residents but also to visitors to the state for any reason, including those traveling for health care. The bill now moves to the House of Representatives for consideration.
In its current form, SB 2608 does not include a private right of action, but it would grant broad enforcement powers to the Attorney General, who could impose civil penalties of up to $5,000 per violation.
On September 29, 2025, California enacted SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). In contrast to other pending legislation focused on chatbots or on protecting children (as we discussed previously), SB 53 aims to promote safe and responsible innovation by requiring certain AI developers to meet new transparency standards. Effective January 1, 2026, the law will establish CalCompute, a new consortium housed within the California Department of Technology, which itself is part of the Government Operations Agency. CalCompute will be tasked with developing a framework that ensures AI development and deployment are “safe, ethical, equitable, and sustainable.” In addition, SB 53 requires covered developers to publish AI system documentation (e.g., model cards and safety policies), disclose known limitations and potential risks, and report critical incidents to the California Office of Emergency Services. The Department of Technology must also make annual recommendations based on changing technological developments and international AI standards.
Finally, SB 53 creates a dual enforcement structure: the California Attorney General may bring civil actions and seek penalties of up to $1 million per violation, while individuals who raise concerns under the law receive whistleblower protections from retaliation.
Together, these developments reflect both an ongoing trend and a new one: in the face of federal inaction, states are continuing to regulate data privacy and are now expanding into artificial intelligence. Whether it is California’s push for AI transparency or Massachusetts’s restrictions on geolocation and sensitive data, organizations should prepare for a patchwork of compliance obligations. The message is clear: companies operating in or with these states will need to build proactive, state-specific strategies around privacy and AI governance.