Keypoint: Since our inaugural post on US artificial intelligence legislation, the first AI bill of this year is set to pass in Utah; new bills have been introduced in Connecticut, Illinois, and New Jersey; and several bills have stalled in Virginia, Rhode Island, and Washington.
Below is our second update on the status of pending US artificial intelligence (AI) legislation that would affect the private sector.
Table of Contents
- What’s New
- Bill Tracker Charts and Map
- Recent Husch Articles and Resources on AI-Related Issues
1. What’s New
In this second alert, we have identified two new types of legislation to better categorize the volume of new bills. We break down our updates by the following five types of bills: (a) algorithmic discrimination; (b) automated employment decision making; (c) AI Bill of Rights; (d) “working group” bills; and (e) other types of bills that do not fit into those categories but would, if passed, impact private entities. Note that some of the bills blur the lines between the various categories. For a more detailed summary of certain bills referenced below, please see our first update from February.
Although not covered below, it should be noted that the California Privacy Protection Agency published revised draft regulations on automated decisionmaking technology (ADMT) in connection with its March 8 Board meeting. The Agency has not yet initiated formal rulemaking on the ADMT regulations; however, it could receive Board authority to do so at the March 8 meeting.
a. Algorithmic Discrimination
The first set of bills we are tracking seeks to prevent “algorithmic discrimination,” which is generally defined as an automated decision tool’s differential treatment of an individual or group based on their protected class. As a reminder, these bills apply to both AI developers and deployers, which include (but are not limited to) employers that use such tools to make employment decisions. This category therefore overlaps with our next category of “automated employment decision making” bills.
The first update in this category is that California’s AB 331 has a new bill number. The original bill died under procedural rules and was refiled as AB 2930 on February 15.
In Connecticut, lawmakers led by Senator Maroney filed SB 2 on February 21. The bill was assigned to the Joint Committee on General Law, which held a public hearing on February 29.
The Connecticut bill focuses on “high-risk artificial intelligence systems” and requires developers and deployers of such systems to use reasonable care to avoid algorithmic discrimination and make certain disclosures. Among other requirements, deployers must (1) implement a risk management policy and program for their high-risk AI systems; (2) complete impact assessments; (3) conduct annual reviews to ensure against algorithmic discrimination; and (4) provide consumer notifications regarding such systems. Of note, the bill creates a rebuttable presumption that developers and deployers have satisfied the reasonable care standard if their other respective obligations under the law are met. Other provisions are reserved for developers of “generative artificial intelligence systems” and “general purpose artificial intelligence models,” with a general consumer disclosure requirement for any developer and deployer of any “artificial intelligence system.” The bill would also create an Artificial Intelligence Advisory Council to assess future AI regulation and state government use. Finally, the bill contains several prohibitions on the use of deepfakes.
Another new bill is Illinois’s HB 5322, which was introduced and referred to the House Rules Committee on February 9. The bill requires deployers and developers of automated decision tools to conduct annual impact assessments and disclose such assessments to the state’s Attorney General upon request. Developers must also disclose their tool’s intended uses and known limitations to deployers and implement a publicly available policy summarizing their tools and how they manage reasonably foreseeable risks of algorithmic discrimination.
In Rhode Island, HB 7521 appears to have died, as the House Innovation, Internet, and Technology Committee recommended on February 15 that the bill be held for further study. Similarly, Virginia’s HB 747 has been continued to 2025. Washington’s HB 1951 also appears unlikely to move forward this year: the legislature’s crossover deadline has passed, and the legislative session ends on March 7 with the bill still in committee.
Other bills we are tracking in this category that have no new updates since our last alert include: (1) Hawaii’s HB 1607 and SB 2524; (2) Illinois’ HB5116; (3) New York’s A8195; (4) Oklahoma’s HB 3835; and (5) Vermont’s H.710 and H.711. We also expect Colorado’s algorithmic discrimination bill to be released soon.
b. Automated Employment Decision Making
Our next category looks at bills that specifically apply to the use of AI in the employment context. These bills seek to regulate AI tools, commonly referred to as “automated employment decision tools” (AEDTs) or “predictive data analytics,” used to make hiring, firing, promotion, and compensation decisions.
Three new bills were recently introduced in New Jersey. A3854 and A3855 were introduced on February 22 and, similar to the New Jersey Senate bill S1588 (which remains in committee), would prohibit the sale or use of AEDTs unless such tools are subject to independent bias audits and summary results of such audits are publicly disclosed. A3855 would further require that employers provide affected candidates advance notice of the tool and instructions on alternative selection processes. Both bills have been referred to the Assembly Science, Innovation and Technology Committee.
New Jersey’s third new bill, A3911, was introduced on February 27 and would regulate the use of AI-enabled video interviews in the hiring process. Any employer that asks applicants to record video interviews and uses AI to analyze those videos must do the following before requesting an interview: (1) notify the applicant that AI may be used, (2) provide the applicant with information explaining how the AI works and what types of characteristics it uses to evaluate applicants, and (3) obtain written consent from the applicant. The bill also contains related obligations to collect and report certain demographic data to the state’s Department of Labor and Workforce Development.
Vermont’s H.114 remains in the House Committee on General and Housing where members discussed the bill during a committee meeting on February 21. That bill applies to the collection of employee data as well as “automated decision systems” used for employment-related decisions, judgments, or conclusions.
Another new bill is Maryland’s HB 1255, which would prohibit the use of an “algorithmic decision system” (defined to include computational processes that facilitate decision making) in connection with screening applicants for employment or otherwise helping determine terms and conditions of employment unless (1) the tool was subject to an “impact assessment” in the prior year and is subject to an impact assessment each subsequent year; and (2) the impact assessments determine use of the tool would not involve a “high risk action” (i.e., result in discrimination or have a disparate impact on a group of individuals on the basis of an actual or perceived characteristic). The law would also require employers to notify applicants that an AEDT was used in connection with their application, that the tool was subject to an impact assessment, and that the tool assessed the job qualifications or characteristics of the applicant.
In New York, Assembly Member George Alvarez introduced two bills on February 28 focusing on AEDTs. The first, A9314, would prohibit employers from using AEDTs unless they conduct an annual disparate impact analysis of the tool and disclose a summary of the results publicly on their website and to the New York Department of Labor. The bill charges the New York Attorney General and the Commissioner of the Department of Labor with initiating investigations of suspected violations.
The second bill, A9315, is much more expansive. That bill would similarly prohibit the use of AEDTs unless the tool undergoes an annual bias audit conducted by an independent and impartial party and employees are given advance notice of the tool. Where an audit demonstrates disparate treatment, the employer must take reasonable and appropriate steps to remedy the impact and disclose such steps. The bill also prohibits employers from relying “solely” on an AEDT’s output to make employment decisions and requires “meaningful human oversight” of the tool’s use. Employers also cannot require employees to be subject to the AEDT and must allow employees to request a reevaluation of the AEDT’s results. Next, the bill would impose additional (and significant) restrictions on an employer’s use of electronic monitoring tools to collect employee data, prohibiting the use of such tools unless (1) there is a legitimate purpose for the collection; (2) employees are given a detailed notice in advance of the collection; and (3) the collected data is destroyed after its initial purpose is satisfied or after the particular employment relationship ends. Notably, the bill prohibits employers from broadly stating in their notice that monitoring “may” take place or that the employer “reserves the right” to monitor employees. If passed, A9315 would be enforced by New York’s Attorney General and by harmed employees, who would be granted a private right of action. In any civil action, joint and several liability would extend to “any person, employer, vendor, or other business entity that used, sold, distributed, or developed” the AEDT or electronic monitoring tool.
Finally, H 7786 was introduced in Rhode Island. That bill would, among other things, require deployers to conduct impact assessments prior to deploying a consequential artificial intelligence decision system.
Other AEDT bills we are tracking that have no new updates include: (1) Illinois’ HB 3773; (2) Massachusetts’ H.1873; (3) New Jersey’s S1588; and (4) New York’s A7859, S5641A, and S7623A.
c. AI Bill of Rights
This is a new category of bills that seek to establish an AI Bill of Rights, providing state residents with specified rights regarding the use of AI.
The only update this week is to Oklahoma’s HB 3453, which passed the House Government Modernization and Technology Committee on February 20. The bill would grant Oklahoma citizens the right to know when (1) they are interacting with AI; (2) their data is being used to inform AI; and (3) they are consuming images, text, contracts, or other documents generated by AI. The bill would also grant rights to rely on a watermark to verify the authenticity of a creative product and to approve derivative media generated by AI that uses a person’s audio recordings or images.
The other bills we are tracking in this category are New York’s A8129 and its companion Senate bill S8209, which both remain in committee.
d. “Working Group” Bills
Another new category in this post is what we call “working group” bills. These bills primarily create government commissions or working groups to study the implementation of AI technologies and develop recommendations for future regulation.
The most significant update in this category is Utah’s SB 149, which passed both chambers as of February 28 and will be sent to Governor Cox for signature. Sponsored by Senator Kirk Cullimore, author of the Utah Consumer Privacy Act, the bill (1) establishes liability under the state’s consumer protection laws for certain uses of generative AI that lack proper disclosure; (2) creates an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program to analyze and recommend potential legislation regarding AI; and (3) implements a unique “regulatory mitigation” licensing scheme under which participants in the AI Learning Laboratory Program can avoid regulatory enforcement while developing and analyzing new AI technologies.
In other updates, Florida’s HB 1459 was reported out of committee and referred to the House calendar for a second reading on February 23. This bill would, among other things, create a Government Technology Modernization Council to study the development of AI technologies and recommend regulations. In Hawaii, HB 2176 was amended by the Consumer Protection and Commerce Committee after a public hearing and referred to the Finance Committee on February 16. Its companion bill, SB 2572, however, was deferred by the Senate Commerce and Consumer Protection Committee on February 15.
The other bill we are tracking in this category, Massachusetts’ SB 2539, has no new updates.
e. “Other” Bills
Our final category of “other” bills includes bills that do not neatly fit into the above categories. In this category, Oklahoma’s HB 3577 was passed by the House Government Modernization and Technology Committee on February 21. This bill requires insurers to disclose whether AI-based algorithms will be used in utilization review processes and further requires insurers to submit their algorithms and training data sets to the state’s Insurance Department to certify that bias risks have been minimized.
The final bill in this “other” category is California’s SB 1047, which has no new updates since our last alert.
2. Bill Tracker Charts and Map
For more information on all the state bills introduced to date, including links to the bills, bill status, last action, hearing dates, and bill sponsor information, please see the following charts:
- Algorithmic Discrimination Bills
- Automated Employment Decision Making Bills
- Other AI-Related Bills of Note
To access our AI tracker map, click the following link:
3. Recent Husch Articles and Resources on AI-Related Issues