Keypoint: In our first regular update on the state of US artificial intelligence law, we provide an overview of proposed state AI bills affecting the private sector.
Below is our first regular update on the status of US artificial intelligence laws. In this update, we provide an overview of proposed state artificial intelligence bills impacting the private sector and links to recent firm articles on various AI-related issues.
Table of Contents
- What’s New
- Bill Tracker Chart and Map
- Recent Husch Articles on AI-Related Issues
1. What’s New
This is our first alert. We are tracking three types of state bills: (1) algorithmic discrimination bills; (2) automated employment decision-making bills; and (3) other bills that do not fit into those categories but would, if passed, affect private entities. As discussed below, some of the bills blur the lines between these categories; we have grouped them as appropriately as possible.
a. Algorithmic Discrimination
With respect to algorithmic discrimination, we have identified 13 states that have introduced such bills to date. Those bills – which are identified in the chart in the following section – have been filed in California, Florida, Hawaii, Illinois, Massachusetts, New York, Oklahoma, Rhode Island, Utah, Vermont, Virginia, Washington, and Connecticut, in the order discussed below.
In California, AB331, which remains pending from last year’s legislative session, would amend the state’s anti-discrimination laws to prohibit algorithmic discrimination by an “automated decision tool” (ADT). The bill would require deployers and developers of such tools to (1) perform annual impact assessments; (2) submit those assessments to the state’s Civil Rights Department; (3) implement governance programs and safeguards to prevent the reasonably foreseeable risks of algorithmic discrimination; and (4) publish policies regarding the tools. Deployers would also be required to disclose the use of such tools to subjected individuals and accommodate opt-out requests in certain circumstances. Developers would likewise be required to provide deployers with certain disclosures regarding the ADT’s use and limitations. The bill further provides a private right of action against any deployer, including employers, for using an ADT that results in algorithmic discrimination.
In Florida, HB 1459 would create a Government Technology Modernization Council to “study and monitor the development and deployment of new technologies and provide reports on recommendations for procurement and regulation of such systems.” The bill would also establish transparency obligations, requiring certain disclosures from entities or persons who produce or offer AI content or technology to the Florida public for a commercial purpose. Violations of these standards would constitute unfair and deceptive trade practices under state law. On February 2, 2024, the bill was referred to the Judiciary Committee for review.
The Hawaii legislature is currently considering two sets of bills. HB1607 and its companion, SB2524, would prohibit covered entities from making “algorithmic eligibility determinations” or “algorithmic information availability determinations” in a discriminatory manner. The bills would also require users of such algorithmic decision-making tools to provide notice of the use and explain how the tools use an individual’s personal information. Finally, the bills would require annual audits of such tools and obligate covered entities to submit their audit results to the state’s attorney general each year. Both bills were referred to committee on January 24, 2024.
The second set of companion bills from Hawaii comprises HB2176 and SB2572. HB2176 would create a temporary Artificial Intelligence Working Group to develop acceptable use policies and guidelines for the regulation, development, and use of AI technologies in the state. Similarly, SB2572 would establish the Office of Artificial Intelligence Safety and Regulation within the Department of Commerce and Consumer Affairs to regulate the development, deployment, and use of AI. That bill sets forth a number of “precautionary principles” that the Office must adhere to in developing AI regulations and further prohibits any person from deploying AI products in the state unless affirmative proof establishing the product’s safety is submitted to the Office. HB2176 was heard by the House Committee on Consumer Protection & Commerce on February 13, 2024. SB2572 is set for a public hearing on February 15, 2024.
Illinois is considering HB5116, which would require any deployer of an ADT to perform an annual impact assessment of the tool. Before the tool is used to make a “consequential decision,” a deployer would have to disclose use of the tool to any subjected individual. The bill would also require deployers to accommodate individual opt-out requests under certain circumstances and to implement a governance program and safeguards to manage the “reasonably foreseeable risks of algorithmic discrimination” associated with the tool. Additionally, deployers would be required to make publicly available a policy summarizing the types of ADTs in use and how risks of algorithmic discrimination are being minimized. Finally, deployers would be prohibited from using an ADT that results in algorithmic discrimination and would be subject to private rights of action for such violations.
In Massachusetts, SB2539 is a substantial piece of legislation that largely focuses on cybersecurity issues but also contains provisions regarding ADTs. Specifically, the bill would create an automated decision-making control board to study such tools and issue appropriate regulations, limits, standards, and safeguards. The bill was referred to committee on December 28, 2023.
In New York, the legislature is considering A8129, S8209, and A8195. A8129 and S8209 are companion bills that would enact an “artificial intelligence bill of rights” for New York residents, which includes, among other rights: (1) protections against algorithmic discrimination; (2) the right to have agency over one’s data; (3) the right to know when an automated system is being used; (4) the right to understand how and why an automated system contributed to outcomes that affect the individual; (5) the right to opt out of an automated system; and (6) the right to work with a human in place of an automated system. A8129 was referred to committee on January 3, 2024, and S8209 was referred to committee on January 12, 2024.
A8195 is a comprehensive bill, titled the “Advanced Artificial Intelligence Licensing Act.” The Act would provide the Department of State broad authority to regulate the development, use, and deployment of AI systems in New York. The bill’s primary target is “high risk advanced artificial intelligence systems,” which it defines as AI systems that can cause “significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment.” As to operators of such “high risk” AI systems, the Act would, among other requirements, (1) create a robust licensing regime; (2) require operators to establish an ethics and risk management board to assess the ethical implications of their AI systems and submit annual reports to the Secretary of State; (3) empower the Secretary to conduct periodic evaluations; (4) limit the sharing of a person’s biometric information; and (5) apply recordkeeping requirements and an ethical code of conduct. The bill also prohibits the development or use of any AI system that causes physical or psychological harm, infringes on an individual’s liberty or financial interests, or acquires sensitive personal information without authorization. A8195 was referred to committee on January 3, 2024.
The Oklahoma legislature is currently reviewing HB 3835, titled the “Ethical Artificial Intelligence Act.” As currently drafted, the bill applies to ADTs used to make “consequential decisions” regarding a list of certain subjects, including employment. The law would prohibit any person or entity from using such tools in a way that results in algorithmic discrimination, providing a private right of action for any violation. The bill would further require annual impact assessments for automated decision tools and would require developers of ADTs to publish policies summarizing the types of ADTs they make available and how they manage the reasonably foreseeable risks of algorithmic discrimination from such ADTs. The bill passed a second reading and was referred to committee on February 6, 2024.
Rhode Island’s pending legislation, HB7521, is similar to the above bill in Oklahoma. HB7521 also applies to ADTs used to make consequential decisions in certain areas, prohibiting the use of such tools that result in algorithmic discrimination and providing a private right of action for violations. The bill would also require users of such ADTs to perform annual risk-based impact assessments and ADT developers to annually assess their tools. Additionally, among other requirements, the bill would require deployers of ADTs to notify subject persons of the tool’s use and accommodate a person’s request to opt out. Rhode Island’s bill has been referred to committee and is scheduled for a hearing on February 15, 2024.
In Utah, the State Senate is considering SB149, sponsored by Senator Kirk Cullimore, author of the Utah Consumer Privacy Act. The bill would (1) establish liability for use of generative AI that violates consumer protection laws if not properly disclosed; (2) create an Office of Artificial Intelligence Policy and AI Learning Laboratory Program to analyze and recommend potential legislation regarding AI; and (3) create a “regulatory mitigation” licensing scheme whereby participants in the AI Learning Laboratory Program can avoid regulatory enforcement while developing and analyzing new AI technologies. The bill was amended on February 8, 2024, and passed a second reading in the Senate on February 12, 2024.
Vermont is considering two bills — H.710 and H.711 — both sponsored by Representative Monique Priestley. H.710 is another comprehensive bill that largely applies to developers and deployers of “high-risk artificial intelligence systems,” defined as an AI system that “makes or is a controlling factor in making a consequential decision.” The bill would require developers and deployers to use reasonable care to avoid algorithmic discrimination and impose numerous disclosure requirements upon developers. The bill also requires deployers of such tools to (1) implement a risk management policy and program for the AI system; (2) conduct annual impact assessments; (3) notify individuals subject to the tool; and (4) disclose the uses and explanations of the tool on their public-facing website. The bill further imposes requirements for developers of generative AI systems. H.710 was referred to committee on January 9, 2024.
Turning to Vermont’s H.711, that bill would primarily apply to developers and deployers of “inherently dangerous artificial intelligence systems,” defined as a “high risk artificial intelligence system, dual-use foundational model, or generative artificial intelligence system.” The sale or use of such AI systems would be prohibited unless certain testing, evaluation, verification, validation, and risk management policy requirements are satisfied. The bill also provides a private right of action for violations. The bill was referred to committee on January 9, 2024.
In Virginia, the House is considering HB 747, which imposes numerous disclosure requirements upon developers of “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that is specifically intended to autonomously make, or be a controlling factor in making, a consequential decision.” The bill also establishes multiple operating standards for deployers of “high-risk” AI systems. Those standards include (1) avoiding algorithmic discrimination; (2) implementing a risk management policy and program; (3) completing an impact assessment; (4) disclosing the use of such AI tools to subjected individuals; and (5) publishing a statement that summarizes how the deployer manages any foreseeable risk of algorithmic discrimination. Notably, the bill includes many exemptions to its requirements and specifically excludes from the definition of “high-risk artificial intelligence system” any system “intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review, or (iv) perform a preparatory task to an assessment relevant to a consequential decision.” With the Virginia legislature adjourning in March, the bill was continued to the next legislative session on February 5, 2024, and is still in committee.
In Washington, the pending HB 1951 would prohibit deployers from using automated decision tools that result in algorithmic discrimination. The bill would also require deployers and developers of automated decision tools to complete annual impact assessments and submit such assessments to the state attorney general’s office upon request. The bill further requires developers to provide deployers with certain documentation regarding the tool’s use and limitations, and to make public a policy concerning the tool and how risks of algorithmic discrimination are managed. A public hearing was held on the bill on January 19, 2024, and the bill remains in committee.
Finally, in Connecticut, a group of lawmakers led by Senator Maroney filed SB 2. The text of the bill is not yet available; however, the bill description states that the bill will seek to “protect the public from harmful unintended consequences of artificial intelligence.” The bill was referred to committee on February 7, 2024.
b. Automated Employment Decision Making
Turning to bills that apply specifically to automated employment decision tools (AEDTs) and an employer’s use of AI technologies, six states have introduced or enacted such bills to date.
Illinois has already enacted the Artificial Intelligence Video Interview Act, which was last amended in 2022. The Act governs the use of AI to analyze recorded video interviews of job applicants for positions in the state. It requires employers to (1) give job applicants advance notice if AI will be used to analyze interviews; (2) provide an explanation of how the AI analysis works (i.e., the characteristics it evaluates); and (3) obtain consent before using AI for interviews. Additionally, the law requires employers to destroy any recorded video interviews within 30 days of an applicant’s request. In 2022, the law was amended to require any employer that relies solely on AI analysis of video interviews to determine whether an applicant will be selected for an in-person interview to annually collect and report data on the racial and ethnic demographics of the interviewees and hires to the Department of Commerce and Economic Opportunity.
In 2024, the Illinois legislature is considering HB3773 which would amend the state’s Human Rights Act to encompass employers using “predictive data analytics.” The bill prohibits such employers from considering an applicant’s “race or zip code when used as a proxy for race to reject an applicant in the context of” certain employment decisions. Notably, the bill states that it does not prevent the use of predictive data analytics to support the inclusion of diverse candidates. Additionally, the bill amends the state’s Consumer Fraud and Deceptive Business Practices Act to impose numerous requirements on individuals and entities that rely on predictive data analytics to determine a consumer’s creditworthiness. The bill was assigned to committee on January 31, 2024.
Maryland enacted HB1202 in 2020 which prohibits employers from using certain facial recognition services during a job applicant’s interview unless the applicant consents by signing a written waiver with specified provisions.
In Massachusetts, H.1873 is an expansive bill that remains pending from last year. Titled “An Act Preventing a Dystopian Work Environment,” the bill (1) places privacy restrictions on the personal information employers collect from their employees; (2) regulates an employer’s use of electronic monitoring data; (3) requires employers to disclose the use of any “automated decision system” (ADS) to employees and the state’s Department of Labor & Workforce Development; (4) restricts an employer’s use of an ADS; (5) requires independent “algorithmic impact assessments” and “data protection impact assessments”; and (6) provides a private right of action for any violation. On February 8, 2024, the bill’s reporting date was extended to May 8, 2024, and it remains in committee.
In New Jersey, S1588 seeks to regulate the use of AEDTs in hiring decisions to minimize discrimination. Like New York City’s Local Law 144 (see below), the New Jersey bill makes it unlawful to sell an AEDT unless the tool (1) has undergone a bias audit; (2) includes an annual bias audit service at no additional cost; and (3) includes a notice that it is subject to the requirements of the law. The bill also requires employers to notify subjected candidates if the AEDT will be used. The bill was referred to committee on January 9, 2024.
The New York legislature is currently considering three bills — A7859, S5641A, and S7623A. Note that New York bills A567 and A8328 also sought to regulate AEDTs but were both stricken in January 2024.
A7859 requires employers using AEDTs to screen candidates for employment to disclose to such candidates: (1) that an AEDT will be used; (2) the job qualifications and characteristics that the tool will assess; and (3) information about the data collected for the tool. Such disclosure must be made ten days before the AEDT is used and allow the candidate to request an alternative selection process or accommodation. A7859 was referred to committee on January 3, 2024.
Turning to New York’s S5641A, that bill prohibits a deployer of an AEDT from using the tool in a manner that violates the state’s anti-discrimination law. It requires those deployers and the developers of such tools to perform annual impact assessments and implement governance programs and safeguards to avoid discrimination. Deployers must also notify subjected individuals of the AEDT at or before the time an employment decision is made. Additionally, developers are required to provide certain disclosures to deployers regarding the AEDT’s intended uses and known limitations. The bill was amended on January 8, 2024 and remains referred to committee.
S7623A represents another comprehensive bill that aims to regulate electronic monitoring as well as AEDTs. First, the bill substantially limits an employer’s ability to use an electronic monitoring tool to collect employee data and requires prior written notice. Among other restrictions, the bill requires employers to delete the collected data after a specified period and prohibits the disclosure of such data except in limited circumstances. Next, and similar to New York City’s Local Law 144 (see below), the bill makes it unlawful for an employer to use an AEDT for an employment decision unless the tool undergoes an annual bias audit by an independent party. Such audits must be submitted to the state’s Department of Labor. Employers must also give prior notice and other disclosures to employees and candidates who will be subject to an AEDT. Notably, the bill also prohibits employers from relying “solely” on an AEDT’s output when making hiring, promotion, termination, disciplinary, or compensation decisions. The bill was referred to committee on January 3, 2024.
Finally with respect to New York, note that New York City Local Law 144 went into effect last year to regulate the use of AEDTs to screen applicants for employment or employees for promotional opportunities within the city. The law makes it unlawful to use an AEDT to screen candidates or employees for an employment decision unless (1) the AEDT is subject to an annual bias audit by an independent auditor before use and (2) the results of the most recent bias audit and the AEDT’s distribution date are published on the employer’s website. The law was enacted in December 2021 and went into effect on July 5, 2023.
In Vermont, the House is considering H.114, which concerns employee data and “automated decision systems” (ADS) as applied to employment-related decisions, judgments, or conclusions. The bill prohibits the electronic monitoring of employees unless numerous requirements are met, including prior notice. The bill also restricts the use of an ADS in the employment context and prohibits an employer from relying “solely” on outputs from an ADS when making employment-related decisions. Like other bills, H.114 requires employers to conduct impact assessments before the system is used, and such assessments must be made available to employees upon request. The bill also prohibits electronic monitoring and an ADS from incorporating any form of facial, gait, or emotion recognition technology. Finally, the bill includes privacy protections over employee data and provides employees certain rights with respect to their data. H.114 was referred to committee in 2023 and was walked through on January 30, 2024.
c. “Other” Bills
Finally, two states have introduced “other” bills that we briefly discuss below.
In California, SB1047 would require developers of foundation/frontier models to make documented “positive safety determinations” and submit certifications to a new government body, the “Frontier Model Division.” The bill was introduced and referred to committee on February 7, 2024.
In Oklahoma, there are two “other” bills worth noting. First, HB3453 would create rights for Oklahoma citizens to know when (1) they are interacting with AI; (2) their data is being used to inform AI; and (3) they are consuming images, text, contracts, or other documents generated by AI. The bill also grants the right to rely on a watermark, the right to approve derivative media generated by AI that uses audio recordings or images of a person, and the right not to be subject to algorithmic discrimination. No enforcement mechanism is provided. The bill had a second reading and was referred to committee on February 6, 2024.
Second, HB3577 would require insurers to disclose whether AI-based algorithms are used or will be used in the insurer’s utilization review process. Insurers would also be required to submit their algorithms and training data sets to the state’s Insurance Department for transparency and to certify that they have minimized the risk of bias. The bill had a second reading and was referred to committee on February 6, 2024.
2. Bill Tracker Chart and Map
For more information on all the state bills introduced to date, including links to the bills, bill status, last action, hearing dates, and bill sponsor information, please see the following charts:
- Algorithmic Discrimination Bills
- Automated Employment Decision Making Bills
- Other AI-Related Bills of Note
Stay tuned for the release of our map later this week.
3. Recent Husch Articles on AI-Related Issues
- Why You Need an Employee AI Use Policy (Laura Malugade and Eric Locker)
- Federal Lawsuit Targets NIL in Connection with Artificial Intelligence (Dustin Taylor and Andrea Fischer)
- When the AI Does it, Does that Mean it is Not Illegal (Michael Martinich-Sauter and Rebecca Furdek)
- House Committee Forms AI Working Group as Regulators Emphasize Existing Authority to Regulate AI (Alexandra McFall, Marci Kawski, and Leslie Sowers)