Key Point: The EEOC released guidance to employers on how to assess adverse impacts when using artificial intelligence (AI) in the employment decision-making process.
The Equal Employment Opportunity Commission (EEOC) recently issued a technical assistance document to help employers avoid discriminating against job applicants and employees when using AI for employment decisions. In the technical assistance, the EEOC highlights that employers may violate Title VII of the Civil Rights Act of 1964 (Title VII) if their algorithmic decision-making tools have an adverse impact on protected classes, even where those tools are designed or administered by third parties.
Background
Title VII makes it unlawful for an employer to discriminate against any individual based on race, color, religion, sex, or national origin. Although Title VII prohibits both disparate treatment and adverse impact (disparate impact) discrimination, the EEOC’s technical assistance focuses on the latter. To help employers avoid Title VII violations and the discrimination claims that follow, the EEOC recommends proactive, ongoing assessments of all AI and algorithmic decision-making tools an employer uses.
The technical assistance defines three central terms concerning automated systems and AI to explain how they relate to the Title VII analysis. First, the EEOC defines “software” as any information technology program or procedure that provides instructions to a computer on how to perform a given task or function. It then defines “application software” (also referred to as an “application”) as a type of software that performs or helps a user perform a specific task. In the employment context, the EEOC appears most concerned with resume-screening software, video interviewing software, and employee monitoring/management software, among others.
Second, according to the EEOC, an “algorithm” is a “set of instructions that can be followed by a computer to accomplish an end.” The EEOC notes that human resources software uses algorithms to evaluate, rate, and make decisions about job candidates and employees at various stages of employment. Finally, the EEOC adopts Congress’s definition of “artificial intelligence” from the National Artificial Intelligence Initiative Act of 2020: a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
These definitions are broad, encompassing HR tools used throughout the employment cycle: from tools used in candidate selection, such as resume scanners and “chatbots” that screen applicants against pre-defined requirements, to tools that monitor employee work habits, such as keystroke-logging software.
How to Assess Adverse Impacts
The technical assistance analyzes AI employment tools’ potential for adverse impacts under the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures (the “Guidelines”). According to the EEOC, the definition of “selection procedures” in the Guidelines is broad enough to include algorithmic decision-making tools when they are used for the purpose of making or informing decisions related to hiring, promotion, and termination. As a result, employers should look to the Guidelines for direction on the appropriate adverse impact analysis.
According to the Guidelines, a selection procedure has an adverse impact when the selection rate for members of a protected group is “substantially” less than the selection rate for members of another group. The selection rate for a given group is calculated by dividing the number of individuals in that group who were hired, promoted, or otherwise selected by the total number of candidates in that group. Generally, one rate is considered “substantially” different from another if the ratio between them is less than four-fifths (80%), a benchmark known as the “four-fifths rule.” The EEOC makes clear, however, that while the four-fifths rule is a useful starting point, compliance with the rule alone is not sufficient to show that a particular selection procedure is lawful. A court may find compliance with the four-fifths rule inadequate where a test of statistical significance should be applied instead, such as when the difference in selection rates is small but the number of selections is large, or when an employer’s practices disproportionately deter individuals from applying based on protected characteristics.
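To make the selection-rate arithmetic concrete, the sketch below applies the four-fifths calculation described above to hypothetical screening data. The group labels, counts, and function names are illustrative assumptions, and the check is only a starting point, not a complete statistical or legal analysis.

```python
# Illustrative four-fifths rule check. Group labels and counts are
# hypothetical; this is a starting indicator, not a legal conclusion.

def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = number selected / total candidates in the group."""
    return selected / applicants

# Hypothetical screening results for two applicant groups.
rates = {
    "group_a": selection_rate(selected=48, applicants=80),  # 0.60
    "group_b": selection_rate(selected=12, applicants=40),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # Under the Guidelines, a group rate below 4/5 (80%) of the highest
    # group's rate is a common initial indicator of adverse impact.
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

In this hypothetical, group_b’s 30% selection rate is half of group_a’s 60%, well below the four-fifths threshold, which the Guidelines would treat as an initial indicator of adverse impact warranting further review (and, as noted above, a statistical significance test may still be appropriate in some circumstances).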
Traditionally, an employer may avoid violating Title VII by showing that an otherwise discriminatory selection procedure is necessary to the safe and efficient performance of the job and that no less discriminatory alternative is available. According to the EEOC, these considerations are no different when the selection procedure incorporates AI or algorithmic decision-making tools. If an employer cannot satisfy those criteria and discovers that its AI selection tool has an adverse impact, the employer should take action to mitigate the impact, for example by adjusting the algorithm or selecting an alternative tool. The EEOC therefore emphasizes that employers should continuously test and adapt their AI tools to reduce any discriminatory impact and avoid Title VII violations.
The technical assistance also outlines steps an employer should take to minimize its risk of liability when relying on a third party to create or administer an algorithmic decision-making tool. The EEOC advises employers to ask whether the third party has evaluated whether use of the tool results in substantially lower selection rates for individuals protected by Title VII. If the tool is expected to have an adverse impact, the employer must take the additional steps of considering whether (1) the selection procedure is job related and consistent with business necessity; and (2) less discriminatory alternatives exist that would serve the employer’s business need. According to the EEOC, an employer that cannot meet these requirements, or that relies on a third party’s incorrect impact assessment, may be liable under Title VII. This underscores the importance of employers regularly performing their own disparate impact audits on any AI tools they use.