EEOC Issues Guidance on the Use of AI in Employment Decisions

Last week, the Equal Employment Opportunity Commission (EEOC) issued technical guidance on the use of artificial intelligence (AI) in employment decisions under Title VII of the Civil Rights Act of 1964. The guidance aims to ensure that employers comply with federal EEO laws when using new AI technologies in making employment-related decisions.

Title VII generally prohibits employment discrimination based on race, color, religion, sex, or national origin. Employers risk violating Title VII when they use AI to make employment decisions, including hiring new employees, monitoring performance, setting pay, and making promotion decisions. Accordingly, the EEOC guidance assesses the permissibility of certain employer “selection procedures” developed with and administered by artificial intelligence.

The guidance is the most recent pronouncement by the EEOC as part of the agency-wide initiative that EEOC Chair Charlotte A. Burrows launched in 2021. The initiative – called the “Artificial Intelligence and Algorithmic Fairness Initiative” – was adopted by the EEOC to ensure that the use of software, including AI, machine learning, and other emerging technologies, in hiring and other employment decisions complies with the civil rights laws enforced by the EEOC. On May 12, 2022, the EEOC issued a technical assistance document that provides practical tips to employers on complying with Title I of the ADA when using software that relies on algorithmic decision-making.

In a statement accompanying the release of the guidance, the EEOC described it as part of an:

“ongoing effort to help ensure that the use of new technologies complies with federal EEO law by educating employers, employees, and other stakeholders about the application of these laws to the use of software and automated systems in employment decisions.”

The EEOC guidance coincides with recent New York City regulations addressing the use of artificial intelligence by employers in hiring and promotion. Those regulations generally prohibit employers from using automated employment decision tools to make hiring and promotion decisions unless the tool is audited for bias annually, the employer publishes a summary of the audit, and the employer provides notice to applicants and employees subject to the screening. We posted a blog summarizing the new NYC regulations.

EEOC Guidance and Title VII

Under a “disparate treatment” theory, Title VII prohibits employment tests and standards that are “designed, intended or used to discriminate because of race, color, religion, sex or national origin.” Title VII also prohibits employers from using seemingly neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin, unless the employer shows they are “job related for the position in question and consistent with business necessity.” This is referred to as “disparate impact” discrimination.

The EEOC guidance primarily addresses the risk of disparate impact discrimination caused by algorithmic selection procedures in employment decisions. The guidance states that “[i]f an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII.” “Selection procedures” are defined as any “measure, combination of measures, or procedure” if used as a basis for an employment decision. Employment selection procedures must be “job-related” and “consistent with business necessity.” An employer must show that a selection procedure is job related and consistent with business necessity; even then, the procedure may be unlawful if a less discriminatory alternative is available and the employer fails to adopt it.

According to the guidance, an employer’s use of an algorithmic or AI decision-making tool can be a “selection procedure” if used as a basis for employment decisions. The guidance explains that an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end, such as hiring, performance evaluation, promotion, and termination. Accordingly, employers must ensure that any AI-supported selection procedures do not result in disproportionately excluding protected classes of employees by undertaking the following steps:

  • Employers should assess whether a selection procedure has an adverse impact on a protected group by determining whether use of the procedure causes a “selection rate” for individuals in a protected group that is “substantially less than the selection rate for individuals in another group.” The guidance defines “selection rate” as the proportion of applicants or candidates who are hired, promoted, or otherwise selected.

  • If an employer determines that an algorithmic decision-making tool has an adverse impact on individuals of a protected class, then the use of the tool will violate Title VII unless the employer can show that its use is “job related and consistent with business necessity.”

Employers are responsible for the use of algorithmic decision-making tools even if those tools are designed or administered by a third party, such as a third-party software vendor. Before retaining a vendor or other agent to develop or administer an algorithmic decision-making tool, employers should ask the vendor whether it has tested and validated the tool and request a copy of its validation studies. The guidance clarifies that even if a vendor’s assessment of its own selection procedure is incorrect, the employer may still be held liable.

The Four-Fifths Rule

The guidance also describes the “four-fifths rule,” a general rule of thumb for determining whether the selection rate for one group of applicants or employees is “substantially” different than the selection rate of another group. Under this rule, one rate is substantially different than another if the ratio of the lower rate to the higher rate is less than four-fifths (or 80%).

The guidance provides the following example for understanding selection rates and the four-fifths rule. Suppose that 80 White individuals and 40 Black individuals take a personality test that is scored using an algorithm as part of a job application, and 48 of the White applicants and 12 of the Black applicants advance to the next round of the selection process. Based on this outcome, the selection rate for White applicants is 48/80 (60%) and the selection rate for Black applicants is 12/40 (30%). The ratio of the two rates is 30/60 (50%). Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule provides that the selection rate for Black applicants is “substantially different” than the selection rate for White applicants, which may be evidence of discrimination against Black applicants.
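For readers who want to check the math, here is a minimal Python sketch that reproduces the arithmetic above. The function name and structure are our own illustration; the EEOC guidance does not prescribe any particular tooling.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare two groups' selection rates under the four-fifths rule of thumb.

    Returns each group's selection rate and the ratio of the lower rate
    to the higher rate; a ratio below 0.8 (four-fifths) suggests the rates
    are "substantially different."
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return rate_a, rate_b, ratio

# Figures from the example in the guidance: 48 of 80 White applicants
# and 12 of 40 Black applicants advance to the next round.
rate_white, rate_black, ratio = four_fifths_check(48, 80, 12, 40)
print(f"White selection rate: {rate_white:.0%}")  # 60%
print(f"Black selection rate: {rate_black:.0%}")  # 30%
print(f"Ratio of rates: {ratio:.0%}")             # 50%, below the 80% threshold
```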

In a statement accompanying the guidance, Burrows said that employers should test and audit employment-related AI tools to make certain they are not inadvertently excluding protected classes: “I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination.”

Importantly, the EEOC notes that compliance with the four-fifths rule does not guarantee that a particular employment procedure does not have an adverse impact under Title VII. The rule may be inappropriate under certain circumstances – for example, where a procedure is used to make a large number of selections, smaller differences in selection rates may still indicate adverse impact. The guidance expressly provides that the EEOC “might not consider compliance with the [four-fifths rule] sufficient to show that a particular selection procedure is lawful under Title VII when the procedure is challenged in a charge of discrimination.”
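To make that caveat concrete, the sketch below (our own illustration; the EEOC does not prescribe this or any particular statistical test) uses a two-proportion z-test, a common check in adverse-impact analysis, to show how a small selection-rate gap across large applicant pools can be statistically significant even though it easily passes the four-fifths rule:

```python
from statistics import NormalDist

def two_proportion_z(selected_a, total_a, selected_b, total_b):
    """Two-sided two-proportion z-test for a difference in selection rates.

    Illustrative only: the EEOC guidance does not mandate a specific test.
    """
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    std_err = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = (p_a - p_b) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical large applicant pools: selection rates of 78% vs. 80%.
# The four-fifths ratio is 0.78 / 0.80 = 97.5% -- well above 80% --
# yet the difference is statistically significant.
z, p = two_proportion_z(7800, 10000, 8000, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")  # about z = -3.47, p = 0.0005
```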

Considerations for Employers

Employers using or considering using AI tools in selection, promotion and/or compensation decisions should ensure that those tools comply with the latest EEOC guidance by assessing whether the tools have a potentially adverse impact on applicants or employees. The EEOC places the burden of compliance squarely on employers, stating that employers may be liable under Title VII if a selection procedure “discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.”

Employers should appoint an AI oversight officer to ensure compliance with Title VII and the EEOC’s new guidance. The officer would be responsible for testing and auditing AI-based selection procedures. Additionally, employers should, through contractual obligations and oversight, ensure that vendors and other third-party software developers develop and administer algorithmic decision-making tools in compliance with the latest EEOC and state and local guidance.
