Employers can be held liable if they use an outside vendor’s automated artificial intelligence (AI) tools that result in discrimination against employees and job applicants, the U.S. Equal Employment Opportunity Commission (EEOC) told businesses this week.

Even if a vendor has assured an employer that its automated AI tools comply with Title VII of the Civil Rights Act of 1964, the employer remains ultimately responsible for any resulting employment discrimination, the EEOC said in a technical assistance document issued May 18.

Employers increasingly use automated systems, including those with AI, to help them with a wide range of employment matters, such as hiring new employees, monitoring performance, and determining pay or promotions. Examples include:

  • Resume scanners that prioritize applications based on keywords
  • Employee monitoring software that rates employees based on the number of their keystrokes or other factors
  • Virtual assistants or chatbots that ask job candidates questions about their qualifications and eliminate those who do not meet pre-defined requirements
  • Video interviewing software that evaluates candidates based on facial expressions and speech patterns
  • Software that tests and scores applicants and employees on their personalities, aptitudes, cognitive abilities or their “cultural fit” within an organization

“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality,” said EEOC Chair Charlotte A. Burrows. “This new technical assistance document will aid employers and tech developers as they design and adopt new technologies.”

The EEOC document puts the burden of compliance squarely on employers, whether they administer an outside vendor’s HR-related tests themselves or hire an outside agency to conduct the selection process.

“This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf,” the EEOC stated in the guidance.

Employers may want to ask the vendor what steps it has taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII, the EEOC said. If the vendor says a lower selection rate for a group of individuals is expected, the employer should consider whether use of the AI tool is job related and consistent with business necessity, and whether less discriminatory alternatives exist.

In addition, if the vendor’s own assessment is incorrect and the tool results in disparate impact discrimination, the employer could be liable. Employers can assess whether a selection procedure has an adverse impact on a protected group by checking whether the procedure produces a selection rate for individuals in that group that is substantially lower than the selection rate for individuals in another group.

The EEOC offered this hypothetical example. Suppose 80 white individuals and 40 Black individuals take a personality test that is scored using an algorithm as part of a job application, and 48 of the white applicants and 12 of the Black applicants advance to the next round of the selection process. Based on these results, the selection rate for whites is 60% (48/80), and the selection rate for Blacks is substantially lower at 30% (12/40). This could be evidence of discrimination.
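To make the arithmetic concrete, here is a minimal Python sketch (not from the EEOC document) that computes the selection rates in this hypothetical and compares them under the EEOC’s longstanding “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The helper name selection_rate is illustrative.

```python
def selection_rate(advanced: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced to the next round."""
    return advanced / applicants

# Figures from the EEOC's hypothetical above.
white_rate = selection_rate(48, 80)   # 0.60
black_rate = selection_rate(12, 40)   # 0.30

# Four-fifths rule of thumb: compare the lower rate to the highest rate.
impact_ratio = black_rate / white_rate  # 0.50

print(f"Selection rate (white): {white_rate:.0%}")  # 60%
print(f"Selection rate (Black): {black_rate:.0%}")  # 30%
print(f"Impact ratio: {impact_ratio:.0%}")          # 50%, below the 80% benchmark
```

Here the 50% impact ratio falls well short of the four-fifths (80%) benchmark, which is what makes a pattern like this one suspect.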

“I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” Burrows said. “This technical assistance resource is another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.”