EEOC, DOJ ‘sounding alarm’ over AI hiring tools that screen out disabled applicants

The Biden administration is telling employers and software vendors to avoid artificial intelligence hiring tools that may screen out employees with disabilities.

The Equal Employment Opportunity Commission (EEOC) and the Justice Department issued guidance Thursday outlining ways AI and automated hiring tools can violate the Americans with Disabilities Act (ADA).

EEOC Chairwoman Charlotte Burrows told reporters the guidance applies to employers nationwide, but said the EEOC is working with agencies across the federal government to ensure compliance.

EEOC, she added, is working closely on this guidance with the Labor Department’s Office of Federal Contract Compliance Programs, which ensures federal contractors comply with nondiscrimination laws and regulations.

“Obviously, the various agencies do different things, but [they] have made it a point to be in that dialogue, so that we don’t end up with sort of different focuses, and that we can really lead,” Burrows said.

Burrows said more than 80% of employers are using AI in some form for their work and employment decision-making. The guidance, she said, focuses on ensuring AI tools comply with civil rights laws rather than becoming "a high-tech pathway to discrimination."

“By and large, people are trying to figure this out, and the expertise that you might have in developing these products may not necessarily be the same expertise that you need to look at the civil rights lens. What we want to do is marry that up and be as helpful as we can, as this technology develops,” Burrows said.

The Bureau of Labor Statistics reports about 34% of working-age individuals with a disability are employed, and that the unemployment rate for this population is nearly twice as high as the rate for workers without disabilities.

Assistant Attorney General for Civil Rights Kristen Clarke said the U.S. is “at a critical juncture” with this technology, and that employers are turning to algorithms and AI more frequently to help them select new employees, track performance and determine pay or promotions.

“We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers. And today, we’re making clear that we must do more to eliminate the barriers faced by people with disabilities,” Clarke said.

Clarke said DOJ has made it a top priority to ensure state and local government employers are not discriminating against job applicants or employees with disabilities.

“Employers may turn to these tools thinking that algorithms or AI can prevent discrimination by removing potential biases that humans may bring to the decision-making process. But in fact, they’re using such technologies in ways that may actually lead to discriminatory hiring decisions,” she said.

Clarke said the guidance aligns with the Biden administration’s focus on promoting diversity, equity, inclusion and accessibility in and out of government.

The EEOC and DOJ guidance documents state that employers may be held responsible for discriminatory actions carried out by software vendors authorized to act on their behalf.

Burrows said employers need to exercise due diligence and ask vendors “what’s under the hood” of these algorithms before using them to vet candidates. Employers, she added, should ask vendors whether any AI algorithm used for hiring allows for reasonable accommodations, a requirement for employees with disabilities under the ADA.

“If the vendor hasn’t thought about that, isn’t ready to engage in that, that should be a warning signal,” Burrows said.

Applicants going through a pre-employment evaluation must also have the opportunity to request a reasonable accommodation.

“If everything is automated, it is very difficult for someone with a disability to raise their hand and say, ‘Hey, I need an accommodation.’ Has the vendor thought about at what point could someone do that, and could someone interrupt that sheerly automated process, to have a conversation with a person about the options? Because that accommodations conversation is vitally important,” Burrows said.

Clarke said hiring algorithms in some cases may analyze candidates’ facial movements, speech patterns and words, then compare those behaviors to those of an employer’s most successful hires.

However, Clarke said those criteria may inadvertently weed out applicants with disabilities from the hiring pool.

“Without enough data to properly assess people with disabilities, like individuals who have speech impairment or autism, people with disabilities can be unfairly locked out or screened out of the applicant pool,” Clarke said.

Burrows said AI discrimination in the workplace isn’t just limited to hiring decisions. In some cases, employers may use productivity algorithms to track how much work an employee is doing, but may not allow for breaks or reasonable accommodations.

Automated hiring tools can also violate the ADA if they make disability-related inquiries or seek information that would be considered a medical examination before the candidate receives a conditional offer of employment.

The EEOC’s technical assistance document is part of its AI and Algorithmic Fairness Initiative, which aims to ensure that AI and other software tools used in hiring and employment decisions adhere to federal civil rights laws.

“We are hopeful that the actions that our agencies have taken today help to empower workers and empower people who are seeking to access the job market. We want to ensure that job applicants with disabilities know that they have a right to seek reasonable accommodations,” Clarke said.

House Education and Labor Committee Chairman Bobby Scott (D-Va.) said in a statement that the EEOC and DOJ guidance will help ensure a more equitable job market.

“While algorithmic tools have the potential to address bias in the hiring process, the research is clear that, without the appropriate safeguards in place, artificial intelligence and other automated technologies can further entrench discrimination against workers, including workers with disabilities,” Scott said.
