Recently, the Equal Employment Opportunity Commission (EEOC) released a technical assistance document laying out the confines within which employers will need to operate if they are using artificial intelligence during the hiring process. There have already been a few cases where unintended discrimination affected some companies' hiring initiatives. To hear about them and learn more about the document, Federal Drive with Tom Temin spoke with Carol Miaskoff, the EEOC's Legal Counsel.
Interview Transcript:
Carol Miaskoff EEOC has had a focus and an initiative on artificial intelligence and its overlap with civil rights and EEO law for a while now. We held a public meeting on the implications for employment in January 2023. And overall, big picture, what we've been hearing from employers and from vendors of these AI systems as used in employment is that they really want some guidance, some guardrails, some input in terms of how the established EEOC standards for discrimination in selection, other employee monitoring and so on will be applied to the AI technologies being used now. So it was in response to that general need, which we certainly heard from the stakeholders reaching out to us, employers and vendors, as well as the information we gathered at our public hearing in January 2023 on this topic, and you can find that on our website under About EEOC, with links to all the testimony. We decided it would be really helpful to put out a short, straightforward technical assistance document, which is what this document is, establishing that when you use AI technology, which has a lot of promise in a lot of ways, the established EEOC standards for assessing whether or not it's discriminatory under Title VII will apply to these tools.

So in a sense, the highest-level point made by our piece is that when AI is used in a selection procedure for employment, the established framework of the Uniform Guidelines on Employee Selection Procedures applies. By making that clear, it's clear that a basic rubric of rules then applies to determine if there is a disparate impact that might be illegal in using these tools. And I think the first big question being asked by stakeholders was: do the established standards under the Uniform Guidelines apply in this arena when AI is at play? The answer to that, according to our piece, is yes. So that's the highest level.

The next level down in terms of detail is that the Uniform Guidelines have something people colloquially call the 4/5 rule. What that means is that if people with a particular characteristic that's protected under the EEO laws are selected at a rate that's less than 4/5 of the majority group's rate, then it's likely there was a disproportionately negative, exclusionary effect. And that's important in terms of what it does and what it doesn't do. What it does is serve as a rule of thumb for assessing impact. What it doesn't do is definitively, in any way, shape or form, decide whether an impact is discriminatory or not. And that is really important, because we started to hear from folks saying, well, our tools are EEO compliant, we've passed the 4/5 rule. Really, it's unfortunately a lot more complicated than that. Even if there is an impact, first of all, the 4/5 rule is a rule of thumb; you look at statistical significance. But more importantly, the issue is whether the tool that's having an impact is actually predictive of success on the job, and that's the job-related, consistent-with-business-necessity legal standard under Title VII. That's the key: whether it's predictive of success on the job. And they're just different steps in the analysis. If it does appear that there was a statistically significant impact, then you go ahead and look at whether the tool is predictive of success on the job, and that's the heart of the issue. And I would say there is indeed one more step.
Even if it was predictive of success on the job, did the employer reject an alternative that might be less discriminatory? So those are the kind of substantive considerations you look at. And that's really the bottom-line point of this document.
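To make the 4/5 rule of thumb concrete, here is a minimal Python sketch of the screening arithmetic Miaskoff describes. The selection numbers are hypothetical, and the threshold check is only the heuristic first step she outlines, not a finding of discrimination.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(protected_rate: float, majority_rate: float) -> tuple[float, bool]:
    """Rule-of-thumb screen from the Uniform Guidelines: flag possible
    adverse impact when the protected group's selection rate is less
    than 4/5 (80%) of the majority group's rate. A flag is only a
    prompt for further analysis (statistical significance, then the
    job-related, consistent-with-business-necessity standard),
    not a conclusion."""
    ratio = protected_rate / majority_rate
    return ratio, ratio < 0.8

# Hypothetical numbers, for illustration only.
majority_rate = selection_rate(selected=60, applicants=100)    # 0.60
protected_rate = selection_rate(selected=30, applicants=100)   # 0.30

ratio, flagged = four_fifths_check(protected_rate, majority_rate)
print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")
# impact ratio = 0.50, flagged = True
```

A flagged result like this one would move the analysis to the next steps Miaskoff names: statistical significance, then whether the tool is predictive of success on the job, then whether a less discriminatory alternative was rejected.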
Eric White Got it. And so it’s who was involved, I guess, is my next question is, was it. Obviously, the legal angle came from your side of things, but did you also include the folks who could tell you what are the capabilities of these new hiring tools that utilize AI in their purpose?
Carol Miaskoff Well, we certainly did this in response to hearing from a lot of stakeholders, employers and vendors, about questions of whether these standards even apply to tools that use AI. So we were responding to that concern. We were responding to what was heard at the public hearing in January, and we were responding to what we were seeing in some of our own matters coming forward. We have some matters, some charges, that were settled with public conciliation. For example, this past spring there was a case involving DHI. DHI operated a job search website for technology professionals called dice.com, and they entered a conciliation agreement with EEOC because they were using software that was excluding Americans. So it was national origin discrimination against Americans, because they had some pretty straightforward algorithms that included searching for discriminatory keywords: they were looking affirmatively for "H-1B" or "visa" appearing near the words "only" or "must" in job postings. And that's what they did. They selected for people with H-1B visas and thereby excluded Americans. So this was one of our first public conciliations in this area.
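The DHI matter turned on a simple keyword pattern, so the kind of screen involved can be pictured in a few lines. Below is a minimal Python sketch of that sort of proximity check; the actual logic in DHI's software is not public, so the terms, the two-word window, and the function name here are illustrative assumptions.

```python
import re

# Assumed term lists, based on the keywords described in the interview.
VISA_TERMS = {"h-1b", "h1b", "visa"}
TRIGGER_TERMS = {"only", "must"}

def flags_visa_restricted_posting(posting: str, window: int = 2) -> bool:
    """Illustrative screen for the pattern described in the DHI
    conciliation: a visa keyword appearing within `window` words
    of 'only' or 'must' in a job posting."""
    words = re.findall(r"[\w-]+", posting.lower())
    visa_idx = [i for i, w in enumerate(words) if w in VISA_TERMS]
    trigger_idx = [i for i, w in enumerate(words) if w in TRIGGER_TERMS]
    return any(abs(v - t) <= window for v in visa_idx for t in trigger_idx)

# Hypothetical postings, for illustration only.
print(flags_visa_restricted_posting("H-1B only candidates considered"))  # True
print(flags_visa_restricted_posting("We sponsor work visas as needed"))  # False
```

The same check, run in reverse, is roughly what the remedy looked like: scanning the software for these patterns and removing them so postings no longer selected only for H-1B holders.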
Eric White Got it. And getting bias out of AI and machine learning has been a larger issue overall. Hearing from stakeholders such as that company, have they said how tall of a task it is to actually make sure these algorithms don't discriminate against anybody, even accidentally and obviously not on purpose?
Carol Miaskoff Right. Well, if you look at the conciliation statement in this instance, the settlement was somewhat straightforward, because it was basically an algorithm. They agreed to, as they called it, scrape their software for these search patterns, like "H-1B" within two words of "only," that kind of thing, very basic computer programming, and to change it so it wouldn't select positively only for people with H-1Bs. So that's a fairly straightforward solution.

That said, I'm not going to pretend that this is simple, that this is easy. It's not. I think everyone, vendors, employers, the government, is working to think about how to approach this and how to most efficiently monitor it. It will not be straightforward and simple, or else there wouldn't be the level of interest and questions happening now as we're all, essentially as a society, adjusting to AI and sophisticated algorithms. And there was a joint statement that our chair signed with other major agencies involved in this effort that sets forth a cross-government approach of really trying to understand these issues and work with stakeholders, as we all work to see how we can maximize the positive aspects of this technology, which are undoubtedly there, while avoiding the downsides or the unintended consequences.

We have litigation that we filed fairly recently against a group called i-Tutor, and it's a case that again involves algorithmic selection of applicants. That's an instance where the software was just expressly programmed to exclude women over 55 and men over 60, straight away. That's in the early stages of litigation now, so there's no substantive result yet. But that's a straightforward example. And I would say that when we say AI, I know at least my brain immediately goes to perhaps the most complex and sophisticated software, sort of black boxes, and I feel like, oh, we'll never understand. But what EEOC is finding in some charges is that algorithms, like anything else, can very straightforwardly be used improperly.