Artificial intelligence is growing fast as a technology to help companies hire the people they need faster. But it also has the potential to introduce bias on a record scale. That’s according to attorney Keith Sonderling, who was appointed by former President Donald Trump to the Equal Employment Opportunity Commission in 2020. He’s made fair use of AI a top priority, and he joined the Federal Drive with Tom Temin for more discussion.
Interview transcript:
Tom Temin: Mr. Sonderling, good to have you in.
Keith Sonderling: Thank you for having me.
Tom Temin: And before we get into the AI issue, which I know is a big priority for the agency writ large, just for people who are not operating at the level of presidential appointee: When you have three Republicans and two Democrats on the EEOC, and the President is a Democrat and the chair is a Democrat — we had her on last week, Ms. Burrows, a highly capable lady — how does it operate? What are things like in the office of the EEOC when that situation is extant?
Keith Sonderling: Well, it is a unique situation. But look at these independent, bipartisan agencies, with different appointees serving staggered terms. That's the way Congress designed these agencies: to have different viewpoints, to bring different backgrounds to these very important, in our case, federal civil rights laws. So right now, because of the way we were confirmed and the timing of the terms, there is a unique situation at the EEOC, where even though we are in a Democrat administration, there are five commissioners, and three of them happen to be Republican. In the fall of 2020, when I was confirmed along with two other commissioners, it brought some of the most significant changes to the EEOC in years. Not only was it the first time since 2016 that the EEOC actually had five members, which is the way it's supposed to be, it was the first time in a very long time that Republicans held the majority. So what does that practically mean? Essentially, in how the EEOC is run, the chair has the power to set the agenda, but she needs a majority vote from the commissioners to actually make any new regulations or new policies or guidelines. So right now, because of that dynamic, everything has to be done in a bipartisan fashion. And some would argue that's how these agencies are supposed to work — power structures that are different, not, okay, we're in a Democrat administration, so it's all Democrats. That's the way Congress designed these agencies. Unlike the U.S. Department of Labor or Department of Justice or EPA right now, where the political appointees are all Biden-appointed Democrats, it's different, it's unique, and it really allows us to do some very interesting things in a bipartisan manner.
Tom Temin: And when you do have appointees of the two parties working in the same office, do you put silly putty on each other’s chairs?
Keith Sonderling: No, there are a lot of gag gifts everywhere. No, but it's a very cordial working relationship. Look, we were all confirmed by the U.S. Senate for a reason: because we care deeply about civil rights, and civil rights in the workplace, and we believe in these laws. So we all have that base understanding that we care about these laws, that we're the agency charged with administering and enforcing what I would argue are some of the most important laws out there — the right to work, the right to make a living, without any discrimination based upon protected characteristics.
Tom Temin: And by the way, what has the workload been like in terms of incoming cases to the commission?
Keith Sonderling: We’re still waiting for the actual breakdown of our fiscal year 2021 stats. Normally, they come out in January, and then we’ll be able to really dive in to see how [COVID-19] affected the number of claims coming in and the types of claims coming in. For instance, you know, retaliation is the number one claim at the EEOC every single year. Behind that, at least in fiscal year 2020, it was disability discrimination. And in fiscal year 2020, religious discrimination was around 3% of all of our cases. So we’ll see how that dynamic changes with the fiscal year ’21 numbers. With the fiscal year ’22 numbers, as we see now, religious discrimination may skyrocket much higher because of all the mandatory vaccination programs. So it’s hard to tell right now just where we are in the new fiscal year with old fiscal year data, but I think when that comes out, specifically the breakdown in cases, we’ll see how the pandemic has impacted the amount and types of cases we’re getting.
Tom Temin: Sure, so the change in the physical health situation of the nation could, in some indirect way, affect policy toward how these cases are handled and adjudicated, and what is considered discriminatory practice.
Keith Sonderling: Right, and also where the agency needs to go on guidance. You know, we’re not just a civil law enforcement agency, just like when I was at the Department of Labor. We need to put out guidance for both employees and employers: to make sure employees understand the requirements of the laws and know what their employer’s obligations are to them, and, for employers, you know, to know how to manage and implement these laws so there’s no discrimination. So I think when we start seeing how the pandemic has changed the claims that have come in, you know, it’s on us to continue to give guidance and help both employers and employees with the new challenges that have come from the pandemic.
Tom Temin: We’re speaking with Keith Sonderling. He’s a member of the Equal Employment Opportunity Commission. And just another question I had operationally. How much discussion, back-and-forth collaboration, is there between the EEOC and the Federal Labor Relations Authority?
Keith Sonderling: Well, I know there’s a lot of coordination between career staff in our various departments and all federal agencies, you know, on two sides. One, from a policy perspective: if there’s any matter, whether at the Department of Labor or the Department of Justice, that implicates our laws, I know there’s a lot of working together. And the same goes with Congress, on giving technical assistance on our very complicated federal laws. But also don’t forget the EEOC also enforces our laws against the federal government. So there’s constant communication with all the individual EEO offices, not just on training, but on dealing with the cases that federal government employees bring. So specific to the FLRA, I’m not 100% sure of the amount of coordination. I just know globally that we have staff that works with all agencies.
Tom Temin: And getting to artificial intelligence, this has become kind of an abiding issue for you and the EEOC. Ms. Burrows was on again speaking about that also, but you have been interested in this for a while, and you get pretty deep into the technology of it. What’s the big concern?
Keith Sonderling: What people don’t understand — you hear so much about artificial intelligence now, across the government, across industries, whether it’s the private sector, the federal government or foreign governments. And specifically as to the laws the EEOC administers and enforces, let me tell you what it’s not. Everyone wants to say, oh, this is robots replacing humans. The robots are outside the door, they’re finally coming in and they’re going to take all your jobs and that’s it. Well, that may happen one day; it’s not what’s happening right now. The use of AI right now is pervasive throughout the private sector. And what do I mean by that? This is generally software that is replacing some typical HR functions: for instance, creating a job description, screening resumes, chatting with applicants, interviewing applicants, doing performance reviews for employees, and in some extreme situations, rejecting an applicant or even terminating an actual employee as well. So it is being used, it’s out there, and the reason I’ve been talking about it and raising awareness about it is because we have to address it now. Because, as we’ll talk about, there are so many promising aspects to this technology to potentially help remove bias from the decision making at all stages of the employment relationship, which is why our agency exists. So I think there’s a lot of good to it, but if not used properly, it can potentially violate the civil rights laws from the 1960s on a far greater scale than we’ve ever seen before, because a computer can do a lot more than an individual person.
Tom Temin: Right. And so we’ve seen a couple of cases where that has happened because of the training data that you would think smart tech companies have fed into their algorithms. And so what, from a policy standpoint, could the EEOC promulgate to industry such that this doesn’t happen?
Keith Sonderling: Yeah. And if you think about it, AI has no intentions of its own, at least for now — you know, again, back to the robots replacing all of us. For now, it’s only based upon the data that’s fed to the algorithms. So if you’re an employer, and you want to diversify your workforce, or you just want to find the best candidates, and you tell the computer, here are my top 10 salespeople — let’s just use that example — go find me 100 of them. Well, if your salespeople are all of one gender, one race, one national origin, that computer is going to look for those patterns, and it’s just going to potentially replicate the status quo and not give opportunities to others who don’t share those protected characteristics, which you’re not allowed to make an employment decision on. And you know, there are some really classic examples of this — and these were very publicly disclosed, these are not EEOC cases — where one company went to one of these resume-screening programs and said, here are our top employees, go find us more. And the computer said the best characteristics were being named Jared and having played high school lacrosse. That’s an example we often use, because what does that show you? It’s not necessarily looking at what their actual work skills are, which is what we encourage employers to base employment decisions upon; it’s just looking at some of these characteristics. And the other very classic example is with Amazon, when they tested one of these programs — again, talking about the data set here: if it’s potentially biased data, you’ll get biased results. In that case, they gave one of these resume-screening programs the data from their last 10 years of applicants and employees for one position, and because they were mostly men, what happened? The computer said, well, if you went to a women’s college or played on women’s sports teams, you were downgraded. And again, that’s not proof of misogynistic intent. And that’s what I’m trying to steer this conversation away from — it’s not that these computers are intentionally discriminating because they don’t like certain people or certain backgrounds. It’s the data that goes into them.
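To make that failure mode concrete, here is a minimal, hypothetical sketch (made-up data, not any vendor's actual system) of how a resume screener trained on skewed historical labels latches onto proxy tokens like a name or a sport instead of job skills:

```python
# A minimal, hypothetical sketch of a resume screener trained on skewed
# historical labels. The data is made up; no real vendor system works
# exactly like this.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past "top performer" labels that happen to track one demographic profile.
resumes = [
    "jared lacrosse sales exceeded quota",         # labeled a top performer
    "jared football sales won regional award",     # labeled a top performer
    "captain womens soccer sales exceeded quota",  # not labeled
    "womens college sales won regional award",     # not labeled
]
top_performer = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, top_performer)

# Inspect the learned weights: proxy tokens like "jared" and "womens"
# dominate, because they correlate with the biased labels, while the actual
# sales achievements appear on both sides and carry almost no weight.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
):
    print(f"{token:12s} {weight:+.3f}")
```

Because the labels track demographics rather than performance, the heaviest weights land on the name and the gendered sports terms, which is exactly the Jared-and-lacrosse and Amazon pattern Sonderling describes.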
Tom Temin: It sounds like you have to select not only the data, but specifically the fields that you’re going to have the algorithm look at. So, like the sports played: unless you’re hiring for sports, I can’t imagine how that would bear on it. So it seems like almost a limitation of the data, rather than more data.
Keith Sonderling: Correct. And it’s also using AI to make sure that all these protected characteristics are removed. So let’s talk about, for instance, a name. What does a name mean for somebody’s job performance? Absolutely nothing. All it tells you about is potential protected characteristics, like somebody’s gender, somebody’s national origin, or in some cases religion. So having AI that removes the name completely, you know, already starts you in a better position than before. Also, there are programs that do the interviewing, which we’ll get a little deeper into, that rely just on your voice. At the initial stage, if it’s a computer taking a transcription versus an HR person actually seeing that you’re of a certain national origin, a certain gender, a certain religion, or that you’re disabled, you know, that eliminates bias at the earliest stages. And there are a lot of studies about bias in the interviewing process. So if I’m interviewing somebody, and I see that the person is disabled or pregnant, no matter what, in the back of your head, you’re thinking, okay, this may cost me, because this person has a disability and I need to make an accommodation for them, or this person is pregnant and they may want leave. And that becomes a factor in the decision, even though it’s completely unlawful and you’re not allowed to do that. Nobody should be doing that. But some of those things you can’t unsee, while a computer can’t see them in the first place. So some of the benefits of removing the initial bias are very strong, if AI is used properly.
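As a toy illustration of that kind of blinding (production systems generally use trained named-entity recognition, not a hand-written keyword list like this one), a screener might strip the name and other protected-characteristic cues before a resume is ever scored:

```python
# A toy blinding step that strips an applicant's name and a few
# protected-characteristic cues before a resume is scored. The marker
# list is hypothetical and far from complete.
import re

# Hypothetical markers that can reveal gender or religion.
PROTECTED_MARKERS = [
    r"\b(?:mr|mrs|ms)\.?\b",
    r"\b(?:women'?s|men'?s)\b",                 # gendered teams and colleges
    r"\b(?:church|mosque|synagogue|temple)\b",  # religious affiliations
]

def redact(resume_text: str, applicant_name: str) -> str:
    """Replace the applicant's name and protected-characteristic cues."""
    text = re.sub(re.escape(applicant_name), "[NAME]", resume_text,
                  flags=re.IGNORECASE)
    for pattern in PROTECTED_MARKERS:
        text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(redact("Jared Smith, captain of the men's lacrosse team", "Jared Smith"))
# -> [NAME], captain of the [REDACTED] lacrosse team
```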
Tom Temin: Got it. So do you anticipate, say, at some point EEOC, with the initiative that it has going, with respect to use of AI in employment, could issue a rule around this? Or is it possible for a rule to be able to define carefully enough what kind of data people use to train algorithms?
Keith Sonderling: Because this is Federal News Network, I can get a little deeper into that: we have very limited rule-making authority at the EEOC. I came from the Department of Labor, Wage and Hour Division.
Tom Temin: Oh, they’re good at it.
Keith Sonderling: Well, yeah, there we could make a rule on anything, you know, whether it’s the overtime rule or certain requirements for benefits or wages. You know, we could really get into it. Here, it’s a little different, because we have very limited rule-making authority under Title VII. So what generally happens is guidance: for example, last year we put out guidance about religious freedoms in the workplace. That document was over 130 pages, and prior guidance was like that as well. But it’s done through a commission vote. So there is a process, and it’s not necessarily a federal rule, but there are different levels of guidance we can do. And I do think, you know, as this initiative has come out, and as you’ve talked about with Chair Burrows, it’s very important we do that. And who are our groups here? It’s employees who are being subjected to this technology, who may not even know that a computer, not a human, is making a decision. And it’s employers who are buying these systems — and there are two parts to that, buying and developing, which is very relevant to the federal government, too, if agencies start to buy or develop this software. You know, what kinds of things should you be looking at, as an employer, before you buy these systems? With the number of systems out there, with the amount of money going into these systems, you have a lot of options. And everyone’s promising that this is a program that, if you spend money on us, will help eliminate bias and help you with your diversity needs. We talked about some examples of how that can be true and how it can not be true. So what can employers ask these vendors, from an EEOC perspective? And then, once you buy it, you can’t have a hands-off approach, because, you know, allowing the computer just to make decisions itself — decisions essentially related to somebody’s civil rights in the workplace — can be very, very complicated and have significant ramifications. Because under our laws, whether an employer intends to discriminate or the discrimination just happens, the liability can essentially be the same. So then it’s going over — and I think the EEOC really needs to talk about this — what can the employer do, once they buy the software, to make sure that it’s not discriminating? How do you do testing? How do you check whether there are biased results? More importantly, also, how do we, as the federal government, communicate to vendors?
Tom Temin: Sure.
Keith Sonderling: And that’s, historically, sort of the difficult part, because under our law, specifically at the EEOC, we have full jurisdiction over the employers. But when you start getting out — and this is not unique to the EEOC — when other areas start to affect the agency’s laws, how do you communicate with them? How do we make sure they have best practices, so they design these systems to comply with the law, so the people who are buying them are not violating the law through those systems, even though there are certainly some questions about the vendors’ own liability? That sort of outcome would be a great result of all this. And it’s a unique moment in this technology’s growth. We, the federal government, can come in now, especially in the tech world, which is where all of this is based, and say, look, we know that there are a lot of benefits to the technology. Or, from my perspective, we know that this is being bought widely, it’s already being used, and you can’t stop the train now. So how can we — the EEOC, the federal government — be at the table in a very advanced technological area, and work with them at the onset to make sure the programs are used properly, but also continually developed with our laws in mind?
Tom Temin: Right. So this becomes a good testbed for even other industries than employment and hiring, because the general approach you have to have in AI is to prevent whatever bias you don’t want in your system, whether it’s personnel or some type of machinery.
Keith Sonderling: Which also, as you know, goes outside of the EEOC’s jurisdiction. If you look at the FTC, they’re dealing with housing, and they’re dealing with, you know, credit. And a lot of this framework — with the vendors, who are also, you know, developing and selling systems in those areas — can really help achieve global compliance overall, if we’re all doing this on the ground floor.
Tom Temin: Right, and Jared playing lacrosse can get a good home loan, I guess, if the system is built the right way rather than the wrong way. And the other question, then, is that these systems need to be fed not only with data that doesn’t produce biased results. This also implies you need an accountable and transparent system, so you can go back, investigate and understand what happened in your system, and be able to adjust it.
Keith Sonderling: Right. And from the EEOC perspective, with how our laws and our enforcement work, we’re going to look at the results. So if the results come from those potentially biased inputs, you’re going to see those biased outcomes. If the inputs are proper, but the AI allows you to intentionally discriminate, then we’re also going to have unlawful results. So it’s either biased inputs or an actual discriminatory intent in using the algorithm — saying, okay, I have 100 applications, and it’s a perfectly representative data set, you know, based on gender, based on national origin, but now I want to filter out everyone who’s over 40, and now I want to filter out, you know, all females. Look, you actually have the right data set, but now you’re using computers to scale somebody’s discrimination, which can be dangerous. So at the end of the day, look at the results. If you’re using these programs, look to see what the actual results are showing, and then you can sort of backtrack to see, well, okay, was it a biased input — I mean, bias that’s in the data — or was somebody actually putting, you know, discriminatory intent into the algorithm? So I think that’s, you know, probably one of the key takeaways.
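One concrete way to "look at the results" is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below uses made-up numbers purely for illustration, and the rule itself is a screening heuristic that prompts investigation, not a legal safe harbor:

```python
# First-pass adverse-impact check based on the four-fifths (80%) rule from
# the EEOC's Uniform Guidelines. All applicant counts are hypothetical.

def four_fifths_check(groups: dict) -> None:
    """groups maps a group name to (number selected, number of applicants)."""
    rates = {g: selected / applicants
             for g, (selected, applicants) in groups.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group:8s} selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")

# Hypothetical outcomes from an automated screening tool.
four_fifths_check({"men": (48, 100), "women": (30, 100)})
# women: 30% / 48% is about 0.63, below 0.8, so the results warrant review
```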
Tom Temin: All right, well, we’ve got a lot of homework to do on AI, I guess. Keith Sonderling is a member of the Equal Employment Opportunity Commission. Thanks so much for joining me.
Keith Sonderling: It’s my pleasure.