AI might help agencies make workforce decisions, and that prospect raises ethical issues

It may only be a matter of time before artificial intelligence technologies become relatively common in workplaces. But that prospect raises concerns ranging from privacy to possible bias and much more. Hodan Omaar thinks public sector employers can help solve some of those concerns by becoming responsible early adopters of AI for workforce decisions. She is a policy analyst at the Information Technology and Innovation Foundation’s Center for Data Innovation, and the author of a new paper on AI in the workplace. She joined Federal Drive with Tom Temin to talk about it.

Interview transcript:

Jared Serbu: I think the most logical place to start is to talk us through a little bit what the most interesting applications for AI in the workforce context really are at this point, both in terms of what’s actively in use and what seems to be right around the corner that could be really beneficial.

Hodan Omaar: Yes, so I think the part of the workforce decision area where it’s being used the most is the hiring process. And it’s being used in a number of different ways. For one, AI can be used to target and personalize job advertisements. If you think of platforms like ZipRecruiter or LinkedIn, they have features that can suggest job postings to candidates based on other candidates whom employers have already rated highly. Another use case is in CV parsing. AI services can essentially read through CVs for keywords. But as AI develops, these systems are getting smarter, and the smarter ones are actually using something called semantic search technology, which allows you to pick out concepts. So if you are an employer and you want to parse CVs for the key term “IT security,” the more developed AI systems aren’t necessarily just looking for that keyword, they’re looking for that concept. Any synonym, or anything related that people might have put, will also come up as a suggestion. But I think one of the most interesting AI applications I’ve come across is one that’s been developed to help veterans find employment. The U.S. departments of Labor, Defense and Veterans Affairs put out a challenge to the private sector to fund the development of an AI system that could match the really unique skills of veterans to jobs, because veterans often have these really unique skills that are valuable to employers, but they can be hard to translate. And so this is an example, I think, of federal agencies really looking to AI to solve a mission-specific workforce problem in a really interesting way.
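For illustration, here is a minimal sketch of the concept-level matching Omaar describes, using sentence embeddings. The library, model name and sample resume lines are assumptions made for the example, not details from the interview.

    # A minimal sketch of semantic CV matching: score resume lines against
    # a search concept by embedding similarity rather than keyword overlap.
    # The model and sample text are illustrative assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    concept = "IT security"
    cv_lines = [
        "Managed firewall rules and led incident response",  # related, no keyword
        "Coordinated warehouse logistics and shipping",      # unrelated
        "Certified in network penetration testing",          # related, no keyword
    ]

    concept_vec = model.encode(concept, convert_to_tensor=True)
    line_vecs = model.encode(cv_lines, convert_to_tensor=True)

    # Cosine similarity: semantically related lines score high even though
    # none of them contains the literal keyword "IT security".
    scores = util.cos_sim(concept_vec, line_vecs)[0]
    for line, score in zip(cv_lines, scores):
        print(f"{score.item():.2f}  {line}")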

Jared Serbu: That does sound innovative. I was actually going to ask about applying some of this stuff in the public sector, because not just the federal government, but a lot of state and local governments, too, tend to have requirements or at least habits that cause them to do all of their recruiting and job posting on a single centralized website, which makes some of that targeting more difficult. Are there ways you can use AI even in that context, where you’re running everything through one web portal?

Hodan Omaar: I definitely think so. Because if you think of platforms like LinkedIn, a number of different people and different employers are all using that one platform, but for their own services and their own particular needs. So I definitely think that’s an option.

Jared Serbu: One of the big areas that your paper gets into is the biases that we all have in the existing process, which can sometimes be amplified or mitigated by AI. So what are the things that both public sector and private sector hiring activities need to be thinking through to make sure that they get some of the benefits of AI to reduce bias, rather than amplify it?

Hodan Omaar: Yeah, so I think there are a couple of reasons why AI systems can amplify biases. One of those reasons is that they are just more complex. I’ve discussed some of the perhaps simpler ones, but there are also facial recognition systems and AI systems that are looking at video recordings and audio recordings and extracting things from those. And those can be really, really complicated. The other side of it is scalability. If you have a biased human employer, they might just be looking at 100 CVs, but an AI system might be looking at thousands and thousands of them. But on the other hand, as you said, these systems can actually help employers create fairer employment practices that actually do promote diversity and inclusion. And I do think it’s something that should really be a priority for government agencies. There was a recent study, reported in the New York Times, that said Black women were half as likely to be hired for state and local government jobs as white men. So one of the ways AI can mitigate some of those issues is that it can look at a job description that might be written by a human and look for words that may discourage certain groups from applying. Research shows that women apply at a lower rate for jobs in male-dominated fields when those job descriptions use words typically associated with stereotypes, so think of male stereotypes like “strong leader” and “determined.” By swapping those words for more neutral terms, employers can actually attract some groups that wouldn’t have otherwise applied. They can also use AI systems to redact demographic and socioeconomic indicators from job applications to combat unconscious bias. And outside of the hiring side of things, employers can also use AI and analytics to evaluate their compensation, promotion, training and termination practices to ensure that they are fair and unbiased.
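As an illustration of the two steps Omaar mentions, here is a minimal sketch of swapping coded words in a posting and redacting demographic indicators from an application. The word lists and regex patterns are illustrative assumptions, not a vetted lexicon.

    # A minimal sketch of two debiasing steps: neutralizing stereotype-coded
    # words in a job posting, and redacting demographic indicators from an
    # application. Word lists and regexes are illustrative assumptions.
    import re

    NEUTRAL_SWAPS = {
        "dominant": "leading",
        "aggressive": "proactive",
        "rockstar": "high performer",
    }

    def neutralize_posting(text: str) -> str:
        # Replace each coded term with a more neutral equivalent.
        for coded, neutral in NEUTRAL_SWAPS.items():
            text = re.sub(rf"\b{coded}\b", neutral, text, flags=re.IGNORECASE)
        return text

    def redact_application(text: str) -> str:
        # Gendered pronouns, and graduation years (a proxy for age).
        text = re.sub(r"\b(he|she|his|her)\b", "[REDACTED]", text, flags=re.IGNORECASE)
        text = re.sub(r"\b(19|20)\d{2}\b", "[YEAR]", text)
        return text

    print(neutralize_posting("Seeking a dominant, aggressive rockstar for sales."))
    print(redact_application("She graduated in 1998 and managed her team."))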

Jared Serbu: One of the key points that your paper also makes is that you’re urging governments to be early adopters of AI. Are there obvious places, beyond some of the hiring processes that we’ve already talked about, for governments to start with AI?

Hodan Omaar: I think that example was a good one, because it’s an example of where agencies are starting with a problem that really benefits their employees or prospective employees. I think the places where AI technologies offer clear employee benefits are the best places for government agencies to start, because workers are more likely to embrace them in spite of any concerns they might have. So how can a government agency use AI to help their workforce manage their mental health or wellness? How can it match them to the right jobs? How can it use AI to ensure that continued training matches each employee’s speed? Current ongoing training tends to assume that employees go at a fixed pace, but some people need to go slower and some people need to go faster. These are all examples where the message is, “Here’s how AI is going to benefit you,” as opposed to, “This is how we’re using AI to monitor you,” because it can build on the trust that employees have with the employer. And I think the other side of it is really about being transparent about the use of AI in the workplace. One of the big concerns that employees have is that these systems are being used without their knowledge. There was a report that came out in the U.K. that said 50% of U.K. employees believe companies are using AI systems that they’re not aware of. So just starting from a place of transparency, I think, is also really important.

Jared Serbu: Yeah, let’s stick with that transparency piece for a little bit, because this is a little bit unique in the public sector, right? I mean, not only do you need your workforce to be able to trust the employer and trust the algorithms, public sector hiring managers are also uniquely accountable as public entities: they have to make sure not just that they’re not running afoul of EEO rules, but that they’re following merit system principles and that they have the public’s trust, not just their workers’ trust. One of the things the paper tries to do, I think, is balance the need for that trust you talked about without having so much oversight that you stifle innovation. So what’s the right way, do you think, to strike that balance, especially in the public sector context with those additional considerations?

Hodan Omaar: I think that’s a great question. I think public sector employers really need to build, and disclose to their employees and to the public, the oversight and accountability measures they have in place to ensure the accuracy and effectiveness of their AI systems. And I think what you said is really right, that the responsibility is on them as the users of these systems. Sometimes the onus can seem to fall on the AI developer, but that developer can never know exactly how a user of its system is going to use it. So it’s up to the user of that system, whether that’s a public sector entity or anyone else, to show how they are able to catch unfair or inaccurate decisions. It also means having a process in place so that if I, as an employee, believe a decision has been made about me that I think is unfair, I know how to let my employer know. One way employers can do this is by building more worker-facing tools. For example, if you are using an automated tool to extract information from resumes, applicants often can’t see whether the information in their application has been parsed correctly. So can we implement effective ways for individuals to provide oversight of, and feedback on, the decisions these AI systems make? Because that’s going to really help build that trust.
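One minimal sketch of the kind of worker-facing check Omaar describes, echoing parsed resume fields back to the applicant for confirmation, follows below. The field names and flow are hypothetical, not taken from any specific product.

    # A minimal sketch of a worker-facing review step: show the applicant
    # what the parser extracted so they can confirm or correct it before
    # any decision is made. Field names are hypothetical placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class ParsedResume:
        name: str
        skills: list[str] = field(default_factory=list)
        years_experience: int = 0

    def show_for_review(parsed: ParsedResume) -> None:
        # Echo the machine's reading back to the applicant.
        print("Here is what our system extracted from your resume:")
        print(f"  Name: {parsed.name}")
        print(f"  Skills: {', '.join(parsed.skills) or '(none found)'}")
        print(f"  Years of experience: {parsed.years_experience}")
        print("Reply 'correct' to confirm, or submit a correction.")

    show_for_review(ParsedResume("A. Veteran", ["network security"], 8))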

Jared Serbu: But I think, especially in the public sector, there’s always going to be a temptation on the part of the user or by oversight bodies to put at least a little bit of accountability on the AI developer, because the feeling, I think, is going to be, “Well, we can’t see what’s happening inside that black box, so it’s very hard for us to evaluate whether or not this system is making more biased decisions than we would on our own.” How would you respond to that sort of concern?

Hodan Omaar: I think that’s right. Of course, if the AI developer has used inaccurate data or hasn’t really gone through the right processes to ensure that their system is as accurate as it can be, then there is onus on the AI developer. But there are many mechanisms available for that AI developer to show how they have trained that system and how they have ensured its accuracy, things like impact assessments. So when a public sector entity is contracting with these private sector companies, it should make sure that they have used some of those available mechanisms, because it is the one contracting with all these different vendors, so it has the ability to filter through to find the best one. And I think if the onus is on them to do their due diligence and contract with the right people, then there’s an incentive for them to make sure that they are working with the best.
