The Department of Homeland Security has dabbled with affective computing to see if it detects lies among people seeking entry to the country. But Alex Engler says now is the time to put boundaries around potential government uses of the technology.
Affective computing, a family of AI technologies that aim to use biometrics to detect human emotions or someone’s state of mind, is a subject of active research in academia and the commercial sector. The Department of Homeland Security has also dabbled with the technology to see if it’s able to detect lies among people seeking entry to the country. But our next guest says now is the time to put some boundaries around potential government uses of affective computing. In a recent paper, Alex Engler, the Brookings Institution’s Rubenstein Fellow for Governance Studies, argued the president should ban it altogether for federal law enforcement purposes. He joined Federal Drive with Tom Temin to talk about that argument.
Interview transcript:
Jared Serbu: Alex, thanks for being with us. And I think for our listeners, let’s just start out by defining what affective computing is, to set the stage for what we’re talking about here, and also to set some boundaries around what you think the president should ban.
Alex Engler: Yeah, sure, excited to talk about this. So affective computing is a pretty broad set of technologies. Almost anything in which a computer is trying to interpret your emotions or your state of mind is typically going to fall into that category. And that can mean a lot of different things. It can mean identifying veterans who are at risk of suicide. It’s not entirely clear that that works, but that’s a meaningful application that might work. It can be driver monitoring systems. Amazon got in some trouble when it put affective computing into some of its delivery trucks to monitor drivers and see if they look tired, right? That’s another example. This all falls into that category: we’re trying to learn something about a person, about their affective state, with some type of algorithm.
Jared Serbu: And why is law enforcement in particular an area of concern, in terms of a presidential-level ban only in this one area?
Alex Engler: Yeah, that’s a good question, right? Because that’s the tricky part of this: affective computing maybe seems to sort of work sometimes, and there’s lots of active, meaningful research about it, but it isn’t particularly proven or obviously effective broadly. And so what it comes down to is kind of a subjective line in the sand around stakes. It’s okay if we use affective computing in circumstances where it is only changing behavior marginally, right? It’s not that big of an impact. For instance, if you were trying to identify veterans at risk of suicide, there are lots of other factors you can use. It’s going to be whether or not the person has reached out for help, whether or not the doctor thinks they’re higher risk, and other factors, maybe about their medications, so on and so forth, right? And so it’s adding a little bit of information to that. What you get worried about is when you’re really relying on it as a core part of a big, important issue, right? So if you’re going to say we’re going to use this to evaluate criminality or for lie detection, and you don’t really have any other ways to get the information, you’re putting too much weight on a thing that doesn’t work well enough, with really high stakes. And that’s where you get worried. It’s possible this stuff will work in the long run. It’s pretty clearly not there yet, or no one has demonstrated its ability to do this yet. And the stakes are really what gets you to pause.
Jared Serbu: Yeah, and that totally makes sense. But that’s part of why I asked about whether an outright ban makes sense. The analogy I think about is the way I understand the FBI has been using facial recognition software in connection with the Capitol riot arrests. Facial recognition, I get it, is a big, complex and controversial topic of its own, but the way I understand the FBI has been using it is only as a factor in identifying people, and then using some other independent piece of evidence to establish probable cause, not solely the facial recognition. So couldn’t you see affective computing being used in a similar way, as a pointer rather than the sole piece of evidence that you’re using against someone?
Alex Engler: It’s possible, and I think there can be reasonable debate about that. But a big difference I’d like to point out is that when we talk about how well these two technologies work, even the biggest critics of facial recognition don’t say it fundamentally can’t do the task it’s trying to do, right? There is real debate over whether affective computing can do any of the tasks it’s trying to do, right? So with facial recognition, we’re talking about important flaws in a system that does fundamentally accomplish what it says it can. With affective computing, we’re talking about whether it works at all, right? And to be honest, I actually look at the use of facial recognition in federal law enforcement from a bit of a different perspective. I would say that their use of facial recognition is a bit concerning, not because it has no place in law enforcement, but because a lot of agencies, and individuals within agencies, rushed into using systems that they hadn’t evaluated or tested or built best practices around, or even documented how and when they’re using them. And that’s what the recent GAO report says, essentially: that law enforcement got way out ahead of its own standards and a more formalized, rigorous process around using it. So in some sense that GAO report is actually an impetus for my concern, rather than a justification for this use.
Jared Serbu: Yeah, and with affective computing, it seems like there are two things going on at once, right? There are creepy Big Brother privacy aspects to it, and there are “it doesn’t work” aspects to it. So I guess the question that brings up is: is it worse if it doesn’t work, or is it worse if it does?
Alex Engler: That’s a good question. We will certainly find out soon, as this comes to retail stores near you, right? The private sector is going to roll ahead with affective computing, almost no question. And so if you are bullish on its use, there’s actually a lot of reason to think, well, there’s going to be money going into this. The industry is growing; you can Google the market estimates, and they’re going to throw large numbers, millions and billions, at you. I don’t really know which of those are accurate. But the reason the industry might grow really quickly is partially because there’s so much facial recognition and camera infrastructure already set up. So if you have cameras and facial recognition everywhere, how much harder is it to drop an affective computing algorithm on top that says, “Well, how happy was this person walking down this aisle of my retail store?” Right? We already have infrastructure that makes it easy to incrementally add this new technology, and I think that’s what we’re going to see in the commercial space pretty soon.
Jared Serbu: Your suggestion that the president ban this really only for federal law enforcement agencies – is that really just an expediency point, just because it would be more difficult to exclude it from local law enforcement use?
Alex Engler: Yeah, that’s right. I believe that you could ban the technology for certain uses through Congress under the Commerce Clause, but that would require Congress, so that’s your issue right there. And this probably just on its own doesn’t rise to the attention of a pretty distracted Congress at the moment. Now, if we passed broader artificial intelligence policy legislation, which is something that’s worth thinking about, as the European Commission is proposing, for instance, then that might be the point where you consider saying we’re going to ban affective computing in particularly important uses. And other people, especially the AI Now Institute, have proposed exactly that. One of the advantages of this proposal goes beyond its use in just federal law enforcement: it signals to local law enforcement that the feds don’t think this is a reasonable thing to do. And so because a ban at the congressional level for all law enforcement is tricky, and perhaps unrealistic in the near term, there’s value in doing this at the federal level, even if the feds were never going to do it, right? Throw in there that it is kind of reassuring to normal citizens that we’re not using your emotions, what your face looks like and what you do with your tone to assess, in any circumstances, whether you’re telling the truth or whether you’re guilty of something. And I think that’s a meaningful way to signal a different approach than some of the authoritarian uses of AI that we’re seeing in the rest of the world. So I think there’s value in the signal as well.
Jared Serbu: I guess the last thing I wonder here, Alex, is – are there strong reasons to believe that there is a big federal law enforcement interest in applying some of this stuff right now? Or are you just trying to get out ahead of the problem?
Alex Engler: Yeah, that’s a totally fair question, and I think it’s the best criticism of my piece, which is to say: no, only a little bit. Right, there is some. I mentioned local law enforcement has been using eye-tracking software. And several governments, including the U.S., the U.K. and the European Union, have funded exploration into using these tools, usually at the border, for lie detection. That was a program called [AVATAR]. It did not work; even in incredibly generous circumstances, it did not work. And it doesn’t look like that’s been continued, though they did invest in and test it for a while. The other thing I noticed is that vendors are certainly starting to make some promises in the affective computing realm that are really suspicious, like criminal intent detection, which, let me be very clear, no computing can do whatsoever. I don’t really think criminal intent detection exists at all, right? And so the fact that they’re making those claims is a little worrisome, especially if they build tools that individual officers or individual law enforcement agencies can use without going through an approval process or necessarily clearing it with anyone. That is how we saw some of the facial recognition firms get their foot in the door. For instance, Clearview, right? They built an app that anybody could use, they let people test it, and they sort of got their foot in the door that way. But broadly, it’s fair to say this is a preventative, sort of pre-emptive measure. That’s true.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.