Artificial intelligence can do really dumb things with personal information

Who owns your face? Or your fingerprints? And how should they be treated if you use them to gain access to an online system? These are very real questions about privacy and rights to biometric data, questions amplified by the growing use of artificial intelligence. In fact, the Office of Science and Technology Policy is asking the public to weigh in on the development of biometric principles. For what’s at stake, Federal Drive with Tom Temin talked with attorney Duane Pozza, a partner at Wiley Rein and a specialist in AI and related technologies.

Interview transcript:

Tom Temin: Mr. Pozza, good to have you on.

Duane Pozza: Thanks so much for having me.

Tom Temin: And you have written fairly extensively about this. What are the issues for the government, which might be collecting, or at least using, that type of information as it seeks to make logging on and getting access to digital services easier?

Duane Pozza: Sure. As you mentioned at the outset, these issues are arising in the context of a broader initiative coming out of the White House. The Office of Science and Technology Policy, or OSTP, has put out a request for public information and comment on the use of biometrics specifically, and biometrics in the context of AI systems. And separately, although in connection with this, OSTP announced that it is working on an AI bill of rights, some sort of document that would attempt to categorize the different kinds of issues at stake here and set out consumer and individual expectations about the way AI, particularly biometric AI, is being used. So that is their long-term goal. And they raise a number of different issues, including the privacy concerns that you mentioned at the outset.

Tom Temin: Because using a biometric stored at a site, so that people can come back without starting the whole process over again, is one thing when it's stored on your phone; this would be stored in some cloud. And I guess the issue is, it could be purloined by hackers. Or it could be used in some nefarious way, because people are always afraid of biometric databases, especially facial databases. Though I guess we should be used to fingerprint databases; those have been around for a century.

Duane Pozza: Yeah. So the privacy issues are numerous. One of them is protecting this kind of information. And there are various privacy laws on the books, in different contexts, that require protection of this information, because it is personally identifying; it is essentially your fingerprint. But things like facial recognition are interesting as well, because, obviously, your face is publicly available on the internet, so it's not the same thing as a fingerprint. But some of the questions they're asking are around how that information is used. So for example, they raise the question: if AI is being used to scan your face, or for facial recognition, for a certain kind of purpose, does there need to be notice? Should there be a disclosure that says facial recognition is being used in order to make a decision about something, even for security? Those are, I think, some unanswered questions that they're seeking comment on.

Tom Temin: Because we’ve been talking about this mainly as a piece of data to be compared to another piece of data, to give a yes or no answer for admittance or something in the logon scenario. But the other thing you’re writing about is the idea of discerning things from a piece of biometric data that might be used by an AI system for analysis: gait recognition, you mentioned voice recognition, heart rate analysis. Or if someone is crinkling their face, does that mean they’re lying, or something of that kind? That’s where you get into some really gray areas where the policy, I guess, hasn’t kept up with what AI is capable of.

Duane Pozza: Right. When it comes to identification, I think there have been a lot of gains in terms of testing the accuracy of different kinds of software: AI software that does facial recognition, or other kinds of biometric identification. NIST, at the Department of Commerce, for example, runs tests and actually publishes the results for different kinds of software. And the background concern there is accuracy, obviously, but also bias, particularly if some of the algorithms are not as good or as accurate when it comes to, for example, certain racial groups. And that's also the kind of information that NIST puts out. Some of the other issues you just raised arise if you try to use similar sorts of biometric identification for emotion recognition, or to infer other characteristics about a person. The question there is whether you're getting past the science. Are you using the technology in the way it was designed, and how accurate is that? And should there be even voluntary standards around how you test for certain things, like emotion recognition, using this technology?
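To make that concrete, here is a minimal sketch, in Python with entirely synthetic data, of the kind of per-group error comparison a NIST-style accuracy evaluation reports. The threshold, score distributions, and group labels below are all invented for illustration; this is not NIST's actual methodology or code.

    import numpy as np

    rng = np.random.default_rng(1)
    THRESHOLD = 0.6  # similarity score above which two images are declared a match

    # Synthetic similarity scores for genuine (same-person) image pairs.
    # Suppose, hypothetically, the algorithm yields slightly lower scores
    # for one demographic group.
    genuine_scores = {
        "group A": rng.normal(0.75, 0.08, 5_000),
        "group B": rng.normal(0.68, 0.08, 5_000),
    }

    for group, scores in genuine_scores.items():
        fnmr = np.mean(scores < THRESHOLD)  # false non-match rate
        print(f"{group}: false non-match rate = {fnmr:.3f}")

    # Different error rates at the same threshold are one concrete signature
    # of the demographic bias these evaluations are designed to surface.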

Tom Temin: We’re speaking with attorney Duane Pozza; he’s a partner at Wiley Rein. And there’s kind of an irony here, isn’t there? Because in, say, the area of recruitment and hiring, a lot of corporations, and increasingly government agencies, are looking at resumes without names, without even information on whether the person is male or female, and certainly without pictures, just to try to get bias out of any decision that might be made. And yet at the same time, some of these systems are relying on those very characteristics to make decisions. So it seems like the world is almost bifurcating in a funny way.

Duane Pozza: Yeah. And one key issue here, even outside the context of biometrics, is that when you use AI, you have to pay close attention to the dataset you're using to determine whether or not there's any built-in bias. So for the example you gave, just stripping the names out of the resumes: if there's some bias in the set of resumes used to train the model, then it's possible that the model will just replicate that bias. So I think companies who are innovating in this area are focused on trying to control and weed out the bias in the model. And what the federal government is doing, I think, is trying to figure out what the approach should be. Do you want to set standards by which you could measure bias, so that companies have a sense of this in certain areas like hiring? Is this really a regulatory approach or an enforcement approach? I think this overall White House initiative is heading toward trying to answer those questions.
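The bias-replication mechanism Pozza describes can be sketched in a few lines. In this hypothetical setup, the protected attribute never appears in the training features, but a correlated proxy feature (imagine a keyword or school name on a resume) lets a toy logistic regression recover the historical bias anyway. All data, features, and coefficients are invented for illustration; this is not any real hiring system.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # 'group' is a protected attribute that never appears in the training
    # features, but 'proxy' is strongly correlated with it.
    group = rng.integers(0, 2, n)
    proxy = group + rng.normal(0.0, 0.3, n)
    skill = rng.normal(0.0, 1.0, n)  # legitimate qualification signal

    # Historical hiring labels are biased: at equal skill, group 0 was
    # hired more often.
    hired = (skill + 0.8 * (1 - group) + rng.normal(0.0, 0.5, n)) > 0.5

    # Train a toy logistic regression on [skill, proxy] only; the protected
    # attribute itself is stripped, like names on a resume.
    X = np.column_stack([skill, proxy, np.ones(n)])
    w = np.zeros(3)
    for _ in range(2_000):  # plain batch gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.5 * X.T @ (p - hired) / n

    scores = 1.0 / (1.0 + np.exp(-X @ w))
    print("mean predicted hire score, group 0:", round(scores[group == 0].mean(), 3))
    print("mean predicted hire score, group 1:", round(scores[group == 1].mean(), 3))
    # The gap persists: the model recovered the historical bias through the
    # proxy feature, despite never seeing the protected attribute.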

Tom Temin: And one of the things it’s trying to achieve is the development of an artificial intelligence bill of rights. What’s your sense of the potential elements in a bill of rights for AI?

Duane Pozza: That’s a good question. So the administration announced this in an op-ed in Wired magazine, a very 21st century way of doing it, by Dr. Lander and Dr. Nelson, who head OSTP. They outlined a few things that I’m looking out for. The first, potentially, is a right to know when and how AI is influencing a decision that affects civil rights or civil liberties. I refer to this sometimes as a sort of transparency principle: the right to understand that it’s being used. There is a discussion around freedom from being subjected to AI that, and I’m quoting here, is biased. So that’s a sort of bias-avoidance principle. There are discussions around privacy concerns that I expect would make it in there. And then also, the right to meaningful recourse is something the op-ed highlights, which is: if something goes wrong, what is the redress for it? First of all, can you even figure out that there is a harm? And then, what are consumers’ rights if AI goes awry?

Tom Temin: Right. So it sounds like they’ve got a lot of research to do. And I would think that one of the challenges in this public call for comment is to make sure that you can weed out the cranks, and the people that generate 10 million identical responses, to find out what the really well-informed opinion on this is.

Duane Pozza: Right. I mean, ironically, you could probably use AI.

Tom Temin: And in fact, I think that’s something they’re talking about in the whole rulemaking process, part of administrative government, which is another topic entirely. All right, then what’s the deadline for people who want to respond to this OSTP request? I haven’t seen that solicitation personally.

Duane Pozza: So for the RFI on biometrics, the deadline, I believe, is January 15, so in the new year. The general solicitation of feedback on the bill of rights is pretty open-ended. They’ve indicated that they want this to be a dialogue. They had a few events earlier this year in which the public could participate, and they have an email address. So I think the short answer is January 15 for the formal request for comment, but it’s an ongoing opportunity to provide input.

Tom Temin: Attorney Duane Pozza is a partner at Wiley Rein. Thanks so much for joining me.

Duane Pozza: Thanks for having me.
