Can agencies actually follow the White House AI order?

The White House has given agencies until the end of the year to make sure their use of artificial intelligence is safe and fair. It tells practitioners to keep humans in the proverbial loop and to let people opt out of AI applications. It also wants them to stop using AI if they cannot meet the safeguard rules. How feasible is all of this? For one view, the Federal Drive with Tom Temin spoke with David Brauchler, principal security consultant at the cybersecurity company NCC Group.

Interview Transcript: 

Tom Temin Everybody is really running around almost like a chicken with its head cut off over AI. And we’ve seen this type of technological wonder come into the government before; they don’t always pan out. Do you think AI is going to be really big, or is it the next blockchain?

David Brauchler Yeah, that’s always an interesting question. Whenever we’re dealing with a paradigm shift of technology, there’s always the potential that we’re dealing with the next internet. This is the next big thing. On the other hand, this might fizzle out in the next five years, and we realize that its use cases are a lot more limited than we had initially anticipated. So right now, we’re still in the honeymoon phase with AI. The government actually released a list of their intended use cases where they think that they could put AI to good use. But as it turns out, it’s almost a wish list rather than a direct plan of attack for the near future. So I think that we’ll see a lot of interesting avenues in the next several years, but whether or not they pan out is anybody’s guess.

Tom Temin And some of the use cases they mentioned, like facial recognition, this is not new. And it was never called AI. Are we kind of retrofitting the word AI onto just sophisticated logic algorithms or processes that were there all along?

David Brauchler I think if you go back five years, you’ll find that we called it machine learning. And this is something we’ve been doing for decades now. Of course, it’s become more sophisticated in recent years, but everything from cybersecurity, with detecting threats on systems, to facial recognition, we’ve been seeing these technologies at play. But now, because AI is the big buzzword that’s been receiving big funding, all of these organizations are rushing to apply that label to their use cases. It might truly be some form of AI, but in the end, a lot of these aren’t truly novel; they’re more iteration on what we’ve already seen.

Tom Temin And getting to the particular follow-on to the executive order, which I guess was from last October, now we have this recent March of ’24 directive on the safety of it. What do agencies actually have to do here, do you think, to summarize?

David Brauchler Sure. Well, a couple of the key points that I noticed are that the transparency and accountability guidelines require individuals to be able to opt out of certain AI use cases. So, for example, if you go to the airport and they’re using facial recognition, an individual needs to have the ability to opt out of that facial tracking without it negatively impacting their ability to get on the plane. And personally, I think that’s a phenomenal step forward for individual rights, to be able to say, I don’t want to be a part of this tracking technology. They also indicated that they wanted to release these models to the public, which provides ample opportunity for what I call public-private collaboration, where individuals and the public can see these models that are released by the government and poke at them. And we can use that to figure out the next things we can do to make AI better for everybody, not just one organization or one federal agency.

Tom Temin Well, are they maybe creating a straw man there? Because remember when the first scanners came out that showed the outline of the human body in some anatomical detail, and people were worried about being seen in that manner and whether those pictures were in fact kept. They weren’t kept. They were just used in that moment and never stored anywhere. And then they abstracted the human anatomy so that you could still see what was in the pocket, but you couldn’t see what was underneath the pocket, to put it that way. And they only keep the photo images until the matching is done; then they’re disposed of. So is it a real fear there? And should someone have the right to opt out when in fact the picture’s destroyed the second it’s used?

David Brauchler Yeah, I think it’s an interesting question in that regard, and a lot of individuals feel differently about what their personal risk tolerance looks like. That’s something we talk about a lot in the security industry and at NCC Group: every individual and every organization is willing to deal with a different amount of risk. So some people might say, I have decent confidence that this is going to be implemented correctly. And I’m not even going to say it has to be malicious; it could just be a mistake. But they’re saying, I’m not willing to put up with the idea that somebody might make that mistake and my information could be released out there somewhere. Another individual might say, who cares, I have a driver’s license; they already have my face. And so we’re dealing with individuals who have different risk tolerances, who are willing to put up with different amounts of risk to their personal data.

Tom Temin We’re speaking with David Brauchler. He’s a principal security consultant for the NCC Group. And you have postulated the idea of another chief, that is, a chief AI officer. We’ve got chief data officers and ten other chief-this-that-and-the-others. What would a chief AI officer do that should not already be invested in the security people, the IT people, the program people and the privacy people?

David Brauchler It seems like there are more chiefs every year. But when it comes to these more specialized roles, like chief information security officer or, in this case, chief AI officer, we’re looking for individuals who have dedicated experience with the technology at hand. So while you might have a CEO of an organization who doesn’t know a whole lot about security but knows a lot about business, you’re going to need a chief AI officer who is in the weeds on how AI is implemented. Now, that doesn’t mean they have to understand all of the math and the calculus that goes on behind the scenes, but they do need to understand the privacy and security risks that AI introduces to an organization. And whether that be a business or an agency, these organizations need to be aware of how they’re using AI and how it’s impacting their data flows.

Tom Temin And on the question of the human in the loop, that’s emphasized very early and strongly in that executive order and the follow-on order, and I don’t hear anyone who promulgates AI without saying the same thing. Have you seen any use cases where you’d have a runaway situation, a revenge-of-the-robots type of scenario? Maybe discuss how you keep the human in the loop and make sure that is a cultural norm that every agency would follow.

David Brauchler Absolutely. In terms of those runaway use cases, we’ve seen systems designed in such a way that we can end up with those unfortunate feedback loops. On the other hand, it is fortunate that we’re early enough in the days of AI that it probably isn’t going to result in any catastrophic implications. So, for example, an organization that uses AI to sign documents is probably not going to be in a business-ending scenario if that goes out of control without human intervention. And on our research blog, research.nccgroup.com, we’ve put up some examples of how AI can go wrong when implemented improperly. But as these technologies are used in more critical infrastructure and in more sensitive use cases, it is imperative that somebody can step in at any given moment and say, this is being used improperly, and we need to make sure to effectively add checks and balances to the AI’s decision-making process. We don’t want a fully automated health care solution; we’re talking about risk of life and limb here. And so when AI is, from a big-picture perspective, an advanced automation solution, we have to have a person there double-checking the results, I would say forever, but at least until we’re confident that AI can make decisions better than any person ever could.
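To make the checks-and-balances idea concrete, here is a minimal, hypothetical sketch in Python of the kind of human checkpoint Brauchler describes: an automated decision executes on its own only above a confidence threshold, and anything below that is routed to a person before the system acts. The class name, function names and threshold are illustrative assumptions, not any agency’s actual workflow.

from dataclasses import dataclass

@dataclass
class AIDecision:
    action: str        # what the model proposes to do, e.g. "route_benefits_claim"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    rationale: str     # explanation shown to the human reviewer

CONFIDENCE_THRESHOLD = 0.95  # below this, a person must sign off before anything happens

def request_human_review(decision: AIDecision) -> bool:
    # Stand-in for a real review queue; here we simply ask on the console.
    answer = input(f"Approve '{decision.action}'? Rationale: {decision.rationale} [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_oversight(decision: AIDecision) -> bool:
    # Act automatically only when confidence is high; otherwise keep the human in the loop.
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return request_human_review(decision)
    # Even auto-approved actions are printed so a person can audit them afterward.
    print(f"auto-approved: {decision.action} (confidence {decision.confidence:.2f})")
    return True

if __name__ == "__main__":
    execute_with_oversight(AIDecision("route_benefits_claim", 0.72, "Claim matches policy 4-B"))

The point of the sketch is the structure rather than the numbers: somebody can always step in before the system commits to an action.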

Tom Temin And is it possible to make, say, the applications of generative AI accountable and traceable and auditable? I’ll make the analogy of Boeing, which is having issues with manufacturing quality. But every rivet and screw is documented. And if one screw goes in wrong and they replace it, that’s actually got to be documented so that everything can be audited later on. Can you do that with training data and training inputs to generative AI?

David Brauchler It’s an interesting case, and I would say tentatively yes. Organizations can keep track of everything that goes into these systems and everything that comes out of them. However, when we put things into a model, there is the risk that they, in a sense, disappear into this larger amalgamation of data that is being processed. And so as a result, you can end up with a case where an organization or an agency is feeding a person’s data into this model. They throw away that data and think that they’re safe because it’s been lost in the ether of AI. And then somewhere down the line, a threat actor or a hacker goes in, talks to the AI and figures out the magic words they need to whisper in order to extract this data from the model. So whenever an AI sees something, we can’t be confident that it ends up going away. Which is the same reason that no organization should be feeding confidential, internal information to third-party AIs. For example, I’ve seen individuals send source code to, say, OpenAI’s ChatGPT. But they don’t realize that when they’re doing that, they’re effectively exposing it for the world to see. It’s just a matter of time before that is later leaked from the model itself.
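As a rough illustration of the traceability Temin asks about, here is a short, hypothetical Python sketch of an append-only audit trail that records a hash of every prompt sent to a model and every response that comes back. The file name and function are assumptions made for the example; a real deployment would use tamper-evident storage rather than a local file.

import hashlib
import json
import time

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed location; in practice, tamper-evident storage

def record_interaction(model_name: str, prompt: str, response: str) -> None:
    # Append one model interaction, storing hashes so sensitive text is not copied into the log.
    entry = {
        "timestamp": time.time(),
        "model": model_name,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log the exchange before any data leaves the organization.
record_interaction("example-model", "Summarize claim #1234", "Claim summary ...")

A log like this cannot pull data back out of a model once it has been trained in, which is Brauchler’s warning, but it does give auditors a record of exactly what went in and when.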

Tom Temin So you might say the output of generative AI is a cake that can be unbaked.

David Brauchler It is a cake that can be split open, I would say. It’s difficult to remove data, but it is possible to see what was put into it to begin with.

Copyright © 2024 Federal News Network. All rights reserved.
