It seems every agency is chasing after ways to use artificial intelligence. AI brings special challenges for security. Now the National Security Agency (NSA) has published what it calls guidance for strengthening AI system security. For what’s in it, the Federal Drive with Tom Temin spoke with the Acting Chief of the NSA’s AI Security Center, Tahira Mammen.
Interview Transcript:
Tom Temin And before we get into some of the specifics, is this advice only for the intelligence community? Or could some plain old civilian agency probably use it to advantage also?
Tahira Mammen Yeah, that’s a great question. So we are focused on protecting national security systems and the defense industrial base. But when we put out information that’s relevant to how owners should protect their AI, that really applies everywhere. So anyone who is deploying AI systems can benefit from the work that we’re doing to support the national security mission that we have.
Tom Temin And what are the special challenges in security for AI? It’s still software, it’s still online.
Tahira Mammen Right. I think with any emerging technology, the interest in scaling up capability is so prevalent, and people want to move fast. What we need within the national security mission, for sure, is for people to also be moving securely, and to be thinking about security as they implement next-generation technology, in this case artificial intelligence. Securing an AI system is an ongoing process: identifying what risks you might be creating as you move forward, implementing mitigations for those risks, and monitoring your system, your network, everywhere you’re putting AI capability, for anomalies, for problems that might present themselves in your AI environment.
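[Editor's note: The guidance itself does not prescribe a specific monitoring technique. As one illustration of the "monitoring for anomalies" step Mammen describes, a minimal sketch might track a model's rolling performance on a known canary set and flag sudden drift with a simple z-score check. All names and thresholds here are assumptions for illustration, not from the NSA guidance.]

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a new observation that sits more than `threshold` standard
    deviations from the baseline history (a basic z-score check)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical baseline: rolling accuracy of a deployed model on a fixed canary set.
baseline = [0.91, 0.93, 0.92, 0.90, 0.94, 0.92, 0.91]

assert not is_anomalous(baseline, 0.92)  # within normal variation
assert is_anomalous(baseline, 0.40)      # sudden drop worth investigating
```

A real deployment would feed alerts like this into the same incident-response process used for other network anomalies, which is the integration the guidance emphasizes.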
Tom Temin And is one of the issues with AI that the outputs can change over time, depending on what it learns? Whereas with regular application software, one plus one always equals two.
Tahira Mammen Right. That’s exactly true. And when we’re talking about national security systems, we also have to be cognizant of the possibility of malicious intent by actors. So we’re looking for model poisoning attacks, looking for data poisoning attacks, learning to understand what your model is doing, not just from a safety aspect, but with a real focus on security.
Tom Temin All right. And within the NSA, how do you go about developing guidance like this? It sounds like it might be something you’re doing for your own AI deployments, which I’m sure you can’t discuss.
Tahira Mammen Well, yes. National security systems include networks with classified information, so this is definitely within the purview of the work that we want to do. So how do we go about creating guidance? For our first guidance out the door, it was really, OK, let’s decide what we need to do to help system owners immediately. Because the drive to implement these capabilities, like I said, is here and now, and we want our national security systems to have the latest technology. We decided that the best place to start was how to secure the deployment environment when you’re bringing in outside artificial intelligence capabilities. Not homegrown, but bringing in something from the outside, what do you need to do? A lot of that is relevant to the cybersecurity work we’ve already been doing at the National Security Agency, talking about zero trust, talking about identity and access, and then layering on top, with the expertise we have within NSA, what it means to protect model weights, and what it means to look a little deeper at monitoring within the system you’re implementing. So our first guidance, which we put out in April, was that first checklist of, OK, when you’re bringing in outside AI, here’s what you need to do. And look at what we plan to publish over the rest of the year as we dive deeper into the different topics and get more specific.
Tom Temin We’re speaking with Tahira Mammen. She is acting chief of the Artificial Intelligence Security Center at the National Security Agency. And you mentioned something in passing: model weights. Those are the parameters by which the AI executes its logic, giving certain weight to certain factors and inputs. So protecting those, I imagine, is crucial to making sure you get consistent outputs, and the outputs you designed the system to produce in the first place.
Tahira Mammen Absolutely, yes. There is a focus on protecting model weights: understanding who has access to them, how you’re storing them, and the use of secure hardware security modules (HSMs) for those really, really critical components. That’s definitely included in the guidance we put out for owners.
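[Editor's note: The guidance doesn't spell out an implementation, but one common building block for protecting stored model weights is an integrity check: record a cryptographic digest of the weights file at deployment time (ideally signed, with the key held in an HSM) and refuse to load weights that no longer match. A minimal sketch, with illustrative names, might look like this.]

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a serialized weights blob."""
    return hashlib.sha256(data).hexdigest()

def verify_weights(data: bytes, expected_digest: str) -> bool:
    """Only load weights whose digest matches the recorded value."""
    return hashlib.sha256(data).hexdigest() == expected_digest

# Stand-in for a serialized model file; in practice this would be read from disk,
# and the recorded digest would be signed and stored separately (e.g. HSM-backed).
weights = b"\x00\x01\x02 example model bytes"
recorded = sha256_digest(weights)

assert verify_weights(weights, recorded)                 # untampered weights pass
assert not verify_weights(weights + b"\xff", recorded)   # any modification is rejected
```

The digest check catches tampering at rest; the access controls and HSM storage Mammen mentions are what prevent an attacker from simply re-recording a digest for poisoned weights.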
Tom Temin HSM meaning.
Tahira Mammen Hardware Security Modules.
Tom Temin OK. You and I know, but the whole world may not know yet. And once your model weights are secured, that gets to another way of skewing results, and that is, as you mentioned, poisoning, somehow altering the data that is fed to the model. But is that really a part of the deployment structures you would want to have in place?
Tahira Mammen So for data poisoning, when we’re thinking about just the deployment environment, what you want to be monitoring is really a lot of the basic cybersecurity: who has access to your deployment environment, and what the possibility is for a nefarious or malicious actor to gain access there. There’s a lot of content we are researching, and will focus on as we move forward, about what data security means in the AI space. But here, the core cybersecurity standards are really what we draw upon when we talk about data security and the deployment environment.
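[Editor's note: The "basic cybersecurity" Mammen describes, controlling and auditing who can touch the deployment environment, can be sketched as a deny-by-default authorization check where every decision is logged for later review. The principals, actions and grants below are hypothetical, purely for illustration.]

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("deploy-audit")

# Deny-by-default: only explicitly granted (principal, action) pairs are allowed.
GRANTS = {
    ("analyst", "query_model"),
    ("ml-ops", "update_weights"),
}

def authorize(principal: str, action: str) -> bool:
    """Check a request against the grant list and audit-log every decision,
    so anomalous access attempts show up in monitoring."""
    allowed = (principal, action) in GRANTS
    audit.info("principal=%s action=%s allowed=%s", principal, action, allowed)
    return allowed

assert authorize("analyst", "query_model")
assert not authorize("analyst", "update_weights")  # least privilege: no implicit rights
```

The audit log is the piece that ties back to the monitoring theme: a denied `update_weights` attempt from an unexpected principal is exactly the kind of anomaly the guidance says owners should be watching for.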
Tom Temin Got it. And then with respect to what comes next. After the deployment environment, I would imagine the data environment, which could be very wide, because the beauty of AI is how much data it can take in, and you and your fellow intelligence community members have operations producing all sorts of data. So is the data and storage environment, which is separate from the specific application deployment, next on the list?
Tahira Mammen Yes, absolutely. The data environment is a high priority, and you’re exactly right that the strength of AI is its ability to deal with vast amounts of data in ways that human beings just never can. So we want to make sure that national security system owners are capable of bringing in those capabilities and then know what it means to protect really critical and sensitive data.
Tom Temin Because many agencies use AI in applications for verification of individuals or for maintaining accounts. And I imagine this type of open source or commercially available data is also used by the intelligence community for different purposes. That’s coming from an environment you can’t necessarily control. How do you deal with that?
Tahira Mammen Right. I think a very interesting aspect of this growth in artificial intelligence capabilities is that so much of it is out in the open, so access to these really futuristic technologies is democratized. Everyone can have access to the capability, and that includes national security systems, which also need to be part of who’s working with the top-tier industry capabilities. So making sure that test and evaluation before deployment is part of the process for bringing in new AI capabilities is paramount when we’re talking about AI security.
Tom Temin And in the larger picture, you do have the artificial intelligence security center itself that you are acting chief of. Tell us more about what’s going on there and the development of it and how it came to be.
Tahira Mammen Yeah, I would love to. The AI Security Center at NSA is the focal point for the AI security mission across our agency. We are jointly invested between the research hub of NSA, which has deep technical capability in researching and identifying the threat space for AI vulnerabilities and AI security, and the cybersecurity mission, which I know you engage with a lot, that works to protect national security systems and the defense industrial base. When we marry those two missions together, the intent is to use the superpower of the National Security Agency, our foreign intelligence and cybersecurity missions, to understand what the threat landscape actually looks like and what our foreign adversaries seek to do that would bring harm to the AI implementations we are trying to deploy. With that, plus engagement with the industry partners who are creating the capabilities NSA owners want to use, we can build a very holistic picture of what’s going on in the AI security space with national security implications, and direct that to our researchers, who can dig deeper on what it means to mitigate, to monitor, to counter the threat. So it’s really a joint venture with a lot of expertise across the agency, but centered in one place. There is not a plethora of AI security experts in the world today; they’re hard to find. So the concept of bringing them all together into one center, where we can be really focused and really threat informed, which is so critical to being able to prioritize the work, that’s the idea and the impetus for this move.
Tom Temin All right. And you sort of answered my final question, which is about the contractor base that operates alongside your own employees in so many different domains of activity. That’s part of this idea of the ecosystem that needs to be protected.
Tahira Mammen Yes. The defense industrial base that underpins national security systems is critical to ensuring a secure fabric for weapons and space platforms, and for networks that carry classified information. The systems, machines and networks responsible for our national security rely on a lot of small and medium businesses who are part of the defense industrial base. So you see a lot of focus within the Cybersecurity Collaboration Center on making sure that they also understand what it means to have cybersecurity and how to do that work.
Copyright © 2024 Federal News Network. All rights reserved.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.