The DoD CIO's office wrote a Risk Management Framework overlay for securing the overarching AI environment. It's "a little bit more vague about what to look for and how to do the security, but it's a start," the department's CISO says.
The Defense Department is laying the groundwork for securing artificial intelligence systems and data across the department.
David McKeown, DoD’s chief information security officer, said his team developed a Risk Management Framework control overlay focused on what to look for in an AI environment and how to properly secure the training data, the inputs, the outputs and the models themselves.
Unlike other RMF overlays, which give detailed compliance instructions, this one is less "prescriptive"; the goal is to provide security guidance that covers the whole overarching AI environment.
“It’s a little bit more vague about what to look for and how to do the security, but it’s a start,” McKeown told Federal News Network after he spoke at MeriTalk’s Accelerate AI forum.
McKeown said that while the AI RMF developed by the National Institute of Standards and Technology is valuable guidance for securing AI, there’s a need for a more comprehensive approach to address the broad spectrum of issues related to the nascent technology.
“There are a lot of concerns there that are not cybersecurity, and there are a lot that are cybersecurity. I think I’ll augment my cybersecurity guidance to make sure that any system built using AI or an AI system is covered and the information is protected. But there’s just so much more that I think needs a higher level construct, like the AI RMF that NIST came out with,” McKeown said.
McKeown’s office is currently exploring the use of AI as a tool to analyze and assess the damage caused by a recent data breach. About two years ago, the Defense Department had a significant intrusion, compromising around six terabytes of data.
The DoD CIO’s office is now working with industry to understand how it can move away from manually inspecting and analyzing the damage the breach did to affected programs and use AI instead.
“We’ve been piloting that and working with the vendor to see if it’s in the realm of the possible. We’ve demoed it to [the office of the under secretary of defense for research and engineering], [the DoD cyber crime center], they think it is promising. I think it’s something we want to do,” McKeown said.
“We want to partner with industry on all of this. Certainly, we want to leverage it for cybersecurity. We have efforts where we’re looking at it for offensive cyber. There’s lots of play here for AI,” he added.
McKeown said the lack of education and understanding surrounding AI and its applications is a major reason the Defense Department has been slow to adopt the technology.
“I think we’ve got a huge job in educating people about the benefits this technology can bring. And educating them on how to do that securely and safely,” McKeown said.
“We’ve got to figure this out and get it into a zone where we’re comfortable with it, and maybe even employ some of the zero trust concepts that we’ve been working on in the department to check and make sure that once we’ve tested an algorithm and it seems to work, we continue to monitor it, that it stays within tolerance and is delivering the results that we are looking for.”