NSA working on new AI ‘roadmap’ as intel agencies grapple with recent advances

The intelligence community is grappling, like many industries and society at large, with rapid advances in large language models and generative artificial intelligence over the past nine months.

And despite intelligence agencies’ propensity for analyzing trends and forecasting future events, officials at the Intelligence and National Security Summit in Fort Washington, Maryland, this week largely agreed that the recent pace of AI development has been surprising.

George Barnes, deputy director of the National Security Agency, described a “big acceleration” in AI since last November, when OpenAI publicly launched ChatGPT.

“What we all have to do is figure out how to harness it for good, and protect it from bad,” Barnes said during a July 13 panel discussion with fellow leaders of the “big six” intelligence agencies.

“And that’s this struggle that we’re having,” Barnes continued. “Several of us have actually been in various discussions with a lot of our congressional oversight committees, just struggling with this whole notion of how do we actually navigate through the power of what this represents for our society, and really the world.”

The NSA and other intelligence agencies have been working in the broader field of artificial intelligence for decades. The issue has become a major priority in recent years, with many policymakers looking to ensure the defense and intelligence communities keep pace with China on AI and related technologies.

Barnes said the NSA is now developing a new “AI roadmap” to guide its internal use of the technologies.

“That’s really focused on bringing forward the things we’ve been doing for decades actually, in foundational AI, machine learning, but then tackling these newer themes, such as generative AI, and then ultimately, more artificial general intelligence, which is beyond the generative and something that industry is still searching to grasp,” he said.

Within the broader intelligence community, officials are eyeing a range of AI use cases in the coming years.

“I see widespread use of simulations, I see hybrid war gaming, the strategic use of red teaming, and anything that is getting AI in the hands of individual officers, regardless of their job, regardless of their role, regardless of their background, technical or not,” Rachel Grunspan, director of the IC’s Augmenting Intelligence using Machines (AIM) initiative, said during a July 14 panel. “And just maximizing the creative capacity of the entire workforce. That’s where I see us going.”

Lakshmi Raman, director of Artificial Intelligence Innovation at the Central Intelligence Agency, said the CIA is actively exploring the use of large language models.

“I think we’re with everybody else in the world in terms of understanding the inflection point in this part of the zeitgeist,” Raman said during a July 14 panel. “We are in that exploration, experimentation phase.”

So far, use cases center on “creativity and content generation,” she added.

“Could it create a first draft of something we could potentially edit?” Raman said. “I think another potential use case is if we summarize a corpus of documents, how do we do [question and answer] against that corpus?”
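
The corpus question-and-answer workflow Raman describes typically pairs a retrieval step with a language model: the passages most relevant to a question are pulled from the document set and supplied as context. The sketch below is a minimal illustration of that retrieval step only, not any agency system; the toy corpus, the bag-of-words scoring and the prompt format are all assumptions for demonstration.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase a passage and split it into word tokens."""
    return re.findall(r"\w+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def top_passages(corpus: list[str], question: str, k: int = 2) -> list[str]:
    """Rank passages by similarity to the question and keep the top k."""
    q = Counter(tokenize(question))
    return sorted(corpus, key=lambda p: cosine(Counter(tokenize(p)), q), reverse=True)[:k]

# Toy stand-in for a summarized document corpus (invented for illustration).
corpus = [
    "Report A: The port facility expanded its container capacity in March.",
    "Report B: Rail traffic to the border region declined over the winter.",
    "Report C: The port facility added two cranes and a new fuel depot.",
]

question = "What changed at the port facility?"
context = "\n".join(top_passages(corpus, question))

# The assembled prompt would then go to a language model for the final answer.
print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```

Production systems generally swap the bag-of-words scorer for learned embeddings, but the retrieve-then-prompt shape is the same.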

Jason Wang, technical director for the Computer and Analytic Sciences Research Organization at the NSA, said the agency is “very active” in understanding the opportunities around large language models. Like the CIA, the NSA is looking at how the technology could be used to help develop first drafts or summarize large amounts of information, Wang said.

“But we do also recognize large language models, in the current iteration, they are fairly static in terms of the cost to train this massive thing,” he continued during a July 14 panel. “The outcome from a language model is going to be four to six months old. So in building responses at speed and scale, sort of chasing this dynamic information landscape, we are also keeping our eye on and continuing to advance research in other methods, like reinforcement learning, extreme machine learning.”
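
Wang’s reference to reinforcement learning points at methods that update from new experience rather than relying on a fixed training snapshot. As a generic illustration of that contrast, and not a description of any NSA research, the toy Q-learning sketch below keeps adjusting its value estimates online as new feedback arrives; the environment and parameters are invented for demonstration.

```python
import random

random.seed(1)

# Toy 5-state chain: moving right eventually reaches a reward. Invented
# purely to show online updates, in contrast to a statically trained
# model whose knowledge is frozen at training time.
N_STATES, ACTIONS = 5, (0, 1)              # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value table, updated online
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration

def step(state: int, action: int) -> tuple[int, float]:
    """Environment transition: reward 1.0 only on reaching the last state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state: int) -> int:
    """Best-valued action, breaking ties randomly."""
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

for _ in range(500):                        # each episode adds fresh experience
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        # Temporal-difference update: the estimate shifts toward new
        # feedback without any full retraining pass.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # values climb toward the goal state
```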

The intelligence community’s use of AI is further complicated by privacy laws and other restrictions on what kind of data intelligence agencies can use.

And while many large language models are trained on wide swaths of data from the Internet, most intelligence community systems are “air-gapped,” or separated from outside networks, due to security and classification policies.

“We work in a place where our systems are separate from the internet, and the rest of the world,” Raman said. “So that integration aspect also is something that we need to continue to work on and optimize.”

Officials also pointed to the need to understand both the data feeding the models and the algorithms underpinning the decisions those models make.

“We have to trust the data before we do anything else,” Defense Intelligence Agency Chief of Staff John Kirchhofer said July 13. “Secondary for us is making sure that the machine learning algorithms that we put in place aren’t just ethical, but they’re also tradecraft compliant. In the same way that we hold our human analysts accountable for tradecraft, we need to do the same thing for the machine. … We need to know what’s inside the black box.”
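
Kirchhofer’s “black box” point echoes a standard model-auditing question: which inputs actually drive a model’s decisions? One common, model-agnostic check is permutation importance, sketched below against an invented toy classifier; nothing here reflects DIA tooling, and the data, model and threshold are assumptions for illustration.

```python
import random

random.seed(0)

# Invented toy data: two features, but only the first drives the label.
X = [(random.random(), random.random()) for _ in range(500)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def model(x: tuple[float, float]) -> int:
    """A stand-in 'black box' that, unknown to the auditor, uses only feature 0."""
    return 1 if x[0] > 0.5 else 0

def accuracy(rows, labels) -> float:
    return sum(model(x) == label for x, label in zip(rows, labels)) / len(labels)

def permutation_importance(feature: int) -> float:
    """Accuracy drop when one feature's values are shuffled across rows."""
    shuffled = [x[feature] for x in X]
    random.shuffle(shuffled)
    X_perm = [
        tuple(s if i == feature else v for i, v in enumerate(x))
        for x, s in zip(X, shuffled)
    ]
    return accuracy(X, y) - accuracy(X_perm, y)

for f in range(2):
    print(f"feature {f}: accuracy drop {permutation_importance(f):.3f}")
# Feature 0 shows a large drop; feature 1 shows roughly none, revealing
# which input the 'black box' actually relies on.
```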

But questions around how large language models learn, and the extent to which they understand context, logic and reason, remain topics of intense debate.

“Collectively as a technical community, we don’t fully understand the limits and boundaries of the behaviors of these models when we’re running them at scale,” Wang said. “There’s a lot that we collectively need to get together to figure out in the space of a safe, trusted and aligned operation under this new language model AI framework.”

But with the recent advances, Congress is now weighing legislation that would require the intelligence community to adopt new policies around AI.

The fiscal year 2024 Intelligence Authorization Act advanced by the Senate Select Committee on Intelligence last month would require the director of national intelligence to establish policies governing all spy agencies’ use of AI within one year of the law’s enactment.

The required policies would include “guidelines for evaluating the performance of models developed or acquired by elements of the intelligence community,” as well as standards around the data used to train models that are acquired by agencies.

The Office of the Director of National Intelligence last published an AI strategy for the intelligence community in 2019 when it released the AIM initiative. More than four years later, Grunspan said the strategy is “definitely due” for an update.

“I’m glad that we didn’t necessarily put it out before now, because so much has changed,” she said.
