The intelligence community has to be very careful about the ways it implements artificial intelligence, because of the stakes of its work and its proximity to policy. That makes it even more important to have humans in the loop to evaluate AI inputs and outputs, and it’s why AI chatbots like ChatGPT won’t replace human analysts at the CIA any time soon.
“It’s very important that when we’re using AI systems to collaborate with our officers, we make sure their tradecraft is incorporating this new, sometimes novel technology that they might not have already used, or they’ve been using it for a while and things are shifting,” Lakshmi Raman, chief artificial intelligence officer at the CIA, said on Federal Monthly Insights – AI/ML Systems. “Augmenting training for AI all the way from our general workforce to our senior leaders so they understand how it can help them do their jobs is also a huge focus of the work we’re doing at the agency.”
Raman said generative AI and large language models really “caught the intelligence community by storm” in late 2022, when its potential to augment human analysts became clear. And that’s precisely what the CIA wants to do: Augment its human analysts, not replace them, in order to enable the agency’s mission.
One of the top opportunities Raman sees, particularly for generative AI, is the sheer scale of the CIA’s data collection.
“It’s always been a challenge for our officers to sift through all of that information to find the relevant pieces of information they need,” she told the Federal Drive with Tom Temin. “And so we’ve made really significant progress in using [large language models] to help that workforce manage the open source data they collect. So they’re using it to cluster search and summarize events, which in turn makes it easier to identify important information and to delve deeper when needed.”
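Conceptually, that cluster-search-summarize workflow can be sketched in a few lines. The sketch below is purely illustrative: it clusters short texts by term overlap and “summarizes” each cluster with its most frequent terms, whereas a real pipeline like the one Raman describes would use LLM embeddings and generated summaries. The documents, stopword list and similarity threshold are invented for illustration.

```python
# Illustrative stand-in for an LLM cluster-and-summarize pipeline:
# group documents by word overlap, then pick each cluster's top terms.
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "on", "at", "to", "for", "and"}

def terms(doc: str) -> set[str]:
    """Crude tokenization: lowercase words minus stopwords."""
    return {w.lower().strip(".,") for w in doc.split()} - STOPWORDS

def jaccard(a: set[str], b: set[str]) -> float:
    """Similarity as shared terms over total terms."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedily assign each doc to the first cluster it resembles."""
    clusters: list[list[str]] = []
    for doc in docs:
        for c in clusters:
            if jaccard(terms(doc), terms(c[0])) >= threshold:
                c.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

def summarize(cluster_docs: list[str], top: int = 3) -> list[str]:
    """A 'summary' as the cluster's most common terms."""
    counts = Counter(w for d in cluster_docs for w in terms(d))
    return [w for w, _ in counts.most_common(top)]
```

Swapping the term-overlap similarity for embedding distance, and the top-terms summary for an LLM call, turns this toy into the shape of the workflow Raman describes: cluster first so analysts delve deeper only where needed.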
But the CIA certainly isn’t starry-eyed about these new technologies; Raman said the agency is prioritizing supply chain risk management, systems security monitoring, and other security processes in its adoption and implementation. One of the biggest risks she said the CIA is concerned with is data poisoning, where attackers seed malicious data into the training sets used by AI in order to bias or otherwise undermine the results delivered.
Raman said that to mitigate this risk, the CIA is implementing monitoring that goes beyond the usual application programming interface endpoints, with a focus on the outputs of its AI models.
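Output-side monitoring of that sort can be illustrated with a minimal sketch: keep a rolling baseline of some simple statistic of model responses and flag responses that deviate sharply from it. The choice of statistic (response length), the window size and the z-score threshold below are assumptions for illustration, not a description of the agency’s actual tooling.

```python
# Hedged sketch: flag model outputs that drift far from a rolling
# baseline, complementing ordinary API-endpoint monitoring.
from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.baseline = deque(maxlen=window)  # recent response lengths
        self.z_threshold = z_threshold

    def check(self, response: str) -> bool:
        """Return True if the response looks anomalous vs the baseline."""
        length = len(response.split())
        anomalous = False
        if len(self.baseline) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(length - mu) / sigma > self.z_threshold:
                anomalous = True
        if not anomalous:
            self.baseline.append(length)  # only normal outputs update it
        return anomalous
```

In practice the monitored statistics would be richer (topical distribution, refusal rates, classifier scores), but the pattern is the same: watch what the model says, not just whether the endpoint is up.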
“We’re ensuring that we are accounting for those specific needs,” she said. “How are we governing all of this? How are we ensuring that we are looking at the appropriate levels of risk mitigation at every portion of that process that I just described? And it also is about how do we make sure that we have the right training, tooling and our officers have the right tradecraft to understand what this looks like and what this means.”
Because the AI models the CIA uses are probabilistic, unlike most traditional software, which is deterministic, that training is important for recognizing when the quality of outputs is changing. She said model drift is a function of data drift over time, so it’s important that monitoring is built into the agency’s AI governance plan and continuously evaluated.
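The continuous evaluation Raman describes often boils down to comparing live model inputs against a reference sample and alerting when the distributions diverge. The sketch below uses a two-sample Kolmogorov-Smirnov statistic for that comparison; the 0.3 threshold is an illustrative assumption, and production systems would tune it per feature.

```python
# Minimal data-drift check: the largest gap between the empirical CDFs
# of a reference sample and a live window of model inputs.
import bisect

def ks_statistic(reference: list[float], live: list[float]) -> float:
    """Two-sample Kolmogorov-Smirnov statistic (max CDF gap)."""
    ref_sorted, live_sorted = sorted(reference), sorted(live)
    n, m = len(reference), len(live)
    points = sorted(set(reference) | set(live))
    return max(
        abs(bisect.bisect_right(ref_sorted, x) / n
            - bisect.bisect_right(live_sorted, x) / m)
        for x in points
    )

def drifted(reference: list[float], live: list[float],
            threshold: float = 0.3) -> bool:
    """Flag drift when the distributions diverge past the threshold."""
    return ks_statistic(reference, live) > threshold
```

When the input data drifts past the threshold, that is the signal to re-evaluate, and possibly retrain, the model before its outputs quietly degrade.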
And that’s where the training comes in: Many federal agencies are having a difficult time competing with the private sector for talent in the data science field. Upskilling current employees is one solution that many are exploring to help ease that challenge. Raman said CIA is also extending its partnerships with academia, national labs, and within the IC itself. One example of that is an IC AI model exchange, which allows them to share capabilities and encourages trust and transparency between intelligence agencies. But there’s still plenty of work to go around.
“The talent bottleneck is a huge challenge for us, and we at the agency have an incredibly talented AI practitioner workforce: data scientists, analytic methodologists, business analysts,” Raman said. “They are really top of the line. But boy, does the demand for their services exceed the supply.”
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Daisy Thornton is Federal News Network’s digital managing editor. In addition to her editing responsibilities, she covers federal management, workforce and technology issues. She is also the commentary editor; email her your letters to the editor and pitches for contributed bylines.