The intelligence community is working to lay the foundation for adopting AI, including raw technology, training data and workforce education.
Artificial intelligence is a concept that seems tailor-made for the intelligence community. Sorting through massive amounts of data, seeking out patterns large and small and anomalies that warrant further investigation, is what intelligence analysts already do. Imagine what they could achieve when augmented by AI.
But it’s not as simple as just adopting it. Dean Souleles, chief technology advisor for the Office of the Director of National Intelligence, said on Agency in Focus – Intelligence Community that the IC is working now to lay the foundation for adopting AI.
“You cannot build a house without a solid foundation. The foundation of AI is data and computational technology,” Souleles said. “The intelligence community has spent much of the last decade on a program we call ICITE, the information technology enterprise of the IC. And that’s been about modernizing the technology infrastructure. And that is about getting cloud technology throughout the community, making basic computational capability available to our technologists just as it is in the private sector. But that’s not good enough, because the new era of computation requires sophisticated kinds of computing. We talk about GPUs, graphics processing units, or tensor processing units (TPUs), or neuromorphic chips or field programmable gate arrays, or any of the wide variety of specialized computation that enables AI. And we need to make the investments in those things.”
But raw technology is not all the IC needs to take advantage of AI.
“Second, we need to make an investment in data. And not the way most people think. It’s not about acquiring more and more data,” Souleles said. “It’s about acquiring the right data and having it properly prepared for machine learning. The reason you can do image classification is that Stanford, Princeton and others created a database called ImageNet. It’s got 14 million images in it, classified by people into 200,000 categories of things. That’s training data, curated training data. Well, most intelligence data is not that. It is lots and lots of raw data.”
Instead, intelligence data looks more like hours upon hours of drone footage. And in all those hours, there may be only a few minutes’ worth that are significant: a car approaching a checkpoint, for example, or certain people congregating. Watching that footage and noting which parts are or aren’t interesting is called “tagging” the data.
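To make “tagging” concrete: in machine-learning terms, a tag is just a labeled interval of footage. The Python sketch below illustrates what such a record might look like; the schema, field names and labels are hypothetical, invented for this example rather than drawn from any real IC system.

```python
from dataclasses import dataclass

@dataclass
class Tag:
    """One analyst-applied label on a span of footage (hypothetical schema)."""
    video_id: str
    start_sec: float  # where the event of interest begins
    end_sec: float    # where it ends
    label: str        # e.g. "vehicle_at_checkpoint", "crowd_forming"

# A few minutes of interest buried in an eight-hour feed:
tags = [
    Tag("feed_0417", 5412.0, 5523.5, "vehicle_at_checkpoint"),
    Tag("feed_0417", 9104.2, 9188.0, "crowd_forming"),
]

def tagged_fraction(tags: list[Tag], total_sec: float) -> float:
    """Fraction of the footage analysts marked as significant."""
    return sum(t.end_sec - t.start_sec for t in tags) / total_sec

print(f"{tagged_fraction(tags, 8 * 3600):.2%} of the feed is tagged")  # under 1%
```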
But there’s a manpower issue here too: there just aren’t enough analysts to develop a thorough library of tagged data to train AI, at least for intelligence analysis.
“How many images do you think Google has of cats? Lots and lots and lots, right? How many images do you think we have of missile launchers from space? That are high quality? Certainly not tens of millions,” Souleles said. “And that is the challenge. If you don’t have enough data to train your algorithm, AI researchers call it low-shot learning, which means you don’t have what would be sufficient data to train an algorithm. You need ways to do that. So those are areas we’re investing in: low-shot learning, or one-shot learning, or zero-shot learning, where we have no data, which means we might have to create synthetic data to train an algorithm. Those are all rich areas for exploration.”
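One common low-shot technique is to reuse an embedding learned from plentiful data and classify a rare object by which handful of labeled examples it sits nearest to. The sketch below shows that nearest-centroid idea (the core of prototypical networks), with random vectors standing in for real image embeddings; the class names and data are synthetic and purely illustrative, not an IC workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(n: int, center: float) -> np.ndarray:
    """Stand-in for a pretrained image encoder: in practice these 64-dim
    vectors would come from a network trained on plentiful 'base' data."""
    return rng.normal(loc=center, scale=0.5, size=(n, 64))

# Five labeled examples per rare class -- the "low-shot" support set.
support = {
    "launcher": embed(5, center=1.0),
    "decoy": embed(5, center=-1.0),
}

# One prototype (centroid) per class; a query takes the nearest prototype's label.
prototypes = {label: x.mean(axis=0) for label, x in support.items()}

def classify(query: np.ndarray) -> str:
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(embed(1, center=1.0)[0]))  # -> launcher
```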
And then there’s the matter of training the workforce. That’s not limited to reskilling or upskilling, though those are certainly part of it. It also means teaching the workforce, and leadership, exactly what AI can and, more importantly, can’t do.
“So AI as it exists today is narrow; it can do very specific, small things. What it can’t do is most of what people think it can do: it can’t replace human cognition in any significant way,” Souleles said. “It doesn’t have judgment. It doesn’t think. As I’ve said before, it’s easily fooled. I like to say that the most trivial biological intelligence runs rings around the most sophisticated artificial intelligence today.”
That means ensuring leadership doesn’t get caught up in the hype and start treating AI as a get-out-of-jail-free card.
It also means training people to understand bias when working with AI. Most people think of bias as a terrible thing, skewing data, and thereby an AI’s decisions, against a certain group of people. But Souleles said bias is inherent in all data, because someone has to choose what data to use to train the AI. Understanding that bias is the key.
“For intelligence purposes, I’m interested in bias because I want to know how the data is biased so that I know where I can use it. And more importantly, where I can’t use it,” he said. So for example, “if I train my full-motion video [model] to look at checkpoints over the desert, I should not expect that it will work at checkpoints over the jungle. They’re different areas; the bias is toward the desert, right? So that kind of bias, we just need to understand so that we use it for appropriate purposes. I like to think of it as on-label and off-label use. Don’t use AI off-label. If you don’t understand what it’s intended to do, don’t use it for that thing.”
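Souleles’ desert-versus-jungle example is what machine-learning practitioners call domain shift, and it is easy to demonstrate. The toy experiment below trains a classifier on one synthetic “terrain” and scores it on another; the data, the offsets and the function names are invented purely to show the accuracy collapse, not taken from any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def terrain(n: int, offset: float):
    """Toy imagery features for one terrain. The true decision rule is
    local to the terrain: the threshold sits at that terrain's offset."""
    X = rng.normal(loc=offset, scale=1.0, size=(n, 10))
    y = (X[:, 0] > offset).astype(int)
    return X, y

X_desert, y_desert = terrain(2000, offset=0.0)  # training domain
model = LogisticRegression().fit(X_desert, y_desert)

# On-label use: same terrain the model was trained on.
print("desert accuracy:", model.score(*terrain(500, offset=0.0)))  # ~0.99
# Off-label use: the distribution has shifted, and accuracy collapses
# to roughly a coin flip even though the task looks identical.
print("jungle accuracy:", model.score(*terrain(500, offset=2.0)))  # ~0.5
```

The model itself is unchanged between the two calls; only the data moved. That is the sense in which the bias is neither good nor bad on its own, only appropriate or inappropriate to a given use.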