AI & Data Exchange 2024: DARPA’s Kathleen Fisher on prepping for AI’s future through high-risk, high-reward research

DARPA is laying the groundwork for AI's future by tackling high-risk, high-reward research into emerging technologies, often in cases with national security implications.

Agencies across the federal government are looking at artificial intelligence tools to transform the way they meet their missions. But the Defense Advanced Research Projects Agency is laying the groundwork for the future of AI by tackling high-risk, high-reward research into emerging technologies, often in cases with national security implications.

Kathleen Fisher, director of DARPA’s Information Innovation Office, said about 70% of the agency’s portfolio of work involves some form of AI.

“Use cases of AI are pretty much the entirety of DARPA’s mission,” Fisher said during Federal News Network’s AI and Data Exchange.

“DARPA’s mission is detecting and creating strategic surprise, so looking over the horizon for opportunities and threats that might be coming and creating technology to take advantage of the opportunities and ward off the threats so that we’re well prepared for what might be coming as a nation,” she added.

“It’s a challenging time because of how fast the space is moving. Whenever DARPA is thinking about a space, we’re always thinking about, ‘Where’s the surprise coming from? How is it going to impact national security?’ ”

AI-powered cybersecurity

Among its projects, DARPA is looking at AI tools to better defend networks from a growing volume of cyberattacks and ransomware attacks. That work takes place under its Cyber Agents for Security Testing and Learning Environments (CASTLE) program.

“The common practice for when a system is the victim of a cyberattack is to take the system offline, and then go and find all the places where there’s attacking code and clean the system, and then put it back online. But taking the system offline is another way that the attacker is attacking the system because now you have a denial-of-service attack, where the legitimate owners of the system can’t use the system. It’s bad for normal practice, but it’s also bad for military, national security kinds of applications,” Fisher said.

“The idea behind the CASTLE program is, can we defend this system in a different way? Can we have a cyber agent that can actively defend the system while it’s under attack?”

As part of CASTLE, the research team trains AI models both to simulate advanced persistent cyberthreats and to defend against them.

“They attack each other, or play off each other in the cyber gym, where the attacker pretends to be a bad guy, and the defender is a good guy, and they learn how to attack and defend, to be able to protect the system while it’s under attack,” Fisher explained.
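The training setup Fisher describes is, in spirit, adversarial self-play. As a rough illustration only, the Python sketch below pits a toy attacker against a toy defender in a simulated “gym,” with each agent learning from zero-sum rewards. The hosts, actions and learning rule are assumptions made for this example and do not reflect DARPA’s actual CASTLE environment or agents.

```python
import random

# Toy "cyber gym": an attacker agent tries to compromise hosts while a
# defender agent hardens them, and both learn from rewards.
# Illustrative sketch only; not DARPA's CASTLE implementation.

HOSTS = ["web", "db", "mail", "file"]

class BanditAgent:
    """Minimal epsilon-greedy learner over a fixed action set."""
    def __init__(self, actions, epsilon=0.1, lr=0.1):
        self.q = {a: 0.0 for a in actions}
        self.epsilon, self.lr = epsilon, lr

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

attacker, defender = BanditAgent(HOSTS), BanditAgent(HOSTS)

for episode in range(5000):
    target = attacker.act()      # attacker picks a host to exploit
    hardened = defender.act()    # defender picks a host to protect
    breach = target != hardened  # attack succeeds only on an unprotected host
    attacker.learn(target, 1.0 if breach else -1.0)   # zero-sum rewards
    defender.learn(hardened, -1.0 if breach else 1.0)

print("defender's learned priorities:", defender.q)
```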

DARPA, through its resilient systems portfolio, is also looking at AI to help build systems that are much less vulnerable to attacks.

“What AI is going to let us do is basically speed and do things faster and at bigger scale, which is critically important, because the world is faster,” Fisher said.

DARPA, as part of its AI Cyber Challenge, is also partnering with some of the top computer scientists and AI experts in industry. The agency announced the AI Cyber Challenge at last year’s annual Black Hat convention in Las Vegas.

The agency is partnering with OpenAI, Google, Anthropic and Microsoft, which are providing access to their frontier-scale models and large language models. The goal? Finding and fixing software bugs at speed and scale.

“DARPA’s founding document gives us the right to partner with pretty much anybody. And that’s what we do,” Fisher said.
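To make the “speed and scale” idea concrete, a bug-finding workflow built on a large language model can be as simple as sending code to the model with auditing instructions. The sketch below uses OpenAI’s Python client as one possible interface; the model name, prompt and vulnerable snippet are placeholders for illustration, not the AI Cyber Challenge’s actual tooling.

```python
from openai import OpenAI  # assumes the `openai` Python package is installed

# Illustrative only: ask a large language model to review a code snippet
# for security bugs and propose a fix. The model name and prompt are
# placeholders, not the AI Cyber Challenge's actual tooling.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = '''
char buf[16];
strcpy(buf, user_input);  /* no bounds check */
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a code auditor. "
         "Identify security bugs and propose a minimal patch."},
        {"role": "user", "content": snippet},
    ],
)
print(response.choices[0].message.content)
```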

Flagging AI-manipulated media

Although DARPA has been working on AI capabilities for decades, Fisher said the Biden administration’s recent AI executive order raises awareness for that ongoing work.

“The EO is all about making sure that AI is being developed in a way that is well-aligned with the needs of society, the needs of the country and the needs of national security. And DARPA’s mission is making sure that we are advancing what’s needed for national security for the country. So DARPA’s goals and missions are well aligned with the executive order,” Fisher said.

Among its goals, the executive order tasks the federal government with developing ways to distinguish official government communications from “deepfakes” and other AI-manipulated media.

DARPA, through its Semantic Forensics (SemaFor) program, is looking at how it can easily detect manipulated media.

“The idea is often to be able to find inconsistencies — kind of like police interrogations. You put two people in a room, and you find out where they’re not consistent. So, you don’t necessarily know which one of them is lying, but you know one of them is lying,” Fisher said.

To distinguish the real from the fake, DARPA is able to flag microexpressions — small facial movements — that are unique to each individual, she said.

“We all have these microexpressions that are characteristic of how we speak. And we can use machine learning to train person-of-interest models that capture those characteristic microexpressions,” she said.

DARPA has been working on the SemaFor program for about four years. It used to take 10 hours of video for the agency to capture and identify distinguishing microexpressions. Fisher said the team can now extract unique microexpressions using about an hour and a half of footage.

“We can build those defensive models so that when a purported message comes out from a person of interest, we can compare the purported message with that defensive model and see, are those micro-expressions present?” she said.
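One way to picture such a defensive model is as a one-class classifier fit on microexpression features extracted from verified footage of a person, which then scores a purported clip for consistency. In the sketch below, extract_microexpression_features is a hypothetical stand-in for a real facial-analysis pipeline, and the one-class SVM is simply one plausible modeling choice; this is not the SemaFor system.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def extract_microexpression_features(video_frames, dim=32):
    """Hypothetical placeholder: a real pipeline would track facial landmarks
    and summarize small, person-specific movements for each clip."""
    return np.random.normal(size=(len(video_frames), dim))

# Fit the "person of interest" model on features from verified, authentic clips.
verified_clips = [["frame"] * 30 for _ in range(50)]  # stand-in footage
features = np.vstack([extract_microexpression_features(c) for c in verified_clips])
poi_model = OneClassSVM(nu=0.05, gamma="scale").fit(features)

# Score a purported new clip: low scores suggest the characteristic
# microexpressions are missing, i.e. possible manipulation.
purported_clip = ["frame"] * 30
score = poi_model.decision_function(
    extract_microexpression_features(purported_clip)
).mean()
print("consistent with person of interest" if score > 0 else "possible manipulation")
```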

Defining what makes for trustworthy AI

DARPA is also leading research into what it means for AI to be “trustworthy.”

Fisher said the first essential element of trustworthy AI is competency — in other words, does the AI model do what it’s supposed to do, and provide accurate and reliable results?

“A lot of research in AI has been about competence. And until I would say, relatively recently, that’s been the entire thrust of AI research. If it’s not competent, then the rest of trust really doesn’t matter,” she said.

The second element of trustworthy AI is whether the model understands what a user is prompting it to do.

“That’s a human-machine teaming kind of question. It’s about just communicating, what are you supposed to be doing, and can it understand what its instructions are?” Fisher said.

Finally, trustworthy AI requires a moral compass.

“Once it’s understood what it’s been asked to do, does it have a sense of like, ‘Well, is that something I should do?’ You could have an AI operator that is asking you to do something that is morally repugnant, and it should have a sense of like, ‘No, even though you’ve asked me to do that, I’m not going to do that,’ ” Fisher said.

She used the example of using large language models to provide details about how to make a bomb. “It turns out that those defenses are very brittle, which I think is a problem. Large language models are amazing at what they can do,” Fisher said. “They’ve essentially ingested all of the knowledge that is reflected on the web, which has both the best of humanity, but also the worst of humanity.”

Despite a wide range of uses, most AI systems still struggle with understanding what they’ve been assigned to do, unless given very explicit parameters. “Machine learning systems are kind of idiot savants. They get parts of it really well, and then they totally miss other parts of it,” she said.

DARPA, through a program called In the Moment, is also working to develop AI systems capable of making difficult decisions when there isn’t a single right answer.

Fisher said the agency is training the AI on medical triage situations, such as the 2017 mass shooting in Las Vegas that killed more than 60 concertgoers and wounded another 400 people.

“There were just many, many people wounded, and the surgeon who was having to deal with the cases in the hospital had to throw out the normal guide for medical triage because there were too many people and had to make many, many life-or-death decisions in the moment,” Fisher said.

Could AI help with this crisis situation and save lives? More research and time will tell.

