Given the expectation that advancements in artificial intelligence will bolster cybersecurity defenses, the Defense Advanced Research Projects Agency isn't sitting on the sidelines waiting for someone else to push the boundaries of what is possible. It's entered the game in two major ways.
DARPA has launched two projects in recent months aimed at harnessing AI to address the software vulnerabilities that form the basis for so many cyberattacks. Perri Adams, the program manager for both within DARPA's Information Innovation Office, shared the details during Federal News Network's Cyber Leaders Exchange 2023.
“When you’re mining for things, you have a lot of dirt and other things you have to sift through to find those nuggets that you really care about,” Adams said. “We have the same issue today [in software security]. It’s an issue of being able to run automated vulnerability identification tools at scale.”
Vulnerabilities exist in all types of software, but not all computer bugs are equally important. DARPA's goal for the Intelligent Generation of Tools for Security (INGOTS) program is to develop automated tools that will help identify and fix the "high-severity, chainable" vulnerabilities that hackers will exploit, Adams said.
“You have software developers who aren’t experts in computer security, but they’re great at software development,” she said. “We want to give them the tools to reduce the amount of expertise they need and reduce the amount of manual time and labor required to identify the things they care about the most and prioritize those. We’re not wasting time on issues that aren’t actually vulnerabilities that a hacker could leverage.”
DARPA is evaluating proposals for INGOTS. The three-year program will feature two phases:
INGOTS Phase 1 will focus on proving the viability of different approaches and techniques for using AI to help humans identify and fix high-priority vulnerabilities.
"We have improved our ability to automatically identify vulnerabilities, which is a good thing," Adams said. "The need for the ability to triage and characterize those vulnerabilities to remediate them comprehensively has grown as well."
INGOTS Phase 2 will be devoted to maturing the most promising approaches that come out of the first phase.
“I’m incredibly excited about when we do finally have proposals selected and people on contract, we can kick off the program and start developing these automated tools and techniques, or semiautomated tools and techniques, to really help software developers secure their software,” Adams said.
AI Cyber Challenge: Tackling the need to improve critical software patching
Adams kicked off the second AI initiative, the Artificial Intelligence Cyber Challenge (AIxCC), in August at Black Hat’s annual cyber conference in Las Vegas. She also spoke about the challenge at the Def Con hacker conference in Las Vegas that same week.
“I have been gratified by just how many people are excited to participate and who see this as an opportunity to put their skills, whether it’s in AI or computer security, towards a really critical national security issue,” she said.
AIxCC is a two-year effort to use AI to quickly identify and patch critical software bugs. Leading AI companies Anthropic, Google, Microsoft and OpenAI have all agreed to provide expertise and platforms for the competition. The Linux Foundation's Open Source Security Foundation is also serving as a challenge adviser.
“We’re going to design these systems to fit within the software development process,” Adams said.
Teams who enter the challenge will design novel AI systems that compete to secure critical software code.
The deadline for proposals for the Small Business Innovation Research Track of the challenge is Oct. 3. Teams can begin registering for the “Open Track” of the challenge in November. The competition itself will kick off in February.
DARPA plans to award a total of $18.5 million in prizes throughout the competition. The agency will host the semifinals of the challenge at Def Con 2024, while the final competition, with a top prize of $4 million, will be held at Def Con 2025.
“I would love to see as much participation as possible from the research community,” Adams said.
Generative AI — the good and potential bad
Large generative AI language models like ChatGPT have shown the ability to write software code. The potential use of such models to identify and fix vulnerabilities in that code holds promise too, Adams said, but she added that the technology is still nascent for cyber.
"We've also seen the ability to apply some of these large language models to identify vulnerabilities within code, and in some cases, it's been successful," she said. "But there's still significant progress needed. It can find some vulnerabilities, but it also has a lot of false positives. It gets things wrong quite a bit. Anyone who's interacted with LLMs can see that they have significant potential, but they do have limitations."
Adams said DARPA’s challenge will marry traditional computer science approaches like program analysis, a process that has long relied on the deep expertise of human beings, with AI to explore the art of the possible.
"Where that combination happens — how AI and traditional computer security approaches fit together — is going to be one of the core challenges facing competitors," she said.
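To give a flavor of the "traditional" side of that pairing, the sketch below is a deliberately toy example of static program analysis: it walks a Python abstract syntax tree and flags calls to `eval` and `exec`, a classic injection-prone pattern. This is a hypothetical illustration, not part of INGOTS or AIxCC; real analysis tools (and the AI systems the challenge envisions) are vastly more sophisticated.

```python
import ast

# Hypothetical toy rule set: function calls considered risky.
DANGEROUS_CALLS = {"eval", "exec"}

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each risky call in source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # A direct call like eval(x) appears as a Call whose func is a Name.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_dangerous_calls(sample))  # [(2, 'eval')]
```

Even this trivial checker hints at the false-positive problem Adams describes: it flags every `eval` regardless of whether attacker-controlled data can actually reach it, which is exactly the kind of triage and characterization work the programs aim to automate.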