The Defense Advanced Research Projects Agency is working with leading technology companies on a new challenge aimed at using artificial intelligence to find and fix software vulnerabilities in an effort White House officials hope will showcase the “promise” of AI.
DARPA’s “AI Cyber Challenge” announced today will be a two-year effort to use AI to quickly identify and patch critical software bugs. Leading companies Anthropic, Google, Microsoft and OpenAI have all agreed to provide “their expertise and their platforms for this competition,” Arati Prabhakar, director of the White House Office of Science and Technology Policy, told reporters in a briefing.
“This is one of the ways that public and private sectors work together to do big things, to change how the future unfolds,” Prabhakar said.
Teams who enter the challenge will design “novel AI systems” that compete to secure critical software code. DARPA will award a total of $18.5 million in prizes throughout the competition.
Teams can register to compete through an “Open Track” that doesn’t include any upfront funding. The agency will also separately award up to $1 million each to as many as seven small businesses under a “Funded Track” so those companies can compete in the initial phase of the challenge as well.
DARPA will host the semifinals of the challenge at DEF CON 2024, while the final competition, with the top prize of $4 million on the line, will be held at DEF CON 2025.
While the emergence of generative AI over the past year has sparked concerns about nefarious uses of the technology, including in cyber attacks, officials said the DARPA competition will help the United States stay ahead of adversaries’ offensive cyber capabilities.
The Biden administration has made it a major priority to better secure critical infrastructure systems from cyber attack. And patch management has become a major challenge for IT departments across government. Agencies have been under pressure to quickly find and fix an expanding set of “known exploited vulnerabilities” in their software systems as part of a 2021 binding operational directive issued by the Cybersecurity and Infrastructure Security Agency.
“As we look at the promise of AI, we also know from the huge risks we see in cybersecurity today, even without AI, that we need ways to accelerate finding and fixing vulnerabilities,” Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger told reporters. “We need ways to ensure we can look at a system at scale and identify what’s needed to make it defensible, to make it resilient.”
Officials compared it to previous DARPA challenges, such as the Cyber Grand Challenge in 2014 and the original 2004 “Grand Challenge” for unmanned vehicles, which resulted in major technological advances.
“The DARPA grand challenge for unmanned vehicles jumpstarted the field of self-driving cars, and demonstrated the game changing potential of machine learning,” DARPA Deputy Director Rob McHenry said. “In the AI cyber challenge, our goal is to again create this kind of new ecosystem with a diverse set of creative cyber competitors empowered by the country’s top AI firms, all pointed at new ways to secure the software infrastructure that underlies our society.”
The AI challenge will also feature the involvement of the open source software community. The Linux Foundation’s Open Source Security Foundation will serve as a “challenge advisor” to help design a competition that can address “real world” challenges, Perri Adams, a program manager in DARPA’s Information Innovation Office, told reporters.
She said DARPA will also ask the prize winners to make their winning systems open source “such that the innovations produced by AICC can be used by everyone, from volunteer open source developers to commercial industry.”
“If we’re successful, I hope to see AICC not only produce the next generation of cybersecurity tools in this space, but show how AI can be used to better society by here defending its critical underpinnings,” Adams said.
The major research competition is kicking off as the Biden administration develops a “trustworthy AI” executive order aimed at ensuring “the federal government is doing everything in its power to advance safe, secure and trustworthy AI,” as well as manage its risks. Last fall, the White House Office of Science and Technology Policy released a blueprint for a “Bill of Rights” for the design, development and deployment of artificial intelligence and other automated systems.
And later this summer, the White House Office of Management and Budget plans to release draft guidance on AI within the federal government. It’s expected to detail specific policies agencies should use in developing, buying and using artificial intelligence.
Asked how the federal government expects to take advantage of the technologies that come out of DARPA’s AI cyber challenge, a senior White House official said the hope is to apply the winning systems across the federal government and even to critical infrastructure.
“We’re certainly particularly interested in approaches that, for example, help us identify bugs in the energy grid, bugs in signaling systems for transportation, and help us not only find that but fix them,” the official said. “So this fundamentally is about finding solutions. And then we’re eagerly looking for those solutions to then apply them both for federal government and for critical infrastructure.”