Biden admin pushing ‘promise’ of AI for cyber defense

AI could be a big factor in a potential second cybersecurity executive order, but federal cyber pros are also wary of the risks of relying too much on AI.

White House officials are contemplating a new cybersecurity executive order that would focus on the use of artificial intelligence.

Federal cybersecurity leaders, convening at the Billington Cybersecurity Conference in Washington this past week, described AI as both a major risk and a significant opportunity for cyber defenders.

White House Deputy National Security Advisor Anne Neuberger called AI a “classic dual-use technology.” But Neuberger is bullish on how it could improve cyber defenses, including analyzing logs for cyber threats, generating more secure software code, and patching existing vulnerabilities.

“We see a lot of promise in the AI space,” Neuberger said. “You saw it in the president’s first executive order. As we work on the Biden administration’s potentially second executive order on cybersecurity, we’re looking to incorporate some particular work in AI, so that we’re leaders in the federal government in breaking through in each of these three areas and making the tech real and proving out what’s possible.”

Last year, the Defense Advanced Research Projects Agency launched the AI Cyber Challenge. The goal was to explore how cyber analysts can use large language models to quickly find and then patch software vulnerabilities.

Neuberger said the early results from DARPA’s challenge have been so promising that the administration is now launching a pilot to find and fix cyber vulnerabilities in the energy sector.

“There are nation state adversaries, China and Russia specifically, that are focused on finding vulnerabilities, finding zero days in our energy systems to potentially compromise those,” she said. “So we’re looking to take that work and say, ‘Can we not only find vulnerabilities, because that helps offense and defense, but generate patches much, much more quickly?’”
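
To make the find-and-patch loop DARPA is exploring concrete, here is a minimal Python sketch of an analyst asking a large language model to flag and fix a flaw in a code snippet. It is a hypothetical illustration, not the challenge's actual tooling: the model name, prompt and vulnerable snippet are all assumptions, and it uses the OpenAI Python SDK only because it is widely known.

```python
"""Minimal sketch of an LLM-assisted find-and-patch loop, in the spirit of
DARPA's AI Cyber Challenge. Hypothetical example: the model, prompt and
workflow are assumptions, not the challenge's actual tooling. Requires the
`openai` package and an OPENAI_API_KEY in the environment."""

from openai import OpenAI

client = OpenAI()

# A deliberately vulnerable C snippet (classic unbounded strcpy overflow).
VULNERABLE_C = """
#include <string.h>
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
}
"""

PROMPT = (
    "You are a security analyst. Identify any memory-safety vulnerability "
    "in the following C function, then propose a fixed version.\n\n"
    f"{VULNERABLE_C}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any code-capable LLM would do
    messages=[{"role": "user", "content": PROMPT}],
)

# A human analyst still reviews the suggestion before any patch ships;
# the point is to speed triage, not to remove people from the loop.
print(response.choices[0].message.content)
```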

The move to embrace AI for cybersecurity comes after a busy three years for federal cyber teams. President Joe Biden’s May 2021 cybersecurity executive order directed agencies to make across-the-board improvements to their cyber defenses, including multifactor authentication, data encryption, and zero trust principles.

And now, with agencies largely meeting the first cyber executive order’s deadlines, Federal Chief Information Security Officer Mike Duffy said incorporating AI is one of the next steps.

“We are at a point in the trajectory, in the roadmap of zero trust and cybersecurity maturity, where we frankly need something like this,” Duffy said during a Sept. 6 panel at Billington. “We need to automate, we need to streamline.”

While agencies previously faced a dearth of cybersecurity data, many have adopted improved logging capabilities, as well as new endpoint detection and response tools.

“The challenge now is, how can I get to the place where I’m making risk-based decisions based on the full breadth of what I have at my fingertips?” Duffy said. “That’s very challenging in a spreadsheet. It’s not so much when working with emerging technologies like AI.”
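
As a toy illustration of what “risk-based decisions based on the full breadth” of an agency’s telemetry can look like, the Python sketch below scores hypothetical per-user authentication features with an off-the-shelf anomaly detector, the kind of automated triage a spreadsheet cannot do. The field names, values and threshold are invented for the example; no agency schema or tooling is implied.

```python
"""Hedged sketch of automated log triage: score per-user features from
authentication logs with an unsupervised anomaly detector. The fields and
numbers are illustrative assumptions only. Requires pandas and scikit-learn."""

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-user daily features derived from authentication logs.
logs = pd.DataFrame(
    {
        "failed_logins":   [1, 0, 2, 1, 48, 0],
        "distinct_hosts":  [2, 1, 3, 2, 17, 1],
        "offhours_events": [0, 0, 1, 0, 22, 0],
    },
    index=["alice", "bob", "carol", "dave", "svc-backup", "erin"],
)

# Fit an unsupervised anomaly detector; -1 marks outliers worth triage.
model = IsolationForest(contamination=0.2, random_state=0)
logs["flag"] = model.fit_predict(logs[["failed_logins", "distinct_hosts", "offhours_events"]])

# Typically surfaces 'svc-backup' for analyst review.
print(logs[logs["flag"] == -1])
```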

But federal leaders are also being cautious about moving any models too close to the core of their cybersecurity operations.

The Defense Innovation Unit has successfully completed multiple cybersecurity projects that feature AI and autonomy. But Johnson Wu, head of DIU’s cyber portfolio, said those technologies don’t completely replace human cyber professionals.

“Of course, if you go out to the likes of the Black Hats and the DEF CONs, you see a lot of vendors talking about agentic AI and having AI agents actually make decisions based on what’s fed to them by the cybersecurity stack,” Wu said. “We have yet to embark on it. I think our goal is to enable DoD as a fast follower of the commercial industry and take a look at how these bleeding-edge technologies that actually make decisions fare, before we actually let them be in control with the nuclear button.”

Wu said DIU also requires its vendors to sign Responsible AI Guidelines, developed in partnership with Carnegie Mellon University.

“We’re asking the vendors, when you propose, make sure that your technology abides by that and also for our program, for a prototype to graduate into production, we will have checks and balances to make sure that the graduating capability or product abides by the responsible AI guideline,” Wu said. “And in the past few years, we have 11 projects, 18 vendors across three portfolios and the capabilities adopted by eight agencies … all of them have evidence, solid evidence, of their capabilities.”

AI risks

Agency cyber leaders are also concerned about how U.S. adversaries will use AI to attack networks. The Cybersecurity and Infrastructure Security Agency is pushing technology providers to manufacture more secure products out of the gate, rather than relying on patching after the fact.

Matt Hartman, CISA’s deputy executive assistant director for cybersecurity, said the agency’s “Secure by Design” initiative is crucial to defending against AI-driven threats.

“We are now just sort of really at the starting gate in what we are seeing from an offensive use of AI perspective, leveraging AI for more believable spear phishing emails and for automated scanning,” Hartman said during a Sept. 3 panel at Billington. “The concern that we have is that adversaries are able to build automation into the next steps, and they are able to leverage automation to gain persistence and to escalate privileges. We have two options as a community. We can either sit here, as we have been doing for years, and yell into the wind, ‘Enterprises, please patch,’ or we can do something differently to make it easier for them to protect themselves. And that really is what is behind this push for product security.”
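
For a concrete, if deliberately simplified, picture of the automated scanning signal Hartman describes, the short Python sketch below flags machine-speed request bursts in a hypothetical access log. The log format and cutoff are assumptions for illustration; production detection relies on far richer telemetry than request cadence alone.

```python
"""Hedged sketch of one scan-detection heuristic: automated scanners probe at
machine speed, so count requests per source per second. The log entries and
threshold below are illustrative assumptions only."""

from collections import Counter

# Hypothetical (timestamp, source_ip, path) tuples parsed from an access log.
events = [
    ("2024-09-03T10:00:00", "203.0.113.7", "/admin"),
    ("2024-09-03T10:00:00", "203.0.113.7", "/login"),
    ("2024-09-03T10:00:00", "203.0.113.7", "/.env"),
    ("2024-09-03T10:00:01", "203.0.113.7", "/wp-admin"),
    ("2024-09-03T10:03:12", "198.51.100.4", "/index.html"),
]

# Count requests per source per second; humans rarely exceed a few.
per_second = Counter((ip, ts) for ts, ip, _ in events)

SCAN_THRESHOLD = 3  # assumed cutoff for "machine-speed" probing
for (ip, ts), n in per_second.items():
    if n >= SCAN_THRESHOLD:
        print(f"possible automated scan from {ip} at {ts}: {n} req/s")
```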

Federal cyber leaders are also working with industry to prepare for potential AI security incidents. CISA’s Joint Cyber Defense Collaborative, for instance, set up a new group focused specifically on the technology.

Lisa Einstein, CISA’s chief AI officer, said the group is now working on an incident collaboration playbook.

“It’s meant to be an enduring operational community of AI technology developers, providers and security-focused folks to really think, are there ways that we need to adapt, that we need to share information even more quickly or differently in the context of AI systems, bring in the correct organizations,” Einstein said.

And in addition to working closely with industry, cyber leaders emphasized that more research is needed on how AI actually works. Bruce Draper, a former DARPA official and now head of the Computer Science Department at Colorado State University, said “there’s a sense that we’re out over our skis.”

“We have never before adopted technology that we did not understand as thoroughly as we are adopting AI,” Draper said. “And so we have these wonderful systems, and they’re great, and they’re transformative, and we can’t predict when they’re wrong, and we can’t predict when they’re going off the rails … So one of the things that I would call on the government to do, and one of the things I would call on the broader research community to do, is to spend more time not just building the next best widget, but actually trying to understand … the theoretical underpinnings of these systems, because without them, we’re never going to be able to trust them.”
