The Department of Homeland Security is looking to become one of the “early and aggressive adopters” of AI tools within the federal government, and is taking steps to protect critical infrastructure from AI-powered cyber attacks.
Defense and national security community officials, speaking Wednesday at the Institute for Critical Infrastructure Technology (ICIT)’s AI DC conference in Arlington, Virginia, said their agencies see AI as an essential way to maintain an information advantage against malicious actors.
Robert Silvers, DHS undersecretary for policy, said that DHS Secretary Alejandro Mayorkas is setting the department’s own priority areas for AI. Those priorities are meant to align with what the Biden administration has planned for an upcoming executive order on AI.
“One is we should be early and aggressive adopters of the technology,” Silvers said. “We should also be at the vanguard of establishing rules for responsible and ethical and safe use for our own programs.”
Silvers told Federal News Network on the sidelines of the conference that AI tools have served a critical role in advancing DHS’ mission, and that the department is looking at new opportunities to benefit from the technology.
“When you think about how it can be used for our security mission — from detecting and interdicting fentanyl, to streamlined and safer airport security screening — there’s tremendous promise, and we’re going to lean into that,” Silvers said.
DHS, Silvers told the conference audience, is also working on guidance to critical infrastructure companies on the “safe and secure deployment of AI technology” in critical infrastructure operations.
“There’s tremendous promise, as you deploy AI to operate the grid — water supply, hospital systems, financial markets — you’re going to want to make sure, though, that that is done in a safe and secure way. That if things fail, it happens safely, not catastrophically, that you have adequate auditing and testing, that you give due consideration to when should certain kinds of decisions be made with a human in the loop, for example,” Silvers told Federal News Network. “And so, we’re going to also be aggressively pushing guidance out to owners and operators across the country to make sure that they can embrace this really promising amazing technology which we should all celebrate, and do it in a safe and secure way.”
DHS is benefiting from the opportunities of AI while guarding against emerging challenges. Silvers said that AI has had a “stronger impact, to date, on network defense than offense.”
“We haven’t seen sort of catastrophic cyber attacks land that used AI – at least not yet, or that we’re aware of. And so, we’re going to see this is going to develop fast. But let’s not be ‘pollyannish’ about this, there’s going to be adversarial use, for sure,” Silvers said during the conference.
Silvers told Federal News Network that AI is going to be used by network defenders and offensive actors alike, and DHS knows that malicious actors are already experimenting with it.
“I often get asked, ‘Who is this going to advantage more, the cyber defenders or the cyber offensive actors?’ and the jury’s still out on that. But to date, we’ve seen incredible use for defensive use, including finding vulnerabilities, scanning code and other things,” Silvers told Federal News Network. “And we’re really going to be focused on how we can work with the network defense community to gain all that. We’re also very closely monitoring and trying to model out potential adversarial use.”
DHS is also partnering with AI companies to understand the capabilities of this emerging technology, as well as its limitations.
“We really have to be humble in the government about our level of understanding of this technology. It’s super complex, it’s super nuanced. Talent is at a premium, and we’re fighting for talent. Companies are fighting for talent,” Silvers said. “It will only succeed if we are literally shoulder-to-shoulder with the frontier technology companies that are developing the technology, and then also the critical infrastructure companies that are our end-customers, and putting that technology into operation. We have to be developing this together with their input to do this well.”
DHS is also setting new rules of the road for its internal use of AI tools. The department last month released a new set of policies to govern its own use of AI technologies.
“That’s all going to be at the forefront, so that as we embrace the benefits, we ensure that we’re protecting civil rights, civil liberties and privacy,” Silvers told Federal News Network.
Army Cyber Center of Excellence sees AI as key to ‘better, faster, cheaper’ decisions
The Army’s Cyber Center of Excellence also sees AI as a net positive for its cybersecurity mission.
Col. John Agnello, the director of information advantage for the Army’s Cyber Center of Excellence, said the service is looking at the potential of AI to make critical decisions “better, faster, cheaper.”
“We want our data to be faster and more accurate than our adversaries, so we can have some type of information advantage versus our adversaries. And what we’re looking at is how do you use AI for that, and I think that that type of neural network is really where we’re going to use it,” Agnello told Federal News Network at the conference.
Agnello said the Army, much like DHS, is using AI on low-risk tasks, like supporting personnel with continuous monitoring to detect emerging cyber threats.
“Continuous monitoring to help defend and cyber hygiene, it’s low threat. So how can we use AI, where it’s a little more simple, and you don’t have a major threat associated with it?” he said.
While AI has the potential to serve as a force multiplier for the cybersecurity workforce, Agnello said there is a “human in the loop” to make key decisions.
“The bottom line is that we use AI to do those more menial tasks, which allow a human to actually be that button-pusher, whatever that button-pushing may be. It may be something from defending a network, to sending a tweet, to dropping a bomb. It can be anything from that full spectrum there. AI really helps us make a decision, but still, the human has to be the one [and] the commander has to be the one to make that overall decision,” Agnello said.