The Department of Homeland Security is launching a multifaceted effort to address both the potential applications of artificial intelligence across DHS’ many missions and the potential threats stemming from the diffusion of AI technologies, as the department’s top official responds to rapid advances in ChatGPT and related technologies.
In an April 20 memo, Homeland Security Secretary Alejandro Mayorkas directed the establishment of an Artificial Intelligence Task Force to be co-led by Dimitri Kusnezov, under secretary for science and technology, and Eric Hysen, DHS’ chief information officer.
“DHS must address the many ways in which artificial intelligence (AI), including revolutionary advances in generative AI, will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of the threats,” Mayorkas wrote. “We must also ensure that our use of AI is rigorously tested to avoid bias and disparate impact, and is clearly explainable to the people we serve.”
He directed the task force to outline a concept of operations and milestones within 45 days. The group is then directed to produce a progress report every 60 days for the remainder of its first year.
The task force will focus on specific applications, including how DHS could deploy AI to “more ably screen cargo, identify the importation of goods produced with forced labor, and manage risk,” according to Mayorkas’s memo.
He also directed the task force to consider how DHS could use AI to “counter the flow of fentanyl into the United States.”
“We will explore using this technology to better detect fentanyl shipments, identify and interdict the flow of precursor chemicals around the world, and target for disruption key nodes in the criminal networks,” Mayorkas wrote.
Customs and Border Protection is already in the midst of a major AI acquisition along those lines.
Earlier this month, CBP released a solicitation seeking “anomaly detection algorithms” to be integrated into the screening systems the agency uses to check cargo coming through the border and other ports of entry.
CBP is seeking algorithms that can help officers more quickly analyze x-ray images of cargo for potential contraband. A key goal of the program, solicitation documents note, is to both reduce processing times and improve security.
CBP Chief Information Officer Sonny Bhagowalia explained the agency’s approach at an April 21 breakfast hosted by the Armed Forces Communications & Electronics Association (AFCEA) Bethesda chapter.
“All those billions and billions of packages coming in, we want AI/ML to tell us which container to look at,” Bhagowalia said. “We just did the largest ever bust of fentanyl . . . that’s a combination of agents and officers who are brilliant, canines, some modeling and other stuff and some technology that we’re using, but we need AI/ML so that the agent and officer . . . they just need an assist.”
Mayorkas also wants the new DHS task force to examine how AI can be used as part of digital forensic tools to “help identify, locate, and rescue victims of online child sexual exploitation and abuse, and to identify and apprehend the perpetrators of this heinous crime.” DHS announced a new homeland security mission, “Combatting Crimes of Exploitation and Protecting Victims,” as part of the latest quadrennial homeland security review released last week.
Finally, Mayorkas directed the task force to focus its initial efforts on “working with partners in government, industry, and academia” to “assess the impact of AI on our ability to secure critical infrastructure.”
The task force, at the direction of Mayorkas, has established two subcommittees: one to look at the potential use of AI to advance DHS’s missions, and the other to examine threats posed by adversarial AI.
‘Downstream safety consequences’
The Cybersecurity and Infrastructure Security Agency is the DHS component responsible for securing critical infrastructure. And CISA Director Jen Easterly earlier this month raised a stark warning about the speed at which AI is developing.
“We are hurtling forward in a way that I think is not the right level of responsibility, implementing AI capabilities in production without any legal barriers, without any regulation, and frankly, I’m not sure that we are thinking about the downstream safety consequences of how fast this is moving,” Easterly said at an April 6 Atlantic Council event.
Easterly mentioned the “weaponization” of cyber, genetic engineering, and biotechnology as potential threats stemming from rapid advances in AI.
She said policymakers will need to consider legal and regulatory regimes for AI “in the very near term.”
“I have been trying hard to think about how we can implement certain controls around how this technology starts to proliferate in a very accelerated way,” Easterly said. “I think this is the biggest issue that we’re going to deal with this century.”
Lawmakers are also keen to get ahead of further developments in generative AI and other technologies. Senate Majority Leader Chuck Schumer (D-New York) earlier this month announced steps to establish a regulatory regime for AI.
The idea is to establish guardrails that “would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology,” Schumer said.