DHS AI roadmap stakes claim to “lead” government in responsible AI use

Multiple use cases, ranging from immigration processing to emergency management, are envisioned under the DHS AI roadmap.

The Department of Homeland Security is pursuing multiple artificial intelligence pilot projects this year, while also establishing an “AI sandbox” for testing the use of large language models, under a DHS AI roadmap released this week.

The roadmap details how DHS will test out generative AI and large language models for uses ranging from training U.S. Citizenship and Immigration Services officers to aiding law enforcement investigations.

In a statement released today, Homeland Security Secretary Alejandro Mayorkas said the DHS AI roadmap will guide the department’s efforts in 2024 “to strengthen our national security, improve our operations, and provide more efficient services to the American people, while upholding our commitment to protect civil rights, civil liberties, and privacy.”

“What we learn from the pilot projects will be beneficial in shaping how the department can effectively and responsibly use AI across the homeland security enterprise moving forward,” Mayorkas added.

The document also stakes out a claim for DHS to “lead across the federal government in the responsible use of AI to secure the homeland and defend against the malicious use of AI.”

The DHS AI roadmap lays out several pilot projects planned for 2024. USCIS will use LLMs to help train refugee, asylum and international operations officers. The technology will help train them on “how to conduct interviews with applicants for lawful immigration,” according to the roadmap.

The Federal Emergency Management Agency will also pilot using generative AI to assist communities with disaster mitigation plans. Local governments need those plans to be eligible for projects funded under FEMA’s Hazard Mitigation Assistance program.

Meanwhile, Homeland Security Investigations will pilot allowing officers to use LLMs for some investigative processes, such as searching through documents and retrieving information. “These tools should enable investigators to rapidly uncover key information and patterns,” the DHS AI roadmap states.

The Cybersecurity and Infrastructure Security Agency, which released its own AI roadmap late last year, also plays a key role in department-wide AI efforts.

CISA this year is launching a project to evaluate “using AI-enabled capabilities for cybersecurity vulnerability detection and remediation.” After the assessment, CISA will provide a report and recommendations to Mayorkas on further actions that should be taken.

DHS “AI sandbox”

Beyond discrete pilot projects, the DHS AI roadmap describes how the department will build the underlying “technical infrastructure” to support AI use cases.

DHS’s chief information officer will launch an “AI sandbox” that will allow “an initial set of DHS users to experiment with implementing the responsible use of LLMs in their systems.”

The project could allow DHS components to explore using LLMs for more sensitive tasks. DHS policy released last fall allows employees to use commercial generative AI tools so long as they don’t put any protected, internal data into those applications.

“The aim is to expand the AI Sandbox to additional DHS users within one year and integrate evolving testing and validation standards fitted to DHS mission and use cases,” the DHS AI roadmap states.

DHS’s Science and Technology directorate will lead efforts to build out a test and evaluation infrastructure for AI, according to the roadmap. The S&T Directorate will convene an “AI/ML T&E Working Group” and create a “federated AI testbed.”

The testbed “will provide independent assessment services for DHS components and homeland security enterprise operators,” the DHS AI roadmap states. “Initial build out will include initial use case, testbed capability stand-up, and a five-year execution plan.”

DHS also plans to subject its IT systems to a crowdsourced assessment of AI-driven vulnerabilities under a “HackDHS” exercise organized by the CIO’s office, according to the roadmap.

“The vetted researchers will be tasked with identifying cybersecurity vulnerabilities in these systems to drive further security enhancements to DHS systems,” the AI roadmap states.

The DHS AI roadmap lays out several other major “workstreams” and 2024 goals, including fully establishing an “AI Safety and Security Board” and issuing a new enterprise-wide AI policy.

DHS also has big plans for building out an AI workforce. Earlier this year, the department set a goal of recruiting 50 experts into its new “AI cadre” by the end of 2024. DHS CIO Eric Hysen, who also serves as the department’s “Chief AI Officer,” said officials are taking a “very aggressive recruitment approach” to filling out the AI cadre’s ranks.

Copyright © 2024 Federal News Network. All rights reserved.