DHS has a new AI policy, and Chief Information Officer Eric Hysen has been named the department's first-ever "chief AI officer."
The Department of Homeland Security’s top official is calling for DHS to quickly and “surgically” prioritize areas where the department can adopt artificial intelligence, as DHS rolls out new policies on the acquisition and use of AI, as well as facial recognition.
DHS’ new policies come as leading AI executives and members of Congress point to the need for regulations and other guardrails in the quickly progressing field of generative AI and related technologies.
Homeland Security Secretary Alejandro Mayorkas sets out new requirements in a policy statement, “Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components.” The memo was signed out Aug. 8 and released Thursday. The policy was required by the Fiscal 2023 National Defense Authorization Act.
“DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas writes. “At the same time, we must also ensure that our use of AI is responsible and trustworthy, that it is rigorously tested to be effective, that it safeguards privacy, civil rights, and civil liberties while avoiding inappropriate biases, and to the extent possible, that it is transparent and explainable to those whom we serve.”
In addition to ensuring DHS’ use of AI complies with all applicable laws and the U.S. Constitution, the policy requires DHS components to follow a “trustworthy AI” executive order issued by former President Donald Trump in 2020.
President Joe Biden is expected to issue his own executive order on AI later this year. Meanwhile, the Office of Management and Budget is floating draft AI requirements for federal agencies.
Mayorkas’ memo further dictates that DHS “will not collect, use, or disseminate data used in AI activities, or establish AI-enabled systems that make or support decisions based on the inappropriate consideration of race, ethnicity, gender, national origin, religion, sexual orientation, gender identity, age, nationality, medical condition, or disability.”
DHS earlier this year established an AI task force to focus on the specific applications of AI. Mayorkas’ latest memo establishes an AI Policy Working Group to develop a formal directive and instruction on the use of AI, as well as make further recommendations on any changes needed to the department’s governance processes.
The new policy developments come as the Homeland Security Advisory Council also issues recommendations to DHS on AI. During a meeting today, the council voted to approve a draft report and recommendations from its “artificial intelligence-mission focused” subcommittee.
Among other suggestions, the report recommends DHS create “a centralized office or group” to ensure DHS keeps pace with the rapidly changing technology, while still allowing DHS components and offices to pursue their own developments.
In addition to overseeing requirements, the report suggests the centralized group could coordinate with the DHS Privacy Office and the Office of Civil Rights and Civil Liberties to address potential biases and other issues.
The report also recommends DHS encourage pursuing off-the-shelf commercial solutions instead of “building everything in-house.”
During the meeting, Mayorkas emphasized the need for DHS to adopt AI quickly, regardless of whether it’s commercially acquired or internally developed technology.
“We have got to change the procurement capabilities of a government agency to actually move quickly and nimbly, so that when we’re dealing in a very dynamic environment, we can actually move with dynamism,” Mayorkas said. “I’m not suggesting moving to a sole source model, but we just have to be quick.”
He also stressed the need for DHS to prioritize where it will use AI, rather than attempting to adopt it across every mission and use case. The report points to combatting fentanyl trafficking and human trafficking as use cases that could be “accelerated and championed” across DHS. But it also suggests DHS “integrate AI/ML into as many areas of the DHS mission as possible.”
“We’re going to need to prioritize what aspect of our mission should we really double down on to harness AI because I worry about diluting our focus too much,” Mayorkas said. “And I really do want to demonstrate, as quickly as is responsible, how this could really be a game changer for us in advancing our mission . . . we have to pick our spots here, in my view, somewhat surgically.”
DHS also announced today that Chief Information Officer Eric Hysen will serve as the department’s “chief AI officer,” responsible for promoting “AI innovation and safety within the department, along with advising Secretary Mayorkas and department leadership on AI issues.”
During a hearing held by the House Oversight and Accountability Committee’s cybersecurity, information technology and government innovation subcommittee today, Hysen pointed to several areas where DHS is already using AI, including combatting fentanyl trafficking, investigating child exploitation crimes and verifying traveler identities at airports.
Hysen said the department views the technologies as “decision support” for its law enforcement officers.
“Ultimately, our officers are the ones responsible for making law enforcement decisions,” Hysen said. “I also see tremendous potential to use AI to remove repetitive paperwork and administrative tasks that our officers have to do that they would tell you . . . dulls their focus from their security mission.”
Meanwhile, Hysen said DHS’ AI task force is coordinating with the Cybersecurity and Infrastructure Security Agency on how the department can partner with critical infrastructure organizations “on safeguarding their uses of AI and strengthening their cybersecurity practices writ large to defend against evolving threats.”
DHS on Sept. 11 also issued its first department-wide policy on the “use of face recognition and face capture technologies.”
It directs DHS components to only use face recognition technologies that have been “thoroughly tested to ensure there is no unintended bias or disparate impact in accordance with national standards.” DHS’ Science and Technology Directorate is responsible for overseeing the independent test and evaluation of face recognition and face capture technologies under the directive.
And it further mandates that face recognition technologies “may not be used as the sole basis for law or civil enforcement related actions, especially when used as investigative leads.” Meanwhile, it also requires that U.S. citizens be given the option to opt out of face recognition “for specific, non-law enforcement uses.”
DHS components like Customs and Border Protection and the Transportation Security Administration have been expanding their use of face recognition for traveler verification at airports and ports of entry.