When evaluating the use of AI, the core issues for any operation are data collection, storage and use, and the future implications of each.
Like much of industry and the rest of the federal government, the Department of Homeland Security is grappling with how to apply AI to its mission. At DHS, the decisions become more crucial because data collection shapes not just how the agency manages national security issues, but also how it affects American citizens directly: their personal data and the potential implications for everything from travel to criminal investigations.
Amy Henninger, a senior advisor for Advanced Computing at DHS, said on Federal Monthly Insights — Artificial Intelligence that a newly created AI Task Force is aimed at defining the goals and strategy for the use of AI at the agency.
“AI is somewhat nascent at DHS. We’re really just starting to get into it in a big way. And I think that’s demonstrated by the recent stand-up of the AI Task Force,” Henninger told the Federal Drive with Tom Temin.
She added that DHS is looking for applications of the technology that “have the most bang for the buck.”
DHS is working to prioritize AI for the functions the technology serves best: speed, data management and staying ahead of adversaries. Henninger noted that the priorities coming from the DHS secretary through the AI Task Force are “looking at applications related to border security, supply chain security, disruption of fentanyl or synthetic opioid production.” Henninger said the agency is “very interested in digital forensics and combating online child exploitation. We’re very interested in adversarial AI, the types of AI security above and beyond cybersecurity that AI enables new attack surfaces for. And then we’re looking at AI applied to critical infrastructure.”
Speaking about how the agency prioritizes AI applications, Henninger said she asks herself several questions, still relying on an approach developed by former DARPA director George Heilmeier: “What are you trying to do? What are the limitations of the current practice? What’s new in your approach? Who cares?” The agency also looks at whether the AI functionality provides sufficient value. “Better, faster, cheaper? And finally, is it compatible with the longer-term strategic vision and higher-order guidance?”
Regarding how the agency evaluates AI with respect to data collection, storage, use and future implications, Henninger pointed to data readiness. “I guess the other thing you have to make sure you have access to is appropriate data in terms of quality, quantity … sufficiency. If you don’t have that for your AI project, it’s pretty much dead on arrival. And you have to make sure it’s consistent with ethical and regulatory considerations,” she said.
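Henninger’s “dead on arrival” point lends itself to a simple illustration. The Python sketch below shows what a data-readiness gate for a proposed AI project might look like; the field names and thresholds are illustrative assumptions, not DHS criteria.

```python
# Minimal sketch of a data-readiness gate for an AI project proposal.
# The thresholds and field names are illustrative assumptions, not
# DHS policy or any real pipeline.
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    n_records: int           # quantity: how many labeled examples exist
    missing_rate: float      # quality: fraction of fields that are empty
    label_error_rate: float  # quality: estimated labeling mistakes
    covers_population: bool  # sufficiency: does it represent real inputs?
    approved_use: bool       # ethical/regulatory sign-off obtained

def readiness_problems(p: DatasetProfile) -> list[str]:
    """Return a list of blocking problems; an empty list means proceed."""
    problems = []
    if p.n_records < 10_000:
        problems.append("too few records to train and validate")
    if p.missing_rate > 0.10:
        problems.append("more than 10% missing values")
    if p.label_error_rate > 0.05:
        problems.append("label noise above 5%")
    if not p.covers_population:
        problems.append("sample does not reflect operational inputs")
    if not p.approved_use:
        problems.append("no ethical/regulatory approval for this use")
    return problems

profile = DatasetProfile(8_000, 0.02, 0.01, True, True)
issues = readiness_problems(profile)
print("dead on arrival:" if issues else "proceed", issues)
```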
Henninger said the agency is evaluating processes and gauging success in AI as an exercise in improving a function without reducing the quality of service.
“If I get better, but I lose faster, cheaper, then maybe I haven’t done as much as I thought,” she said. “So for example, if we were to develop an automated chat bot capability and it is designed to increase output at an emergency center … but does that at the expense of a lot of frustrated people or customers … then it’s not much of an improvement.”
Henninger said that the agency has extensive regulations and policies in place to protect citizen data.
“Coming into DHS from the DoD, that was really sort of a whole new thing for me to get used to and to understand,” she said. “I can report, as a citizen, I am thrilled. And so grateful that DHS is as careful with our data, our private data, as we are. So I was a little bit shocked at all the oversight and regulations and policies making sure that the data of citizens is kept private.”
Another issue faced by the DHS task force is monitoring AI for bias in data sets.
“You have to be very, very careful about watching for unstable trajectories, because the model in execution starts to diverge from the initial training data sets in ways that are hard for the model to correct for,” Henninger said.
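The article does not describe DHS’s monitoring tooling, but divergence of this kind is commonly caught by comparing live inputs against the training distribution. Below is a minimal Python sketch using a population stability index; the metric choice and the 0.25 alert threshold are common industry conventions assumed here, not anything attributed to DHS.

```python
# Minimal sketch of drift monitoring: compare live feature values against
# the training distribution with a population stability index (PSI).
# The 0.25 alert threshold is a common rule of thumb, assumed here.
import numpy as np

def psi(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(train, bins=bins)
    t_counts, _ = np.histogram(train, bins=edges)
    l_counts, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions, avoiding division by zero.
    t = np.clip(t_counts / t_counts.sum(), 1e-6, None)
    l = np.clip(l_counts / l_counts.sum(), 1e-6, None)
    return float(np.sum((l - t) * np.log(l / t)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)  # what the model was trained on
live = rng.normal(0.6, 1.2, 5_000)    # shifted production inputs

score = psi(train, live)
if score > 0.25:
    print(f"PSI={score:.2f}: distribution drift, review for retraining")
```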
Regarding the speed of automation and the risks brought on by large-scale data collection, Henninger said, “all of these AI-based systems especially can be tricked and spoofed or manipulated in nefarious ways.”
“This adversarial AI is going to be automated,” she added. “So we’re going to have to have automation to counter that at the speed of relevance.”
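Henninger does not detail specific attacks, but a textbook example of tricking an AI system is an adversarial perturbation: nudging an input just enough to flip a model’s decision while leaving it nearly unchanged to a human observer. Here is a toy Python sketch against a linear classifier, purely illustrative and not a description of any DHS system.

```python
# Toy illustration of adversarial manipulation (FGSM-style) against a
# linear classifier. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)  # trained weights of a linear scorer
b = 0.0
x = 0.1 * w              # a legitimate input the model scores clearly positive

def score(v: np.ndarray) -> float:
    return float(w @ v + b)  # positive score -> class "benign"

# For a linear model, the gradient of the score w.r.t. the input is just w.
# Pushing x a small step against that gradient flips the decision while
# leaving x almost unchanged: the essence of spoofing an AI system.
eps = 0.2
x_adv = x - eps * np.sign(w)

print("clean score:  ", score(x))      # classified benign
print("spoofed score:", score(x_adv))  # pushed toward the other class
print("max per-feature change:", np.max(np.abs(x_adv - x)))  # only eps
```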
Henninger said there is a risk that the volume of data collected eventually becomes overwhelming, but that the technology itself will make monitoring the information more manageable and effective.
Michele Sandiford is a digital editor at Federal News Network.