"Artificial intelligence,” “machine learning” and “robotic process automation” (RPA) get thrown around synonymously, but differentiating between them ...
The terms “artificial intelligence,” “machine learning” and “robotic process automation” (RPA) are often used interchangeably, but differentiating between them is important to understanding how best to use them.
Brian Campo, deputy chief technology officer at the Department of Homeland Security, clarified that RPA is essentially “automation” — the act of putting manual tasks into a context or system where the same action can be done automatically and intrinsically. As for AI and machine learning, the difference comes down to how the data is used.
“So machine learning is trying to take data and make it intrinsically more informative, trying to take those automated insights and figure them out and find them in new and interesting ways, uncovering things that we wouldn’t necessarily be thinking about or something that wouldn’t occur to the operator,” he said on Federal Monthly Insights — Cloud and Artificial Intelligence. “Now, artificial intelligence is sort of different than that, in that it’s not about driving insights — it’s about actually making impacts to some operational activity.”
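To make that distinction concrete, here is a minimal sketch in Python, using invented case data rather than anything from DHS. The RPA function applies the same fixed rule every time, while the machine learning stand-in (a simple statistical outlier check, used here only as an illustration) surfaces a pattern the operator did not ask for:

```python
# A rough sketch of the distinction Campo draws, on hypothetical data.

# RPA: a fixed, scripted task -- the same action performed every time.
def rpa_route_case(case: dict) -> str:
    """Deterministic rule: route a case to a queue based on its type."""
    return "benefits_queue" if case["type"] == "benefits" else "general_queue"

# Machine learning (stand-in): derive an insight from the data itself,
# e.g. flag processing times that deviate sharply from the norm.
def ml_flag_outliers(processing_days: list[float], z: float = 2.0) -> list[int]:
    """Flag indices whose value lies more than z standard deviations from the mean."""
    n = len(processing_days)
    mean = sum(processing_days) / n
    std = (sum((x - mean) ** 2 for x in processing_days) / n) ** 0.5
    return [i for i, x in enumerate(processing_days) if abs(x - mean) > z * std]

print(rpa_route_case({"type": "benefits"}))        # -> benefits_queue
print(ml_flag_outliers([12, 14, 13, 11, 95, 12]))  # -> [4], the anomaly
```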
For all three technologies, Campo said, DHS wants to “cut through a lot of the hype” and determine their impact on department business. AI for AI’s sake, for example, can sidetrack the mission and produce undesirable results.
He said DHS’ new Chief Information Officer Eric Hysen has made a big push to examine the impact AI, machine learning and RPA will have on customer and stakeholder experience across the agency’s wide range of activities.
“One of the things that we’re trying to do with any technology is trying to find those common elements, trying to figure out how do all of those technologies really impact and where can we find commonalities between them, so that as we push out and prioritize new technologies, we do it in a way that we can give the most benefit to the largest groups,” Campo said on Federal Drive with Tom Temin.
Data cataloging is fundamental to AI and machine learning, and while DHS has several data architecture efforts underway, Campo expects the bulk of AI activity to revolve around moving from data centers to cloud computing.
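A data catalog, at its simplest, records metadata about each dataset so pipelines can discover and trust it. The sketch below is a hypothetical illustration of that idea, with invented field names and locations, not a description of any DHS system:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal catalog entry: the kind of metadata a data
# catalog records so AI/ML pipelines can find and reuse a dataset.
@dataclass
class CatalogEntry:
    name: str                   # dataset identifier
    owner: str                  # steward or component responsible for it
    location: str               # where the data lives (on-prem, cloud bucket, ...)
    schema: dict[str, str]      # column name -> type
    tags: list[str] = field(default_factory=list)

catalog: dict[str, CatalogEntry] = {}

def register(entry: CatalogEntry) -> None:
    catalog[entry.name] = entry

def find_by_tag(tag: str) -> list[str]:
    return [name for name, e in catalog.items() if tag in e.tags]

register(CatalogEntry(
    name="case_events",
    owner="example-component",
    location="s3://example-bucket/case_events/",  # placeholder location
    schema={"case_id": "string", "event_ts": "timestamp"},
    tags=["immigration", "operational"],
))
print(find_by_tag("immigration"))  # -> ['case_events']
```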
“We do some local prototyping and in a few different non-cloud assets, but when we’re pushing things out for mission impacting AI and machine learning, it is absolutely going to happen in the cloud, because what we want is — the scalability, we want to have the ability to gain those insights quickly — and cloud really leverages that for us,” he said.
But a persistent concern with cloud technology is the cost of extracting data. Campo said that many applications use a cost model based on data ingress and egress, with egress typically the more expensive of the two. That makes edge computing, computing done close to the data source, especially appealing for DHS.
“Trying to process the data where it lives, it’s much easier to move compute than it is to move data,” he said. “It’s quicker, for one, even if you take the cost element out of it. You can move compute very quickly. We can build a new application in a new environment in a very small fraction of the time it would take us to move that data.”
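A back-of-the-envelope comparison shows why. The numbers below are illustrative assumptions, not quoted cloud prices or DHS figures:

```python
# Illustrative comparison: public clouds typically charge little or
# nothing for ingress but meaningfully for egress, and a compute
# artifact is far smaller than a large dataset.

dataset_gb = 50_000        # hypothetical 50 TB dataset
container_image_gb = 2     # hypothetical compute artifact
egress_per_gb = 0.09       # illustrative $/GB egress rate
link_gbps = 10             # illustrative network link

def transfer_hours(size_gb: float, gbps: float) -> float:
    return size_gb * 8 / gbps / 3600  # GB -> gigabits, seconds -> hours

print(f"Move the data:    ${dataset_gb * egress_per_gb:,.0f} egress, "
      f"{transfer_hours(dataset_gb, link_gbps):.1f} h on a {link_gbps} Gbps link")
print(f"Move the compute: ${container_image_gb * egress_per_gb:,.2f} egress, "
      f"{transfer_hours(container_image_gb, link_gbps) * 3600:.1f} s")
```

Under these assumptions, moving the dataset costs thousands of dollars and hours of transfer time, while moving the compute costs pennies and seconds, which is the asymmetry Campo describes.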
Other tactics include breaking AI and machine learning workloads into smaller chunks, so that only the data most affected needs to move in or out of the cloud, and locating data processing as close to the source system as possible. Campo said building out core data services and plugging them into applications in a more modular fashion can help.
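One way to read that modular pattern: applications call a small data-service interface instead of copying the dataset, so only the records they touch ever move. The sketch below uses hypothetical names to illustrate the idea:

```python
from typing import Protocol

# Sketch of the modular pattern: applications depend on a narrow
# data-service interface rather than on a copy of the data.
class CaseDataService(Protocol):
    def lookup(self, case_id: str) -> dict: ...

class LocalCaseDataService:
    """Serves lookups from where the data already lives; callers never pull the full set."""
    def __init__(self, records: dict[str, dict]):
        self._records = records

    def lookup(self, case_id: str) -> dict:
        return self._records.get(case_id, {})

def application_view(svc: CaseDataService, case_id: str) -> str:
    # The application plugs into the service; only the affected record moves.
    rec = svc.lookup(case_id)
    return f"Case {case_id}: status={rec.get('status', 'unknown')}"

svc = LocalCaseDataService({"A-123": {"status": "pending"}})
print(application_view(svc, "A-123"))  # -> Case A-123: status=pending
```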
The ultimate goal is to keep data from living everywhere, making it less pervasive and more meaningful. At a lower level, DHS is trying to make datasets usable across many different components. Using the department’s various immigration systems as examples, Campo said his office is examining how data from one system can be leveraged by another.
“So we’re not necessarily trying to move that data right now really, a lot of what we’re doing is mapping data, coalescing data. But yes, I absolutely believe that as we get better, and as our data is used more widely, by systems that didn’t initially start with access to the data, we will absolutely look at some replication,” he said.
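Mapping and coalescing, as opposed to moving, might look something like the following sketch, in which records from two invented systems are translated into a common shape and merged without replicating either source:

```python
# Hedged sketch of "mapping and coalescing" rather than moving data.
# The systems, field names and records here are all hypothetical.

FIELD_MAPS = {
    "system_a": {"applicant_id": "person_id", "recv_date": "received"},
    "system_b": {"subject_no": "person_id", "date_received": "received"},
}

def to_common(system: str, record: dict) -> dict:
    """Translate a system-specific record into the shared data model."""
    mapping = FIELD_MAPS[system]
    return {common: record[src] for src, common in mapping.items() if src in record}

a = to_common("system_a", {"applicant_id": "P-001", "recv_date": "2024-01-02"})
b = to_common("system_b", {"subject_no": "P-001", "date_received": "2024-01-05"})

# Coalesce: combine two views of the same person, keeping the earliest date.
merged = {**a, "received": min(a["received"], b["received"])}
print(merged)  # -> {'person_id': 'P-001', 'received': '2024-01-02'}
```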
Amelia Brust is a digital editor at Federal News Network.