This content is sponsored by Arrow NetApp
Federal agencies are moving fast toward adopting artificial intelligence. Michael Kratsios, chief technology officer of the United States, just revealed that federal spending on AI research and development has doubled in the last three years, with the Defense Department accounting for about $1 billion alone.
But AI is not a plug-and-play technology; barriers still exist to federal adoption. Two of the main ones, according to Rob Stein, vice president of U.S. Public Sector at NetApp, are dealing with the massive amount of data and establishing trust in the AI systems themselves.
“So in my view, unlocking the power of AI is really completely dependent on the data. So what does it mean, building the data pipeline, continuously acquiring, organizing and using that data in an optimal way, as more and more data gets accumulated? This is a big challenge, because not only is the volume of data massive, it’s everywhere,” Stein said during a Sept. 4 MeriTalk webinar. “Just for example, there’s so much data just coming from sensors that more and more compute is going to have to be deployed at the edge just to analyze that data, reduce it, and get it to where the end user can actually make use of it.”
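In practice, that edge reduction step can be as simple as summarizing raw readings before they ever cross the network. The sketch below is a minimal illustration of the idea, not any particular NetApp product; the `Reading` type, sensor names and window size are hypothetical.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    timestamp: float
    value: float

def reduce_at_edge(readings, window_size=60):
    """Collapse a window of raw sensor readings into one summary record,
    so only the reduced result travels over the network to the core."""
    if not readings:
        return None
    values = [r.value for r in readings[-window_size:]]
    return {
        "sensor_id": readings[0].sensor_id,
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }

# Example: 600 raw readings become a single small payload to forward.
raw = [Reading("edge-cam-01", t, 20.0 + (t % 7)) for t in range(600)]
print(reduce_at_edge(raw, window_size=600))
```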
But agencies are often stymied on how to accomplish that. How do they transport the data in an efficient manner? How can they get the infrastructure and compute out to where they need it? Data silos and the complexity of the technology are difficult hurdles on the road to a data pipeline.
That’s why Stein recommends a data fabric to overcome these challenges.
“What does the data fabric do? It creates this integrated data pipeline, from the edge to the core to the cloud, so the data can be ingested, collected, stored and protected, no matter where it resides,” Stein said. “And in my view, only then can really the data be optimally applied to train AI, drive machine learning and power the deep learning algorithms that are needed to bring AI to life.”
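Stein is describing an architectural pattern rather than a single product, but a small sketch can make it concrete. The `DataFabric` and `DataStore` classes and tier names below are hypothetical stand-ins; the point is that consumers address data by key while the fabric tracks, and can change, where that data physically lives.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """One storage tier in the pipeline: edge, core or cloud."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class InMemoryStore(DataStore):
    """Stand-in backend; a real tier might be an array, a data center or a cloud bucket."""
    def __init__(self):
        self._objects = {}
    def read(self, key): return self._objects[key]
    def write(self, key, data): self._objects[key] = data

class DataFabric:
    """Routes reads and writes across tiers so callers see one namespace,
    no matter where a dataset currently resides."""
    def __init__(self, tiers):
        self.tiers = tiers
        self.catalog = {}  # dataset key -> tier name
    def ingest(self, key, data, tier="edge"):
        self.tiers[tier].write(key, data)
        self.catalog[key] = tier
    def fetch(self, key):
        return self.tiers[self.catalog[key]].read(key)
    def migrate(self, key, dest):
        # Move data toward the core or cloud for AI training without
        # changing how consumers address it.
        data = self.fetch(key)
        self.tiers[dest].write(key, data)
        self.catalog[key] = dest

fabric = DataFabric({t: InMemoryStore() for t in ("edge", "core", "cloud")})
fabric.ingest("sensor/2024-09-04.bin", b"\x00\x01", tier="edge")
fabric.migrate("sensor/2024-09-04.bin", dest="core")
print(fabric.fetch("sensor/2024-09-04.bin"))  # same key, new location
```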
Stein also said connecting data scientists to IT organizations will be a critical step to managing this data with an eye toward AI adoption.
“I’ve gone to many universities and talked to their IT shops, and they said more and more researchers are reaching out to IT to provide the tools and the infrastructure and the capability around gathering and managing the data that researchers and data scientists need to really perform the critical functions of AI that help the organization either further their mission, or whatever their goals and objectives are,” he said.
And agencies are coming to realize that as well. Mason McDaniel, chief technology officer at the Bureau of Alcohol, Tobacco, Firearms and Explosives, said during the webinar that it’s important that agencies get a handle on their data as quickly as possible.
“I can’t emphasize enough how important it is to build a data team,” he said. “We’ve had the need for a long time to generate test data for applications. [We] haven’t necessarily always done a good job of that. So that’s been sort of a pain point for a lot of organizations. But that’s going to be magnified so much as we start moving more towards AI. Because … the results from AI are only as good as the training data you put into it, and how you actually trained the models. So you’ve got to have people that focus on how you actually collect that and then use it.”
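What a data team's first pass at that might look like is simple to sketch. The checks and the 5% rare-class threshold below are assumptions for illustration, not ATF practice; they flag the kinds of training-data problems McDaniel describes: missing values, duplicates and imbalanced labels.

```python
def check_training_data(records, required_fields, label_field):
    """Run basic quality checks before data reaches a model:
    missing fields, duplicate rows and label imbalance."""
    issues = []
    seen = set()
    label_counts = {}
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append(f"record {i}: missing {missing}")
        fingerprint = tuple(sorted(rec.items()))
        if fingerprint in seen:
            issues.append(f"record {i}: duplicate")
        seen.add(fingerprint)
        label = rec.get(label_field)
        label_counts[label] = label_counts.get(label, 0) + 1
    total = len(records) or 1
    for label, count in label_counts.items():
        if count / total < 0.05:  # assumed threshold for a rare class
            issues.append(f"label {label!r}: only {count}/{total} examples")
    return issues

records = [
    {"text": "status inquiry", "label": "routine"},
    {"text": "status inquiry", "label": "routine"},  # duplicate row
    {"text": None, "label": "urgent"},               # missing field
]
for issue in check_training_data(records, ["text", "label"], "label"):
    print(issue)
```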
And that need gets more urgent every day, as AI technologies become more commonplace. In fact, AI is already a component of many popular technologies incorporated into day-to-day life. For example, AI underpins most call center technologies, which are used by most federal agencies at this point, especially those with primarily public-facing missions like the IRS. Some federal agencies also use chatbots and virtual assistants as part of their customer service options.
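Under the hood, a basic customer-service chatbot can be little more than intent matching. The sketch below uses naive keyword overlap; production systems rely on trained language models, and the intents and replies here are invented for illustration.

```python
import re

INTENTS = {
    "refund_status": {"keywords": {"refund", "status"},
                      "reply": "You can check a refund's status online with your filing ID."},
    "office_hours":  {"keywords": {"hours", "open", "closed"},
                      "reply": "Offices are open 8 a.m. to 5 p.m. Eastern, Monday through Friday."},
}

def answer(message: str) -> str:
    """Score each intent by keyword overlap and reply with the best match,
    falling back to a human hand-off when nothing matches."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    best, score = None, 0
    for name, intent in INTENTS.items():
        overlap = len(words & intent["keywords"])
        if overlap > score:
            best, score = name, overlap
    return INTENTS[best]["reply"] if best else "Let me connect you with an agent."

print(answer("Where is my refund?"))        # matches refund_status
print(answer("Are you open on Saturday?"))  # matches office_hours
```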
“Machine learning uses processed data to learn and take actions. So how do we ensure that we can trust the data and ensure that, for example, a critical target on the battlefield is truly a tank and not a civilian vehicle of some sort?” Stein asked. “So these initial applications will help us learn and gain this trust. And in fact, organizations such as the Defense Advanced Research Projects Agency have already begun developing technologies that would allow AI to better explain its reasoning, to not just give us an answer, but explain it. DARPA calls it explainable AI. So that’s one step towards what I think is very important. And that’s building trust in AI.”
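DARPA's explainable AI program covers far more than any snippet can show, but one simple, widely used way to probe what a model's answers depend on is permutation importance: shuffle one input feature at a time and measure how much accuracy drops. The toy vehicle classifier below is purely illustrative and is not DARPA's technology.

```python
import random

def permutation_importance(model, X, y, n_repeats=20, seed=0):
    """Shuffle one feature at a time and measure the accuracy drop.
    Larger drops mean the model leans more heavily on that feature."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)
    baseline = accuracy(X)
    importances = {}
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)
            shuffled = [row[:j] + (v,) + row[j + 1:] for row, v in zip(X, column)]
            drops.append(baseline - accuracy(shuffled))
        importances[j] = sum(drops) / n_repeats
    return baseline, importances

# Toy "tank vs. civilian vehicle" classifier: feature 0 = length (m),
# feature 1 = thermal signature, feature 2 = irrelevant noise.
model = lambda r: "tank" if r[0] > 7.0 and r[1] > 0.6 else "civilian"
X = [(9.7, 0.9, 0.1), (4.2, 0.3, 0.8), (8.1, 0.7, 0.5), (3.9, 0.2, 0.4)]
y = ["tank", "civilian", "tank", "civilian"]
baseline, imps = permutation_importance(model, X, y)
print(baseline, imps)  # the noise feature should score near zero
```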
And building trust early on is important, Stein said, because in the next five or six years, the technological complexity of AI won’t even be visible anymore. It will be embedded in applications, functioning in the background. That’s already beginning to happen, he said. Think of a traffic app, which automatically adjusts your route in response to real-time data. You don’t have to ask it to do that, or manually activate an AI. It just happens. And that’s going to get more common in the near future.
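That traffic example is easy to make concrete. A routing engine can keep a graph of travel times and simply re-run a shortest-path search whenever the real-time feed changes an edge weight; the toy road network below is hypothetical.

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra over a dict-of-dicts graph: graph[u][v] = travel time."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

# Normal conditions: the highway (A -> B -> D) is fastest.
roads = {"A": {"B": 5, "C": 9}, "B": {"D": 5}, "C": {"D": 4}, "D": {}}
print(shortest_path(roads, "A", "D"))  # (['A', 'B', 'D'], 10)

# Real-time feed reports a crash on B -> D; the app reroutes automatically.
roads["B"]["D"] = 30
print(shortest_path(roads, "A", "D"))  # (['A', 'C', 'D'], 13)
```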
“Agencies are very optimistic about using AI as a tool to deliver on their mission. And those missions are varied, whatever they might be, whether they’re in the front lines, in a highly secure location with sensitive data, or they’re out there educating the next generation of leaders,” Stein said. “There’s so much optimism around what AI can bring. I think once organizations get ideas on quick win projects, and start to use them and do them, we’ll start to see a lot more momentum pick up.”
But those quick wins won’t be possible until agencies get a handle on their data. That’s why establishing a data fabric is the first step agencies can take toward unlocking the potential of AI.