Disparate agencies encounter similar lessons, pitfalls in prioritizing AI efforts

As more agencies begin exploring how artificial intelligence can benefit their missions, one question keeps coming up: how to prioritize potential use cases to get the best return on investment and best serve constituents. For some agencies, that means applying AI to improve back-office business processes and workflows, while others are looking to supplement mission-oriented functions or streamline interactions with the public.

Michael Peckham, acting chief financial officer and director of the financial management portfolio for the Program Support Center at the Department of Health and Human Services, said during a Feb. 24 IBM webinar on AI that his agency has been using AI to audit grants. HHS has made its data available and engaged the private sector to do analysis as well. That has cut a process that once took four hours down to 15 minutes, and returns $142 million to the mission annually.

“When you start one of these journeys, you have to understand or define what your goal is. And is your goal to meet the needs of the many, or are you trying to meet an individual specific goal?” he said. “If I’m going to build an application, I want to build the application towards the needs of the many. So this is where you have to balance things very, very carefully, and understand what the user community is. … I love the idea of the integrated project teams. We’ve utilized that previously. That really will give you the user stories that are going to get you to that end goal of what you’re looking for. But the most important point here is data can be overwhelming. So you need to make sure that you’re giving the community that needs the data the data that they need.”

Also in the healthcare sector, Gil Alterovitz, director of Artificial Intelligence at the Veterans Health Administration’s Office of Research and Development, said during a Feb. 24 ACT-IAC webinar that his agency has been engaging with veterans around use cases in order to determine what’s most important to the community it serves. That’s led to a sort of combination approach, with priorities split between healthcare diagnostics and prognostics to better monitor veterans’ health, and natural language processing and form automation to streamline customer service and workflows.

VHA is also putting research into the underlying principles of AI in order to ensure it can be trusted and accurate.

“In terms of the technologies, we’ve been also looking at a number of technologies around deep learning, explainable AI, privacy preserving AI, [which is] very important for that clinical data,” he said. “Trustworthy AI, it was mentioned that there’s an executive order that recently came out on that. And so that’s an important area, looking at making sure that there are no biases and so forth. And AI for multiscale time series, looking across time, how things are going on so that you can make recommendations based on that.”

Meanwhile, Adam Goldberg, acting assistant commissioner of the Office of Financial Innovation and Transformation at the Treasury Department, said the Bureau of the Fiscal Service’s efforts to implement AI are in their infancy. Because of that, the agency is still looking at a variety of use cases, including fraud detection, chatbots, process digitization and analytics. However, he said the agency is particularly interested in cognitive tasks: making judgments based on data inputs.

But he emphasized the importance of starting slow and building from the ground up rather than jumping right in, because the data needs to be probed and formatted correctly. As an example, Goldberg described an early pitfall his office stumbled into while attempting to apply AI to Treasury warrants, which are essentially orders for the government to disburse funds.

“That language is published in XML, HTML and in a PDF format. And I thought we were going to be golden on this project, because it was an XML, it’s machine readable, it’d be great,” Goldberg said during the ACT-IAC webinar. “I’ve learned about things and people making assumptions about ‘we use the PDF because a human’s doing it now, and we capture page number.’ And folks were saying, ‘well, the XML doesn’t capture a page number, so I can’t use that.’ And so we have to be critical. And we have to rethink sometimes how we conduct our business and say, ‘Well, I don’t have a page number, can I use a line number instead?’”
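The idea Goldberg describes can be sketched in a few lines: the XML feed carries no page numbers, but each record can be addressed by its position in the document instead. The element and attribute names below are invented for illustration; the Fiscal Service’s actual warrant schema is not public in this form.

```python
# Hypothetical warrant feed -- element/attribute names are assumptions,
# not the real Fiscal Service schema.
import xml.etree.ElementTree as ET

WARRANT_XML = """
<warrants>
  <warrant id="W-001"><amount>1500.00</amount></warrant>
  <warrant id="W-002"><amount>275.25</amount></warrant>
</warrants>
"""

def index_warrants(xml_text):
    """Map each warrant id to its position in the document -- a
    machine-readable stand-in for the page number a human would cite."""
    root = ET.fromstring(xml_text)
    return {w.get("id"): pos for pos, w in enumerate(root.iter("warrant"), start=1)}

print(index_warrants(WARRANT_XML))  # {'W-001': 1, 'W-002': 2}
```

The design point is the one Goldberg makes: the locator only has to be stable and unambiguous, not identical to what the paper-based process used.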

Goldberg also said that excitement had to be tempered somewhat when trying to implement chatbots, because a great deal of work had to be done with the data before the chat could be structured and made efficient. He first had to learn what kind of data was needed, and determine how clean it needed to be, before he could even start building the chat function itself.
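That preparatory step might look something like the sketch below: probing, normalizing and deduplicating the source Q&A records before any chat logic exists. The field names and cleaning rules are invented for illustration, not drawn from Treasury’s actual pipeline.

```python
# Illustrative data cleaning ahead of chatbot work -- field names and
# rules are assumptions, not Treasury's real pipeline.
import re

def clean_faq_records(records):
    """Normalize whitespace/case and drop duplicate or empty questions."""
    seen, cleaned = set(), []
    for rec in records:
        q = re.sub(r"\s+", " ", rec.get("question", "")).strip().lower()
        if q and q not in seen:
            seen.add(q)
            cleaned.append({"question": q, "answer": rec["answer"].strip()})
    return cleaned

raw = [
    {"question": "How do I  check payment status? ", "answer": "Use the portal."},
    {"question": "how do i check payment status?", "answer": "Use the portal."},
    {"question": "", "answer": "n/a"},
]
print(clean_faq_records(raw))  # only one normalized record survives
```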

Jamie Holcombe, chief information officer at the U.S. Patent and Trademark Office, echoed the need to crawl before walking. USPTO is prioritizing the workflow of its patent examiners, applying AI to its classification and search functions in order to cut down the time patents spend in their queue and get them to the right examiner faster.

“We are using RPA to automate a lot of our clerical and administrative processes,” Holcombe said during the ACT-IAC webinar. “Would it be possible to put it in front of the patent or trademark process flow? Yes, we have a lot of hope. But what we want to do is we want to get proficient in the use of those tools, before we apply it to the crown jewels, the golden goose of both patent and trademark workflow.”
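The classification-and-routing idea behind USPTO’s effort can be reduced to a toy example: score an application’s text against subject areas so it lands in the right examiner queue. USPTO’s production classifiers are far more sophisticated; the keyword lists and art-unit names here are purely illustrative.

```python
# Toy routing sketch -- art units and keywords are invented for
# illustration, not USPTO's actual classification scheme.
ART_UNITS = {
    "biotech": {"protein", "gene", "antibody"},
    "software": {"algorithm", "database", "network"},
    "mechanical": {"gear", "valve", "bearing"},
}

def route_application(abstract):
    """Return the art unit whose keywords best match the abstract,
    falling back to manual review when nothing matches."""
    words = set(abstract.lower().split())
    scores = {unit: len(words & kws) for unit, kws in ART_UNITS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "manual-review"

print(route_application("A gene-editing antibody delivery method"))  # biotech
```

Even this trivial version shows why a fallback queue matters: a low-confidence match is routed to a human rather than to the wrong examiner.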

But all of these efforts to prioritize early on can lead to greater successes in the future. Jamese Sims, senior science advisor for Artificial Intelligence at the National Oceanic and Atmospheric Administration and National Weather Service Office of Science and Technology Integration, said NOAA is an old hand at AI, having implemented different forms for around 25 years. She described a plethora of programs that use AI at the agency, including data assimilation, climate and weather models, and wildfire detection.

Now NOAA has its sights set on using AI to assist unmanned systems. Sims said NOAA has partnered with Google AI, Microsoft and SEA Vision, among other companies, to develop AI to assist monitoring of fisheries, deep sea exploration and satellite observations.

But even for an agency well-seasoned in AI implementation, keeping current on the fundamentals remains important.

“Our priorities are actually driven by our customer requirements, our requirements across the agency, to make sure that we’re following the mandates,” Sims said. “When it comes to things like standards and trustworthy AI, we look towards [the National Institute of Standards and Technology] as a federal agency for support standards. And then we are also partnering with [a National Science Foundation] AI Institute on trustworthy AI for weather, ocean, and coastal.”

Copyright © 2024 Federal News Network. All rights reserved.
