Optum Serve’s Amanda Warfield tells Federal News Network how agencies are tapping into generative AI to make federal employees even more productive.
Leaders across the federal government are seeing generative artificial intelligence and large language models (LLMs) as promising tools that will reshape how agencies deliver on their mission.
The Biden administration is calling on agencies to experiment with GenAI, and is touting the transformative role this emerging technology will have on government work.
“As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI,” President Joe Biden wrote in a sweeping executive order issued in October 2023.
The executive order underscores agencies’ caution over GenAI, but also signals the importance of experimenting with this emerging technology.
Amanda Warfield, the vice president of program integrity at Optum Serve, said agencies see GenAI as a tool that will enable federal employees to become more productive.
“In the last year or so, we’ve really seen an explosion in generative AI,” Warfield said. “And the federal government has really been trying to apply the right set of guidelines around it, and figure out where it is the most valuable, where it can create the most efficiencies.”
Warfield said agencies see generative AI as a promising tool to eliminate or reduce manual tasks, while empowering the federal workforce to focus on higher-impact work.
“Generative AI is just there to supplement and make those tasks that most people probably don’t like doing — busy work — whether it’s either data entry, or manual review of large documents. [It’s] things that take you a lot of time. What if you had a way to streamline that, to automatically have a tool that’s going to identify the right section of a 1,000-page document for you?” Warfield said. “For employees, they can then spend their time doing more of what their specialized skill is.”
Agencies are identifying GenAI use cases across a wide array of mission areas. Warfield said many agencies see opportunities to use it to provide a better customer experience to the public.
“For a given agency, how can that be applied to help streamline things, make their lives easier when they’re applying for program benefits, or things like that, that really add value and are meaningful to agencies’ missions?” she said. “It’s about being efficient, saving time and money, and then being able to really prioritize the workload that gives you the most value and the most [return-on-investment].”
Warfield said agency watchdogs, including inspector general offices, are also turning to GenAI as a “policy assistant” to tackle a growing workload of fraud cases.
“They have more cases than they can work. They have finite resources. They don’t have agents to work and prosecute every case that comes their way. So imagine being able to apply generative AI to streamline what today is very manual,” she said.
As IG offices evolve their toolkits to stay ahead of fraudsters, Warfield said GenAI helps investigators comb through hundreds — if not thousands — of documents, and flag anomalies and build evidence in a potential fraud case.
“If we’re talking about a provider in health care, it’s looking at eons of claims data and comparing that to policy documentation, federal regulations and guidelines to essentially prove what the provider did, or what they billed, violated policy — and how can they prove that’s intentional,” Warfield said. “It involves a lot of manual research, combing through data, combing through these large documents, and to empower agents with a tool that can easily condense down massive amounts of PDF files and documents and all sorts of data into a human-like Q&A format … [on] whatever case they’re prosecuting … it can provide an easy way for anybody who has health care experience or doesn’t to be able to interpret those big documents.”
GenAI can also supplement the skillsets of employees — allowing them, for example, to write code or parse large volumes of data, even if they don’t have a technical background.
“A lot of folks who support fraud, waste and abuse on the downstream side, in looking at cases for potential prosecution or other action, not all of them are technical individuals who know how to query data or write SQL queries or program. But they still have a need to access data, to aggregate data, to look at trends over time. And using generative AI in a way that allows a regular person to just go in and say, ‘Hey, can you tell me how many claims over the last year have been paid using this type of a procedure code?’ And then have that data automatically aggregated for you, or have the query written for you so that you can just go drop it in somewhere, or even produce charts and visualizations for you, that show you that data in a meaningful way that really gives you the insights right off the bat. Those are huge time savers. For individuals who typically would have to refer that to someone else, wait days or weeks to get the data back, it can really speed up that process,” she said.
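As an illustration of the text-to-SQL pattern Warfield describes, here is a minimal sketch. The table, sample rows and “generated” query are all invented for this example; in a real assistant, an LLM would draft the SQL from the analyst’s natural-language prompt rather than a hardcoded string.

```python
import sqlite3

# Invented claims table standing in for an agency's claims data store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE claims (
    claim_id INTEGER PRIMARY KEY,
    procedure_code TEXT,
    paid_date TEXT,
    amount REAL
)""")
conn.executemany(
    "INSERT INTO claims (procedure_code, paid_date, amount) VALUES (?, ?, ?)",
    [("99213", "2023-11-02", 120.0),
     ("99213", "2024-03-15", 135.0),
     ("99214", "2024-01-20", 180.0)],
)

# The kind of query an assistant might hand back for the prompt
# "how many claims over the last year have been paid using this
# procedure code?" -- ready for the analyst to "drop in somewhere."
# The cutoff date is fixed here so the example is reproducible.
generated_sql = """
SELECT procedure_code, COUNT(*) AS paid_claims, SUM(amount) AS total_paid
FROM claims
WHERE paid_date >= '2023-07-01'
GROUP BY procedure_code
"""
for row in conn.execute(generated_sql):
    print(row)
```

The value for a non-technical analyst is in the middle step: the aggregation logic (filtering by date, grouping by procedure code) is produced for them, instead of being handed off to a data team and returned days later.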
Warfield said IG shops can also use GenAI to ingest agency-specific user guides and standard operating procedures, so that newer employees can pull up reference materials faster than ever.
“Instead of you having to sit in a six-hour-long training and try to remember where the section was that was relevant to you, you can then use your generative AI assistant to say ‘Remind me what our SOP is for whatever the process is,’ and be able to pull up that section really quickly — or just have it summarized for you in a nice, easy-to-read response,” she said.
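The SOP-lookup assistant Warfield sketches can be reduced to a retrieval step: find the section of the reference material that best matches the question, then summarize it. Below is a toy version using word overlap as the relevance score; the section names and text are invented, and a production assistant would use embeddings and an LLM for the matching and summarization.

```python
# Toy SOP lookup: score each section by how many of the question's
# words appear in its text, and return the best-matching section name.
def find_sop_section(question: str, sections: dict) -> str:
    q_words = set(question.lower().split())

    def score(item):
        _name, text = item
        return len(q_words & set(text.lower().split()))

    best_name, _ = max(sections.items(), key=score)
    return best_name

# Invented SOP sections for illustration.
sections = {
    "Case intake": "How to open a new fraud case and assign an agent",
    "Claims review": "Steps for manual review of claims and policy documents",
    "Reporting": "How to summarize findings and file the final report",
}
print(find_sop_section("remind me the process for review of claims", sections))
# -> Claims review
```

The point of the sketch is the shape of the workflow, not the scoring method: the employee asks in plain language, and the assistant pulls up the relevant section instead of the employee recalling where it was in a six-hour training.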
Agencies see limitless potential — but also plenty of new risks — when it comes to incorporating GenAI into their day-to-day work.
Among the challenges, agencies need to understand the scope of what the algorithms they’re using have been trained to do, and ensure they don’t produce biased results.
“You can’t just go out and take ChatGPT and apply it to magically work for the HHS mission or in Medicare processes. You have to really take an approach that factors in agency-specific data, agency-specific expertise and context,” Warfield said.
Another challenge agencies face is understanding what datasets to train a GenAI algorithm on, and how to set clear boundaries on which data the algorithm can use.
“There has to be a way to ensure that data is always accurate, it’s always current. It’s the latest version that you’re accessing, so that when you actually apply it into your business processes, you’re getting the right answers and the right accuracy,” Warfield said.
Agencies are also thinking about the role GenAI plays in cybersecurity. Warfield said agencies need to adopt a zero-trust mindset when it comes to fielding AI tools.
“You’re thinking about how the data is going to come in to your federal enclave. How are you going to ensure that the data never leaves your security boundary? What checks and balances do you have, that you can apply upfront, and make sure those are part of your selection criteria, that decisions are being made to factor those in? Those types of things are really important from a security perspective,” she said.
While agencies have much to consider for adopting GenAI tools, Warfield outlined a few best practices to keep in mind.
Agencies, she said, should consult with experts before deploying any generative AI tools.
“Having a way to select the right large language model for the right use case is really important. It’s not a one-size-fits-all approach. It’s really important to make sure agencies are consulting with the right experts upfront to have that selection criteria defined to make sure those decisions are made in a way that’s really effective,” she said.
Agencies also need to ensure that human employees still maintain decision-making authority, while using GenAI as a means of making data-driven decisions faster than ever.
“You still need to make sure there’s a human in the loop, and you’re not just taking whatever the response is by itself,” Warfield said. “That human in the loop oversight is really important to monitoring the results of your generative AI’s answers: making sure they’re continuing to stay accurate, the training or retraining of the models that needs to happen to stay current and refreshed. All those processes have to be built into your overall framework.”
Copyright © 2024 Federal News Network. All rights reserved.
Jory Heckman is a reporter at Federal News Network covering U.S. Postal Service, IRS, big data and technology issues.