Discussions about how to get started in artificial intelligence tend to focus on use cases. Agency staff ask themselves, in effect, “What could we do with AI?” or with machine learning or robotic process automation.
Kathleen Featheringham, the vice president for AI and ML at Maximus, suggested a variation on that question, one whose answer will get agencies to real, measurable value from their AI investments faster.
AI success “is more about what are you struggling with? What are the problems, the mission elements, the things that you need to actually make things more successful?” Featheringham said during Federal Monthly Insights – Operationalizing AI. “Starting there helps really define what are good use cases.”
Featheringham said this approach also points more accurately to the right type of program to apply to the problem, whether RPA, a traditional AI algorithm or generative AI.
Program managers and agency executives needn’t bring AI expertise to the problem solving, just details of what they want to accomplish.
“It’s more about, hey, what are the issues, and then work with their technologists to be able to break it down for what type” of AI to use, Featheringham said.
Often, she said, organizations find that functions built on rote activities, like sifting through large quantities of documents or images, provide the use cases with the fastest returns on investment. A useful approach is to think as an assistant would, asking what would most aid a particular workflow and the person or people performing it.
“How many times you go and do searches in different databases,” Featheringham said. “Let’s say you have to search through seven different ones. Robotic process automation would be great for that.”
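In code terms, that rote loop is exactly the sort of thing a script or bot can take over. Here is a minimal Python sketch of the pattern; the seven database files, the documents table and the seed rows are all hypothetical placeholders so the example runs end to end, not anything specific to Maximus or a federal system.

```python
# A rote multi-source search of the kind suited to RPA. The database
# files, "documents" table and sample rows are hypothetical.
import sqlite3

DATABASES = [f"records_{i}.db" for i in range(1, 8)]  # "seven different ones"

def seed(db_path: str) -> None:
    """Create a tiny stand-in table with one sample row."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS documents "
                     "(doc_id INTEGER PRIMARY KEY, title TEXT)")
        conn.execute("INSERT INTO documents (title) VALUES ('benefits claim 17')")

def search_all(term: str) -> list[tuple[str, str]]:
    """Run the identical search against every source, the loop a bot automates."""
    hits = []
    for db_path in DATABASES:
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(
                "SELECT doc_id, title FROM documents WHERE title LIKE ?",
                (f"%{term}%",)).fetchall()
        hits.extend((db_path, f"{doc_id}: {title}") for doc_id, title in rows)
    return hits

if __name__ == "__main__":
    for db_path in DATABASES:
        seed(db_path)
    for source, result in search_all("benefits claim"):
        print(source, "->", result)
```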
Even AI used to create new outputs, such as analyses or summaries of existing documents, would not replace the people doing those functions, Featheringham said, because someone still needs to check the algorithm's work.
She likened generative AI to a new employee, however well trained and educated, who is asked to turn in a work product.
“Would you just turn in what they gave you? Probably not,” Featheringham said. “Would you go through it, fact check it, really refine it? Absolutely.”
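Her analogy maps naturally onto a human-in-the-loop gate in software. The sketch below shows the shape of such a workflow, with a hypothetical generate_summary placeholder standing in for any generative model call: every draft carries review notes and an approval flag, and nothing ships unreviewed.

```python
# Human-in-the-loop gate for generative output: the model produces a
# draft, a person fact-checks and refines it, and only approved drafts
# move on. generate_summary is a hypothetical stand-in, not a real API.
from dataclasses import dataclass, field

@dataclass
class Draft:
    source_doc: str
    text: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def generate_summary(document: str) -> str:
    """Placeholder for any call to a generative model."""
    return f"[model-generated summary of {document!r}]"

def review(draft: Draft, approve: bool, notes: str) -> Draft:
    """The step Featheringham insists on: a person checks the new hire's work."""
    draft.reviewer_notes.append(notes)
    draft.approved = approve
    return draft

draft = Draft("policy_memo.pdf", generate_summary("policy_memo.pdf"))
draft = review(draft, approve=False, notes="Verify the cited statute numbers.")
print("ready to publish:", draft.approved)  # False until a reviewer signs off
```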
It’s wise to ask people doing the day-to-day work about their obstacles and pain points. But Featheringham advised also observing people’s work directly.
“I’ve spent a lot of time going and standing behind people for, ‘show me how you do your job. What do you do?’” she said.
Featheringham added that experienced people may become so skilled at integrating multiple tasks and processes that the observer needs to ask why they do certain things in certain ways. That questioning can lead to a much deeper understanding of where to apply AI.
“You get the initial [ideas] of what they think are their needs,” Featheringham said. “But then you also really see it in action. With any of these types of emerging technologies, you can’t just go straight off requirements, you have to see how it’s going to be put into action, how the people would be interacting with it.”
One function Maximus has helped agencies with is making work easier for communications writers. By generating some pro forma elements of a piece from automated research, the AI lets writers spend more of their often-limited time crafting creative pieces in their own voices.
Beyond specific applications at the individual level, agencies must think about how to deploy AI in the context of modernization and at the enterprise level. That in turn, Featheringham said, requires understanding how AI differs from traditional enterprise software.
“AI is not just the same as any traditional software application,” she said. “There are some nuances that come into play that have to be accounted for.”
Among them: AI requires training data, which the agency must curate to avoid bias. Another is that AI, by definition, changes its own logic as it learns.
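One simple curation step, sketched below under assumed column names, is checking whether any single category dominates the training set. A skewed distribution is a prompt for deeper human review, not proof of bias.

```python
# One basic curation check: does any single category dominate the
# training set? The rows and the "region" and "label" columns are
# hypothetical.
from collections import Counter

training_rows = [
    {"region": "northeast", "label": "approve"},
    {"region": "northeast", "label": "approve"},
    {"region": "northeast", "label": "approve"},
    {"region": "southwest", "label": "deny"},
]

def share_by(rows: list[dict], key: str) -> dict[str, float]:
    """Return each value's share of the dataset for one column."""
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

for key in ("region", "label"):
    print(key, share_by(training_rows, key))
# Heavy skew (here, 75% northeast and 75% approve) flags the data for review.
```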
Featheringham recommended a ModelOps approach, modeled after the DevSecOps methodology many organizations have adopted for continuous, secure software development. Like DevSecOps, it lets organizations deploy regular, incremental releases with reliable functional and security characteristics. The alternative is slow, expensive “bespoke” applications that require custom work for every use case.
The key to ModelOps lies “in how you can build the controls and measures into the systems so that you can bring different types of AI models and tools in safely,” Featheringham said. “You can actually monitor what they’re doing.”
ModelOps, when designed properly, ensures the agency can see and account for the way given inputs result in changing outputs as AI models learn and adapt.
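One way to picture such a control, sketched here with a placeholder standing in for the real model, is an audit log that records a model version and an input fingerprint with every inference, so that changing outputs for the same inputs become visible over time.

```python
# A ModelOps-style control: every inference is logged with the model
# version and a fingerprint of the input, so shifting outputs can be
# traced as models learn. predict() is a hypothetical placeholder.
import hashlib
import json
import time

AUDIT_LOG = "inference_audit.jsonl"

def predict(model_version: str, payload: dict) -> str:
    """Stand-in for the deployed model."""
    return "placeholder-output"

def monitored_predict(model_version: str, payload: dict) -> str:
    """Wrap the model call with the monitoring the deployment requires."""
    output = predict(model_version, payload)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a") as log:  # append-only trail for auditors
        log.write(json.dumps(record) + "\n")
    return output

monitored_predict("v1.3.0", {"doc_id": 42, "task": "summarize"})
```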
She added that ModelOps must also take into account that the different flavors of AI vary in the degree to which their performance is probabilistic — very high for generative but low for RPA, which is more deterministic.
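A toy illustration of that distinction, with a random phrase picker standing in for a sampling model: the rule-based extraction below returns the same answer every run, while the generative stand-in can phrase its output differently each time.

```python
# Deterministic vs. probabilistic behavior, in miniature. The
# "generative" function is only a random stand-in for sampling.
import random

def rpa_extract(line: str) -> str:
    """Rule-based extraction: same input, same output, every run."""
    return line.split(":", 1)[1].strip()

def generative_summary(line: str) -> str:
    """Sampling stand-in: repeated runs can phrase the answer differently."""
    opener = random.choice(["In short,", "Briefly,", "To summarize,"])
    return f"{opener} {line.split(':', 1)[1].strip()}"

line = "Subject: Quarterly benefits report"
assert rpa_extract(line) == rpa_extract(line)  # always identical
print(generative_summary(line))
print(generative_summary(line))  # may differ from the line above
```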
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.