Insight by Noblis

When implementing artificial intelligence programs, feds from across government recommend standing up an AI ecosystem

Artificial intelligence success stems from establishing a reliable ecosystem, having the right data and focusing on people. We talk with AI leaders at CMS, DLA, GAO and Noblis.


Segment 1: Implementing an AI plan

The first step when it comes to AI is making sure that the data [agencies] rely on is complete, is accurate. … We certainly rely on mission data. We certainly rely on open-source data in developing our own machine learning capabilities.


Segment 2: Implementing an AI plan

We’re seeing a readiness to adopt and experiment with AI. I think a lot of it comes down to pilots, getting a few base hits here and there … starting small and then branching out from there.

Success in artificial intelligence depends on careful planning. Agencies must first establish what the Defense Logistics Agency’s Jesse Rowlands called an AI ecosystem.

At DLA, that requires answering four questions, said Rowlands, the agency’s AI strategic officer, while speaking on an AI implementation panel convened by Federal News Network and Noblis: “Do we have the data in place? Do we have the infrastructure and tools? Do we have the experts, the data scientists, to facilitate these projects? And do we have that governance structure?”

In a separate discussion, we covered how to prepare for AI projects. Read more in this article, “AI success starts long before you apply data to an algorithm.” In this second discussion, our panelists talked about how, having done the groundwork, agencies can best go about implementing their AI plans.

For AI, start small and scale up

Rowlands cautioned against comparing federal and commercial applications in similar domains too closely. DLA staff do study other large-scale, multi-item distributors, such as major retailers, he said, but added that a big difference is risk tolerance.

“If Walmart runs out of toilet paper, it’s no biggie. If a warfighter runs out of the critical part, that’s a different problem,” Rowlands said.

Because demand patterns from DLA’s armed forces customers can be difficult to predict, risk mitigation is an important element in its AI projects, he said. Mitigating risk, in turn, calls for an iterative approach to AI: deploying quickly but also adjusting quickly.
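Rowlands didn’t detail DLA’s models, but the risk-tolerance difference he describes can be made concrete. The hypothetical Python sketch below contrasts stocking to average lead-time demand, a common commercial posture, with stocking to a high quantile of simulated lead-time demand, a more risk-averse posture. Every number and name in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical monthly demand history for a single part (units).
demand = rng.poisson(lam=40, size=36).astype(float)
demand[[5, 17]] *= 3.0  # occasional surge months, e.g., a major exercise

lead_time_months = 2
service_level = 0.99  # far stricter than a typical retail fill-rate target

# Simulate lead-time demand by resampling the history, then compare
# stocking to the mean vs. to a high quantile of the distribution.
samples = rng.choice(demand, size=(10_000, lead_time_months)).sum(axis=1)
print(f"stock to mean demand: {samples.mean():.0f} units")
print(f"stock to {service_level:.0%} quantile: "
      f"{np.quantile(samples, service_level):.0f} units")
```

The gap between the two numbers is the price of making sure a warfighter never runs out of a critical part.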

“Whatever projects we want to take on in the future, we can get through that pipeline quickly, we can iterate, we can experiment,” Rowlands said. “And we can learn, more importantly, from what we’ve done in the past.”

AI pilots are a good way to proceed, advised Chris Barnett, chief technology officer at Noblis, who said he sees that approach often across government. Pilots typically center on applying AI or automation to make a particular job function more efficient.

Equally important to implementation is having complete data. Gaps sometimes exist in the data available for training algorithms. When they do, agencies should consider using synthetically generated data, recommended Taka Ariga, chief data scientist and director of the Innovation Lab at the Government Accountability Office.

Synthetic data can help steer an algorithm away from biases that can creep in, Ariga said. GAO has used synthetic data to model outcomes of different financial controls on the stubborn problem of improper payments, for instance.
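GAO hasn’t published the code behind that work, so the following is only a minimal, hypothetical Python sketch of the general synthetic-data technique: oversample a rare class (here, improper payments) and jitter the copies to fill a training-data gap. The column names, distributions and proportions are all invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical payments table with a gap: very few examples of the rare
# "improper" class, which can bias a trained model toward "proper."
real = pd.DataFrame({
    "amount": rng.lognormal(mean=6.0, sigma=1.0, size=1000),
    "vendor_age_days": rng.integers(30, 3650, size=1000),
    "improper": rng.random(1000) < 0.02,  # roughly 2% positive class
})

# Create synthetic minority-class records by resampling the observed
# improper payments and jittering the numeric fields -- a simple
# stand-in for techniques such as SMOTE or generative models.
minority = real[real["improper"]]
n_needed = 200
synthetic = minority.sample(n=n_needed, replace=True, random_state=42).copy()
synthetic["amount"] *= rng.normal(1.0, 0.05, size=n_needed)           # +/-5% noise
synthetic["vendor_age_days"] += rng.integers(-30, 30, size=n_needed)  # date jitter

balanced = pd.concat([real, synthetic], ignore_index=True)
print(balanced["improper"].mean())  # positive share rises from ~2% to ~18%
```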

AI that’s both for and about people

Success in implementing AI initiatives, not surprisingly, relies on people in a couple of ways. All the panelists agreed that agencies view AI as an enabler rather than a technology that will replace federal employees. The goal typically is to free up people’s time so they can do less routine work and instead focus on high-level analysis and planning.

“How can we augment the humans so that the work they’re doing is faster, and [we’re] reducing some of the burden,” said Rajiv Uppal, chief information officer at the Centers for Medicare & Medicaid Services.

As an example, he cited the months-long, laborious process of documenting security controls to obtain authorities to operate (ATOs) for new software programs. CMS is experimenting with natural language processing, applying it to its database of controls as a way of removing some of that burden, Uppal said.
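Uppal didn’t describe CMS’s implementation, but as a minimal sketch of the general idea, the hypothetical Python snippet below ranks entries in a security-control catalog by textual similarity to a draft sentence from ATO documentation, surfacing likely matches for a human reviewer to confirm. The control excerpts and draft text are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical excerpts from a security-control catalog (NIST 800-53 style).
controls = {
    "AC-2": "Account management: create, enable, modify, disable and remove accounts.",
    "AU-2": "Event logging: identify the events the system is capable of logging.",
    "CM-6": "Configuration settings: establish and enforce configuration settings.",
}

# A draft sentence from a system's ATO documentation.
draft = "The system disables user accounts automatically after 90 days of inactivity."

# Rank catalog controls by TF-IDF cosine similarity to the draft statement.
ids, texts = zip(*controls.items())
matrix = TfidfVectorizer(stop_words="english").fit_transform(list(texts) + [draft])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for control_id, score in sorted(zip(ids, scores), key=lambda pair: -pair[1]):
    print(f"{control_id}: {score:.2f}")
```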

Implementation success also depends on having staff members with sufficient AI skills, Uppal said. “We have a workforce upskilling program that we call workforce resilience,” he said. “We offer many tracks for our staff to get upskilled on human-centered design, which is an important component of how we do things.” Other tracks focus on data science and product management.

GAO’s Ariga said people working with AI must understand how to interpret results that typically are nonbinary. “What comes out of AI typically is probabilistic. So how do you interpret a 67% likelihood of something happening? How do you narrate that conversation?” he said.
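One simple way to “narrate that conversation” in software, sketched hypothetically below rather than drawn from any agency’s practice, is to map a model’s probability into explicit action bands, with thresholds set from the relative cost of false positives and false negatives.

```python
def narrate(probability: float, action_threshold: float = 0.8,
            review_threshold: float = 0.5) -> str:
    """Translate a model's probabilistic output into guidance a reviewer
    can act on. The thresholds here are illustrative; in practice they
    should reflect the mission's cost of errors."""
    pct = round(probability * 100)
    if probability >= action_threshold:
        return f"{pct}% likelihood: strong signal, flag for action."
    if probability >= review_threshold:
        return f"{pct}% likelihood: moderate signal, route to an analyst."
    return f"{pct}% likelihood: weak signal, log and monitor."

print(narrate(0.67))  # "67% likelihood: moderate signal, route to an analyst."
```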

Still, if an agency has established a strong AI ecosystem and then uses iterative, agile approaches to deploying it, it should be able to both audit and trust the outcomes, even if algorithms do have black box characteristics, Ariga said.

Barnett made an analogy to human intelligence. “You can’t see neurons functioning, but you can, over time, see the outcomes,” he said. “And to me, that provides validation that the system is working.”
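Barnett’s outcome-over-time validation can be sketched in code as well; the hypothetical monitor below is illustrative, not any agency’s tooling. It treats the model as a black box and judges it purely by how often its predictions match observed results over a rolling window.

```python
from collections import deque

class OutcomeMonitor:
    """Validate a black-box model by outcomes: track how often its
    predictions match observed results over a rolling window and flag
    the model when accuracy drifts below an acceptable floor."""

    def __init__(self, window: int = 500, floor: float = 0.90):
        self.results = deque(maxlen=window)
        self.floor = floor

    def record(self, predicted, actual) -> None:
        self.results.append(predicted == actual)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

    def healthy(self) -> bool:
        # Withhold judgment until a minimum sample has accumulated.
        return len(self.results) < 50 or self.accuracy() >= self.floor

monitor = OutcomeMonitor()
monitor.record(predicted="improper", actual="improper")
print(monitor.accuracy(), monitor.healthy())  # 1.0 True
```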

Listen to part 2 of the show:

