Insight by Noblis

AI success starts long before an agency applies data to an algorithm

Artificial intelligence can augment agency teams and help solve complex problems. But there’s a set of must-dos before you start. The Federal Drive’s Tom Temin explored them with a panel of federal AI leaders.


Federal agencies are rightfully excited by the prospect of how artificial intelligence can help them modernize and transform their mission delivery.

In fact, AI is “perhaps one of the most fundamentally interesting technologies of the 21st century,” said Taka Ariga, chief data scientist and director of the Innovation Lab at the Government Accountability Office.

But successfully using AI requires not only careful planning and a sound strategy but also preparing your organization for its adoption, said Ariga, who spoke on a panel convened by Federal News Network and Noblis.

Ariga named several considerations an organization must address before implementing AI, including privacy and data integrity and ensuring that AI applied to inherently governmental tasks remains free of bias.

Overseers such as GAO will be “looking for … iterative steps in documenting decisions around variables, around organizational roles and responsibilities, and around stakeholder input,” he said.

Setting a successful AI plan in motion

Noblis Chief Technology Officer Christopher Barnett said planning starts with “ideating” sessions with agencies. Such sessions involve “their mission analysts and getting an idea for what the system has to do and what the objectives are.” This planning approach pulls in experts from several disciplines “to really get at the root of the challenge, what the mission requires and what system needs to be designed first,” Barnett said.

Besides the IT and business owners, the AI team should include data scientists, cybersecurity and privacy subject matter experts, and legal and civil liberties practitioners.

It’s also important to know the desired outcomes, participants on the panel said. That helps define iterative steps in documentation and in AI application development, Ariga said.

While it’s OK to think big, start small, said Rajiv Uppal, chief information officer at the Centers for Medicare & Medicaid Services. “Let’s make sure we start on a small part of a bigger program,” Uppal advised. “Let’s make sure that we do risk [analysis] before we go out and tackle something really huge.”

Uppal suggested consulting with end users to discover the biggest pain points in their work and then starting with those. That will yield the largest return on investment in AI early in a program, he said.

An agency’s AI development team needs to understand the program to be augmented by AI and its requirements, Uppal said. Without adequate context, the team risks choosing the wrong training data or misapplying it.

“Data rarely speaks by itself,” he said. “You need to understand the context.”
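The panelists didn’t cite a specific example, but a small illustration shows the stakes. In the hypothetical sketch below, two source systems use the same field name with different units, so training data pulled without context would silently distort a model’s inputs:

```python
# Hypothetical illustration: the same field name can mean different
# things in different source systems, so data used without context
# silently corrupts whatever a model learns from it.

records_sys_a = [{"record_id": 1, "amount": 125.00}]   # amount in dollars
records_sys_b = [{"record_id": 2, "amount": 9900}]     # amount in CENTS

# A naive merge treats both "amount" fields as the same unit:
merged = records_sys_a + records_sys_b
print(max(r["amount"] for r in merged))  # 9900 -- a $99 record looks ~79x larger

# With context from the program's analysts, units are normalized first:
normalized = records_sys_a + [
    {**r, "amount": r["amount"] / 100} for r in records_sys_b
]
print(max(r["amount"] for r in normalized))  # 125.0 -- correct
```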

Remember: AI models must be trained

At the Defense Logistics Agency, AI Strategic Officer Jesse Rowlands starts with the presumption that AI models can be fragile and prone to error. That’s why assessing the risks and potential costs of mistaken outcomes is a key practice for his group in gauging the potential ROI of any AI project.
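Rowlands didn’t share his group’s actual formula. As a rough sketch of what folding error risk into an ROI estimate can look like, a back-of-the-envelope calculation might run as follows (every figure below is an illustrative assumption, not a DLA number):

```python
# Hypothetical ROI check that prices in the cost of model mistakes.
# All figures are illustrative assumptions.

annual_savings = 500_000          # projected labor savings from automation ($/yr)
decisions_per_year = 100_000      # predictions the model will make
error_rate = 0.02                 # expected share of mistaken outputs
cost_per_error = 150              # average cost to detect and fix one mistake ($)
build_and_run_cost = 250_000      # development, hosting and monitoring ($/yr)

expected_error_cost = decisions_per_year * error_rate * cost_per_error
net_value = annual_savings - expected_error_cost - build_and_run_cost

print(f"Expected annual error cost: ${expected_error_cost:,.0f}")
print(f"Net annual value:           ${net_value:,.0f}")
# With these assumptions the project nets out negative: a fragile model
# (high error_rate) can erase the savings, which is why error risk
# belongs in the ROI estimate up front.
```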

Beyond ROI, establishing accountability is critical, Rowlands said. He drew an analogy to people, the other “intelligent assets” besides AI models.

“Our people, they have supervisors, they have internal controls, they have performance appraisals,” he said. “We need to do the same with our AI models, if we want people to trust them. Who’s accountable for this model? Who’s watching the model?”
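The panel described the principle rather than a particular tool. A minimal sketch of the “who’s watching the model?” idea is to register each deployed model with a named accountable owner and a performance floor, then review it on a schedule; the names, thresholds and print-based alerting below are all illustrative:

```python
# Minimal sketch: each model gets an accountable owner and a periodic
# performance check, mirroring supervision and performance appraisals.
# Names, thresholds and the alert mechanism are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str            # the person accountable for this model
    min_accuracy: float   # performance floor agreed at deployment

def review(record: ModelRecord, labels: list[int], predictions: list[int]) -> None:
    """Compare recent predictions against ground truth and flag the owner."""
    correct = sum(y == p for y, p in zip(labels, predictions))
    accuracy = correct / len(labels)
    if accuracy < record.min_accuracy:
        # In practice this would page or email the owner; print stands in here.
        print(f"ALERT: {record.name} at {accuracy:.1%}, "
              f"below floor {record.min_accuracy:.1%} -- notify {record.owner}")
    else:
        print(f"{record.name}: {accuracy:.1%} OK (owner: {record.owner})")

record = ModelRecord(name="demand-forecast-v2", owner="j.doe", min_accuracy=0.90)
review(record, labels=[1, 0, 1, 1, 0], predictions=[1, 0, 0, 1, 0])
```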

For that reason, when selecting an AI product, agencies should evaluate products that make their mathematical or statistical techniques transparent, Noblis’ Barnett said. Avoid “black box” products that make it hard for the agency to understand the algorithms behind a model, he added. That often will favor products that use open-source algorithms, Barnett said.
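Barnett named no specific products. As one illustration of the transparency he describes, an open-source model such as scikit-learn’s LogisticRegression exposes its learned weights for inspection, which a black-box service typically does not; the synthetic data and feature names below are assumptions for the demo:

```python
# One way to favor transparency: an open-source model whose learned
# parameters can be inspected and documented. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # known generating rule

model = LogisticRegression().fit(X, y)

# Every feature's weight is visible, so reviewers can see -- and document --
# what drives each prediction, the kind of record overseers like GAO look for.
for name, weight in zip(["feature_a", "feature_b", "feature_c"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```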

Only after establishing governance, risk management, ROI and iterative development frameworks should an agency select specific technology, the panelists agreed. In fact, doing the groundwork should simplify technology selection.

“The technology offerings today are amazing,” Rowlands said. “But they’re all useless if I don’t have the expertise for the people to utilize those tools.” Rushing technology ahead of planning could turn an agency into what Rowlands called “a parking lot for Ferraris.”

Finally, the AI team should make sure everyone in the agency understands the purpose of AI, Barnett said, adding that they also need to know these projects focus on augmenting people with automation, not replacing them.

“How we deal with a communications plan and bargaining unit conversation, I think it’s important,” Barnett said.

Check back for part two of this discussion on March 16.



Featured guests

  • Jesse Rowlands

    AI Strategic Officer, Defense Logistics Agency

  • Rajiv Uppal

    Chief Information Officer, Centers for Medicare & Medicaid Services

  • Taka Ariga

    Chief Data Scientist and Director of the Innovation Lab, Government Accountability Office

  • Chris Barnett

    Chief Technology Officer, Noblis

  • Tom Temin

    Host, The Federal Drive, Federal News Network