Insight by Guidehouse

AI & Data Exchange 2024: Guidehouse’s Tracy Jones on 3-step plan for effective and ethical AI

“To really have trust, you have to have transparency — you have to have explainability,” shares Guidehouse’s Tracy Jones. She offers three steps that agencies can take.

Artificial intelligence, like the proverbial hockey puck, moves fast. Federal organizations that want to use AI effectively therefore need to anticipate where the technology is heading, not just consider what’s possible now.

That means understanding AI’s evolving potential and charting milestones based on that understanding, advised Tracy Jones, associate director for data management at Guidehouse.

“We’re already seeing the impacts that AI is starting to have,” Jones said. “We have teams that are out there doing fraud detection, using Elasticsearch to really improve insights for decision-making, using optimization for software development,” she said. “But I think we’re really just at the tippity tip of the iceberg of where this is going.”

Jones cited several recent studies concluding that, within a few years, AI capabilities will reach a medium level of human capability. What does that mean? AI will have the power to take over increasing numbers of routine tasks, Jones said during Federal News Network’s AI & Data Exchange.

The traditional model in which government adoption of new technology lags industry simply won’t work when AI accelerates change in so many domains, she said.

“If we utilize AI ethically and transparently, it’s going to determine how we show up in the global market — with the economy and political leadership — and also how we respond to global changes and needs,” Jones said.

That’s why, she recommends, agencies should use the current “lag period” — bolstered by the White House executive order on artificial intelligence — to lay the groundwork for effective AI adoption and trustworthy use.

“There are some really core foundational things that government should be focusing on to prepare, so that when they do launch full string, they’re doing it in the most productive way possible,” Jones said.

Foundational steps include looking at the agency’s AI governance structure, establishing how to build AI into strategy, and understanding ethical frameworks and principles, she said. Beyond that come increasing data and AI literacy among staff members and ensuring both management and staff understand that AI will change how they do things.

“And then, how do we think about change management and culture cultivation?” Jones said. “How do we think about that workforce to really keep them engaged and keep work meaningful?”

Planning for AI should therefore include leadership plus “bringing together voices from across the different business units, as well as technology,” she said.

Step-by-step guide for rolling out AI

Jones laid out a menu of three actions agencies should take to prepare for successful AI deployment:

  • Establish governance that includes data and data management: After all, she said, data “is feeding the AI and the product.”

Governance of AI itself goes beyond simply who manages the algorithm, Jones noted. It must extend to the lifecycle of the AI products, ensuring constant monitoring of results and then feeding those results “back into your overall data ecosystem and using it for other potential AI or other insights to drive your organization.”

  • Select use cases with care, and ensure they use the right flavor of AI for the application: “AI encompasses a whole lot — everything from robotic process automation to logical decision-making to the deep learning that we’re now seeing” from large language models, Jones pointed out.

The challenge becomes understanding what the agency is trying to solve from a business perspective and what tools best align to the desired outcome, she said.

  • Choose the right data for the use case: Jones listed some of the data questions that an agency should consider: “Do you have the data to really get the solution that you’re driving for? Do you have the data to train the model? And then do you have the data to appropriately test it?”

Among the most important considerations besides AI technology management, Jones said, is ensuring ethical AI use.

“Especially for government, if we don’t have strong ethics in how we’re handling data and developing AI, we’re not going to have trust,” she said, adding, “To really have trust, you have to have transparency — you have to have explainability.”

It’s critical that the public believes that the government manages AI in an ethical manner, Jones said. That includes validating outputs before putting AI into production and continuously managing each program to ensure the outputs stay within intended parameters.

To do so requires establishing ethical principles, making sure people understand them and keeping the guardrails in place, she said. Agencies therefore need cross-functional teams consisting of people with diverse perspectives to design federal AI systems in the first place.

“That diversity of perspectives is really going to help you, early hopefully, identify ethical issues like potential biases and to remediate those … before it goes into production.”

Discover more articles and videos now on our AI & Data Exchange event page.

Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
