Insight by IBM

How to build trust in artificial intelligence

A useful way for federal agencies to think about artificial intelligence (AI) starts with asking two questions: Will the technology help solve a real mission-related challenge? And did the technology reveal to you how it solved the challenge?

If the answer to the second question is yes, “that’s what we call trustworthiness,” said Mark Johnson, vice president of technology for the U.S. federal market at IBM. Trust hinges on whether you are “able to point back to how the answer was developed.” And as the foundation of any AI solution, the how includes “understanding what data is being used, how that data is being used and the source or origin of that data,” added Susan Wedge, managing partner for the U.S. public and federal market at IBM Consulting.

Thinking critically about these considerations and potential risks while building or implementing AI systems is especially important for federal government agencies. Part of the challenge is that AI is evolving so quickly that frameworks, tools and guidance need to be continuously updated and improved.

To date, “AI has really been about pattern recognition,” Wedge said. “As we begin to also leverage generative AI, it’s becoming more about pattern and content creation.”

As federal agencies gear up to integrate generative AI into their operations, the key question therefore becomes, “How do we use AI in a way that adds value, while also being traceable and trackable?” said Johnson. In other words, the development and deployment of AI must be transparent and open, yet made available “securely and safely.”

With a commitment to the creation and deployment of responsible AI systems, IBM sees an opportunity for AI-driven innovation to empower government workers to make better, data-driven decisions.

Copyright © 2023 Federal News Network. All rights reserved.
