Insight by Ingram Micro and IBM

Is AI trustworthy? That depends on the models, boundaries and governance

With the right approach to models, boundaries and governance, it’s possible to develop trusted artificial intelligence disciplines, say IBM’s Mark Johnson and Ingram Micro’s Tony Celeste.

This is the eighth article in our series, The Power of Technology.

We’ve all heard the concept of the last mile invoked in technology discussions. IBM’s Mark Johnson offered a comparable notion when it comes to the use of artificial intelligence: the last 5% to 15%.

“We’ve realized that trusted AI, something that can actually be used in a business sense, needs to be trained on curated data. You can get to 85% or so using existing available AI models,” said Johnson, vice president of federal technology at IBM. “Then, you need to do that extra 15% with your own data — a business’s data or an agency’s data.”

That is what ensures integrity and accuracy in the results, he said during a discussion for Federal News Network’s The Power of Technology series.
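
To make that last 15% concrete: the pattern Johnson describes, starting from a broadly pretrained model and adapting it to an organization’s own curated records, is what practitioners commonly call fine-tuning. Below is a minimal sketch of that general pattern using the open source Hugging Face transformers library, not IBM’s tooling; the model name, file name and label count are illustrative placeholders.

```python
# Hedged sketch: fine-tune a generic pretrained model on curated agency data.
# Uses the open source Hugging Face "transformers" and "datasets" libraries
# as stand-ins; model name and CSV file are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # the broad, "85%" foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# The "extra 15%": a curated, vetted dataset the agency owns.
# Assumed CSV columns: "text" and "label".
data = load_dataset("csv", data_files="curated_agency_records.csv")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=3),
    train_dataset=data["train"],
)
trainer.train()  # adapts the broad model to the mission's own data
```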

We sat down with Johnson and Tony Celeste, executive director and general manager for Ingram Micro Public Sector, to talk about how organizations can develop trusted AI disciplines.

AI, not as new as you think

For starters, AI is built on earlier work in business intelligence, high performance computing and big data, Celeste said.

“The government has had interest and involvement in this since its infancy,” he said. “From a technology standpoint, it didn’t just ‘Poof!’ arrive on the scene. Things have a way in our industry of gaining momentum. If you think about this, in the early 1980s, we were using the first form of artificial intelligence — in configurator tools.”

He also laughingly pointed out that IBM’s Deep Blue beat chess champion Garry Kasparov 25 years ago, in 1997. The observation came after Johnson noted that the machine learning in the company’s early version of Watson, the one that bested the competition on “Jeopardy!” in 2011, has a lot in common with OpenAI’s ChatGPT.

“It was great to have a game-playing supercomputer,” Johnson said. “But I draw a lot of parallels between that and ChatGPT. Watson, at that time, was trained on a broad set of information — had to know about everything because IBM didn’t know what topics were going to come up — much like ChatGPT is trained on the broad set of internet data, which may or may not be factual.”

In the time since Watson’s game show debut, IBM has radically pivoted in its development of and approach to AI technologies: It’s not about knowing everything and crunching every bit of data, it’s about creating boundaries and integrating in that critical 15% of the right data, Johnson said.

Blending foundational AI models with curated AI models

Agencies already use narrow AI and robotic process automation to help sift through large datasets more efficiently than people can, Celeste said.

But the current interest in AI/ML focuses on getting to useful insights faster, he pointed out. “How can we get a higher level of fidelity, a greater level of accuracy, get through more information more quickly to get to a result?”

That’s possible by blending foundational models pretrained on a particular set of data with an organization’s curated data, Johnson said. IBM has made this blend of pretrained and fine-tuned data the heart of its new watsonx AI platform, which just began rolling out.

“It can very quickly be put into use by the government,” he said. “It’s also trackable, so you’re able to say, ‘Hey, here’s the reference for this particular answer that I’m getting or why it generated that particular solution.’ That’s what helps generate trust in the output of the AI models.”
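
Johnson did not spell out the watsonx mechanics, but the trackable-answer behavior he describes resembles the widely used retrieval augmented generation pattern: retrieve a vetted source document first, then keep its reference attached to the answer. The sketch below is a toy illustration of that general idea, with a hand-built corpus and a deliberately naive relevance score; it is not the watsonx implementation.

```python
# Hedged sketch of "trackable" answers: retrieve a source passage first,
# then return the answer together with a reference to where it came from.
# Generic retrieval-augmented pattern, not watsonx itself.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str   # the reference an agency can audit later
    text: str

CORPUS = [
    Passage("policy-42", "Contractors must encrypt data at rest."),
    Passage("memo-07", "Annual AI ethics reviews are required."),
]

def retrieve(question: str) -> Passage:
    """Toy relevance score: count words shared with each passage."""
    words = set(question.lower().split())
    return max(CORPUS, key=lambda p: len(words & set(p.text.lower().split())))

def answer_with_reference(question: str) -> str:
    src = retrieve(question)
    # A real system would feed src.text to a language model here; the key
    # point is that the doc_id travels with the answer.
    return f"{src.text} [source: {src.doc_id}]"

print(answer_with_reference("What encryption is required for data?"))
```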

Such transparency is essential, Celeste said. If an agency fails to do a good job of vetting and testing the datasets that it’s applying AI algorithms against, then the results and output are going to vary, he said.

“How we program the algorithms matters too,” Celeste said. “That’s been a big point of discussion with legislators: making sure that we’re leaving biases out of the programming models, that the models are really just looking at the information.”
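
As a concrete illustration of the vetting and bias concerns Celeste raises, teams often run simple sanity checks on a dataset before training: missing values, duplicate rows and whether outcomes are distributed evenly across a sensitive attribute. The snippet below is a minimal, hypothetical example using pandas; the column names and the 0.2 threshold are placeholders, not a standard.

```python
# Hedged sketch of basic dataset vetting before training an AI model.
# Column names ("outcome", "demographic_group") are illustrative placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")

# 1. Structural checks: gaps and duplicates skew results unpredictably.
print("missing values per column:\n", df.isna().sum())
print("duplicate rows:", df.duplicated().sum())

# 2. A first-pass bias check: does the positive outcome rate (0/1 column)
#    differ sharply across groups? Large gaps warrant review before training.
rates = df.groupby("demographic_group")["outcome"].mean()
print("outcome rate by group:\n", rates)
if rates.max() - rates.min() > 0.2:  # illustrative threshold, not a standard
    print("WARNING: outcome rates diverge; review sampling and labels.")
```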

Maintaining ownership of data and monitoring its use in AI workflows

Another trust factor for agencies should be ensuring data ownership when working with industry partners or using publicly available models. “If you put your data in a model, what’s happening to the data?” Johnson said.

Setting boundaries on data should always be part of the upfront work in AI, he suggested. When IBM partners with an agency on an AI project, “you still own your data. You still control that data,” he said. “We want to make sure that we’re not taking and using that data in some other way, even if it’s just to train our models.”

Both the data and the programming therefore need to be monitored, Johnson advised. This needs to be done as a collaborative effort, not just by the IT organization.

The people best able to define and track such use are those who understand how the data and any AI results will be used for a particular mission, Celeste added. “They might not understand the underlying technology, but they understand what they’re going to do with that information.”

Establishing AI governance

IBM has an AI ethics board that Johnson characterized as very robust. It reviews the trustworthiness of AI models, what happens to data, who’s running the models and whether the work is something society will be comfortable with.

“Agencies will have to be doing that on their own too,” he said. “They’ll have to have their own ethics reviews, and vendors should also be working with the government to make sure that what we’re doing with AI is really adding value to society and not hurting individuals or groups.”

One of the new components of watsonx, coming in fall 2023, is watsonx.governance, which should help organizations monitor their AI initiatives and programs. “The toolkit will enable trusted AI workflow … and operationalize governance to help mitigate the risk, time and cost associated with manual processes and provide the documentation necessary to drive transparent and explainable outcomes,” according to an IBM press release.
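
The release does not detail how watsonx.governance works internally, but the general idea of operationalized governance and documentation can be sketched: record every model decision with enough metadata that an auditor can reconstruct it later. The record schema below is purely illustrative and is not the watsonx.governance format.

```python
# Hedged sketch of an AI decision audit log, the kind of documentation a
# governance toolkit automates. The schema is a generic illustration,
# not the watsonx.governance format.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 logfile: str = "ai_audit.log") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash inputs so the log proves what was seen without storing raw data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("claims-model-1.3", {"claim_id": 99}, "approved")
```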

Celeste called for the government to lean into this type of approach to expanding AI/ML because of the potential opportunities the technology can deliver.

“If you can anonymize the data in the health statistics of people, imagine the power of applying this tool on a large data set to look for correlations in diseases and health care risks. Imagine the impact that can have,” he said. “The processing power is available and so are the storage and the networking bandwidth to take advantage of it.”
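
Celeste’s scenario translates naturally into code: remove direct identifiers, then correlate condition indicators across the population. The toy sketch below uses placeholder column names and is not a compliant de-identification pipeline; real anonymization, HIPAA Safe Harbor for example, requires far more than dropping columns.

```python
# Hedged sketch: anonymize records, then search for condition correlations.
# Columns are placeholders; real de-identification involves much more
# than removing ID columns.
import pandas as pd

df = pd.read_csv("health_records.csv")
anon = df.drop(columns=["name", "ssn", "address"])  # strip direct identifiers

# Correlate binary condition indicators across the whole population.
conditions = anon[["diabetes", "hypertension", "heart_disease"]]
print(conditions.corr())  # flags pairs that co-occur more than chance suggests
```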


Learn more about IBM’s offerings through Ingram Micro Public Sector on Xvantage.

Can AI help with supply chain security?

As agencies begin to rev up supply chain risk management efforts, IBM’s Mark Johnson sees an essential need for artificial intelligence.

When it comes to gathering and maintaining information for software bills of materials, “I am a believer that AI and machine learning are going to play a very big role,” said IBM’s vice president of federal technology. “Because who’s going to go back and look at all of the code that has been in place on all the legacy systems to start tracking new code? Even new code has old code in it.”

Using AI-type algorithms for this type of challenge will let agencies quickly amass potentially insightful information, Johnson said. “It will give us a great baseline to start building meaningful SBOMs of the supply chain, and that will help ensure security going forward.”
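
An SBOM is ultimately a structured inventory, so the target output is straightforward to illustrate even before machine learning enters the picture. The sketch below emits a minimal CycloneDX-style JSON document from a hand-listed set of components; in practice a scanner, possibly ML-assisted as Johnson anticipates, would discover those components from the code itself, and the component names here are placeholders.

```python
# Hedged sketch: emit a minimal CycloneDX-style SBOM from a component list.
# Components are placeholders; real tooling would discover them by scanning
# code, which is where Johnson expects AI/ML to help.
import json

components = [
    {"type": "library", "name": "openssl", "version": "1.0.2k"},
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": components,
}
print(json.dumps(sbom, indent=2))
```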

