Insight by SAIC

Artificial intelligence starts with a robust data policy

To ensure government agencies receive value from artificial intelligence, you first must understand what AI actually is, rather than treating it as a buzzword made popular by emergent generative applications. That’s according to Jay Meil, chief data scientist and managing director for artificial intelligence at SAIC.

“The first thing I ask our customers in the federal government is, what are you trying to do? Because AI is really just a means to an end,” Meil said. Answering that is not a trivial matter. Too often, people say they want artificial intelligence because there’s some sort of AI mandate in place or because everyone else seems to be launching AI projects.

“You really have to dig into and understand what the customer’s mission is,” Meil said.

Still, many agencies do have a clear idea of what they want to accomplish. One requirement is common across the defense, civilian, law enforcement and intelligence domains, Meil said: how to create human-machine teams. That is, “look for different ways to ease the cognitive load on users, and to make informed decisions faster and better.”

Data underlies the ability to create human-machine teaming, Meil said. It’s a matter of using algorithms to discern patterns or anomalies in volumes of data too large for people to parse.

“Humans and machines can both think, in different ways,” Meil said. “Machines think in a way that’s very good at processing numbers, very quickly; large amounts of data, pulling out signals in the noise, or hidden patterns in the haystack.” But they can’t do the logical reasoning that goes into strategic decision-making by people. In that sense, AI can augment people’s main strength and help them avoid analysis paralysis or, equally bad, not seeing important signals because of an overwhelming amount of data.
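
That division of labor is what anomaly detection techniques exploit. As a minimal sketch of the idea (the library choice, scikit-learn, and the synthetic data are illustrative assumptions, not SAIC’s tooling), an isolation forest can flag a handful of hidden signals in a data set far too large to eyeball:

```python
# Minimal sketch: an algorithm pulling "signals in the noise" out of a
# data set far too large for a person to parse. The data and library
# choice here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
routine = rng.normal(loc=0.0, scale=1.0, size=(100_000, 4))  # normal activity
hidden = rng.normal(loc=6.0, scale=1.0, size=(10, 4))        # buried signals
readings = np.vstack([routine, hidden])

model = IsolationForest(contamination=0.001, random_state=0)
flags = model.fit_predict(readings)  # -1 marks suspected anomalies

print(f"{(flags == -1).sum()} of {len(readings)} records flagged for human review")
```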

Therefore, to deploy AI successfully, an organization needs to reach a high level of data maturity, Meil said. He outlined five “sights” in a data maturity model.

Hindsight means you can use data to understand what happened.

Foresight, the next level up, enables predictive analytics, the ability to evaluate possible outcomes of future actions.

Insight is higher still, resulting in what Meil called prescriptive analytics. “If I know that I could have outcome A, B, or C with a given level of confidence, what can I do to drive towards one of those potential outcomes?”

Oversight and right-sight make up the highest level of data maturity, Meil said.

“Oversight is the ability to see everything in a common picture, all of your analytics in one place on one dashboard.” Meil described right-sight as a deep, individual dive into patterns or anomalies revealed by the oversight function, to support decision-making.
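
A minimal sketch can make the step from foresight to insight concrete. The actions, outcomes and probabilities below are invented for illustration: the predictive layer supplies the confidence for each outcome under each candidate action, and the prescriptive step picks the action that best drives toward the desired outcome.

```python
# Foresight: estimated probability of each outcome under each candidate
# action. All values here are invented for illustration.
predicted = {
    "action_1": {"A": 0.6, "B": 0.3, "C": 0.1},
    "action_2": {"A": 0.2, "B": 0.5, "C": 0.3},
    "action_3": {"A": 0.4, "B": 0.4, "C": 0.2},
}

desired = "A"

# Insight (prescriptive analytics): given those confidences, which action
# best drives toward the desired outcome?
best = max(predicted, key=lambda action: predicted[action][desired])
print(best, predicted[best][desired])  # action_1 0.6
```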

Verification and feedback

In getting to a state of good data maturity, Meil said, organizations tend to focus on “what the data looks like when we bring it in, how we shape it, what the model looks like and what the results are.”

But once an AI program generates a result, he said, it’s also important to pay attention to what he called verification and feedback.

“We take it a step further and say, is that what we were expecting? Is that what the model should have inferenced? Is that the answer that should have been given?” That post-facto analysis feeds into ongoing training of the algorithm, Meil said, “so the models get smarter over time.”
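
A minimal sketch of what such a verification-and-feedback step could look like in code follows; every name and structure here is an assumption for illustration, not SAIC’s pipeline. Each model answer is checked against what a reviewer expected, and the misses are set aside as fresh training examples.

```python
# Minimal sketch of verification and feedback: compare each model answer
# with the answer a reviewer expected, log the result, and collect the
# misses for the next training round so "models get smarter over time."
from datetime import datetime, timezone

feedback_log = []  # in practice, a database or feature store

def verify(record_id: str, model_answer: str, expected: str) -> bool:
    """Was this the answer that should have been given?"""
    entry = {
        "record_id": record_id,
        "model_answer": model_answer,
        "expected": expected,
        "correct": model_answer == expected,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    feedback_log.append(entry)
    return entry["correct"]

verify("doc-041", model_answer="routine", expected="urgent")

# Misses become training data for the next iteration of the model.
retraining_examples = [e for e in feedback_log if not e["correct"]]
```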

He added that you don’t always need huge data sets to properly train AI programs. Having the right data is more important, and using synthetic data might even produce more accurate, less biased outcomes.

Also important: what Meil called pre-processing pipelines to convert important data that isn’t machine-readable, such as the contents of PDFs, PowerPoint presentations or SharePoint documents. AI programs ultimately need data formatted in such a way that the algorithm can process it.
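
A minimal sketch of such a pre-processing pipeline, assuming the open-source pypdf and python-pptx libraries (illustrative choices, not SAIC’s stack), might pull plain text out of PDF and PowerPoint files before anything reaches the model:

```python
# Minimal sketch of a pre-processing pipeline: convert PDF and PowerPoint
# files into plain text an algorithm can process. Library choices are
# assumptions for illustration.
from pathlib import Path

from pypdf import PdfReader    # pip install pypdf
from pptx import Presentation  # pip install python-pptx

def extract_text(path: Path) -> str:
    """Return the machine-readable text buried in a PDF or .pptx file."""
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        reader = PdfReader(str(path))
        return "\n".join(page.extract_text() or "" for page in reader.pages)
    if suffix == ".pptx":
        deck = Presentation(str(path))
        return "\n".join(
            shape.text_frame.text
            for slide in deck.slides
            for shape in slide.shapes
            if shape.has_text_frame
        )
    raise ValueError(f"no handler for {suffix}")

# Usage: text = extract_text(Path("briefing.pdf")), then normalize and
# format that text before it is fed to the model.
```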

Meil said building high-impact AI applications in which the organization can have confidence depends on a robust data strategy and on transparency: “understanding how the machines are making the decisions they’re making.” He termed that second criterion responsible artificial intelligence.

He noted that SAIC’s low-code, no-code orchestration tool for AI, called Tenjin, is “designed very deliberately to give our customers the power to do data science.” AI, he said, is complicated, and most agencies lack the number of data scientists they need. Data scientists often lack domain or mission expertise, so giving people throughout the agency some data science skills, via a tool set like Tenjin, can speed the deployment of responsible projects, Meil said.

“It empowers them, the analyst, the operator, the warfighter, the law enforcement officer, the diplomat, any enterprise user,” Meil said, “to be able to interact with data in a fundamentally different way. It creates self-service analytics.”

For further insights on artificial intelligence at SAIC, go to saic.com/ai.
