Insight by Pegasystems

For success in AI, start with a clear idea of the outcomes


In some ways, getting started is the most difficult part of deploying artificial intelligence (AI). One way to begin, though: Think less about algorithms and training data and more about the business problems you’d most like to solve.

Peter van der Putten, the director of the AI Labs at Pegasystems, offered these questions: “What is it that you’re trying to achieve? Is it about providing customer-centric, citizen-centric service? Or is it more focused on doing more with less and improving your productivity?”

As for AI itself, van der Putten added, “It’s not just technology. It’s also a way of working, becoming more evidence-based as an organization.” He said AI also helps employees become more productive by removing repetitive or routine tasks. In such cases, you can think of AI as augmented intelligence. Used effectively and ethically, he suggested, you can also think of it as “accepted” intelligence.

Van der Putten, in an interview for the Artificial Intelligence Federal Monthly Insights Series, named a third application domain for AI, one that neither improves direct online experiences for constituents nor internal operations. Namely, “getting people to understand how to interact with the government in the first place,” so they know “some of the benefits I’m entitled to.” He called that “nudging, similar to how commercial companies are nudging their customers to their products and services.” AI-powered Q&As, he said, can give people personalized views of the programs and services most relevant to each individual.

This all means that while the technology staff must be ready with the data and computing resources to handle AI, the decisions of whether, and where, to use AI take a team approach.

“Ultimately, it’s the entire organization that needs to think about strategic priorities,” van der Putten said.

Deploy with care

AI projects generally don’t require armies of data architects and AI specialists, van der Putten suggested, but rather some careful thinking.

“It is much better to start with outcomes, the various parts of the organization where we want to improve,” he said. Then, work backwards towards the processes that contribute to the outcomes. He used the example of case backlogs, where AI might improve the routing of cases in a manner more closely aligned to the skills of the people making adjudication decisions.

Van der Putten said it’s important to continuously measure outcomes and check in with the people whose work AI is aimed at improving. He recommended an incremental approach, applying AI to one process at a time, constantly feeding data back to the algorithm for continuous improvement.
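To make that feedback loop concrete, here is a minimal sketch, not Pegasystems’ method, of incrementally updating a model as each new batch of case outcomes arrives and measuring the result before expanding further. The synthetic data and the scikit-learn SGDClassifier are illustrative assumptions.

# A minimal sketch of an incremental feedback loop: each new batch of observed
# case outcomes is fed back to the model, and performance is measured per batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # supports incremental updates via partial_fit
classes = np.array([0, 1])               # e.g., "route to specialist" vs. "standard queue"

rng = np.random.default_rng(42)
for week in range(4):
    # Pretend each week brings a new batch of cases with observed outcomes.
    X_batch = rng.normal(size=(200, 5))
    y_batch = (X_batch[:, 0] + 0.5 * X_batch[:, 1] > 0).astype(int)

    model.partial_fit(X_batch, y_batch, classes=classes)

    # Check the outcome on the newest batch before deciding to expand further.
    accuracy = model.score(X_batch, y_batch)
    print(f"week {week}: accuracy on new cases = {accuracy:.2f}")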

Such an approach can make modernization dollars go far, van der Putten said. Often AI can help integrate legacy applications so they improve process outcomes.

“If you want to modernize, you maybe shouldn’t rip and replace, but think more in terms of putting an agility layer on top [of existing applications],” he said. That lets the agency improve the citizen experience or solve internal logjams, while buying time to rework back-end applications in a more measured way.

Applications, of course, don’t work without data. When thinking about the data that will train and then work operationally with AI, van der Putten again advised starting with the desired business outcomes and decision processes that support them.

Too often, people start thinking about data immediately.

“Maybe we shouldn’t start with thinking about data. We should maybe flip it around and think about our business priority,” he said. “What kind of automated decisions do support agents or citizens need to have a more frictionless journey?”

That line of thinking prompts a sharper focus on precisely what data the AI deployment will need, such as from natural language processing or inbound emails.

“You work your way back from the top,” van der Putten said. He said you may discover that a small, carefully curated data set will achieve the most effective and bias-free algorithm performance. Avoid thinking that the more data you throw at an algorithm, the better.

It’s also important to periodically retest what your algorithms are putting out, to make sure results don’t drift over time.

Van der Putten said, “We need to continuously monitor all the decisions [for] whether there’s bias in our systems. Rules change quite often. So every time we make changes to rules, or changes to models, we need to retest.” He said to consider keeping synthetic data sets on hand to recalibrate AI systems, adding that agencies should consider using generative AI services to build reference data sets.
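As a rough illustration of that retesting idea, the sketch below, an assumption rather than anything Pegasystems describes, compares current model scores against a held-out reference set (which could be synthetic) using a two-sample Kolmogorov-Smirnov test, and checks whether approval rates diverge across groups. The thresholds, group labels and generated data are hypothetical.

# A minimal drift-and-bias retest: compare current scores to a reference set
# and compare approval rates across groups. Thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference_scores, current_scores, p_threshold=0.01):
    # Flag drift when the two score distributions differ significantly.
    result = ks_2samp(reference_scores, current_scores)
    return {"ks_statistic": result.statistic,
            "p_value": result.pvalue,
            "drift": result.pvalue < p_threshold}

def check_group_bias(decisions, groups, max_rate_gap=0.2):
    # Flag bias when approval rates across groups diverge beyond a chosen gap.
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return {"approval_rates": rates, "gap": gap, "biased": gap > max_rate_gap}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.beta(2, 5, 5_000)   # e.g., scores from a synthetic reference set
    current = rng.beta(2.5, 5, 5_000)   # scores observed after a rule or model change
    decisions = (current > 0.5).astype(int)
    groups = rng.choice(["A", "B"], size=current.size)

    print(check_drift(reference, current))
    print(check_group_bias(decisions, groups))

Running a check like this after every rule or model change gives a simple, repeatable signal that either the score distribution has drifted or outcomes have skewed toward one group.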


