Insight by Deloitte

Agencies need to break down barriers, engender trust of AI

RJ Krawiec, a principal in Deloitte’s Government and Public Services Practice, said agencies can take six practical steps to demystify AI and improve acceptance.

Krawiec called this combination of AI tools with humans who trust the technology the “Centaur model,” after the mythical being that is part horse, part person.

“It’s powerful as it combines the benefits of both. You’ve got the raw power of a horse with the articulation and mental abilities of a person to have a better overall, more powerful solution,” he said. “We’re seeing the same things here because the strengths of AI and humans are synergistic. AI is great at consistency, at precision and at analyzing large sets of data and seeing things that humans can do but take a lot of time to do.”

For many agencies, the Centaur model is currently best applied to back-office systems. Over roughly the last five years, agencies have used robotic process automation (RPA) to handle routine tasks. But now, Krawiec said, there is a greater opportunity to improve efficiency by applying advanced analytics and even generative AI (GenAI) tools to human resources, procurement and financial areas of the business.

“If you can reduce that administrative load, government workers can spend more time on innovative things, mission-focused things, human-focused things that can make a difference to the people that they’re serving, and those are the easiest things to do,” he said. “The strengths of AI are precision and performing rote tasks, analyzing large sets of data, so freeing humans up enables them to make even more of an impact for constituents on a daily basis.”

To create trust and comfort with AI, agencies should create sandboxes and other areas for employees to experiment with the tools.

But beyond just the opportunity to test the tools, Krawiec said leaders must find practical pilots that help solve “real-world problems” and give the workforce demonstrable outcomes.

“One of the important points here is that AI does not replace humans. It only supplements. That’s a perfect example of playing to AI’s strengths of looking through lots of data, analyzing it so that I can use my knowledge, my experience, my judgment and my values to then take that information and do something with it,” he said. “AI, right now, isn’t a silver bullet. Not only do you need to make sure, from a technology side, that the model can be trusted, the answers that you’re giving can be trusted, but also humans have to be kept in the loop to apply that judgment in various places.”

AI not good for all jobs

Krawiec said AI is not the answer for all jobs. He pointed to a recent study by Deloitte of 19,000 job tasks from the U.S. Department of Labor that found some areas where humans are still outperforming AI, and may always outperform the technology.

“A lot of human-facing work, think about a coach or a counselor, the physical areas where AI was still outperformed by the human, so like vehicle maintenance, things like that, need a human to do the work,” he said. “The humans outperforming AI are around tasks with a lot of context variability, where all the information around the actual question matters greatly. It was also around some of the human-facing empathetic work. But the important thing to note is any sort of work task has multiple steps, so even in areas where, oh yeah, the AI performs well here, it’s always some areas, not usually all areas.”

Krawiec laid out six practical steps for how organizations can get their workforce better prepared to trust, use and rely on AI tools:

  • Educate and inform: This means highlighting the benefits of AI and why it’s useful to people at all levels of an organization. It isn’t a leadership-only effort, and it isn’t something to leave to the newest employees; it applies across the organization. At this step, it’s also important to be sensitive to concerns.
  • Demonstrate value: Roll out real-world applications first that demonstrate practical, measurable benefits to build trust and have people say, ‘Oh, that’s useful.’ “In this case, an example might be an AI tool that takes notes in a meeting, summarizes it and sends it out to everybody on a team, regardless of whether they were there because in that instance, where a person who missed the meeting would have to talk to somebody else to come up to speed, they now have a comprehensive and synthesized list of what happened there, which doesn’t drain team resources.”
  • Provide training as tailored as possible: A lack of education can be an obstacle to adoption. To reduce the intimidation factor, Krawiec said agencies should make training as simple to obtain as possible and target it at all levels of the organization.
  • Foster inclusivity: This is meant to supplement and enhance an organization’s workforce by including them, asking them questions and for their opinions. How do they want to use this? What problems could it solve? And then take that feedback and build it into the process and the technologies. What that does is increase a sense of ownership and brings the workforce along on the journey so that they’re more likely to use it, Krawiec said.
  • Provide support and resources: Initial training is important, but ongoing support is critical to embedding the technologies in the workflow. Krawiec said problems will inevitably arise, so troubleshooting them, and then letting people explore more advanced use cases as they learn the technology, with the support to do so, is key.
  • Monitor and adapt: AI is not a ‘set it and forget it’ technology. Krawiec said agencies need to continue to assess the impact, refine their approach and better align it with their organizational needs. “The key there is simple. It’s be open to feedback and have as little sense of authorship or pride as you can to be able to pivot.”

“It’s really important for IT managers, developers and business analysts to look at the technology side of things and make sure that they’re focused on the user experience, how they’re using it, make it easy for them, understand whether the people using it are experts or more novices, and then build it around them,” he said. “Adoption is the problem, not the limits of the technology, and in terms of that, adoption in deployment, the more black box a technology or a process is, I find, the more hesitant people are to use it. I encourage as much transparency, as much accountability as possible to say, ‘here’s the policy, here’s the data and here’s the algorithm that goes into crafting this response so that the human can say, Oh, I trust this.’”

For more in the series, Artificial to Advantage: Using AI to Advance Government Missions, click here.

Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
