Insight by Deloitte

3 ways agencies can start experiencing the impact of AI

Shrupti Shah, the managing director of government and public service practice at Deloitte, said by using AI, agencies can improve citizen services more quickly.

This interview is part of a series, Artificial to Advantage: Using AI to Advance Government Missions.

Agencies are well into their artificial intelligence journey. But recent data shows a strong majority are still in the early stages of maturity.

The challenges agencies still must overcome include defining and developing trustworthy AI, developing their workforces’ data and AI skillsets and, of course, determining how best to budget and resource these investments.

Despite these challenges, there is a lot of excitement and desire to see how AI can impact mission areas and improve decision making.

Shrupti Shah, the managing director of government and public service practice at Deloitte, said government leaders must manage risk to the right level for employees to feel confident in using AI. Federal employees are renowned for their aversion to risk, and AI brings with it some of the biggest risks in recent memory.

“An agency needs to look at adopting AI not for the sake of AI, but where will it really enhance its mission? I think there are three key areas where AI can fundamentally help: in improving services to citizens directly, like through chatbots or by being able to reduce wait times for services,” Shah said during the discussion AI and Government: The risk of adopting versus not adopting. “Another area is to be able to make better policy by being able to combine various sources of data and say this is where I need to pinpoint this initiative or policy or program to get the maximum impact. AI can actually add new nuanced layers to our policy making. The third one is actually improving back-office operations, such as procurement, day-to-day operations, some of which are often quite routine or require a common set of decisions.”

AI to free up workers from mundane tasks

Agencies ranging from the IRS to the Social Security Administration to the U.S. Patent and Trademark Office are either already using basic AI or looking at how certain capabilities can improve citizen services.

Shah said that by using chatbots or similar technologies, agencies can free up workers to focus on more difficult or complex problems.

The same can be true for back-office functions. Technologies like robotic process automation (RPA) have demonstrated over much of the last five to seven years that they can pull employees out of mundane, repetitive and low-value work.

“There are back office functions like an inspection regime that could go and randomly select some customers to audit and to check that they’re compliant. AI can help identify those that pose the greatest risk by being able to combine sources of data and let employees make much more informed decisions,” Shah said. “I think even in back office areas, you can demonstrate how, ultimately, it’s creating a public good and public value.”
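Shah does not describe any specific system, but the idea of combining data sources into a risk ranking so inspectors review the riskiest cases first, rather than sampling at random, can be sketched in a few lines of Python. Everything below — the field names, weights and thresholds — is an illustrative assumption, not any agency’s or Deloitte’s actual model.

```python
# Minimal sketch: blend several (hypothetical) data sources into one risk score
# and rank cases so limited audit capacity goes to the highest-risk cases first.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str
    late_filings: int                    # from a filing-history system (assumed)
    reported_vs_third_party_gap: float   # mismatch with third-party data, in dollars (assumed)
    prior_violations: int                # from past inspection results (assumed)

def risk_score(rec: CaseRecord) -> float:
    """Combine the data sources into a single score; higher means riskier."""
    return (
        0.4 * min(rec.late_filings, 5) / 5
        + 0.4 * min(rec.reported_vs_third_party_gap, 50_000) / 50_000
        + 0.2 * min(rec.prior_violations, 3) / 3
    )

cases = [
    CaseRecord("A-101", late_filings=0, reported_vs_third_party_gap=250.0, prior_violations=0),
    CaseRecord("A-102", late_filings=4, reported_vs_third_party_gap=18_000.0, prior_violations=1),
    CaseRecord("A-103", late_filings=1, reported_vs_third_party_gap=60_000.0, prior_violations=2),
]

# Present the ranked list to employees, who still make the final call.
for rec in sorted(cases, key=risk_score, reverse=True):
    print(f"{rec.case_id}: risk={risk_score(rec):.2f}")
```

In practice an agency would likely learn the weights from past audit outcomes rather than set them by hand, but the fixed weights keep the sketch self-contained.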

AI’s potential impact on the third area, policy making, is all about bringing together disparate data sources to look for patterns or trends.

Shah said a policy that works well in an urban setting may not work well in a rural community. AI tools can surface those kinds of nuances, she said, because they can analyze data sets more quickly and spot differences that would take a human days or weeks to find.

“AI is able to identify those patterns for you because unless you know exactly what you’re looking for, you can’t see the pattern, but AI can identify patterns where you may not have been looking,” she said. “I think people are more comfortable with the use of AI when it comes to the direct delivery of citizen services because they can see an immediate impact in reduced wait times or better citizen satisfaction, better engagement, etc. I think that is naturally the one where agencies are dipping their toe in the water.”

Start small, learn from AI pilots

Shah offered an example from the government of the United Kingdom. The UK tax authority used AI to analyze data about a group of about 10,000 people who were constantly late paying their taxes.

“They used behavioral science, so lessons from how people make decisions, to alter 10 different letters. One group got the original letter, and nine different groups got a slightly different letter. With one letter, they were able to increase by 30 percentage points the number of people who return their taxes on time. They estimated that inspectors would have cost 30 million pounds,” she said. “If you can imagine what the potential is with AI and advanced calculations of trends and data and analysis, the amount of money that could be saved or the amount of resources that could be better targeted and the impact we could have, I think that is truly where the potential of AI really lies.”

While the case to adopt AI seems clear, agencies remain hesitant to move too fast or try anything more than small-scale tests.

Shah said leaders who say the risk is too large or the benefits are too small are leaving themselves vulnerable in several different ways.

“You are losing opportunities to tighten your security and reduce your vulnerability against malicious actors because they will be using this technology, without a doubt, so lower preparedness in general. I think it’s also important that as we seek to attract the brightest and best in technological minds to government, if we say we’re not embracing this huge fundamental part of new technology, we’re going to lose the war for talent, and we’re going to lose those minds to the private sector, when we really want them doing good in the public sector,” she said. “We also risk lagging in terms of international competitiveness, and finally, in terms of citizen services, all of the major retail organizations that we interact with on a daily basis, whether it’s our internet shopping or other commercial services, are using AI. They’re using it to improve the customer experience, and if the customer experience or the citizen experience from public services is so far behind that of commercial services, we risk losing trust in government because we’re not meeting expectations that have been raised by private sector counterparts.”

Of course, Shah said, adopting AI, like any new technology, comes with its own set of complexities and challenges.

She said there are typically three main barriers to agencies adopting AI:

  • Simple capacity issues: Do I have the skills, capabilities and resources to be able to do this?
  • Cultural challenges: Shah said AI represents a big change to many people’s everyday work, raising fears about what the change will mean for the job they signed up for and the way they wanted to serve the public.
  • Regulatory certainty: Shah said this is something the private sector doesn’t face in the same way government does. Agencies carry an additional layer of concerns, quite rightly, around equity, equal access to services and privacy.

“Those can be addressed by, I think, really looking at job augmentation, rather than job replacement. Saying this actually frees up workers to do more of what they want to do and less of the downside of their job, which is the lower-value bureaucratic tasks,” Shah said. “I think piloting is a great way to address that risk aversion. There are some regulators around the world that have adopted regulatory sandboxes, where they partner with industry to test new products and services in a safe way so that they can assess whether that product or service is safe for public consumption at a later time. They are partnering with industry to reduce later risks while not impeding innovation in the industry that they’re regulating.”

For more in the series, Artificial to Advantage: Using AI to Advance Government Missions, click here.

