Agencies face a strategic imperative to use artificial intelligence in their missions. AI, machine learning and other emerging technologies enable the workforce to collect and analyze data and deliver insights that improve decision making.
At the same time, AI and ML expose agencies to a new and possibly different set of risks. Agencies also must consider new policy and governance approaches to mitigate those risks.
Mike Peckham, a managing director of advisory at KPMG, said agencies should examine their current level of AI risk preparedness and understand potential vulnerabilities.
“I think everybody is putting this large box around AI to say, ‘we’re not going to define it as any one particular thing.’ I like the term intelligent automation because as long as you’re starting the journey and you’re starting to use these tools in the way that they’re meant to be used in an ethical manner, you will slowly, but surely, see the benefits of layering on those technologies,” Peckham said during the discussion Modern Government: AI Risk Management. “The first time I played with natural language processing, we were looking at the single audits on grants. If you’ve ever read a single audit, I can tell you it’s probably one of the worst reads you can have in your life. A 600-to-700 page audit and coming from an accounting background, it’s incredibly boring. But natural language processing can go through that so quickly even though it’s unstructured data. Once you’ve made sense of it, you can understand where the information is throughout that entire 600 page document, then you can start to glean way more than you ever could have understood by just doing manual reviews. That to me is what people are starting to recognize is the best part about these technologies.”
The use of AI, or intelligent automation, isn’t necessarily new. Peckham said he used natural language processing and other early forms of automation in the early 1990s to do travel post payment audits.
He said applying technology to that work cut a manual process that typically took 54 minutes down to 9 minutes.
Better understanding today
While that early example showed the power of the technology, agencies today have much better technology and understanding of what can be done using AI.
Peckham pointed to the success many federal CFO offices have had with robotic process automation, which is often the first step toward intelligent automation.
“Now we start to understand what the human does with that information so that the next time around when we layer in intelligent automation and the AI tools, then it’s giving them options like 90% of the time when the tool saw these scenarios, it was option A 70% of time or whatever the outcome would be,” he said. “It’s helping the human make that final decision, but allowing the human to make that final decision. That goes back to that point of are we not going to replace people because we inherently understand things about data that a computer never will.”
Guidance for AI Risk Management
To help agencies get their heads around how AI could impact their processes, KPMG developed an accelerated AI risk diagnostic tool. Peckham said the tool helps agencies develop an inventory of how they’re using AI today.
“I think it is a great first step to understand what you have and how you’re using it. If you don’t understand that and if you haven’t done an inventory, then AI or intelligent automation is going to create fear, just like anything new that you’re doing,” he said. “But if you’re able to talk to somebody who’s already been down that path and they can say, ‘yeah, it’s a little bit scary. But guess what, here’s what we did, here’s how we handled it.’ This is where our diagnostic can really help folks tackle those challenges and move forward at a faster pace.”
Ethics and speed are paramount
At the same time, agencies can’t sacrifice ethics for speed. Peckham warned that agencies need to do their homework, not just on how they are using the algorithms, but on the results they are getting back from the tools.
“We’ve seen what’s happened with biometrics. There have been hearings on the Hill about the use of these tools in identifying folks, specifically folks of color and how there have been perceived biases and real biases using AI. It’s a problem,” he said. “But I do believe that a lot of that problem falls back to the idea that the information that was used because these are algorithms and that’s all they are. The algorithms were built around the largest population that they had available to them. In the case of the biometrics, it happened to be for the most part white men so they have very strong algorithms in that case.”
He said KPMG tests AI tools using synthetic data that creates personas for fictional people, varying attributes such as skin or eye color just enough to make sure the algorithm works consistently and contains fewer biases.
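In spirit, that kind of synthetic-data fairness check is a disparity test: generate persona records for each demographic group, run them through the matcher, and compare error rates across groups. The sketch below is a minimal, self-contained illustration of the idea, not KPMG's actual tooling; the group labels and the deliberately biased toy "matcher" are assumptions standing in for a real biometric algorithm.

```python
import random

random.seed(42)

# Hypothetical demographic groups for synthetic personas (illustrative only).
GROUPS = ["light_skin", "dark_skin"]

def make_personas(group, n):
    """Generate n synthetic persona records for one demographic group."""
    return [{"group": group, "id": i} for i in range(n)]

def toy_matcher(persona):
    """Stand-in for a biometric matcher, deliberately less accurate for
    one group to simulate an algorithm trained mostly on one population.
    Returns True when the match is correct."""
    error_rate = 0.02 if persona["group"] == "light_skin" else 0.15
    return random.random() > error_rate

def error_rate_by_group(n=10_000):
    """Run synthetic personas through the matcher and tally errors per group."""
    rates = {}
    for group in GROUPS:
        personas = make_personas(group, n)
        errors = sum(1 for p in personas if not toy_matcher(p))
        rates[group] = errors / n
    return rates

rates = error_rate_by_group()
# A large gap between group error rates flags a potential bias problem.
gap = abs(rates["light_skin"] - rates["dark_skin"])
print(rates, "gap:", round(gap, 3))
```

With a real algorithm, the same harness would swap in the production matcher and realistic synthetic imagery; the point is that varying only the protected attribute isolates whether the algorithm behaves differently across groups.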
Peckham added before agencies can jump into the AI pool’s deep end, they must understand the best use case to test out these tools.
“It’s a little tricky to start, but I think with the governance and all the guidance that’s coming out, it’s getting a little easier. You can’t be afraid to take that first step on the journey to understand how you can use AI,” he said. “Do I have a use case? Do I have the right technology to address that use case? And how can I move forward? Talk to folks, like KPMG, who have been there before and help you get down the best path to learn and be successful.”
© 2024 Federal News Network. All rights reserved.