AI and government fraud: How agencies can gain the upper hand

Government Accountability Office data released earlier this year estimated that federal agencies lost as much as $521 billion due to programmatic fraud between fiscal 2018 and 2022. While it is difficult to lock in on a universally accepted figure, as Federal News Network reported, there is no denying that combating fraudulent activity remains a formidable post-pandemic challenge. 

Artificial intelligence has proven to be a double-edged sword for agencies pursuing fraudsters. With generative AI, bad actors can more quickly and effectively orchestrate fraud schemes, challenging the Financial Crimes Enforcement Network (FinCEN), the departments of Justice and Defense, the IRS, and other federal and state agencies. At the same time, these agencies increasingly recognize the critical role AI can play in remaining a step ahead of criminals. 

Throughout the pandemic, the fraud conversation gravitated towards the front of the house: Agencies doling out relief checks and stimulus payments struggled to strike a balance between weeding out fraudulent claims and rapidly delivering funds to legitimate American consumers and business owners who desperately needed help. 

However, now agency decision-makers are increasingly adopting AI to enhance financial fraud investigations. To do so effectively, they must develop the optimal mix of training, tools, processes and policy.

Educate and train on AI, not just tools 

Government agencies eager to reap the benefits of AI for fraud investigations must avoid the temptation to shortchange the education and training required to maximize their investments. AI education must balance messaging on what AI is capable of with what AI cannot do.

Last year’s Ernst & Young AI Anxiety in Business Survey found broad-based workforce concerns about AI that agency leaders must factor into education and training. Of those surveyed, 75% of employees were concerned AI would make specific jobs obsolete — and roughly two-thirds (65%) were anxious about AI replacing their jobs. 

Encouragingly, the data also show that workers want to learn more about AI, how to use AI tools, and how to do so responsibly. Sixty-five percent of the workers Ernst & Young surveyed were anxious about not knowing how to use AI ethically, and majorities were concerned about legal risks (77%) and cybersecurity risks (75%). 

But the AI enthusiasm is there: Our 2024 public sector AI survey reveals that 82% of public sector fraud investigators and decision-makers believe that the benefits of using AI outweigh the associated risks. 

Similarly, agencies recognize that the benefits of proactively building out AI leadership teams and meeting sector-wide directives and guidance can only be realized if the workforce is ready and able to use the technology. Earlier this year, Vice President Harris called for each federal agency to hire a chief AI officer and launched the National AI Talent Surge, affirming that AI success in the public sector will include stakeholders at all levels. 

However, workforce education is not a one-time initiative but a continuous process. While training workers on the tools themselves is important, educating the workforce on the underlying technology matters more. Nearly three-quarters (74%) of those we surveyed indicated that they had not received any prior training in AI skills related to financial investigations.

Agencies that invest in AI training for fraud investigators should find a receptive audience. Despite the lack of prior training, 79% of respondents expressed significant interest in acquiring AI skills specific to financial investigations, underscoring the growing recognition of AI’s importance to financial investigative work in the public sector.

Separate AI tools from AI hype  

Agencies are focused on AI applications that deliver tangible, near-term results. This perspective is essential when evaluating and deploying AI tools that directly impact program and mission success — operationally and financially.

It is early days for AI usage by fraud investigators: 20% of those we surveyed reported using AI frequently in their work, while another 20% indicated using it sometimes. As the adoption of AI-powered fraud investigation tools grows, agencies should ensure that the tools they evaluate, adopt and deploy are capable of delivering a set of core capabilities: 

  • Detecting patterns and anomalies. Traditional investigation methods involve human analysts sorting through large financial data sets and scanning for irregularities, a tedious, time-consuming process. AI-powered algorithms scan data sets in moments, spotting patterns and anomalies that indicate suspicious activity and flagging them for further investigation (a minimal anomaly-detection sketch follows this list). 
  • Automating time-intensive processes. A significant challenge, identified by 82% of the government financial fraud stakeholders we surveyed, is the need to reduce costs associated with time-consuming investigative processes. By leveraging natural language processing, AI platforms can extract crucial information from unstructured data sets, helping investigators recover lost assets with less time and manpower (see the extraction sketch after this list). 
  • Real-time monitoring. AI systems analyze transactions more efficiently than legacy investigation methods, flagging suspicious transactions in real time so government agencies and financial organizations can take action immediately.
  • Behavior analysis. This is perhaps one of the most critical AI uses in fraud investigations: AI systems build profiles of typical user activity and detect when a transaction, or a series of transactions, deviates from an account’s established spending habits (see the baselining sketch after this list). 
  • Predicting future fraud behavior. Finally, AI systems help investigators prepare for future and emerging threats and vulnerabilities. Analyzing historical data allows AI platforms to predict potential fraud attacks, enabling investigators to stay one step ahead of bad actors (a supervised-learning sketch closes out the examples below).
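To ground the pattern-detection point, here is a minimal sketch of unsupervised anomaly detection over transaction records. It assumes scikit-learn is available; the feature names and synthetic data are invented for illustration, not drawn from any agency system.

```python
# Hypothetical sketch: unsupervised anomaly detection on transaction records.
# Assumes scikit-learn; features and synthetic data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulate normal transactions: (amount, hour of day, days since last claim).
normal = np.column_stack([
    rng.lognormal(mean=6.0, sigma=0.5, size=1000),   # typical amounts
    rng.normal(loc=13, scale=3, size=1000),          # business hours
    rng.exponential(scale=30, size=1000),            # claim spacing
])

# Inject a few suspicious records: large amounts filed at odd hours, rapidly.
suspicious = np.array([
    [95_000, 3, 0.2],
    [120_000, 2, 0.1],
    [88_000, 4, 0.3],
])
transactions = np.vstack([normal, suspicious])

# Fit an isolation forest and flag the lowest-scoring records for review.
model = IsolationForest(contamination=0.005, random_state=42)
labels = model.fit_predict(transactions)  # -1 marks anomalies

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} of {len(transactions)} records flagged for review")
```

One reason this technique fits fraud work: an isolation forest needs no labeled fraud examples, which are scarce in practice; it simply surfaces the records that look least like the rest for a human investigator to examine.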
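The automation bullet leans on natural language processing to mine unstructured documents. A production pipeline would use a trained NLP model; the regex rules below merely stand in for that step to show the shape of the extraction, and the case note is invented.

```python
# Hypothetical sketch: pulling structured leads out of unstructured case notes.
# Regexes stand in for a trained NLP model; the document text is invented.
import re

case_note = (
    "Claimant wired $48,200.00 to account 0042-7781 on 2021-03-14, then "
    "submitted a second application under a different SSN two days later."
)

# Extract dollar amounts, account references and ISO dates for the case file.
amounts = re.findall(r"\$[\d,]+(?:\.\d{2})?", case_note)
accounts = re.findall(r"\b\d{4}-\d{4}\b", case_note)
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", case_note)

print({"amounts": amounts, "accounts": accounts, "dates": dates})
# {'amounts': ['$48,200.00'], 'accounts': ['0042-7781'], 'dates': ['2021-03-14']}
```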
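For behavior analysis, a hedged sketch of per-account baselining: build a spending profile from an account’s history and flag transactions that land far outside it. The three-standard-deviation threshold and the sample data are illustrative assumptions.

```python
# Hypothetical sketch: per-account behavioral baselining with a simple z-score.
# Real systems build richer profiles; threshold and data are illustrative.
from collections import defaultdict
from statistics import mean, stdev

# (account_id, transaction_amount) pairs, e.g. pulled from a payments feed.
history = [
    ("acct-001", 120.0), ("acct-001", 95.0), ("acct-001", 110.0),
    ("acct-001", 130.0), ("acct-002", 40.0), ("acct-002", 55.0),
    ("acct-002", 47.0), ("acct-002", 52.0),
]

# Build a per-account spending profile from historical activity.
profiles = defaultdict(list)
for account, amount in history:
    profiles[account].append(amount)

def is_deviant(account: str, amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount sits far outside the account's baseline."""
    past = profiles.get(account, [])
    if len(past) < 3:
        return False  # not enough history to judge
    mu, sigma = mean(past), stdev(past)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A $5,000 charge on an account that usually spends ~$50 should stand out.
print(is_deviant("acct-002", 5000.0))  # True
print(is_deviant("acct-002", 49.0))    # False
```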
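And for the prediction bullet, a minimal, assumption-laden sketch of supervised learning: train a classifier on historical records labeled fraudulent or legitimate, then score incoming claims. The features, toy labeling rule and data are invented; real deployments involve far richer features and careful validation.

```python
# Hypothetical sketch: scoring new claims with a model trained on labeled history.
# Assumes scikit-learn; features, labels and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Historical records: (amount in $1,000s, prior claims, account age in years).
X = rng.normal(loc=[5, 2, 6], scale=[2, 1, 3], size=(2000, 3))
# Toy labeling rule: big amounts on young accounts with many prior claims.
y = ((X[:, 0] > 7) & (X[:, 2] < 4) & (X[:, 1] > 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new claim; route high-risk ones to an investigator's queue.
new_claim = np.array([[9.5, 4, 1.5]])
risk = model.predict_proba(new_claim)[0, 1]
print(f"fraud risk score: {risk:.2f}")
```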

Align AI guidance with fraud investigation processes 

In October 2023, President Joe Biden issued an executive order that aims to infuse the federal workforce with 500 AI experts by 2025. This ambitious talent acquisition plan, a critical first step in developing a more AI-focused government workforce, underscores the government’s commitment to accelerating digital transformation.

On the local level, more than 500 officials from 200 state and local governments have joined the GovAI Coalition, recently formed by technology officials at the City of San Jose. The group collaboratively shares resources on how to use AI responsibly. Since forming last fall, the organization has published documents that help agencies adopt and implement AI technologies.

The message is evident across all levels of government: Artificial intelligence is here, and federal, state and local agencies are ready and eager to implement it to improve efficiency across numerous initiatives. 

It’s easy to see why demand is high. By bringing AI into fraud cases, government agents can expand their analytical capabilities, incorporating a wider range of data points and variables into the analysis. Armed with more comprehensive solutions than legacy systems offered, investigators get a more holistic view of each case, which leads to more precise fraud detection and gives fraudsters a taste of their own medicine by turning the artificial intelligence advantage against them. 

Benjamin Chou is president of Personable Inc.
