Public sector benefits are a fraudster’s playground: Here’s how biometric security can help
As generative AI technology has become more sophisticated and accessible, creating and deploying fake identities is now easier than ever.
In 2021 alone, the Council of Inspectors General on Integrity and Efficiency reported over $4.5 billion in confirmed government benefits fraud. Fraud in the public sector is a serious issue: it can leave American citizens without support, waste taxpayer dollars through overspending and inefficiency, and damage public confidence in government services. It’s a big problem, and the pandemic only made it worse. The public sector’s rush to expedite financial aid and move essential in-person benefits such as unemployment insurance and food stamps (SNAP) online meant many systems and services lacked adequate security. The result was a perfect platform for fraudsters armed with advanced tools such as deepfakes produced with generative AI.
As generative AI technology has become more sophisticated and accessible, creating and deploying fake identities is easier than ever, leaving vital programs such as food stamps, unemployment insurance and the tax system extremely vulnerable to fraudsters. Today’s cybercriminals have the combination of technology, tools and expertise to systematically plunder public sector benefits programs at scale, and defrauding the government has become big business. Since the onset of the pandemic, thieves have stolen nearly $80 billion of COVID relief money. In Los Angeles alone, $19.6 million in Electronic Benefits Transfer funds was stolen in 2022.
The unfortunate truth is that most public sector benefits programs are still not built to resist advanced attacks such as deepfakes. This creates a huge blind spot for government agencies: when individuals apply for benefits online, how can agencies be sure applicants are really who they say they are?
Addressing public sector fraud with biometrics
Public sector fraud must be addressed at the highest point of risk: the first point of contact, when people sign up for access to government services. This is where an individual initially verifies their identity and is granted access to the system, so the check must confirm the individual is the live, genuine person they claim to be. Traditional security measures such as tokens or one-time passcodes (OTPs) are no longer a sufficient form of verification, because they are easy to replicate or steal. To ensure the person receiving benefits is the intended recipient, government must rely on trusted identity verification. The most effective way to do so is by verifying a genuine document, such as a passport or driver’s license, against a genuine face using facial biometric verification technology.
Unlike traditional security measures that depend on something you have or know, such as a token or a password, facial biometric security is based on genuineness rather than secrecy: there is no shared secret for an attacker to steal. Sometimes referred to as identity proofing, this type of identity verification is the most secure and convenient way for organizations to verify and authenticate identities online, delivering national-grade security without adding friction for users applying for benefits.
With face verification, a person applying for government benefits is prompted to complete a brief facial scan. Facial biometrics are essential here because the face is the only biometric that can be matched against a government-issued identity document, such as a driver’s license, which provides a trusted reference image from a government authority. Going one step further, biometric verification technology with liveness capabilities ensures the online user is the right, real person verifying at the right time, not a bad actor presenting a mask or a video deepfake to the camera.
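To make that flow concrete, here is a minimal sketch of how the three checks described above, document authenticity, face match and liveness, might be combined into a single onboarding decision. The data structure, function names and threshold are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch of a document-plus-face onboarding check.
# The helpers and the threshold below are illustrative assumptions,
# not any specific vendor's API.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    document_authentic: bool   # security-feature / tamper checks on the ID passed
    face_match_score: float    # similarity between the selfie and the ID portrait, 0..1
    liveness_passed: bool      # capture came from a live person, not a replay or deepfake


def decide_onboarding(result: VerificationResult,
                      match_threshold: float = 0.90) -> str:
    """Combine the three checks described in the article into one decision."""
    if not result.document_authentic:
        return "reject: identity document failed authenticity checks"
    if not result.liveness_passed:
        return "reject: capture failed liveness (possible mask, replay or injection)"
    if result.face_match_score < match_threshold:
        return "refer: face did not match the document portrait strongly enough"
    return "approve: live person matches a genuine government-issued document"


if __name__ == "__main__":
    print(decide_onboarding(VerificationResult(True, 0.96, True)))
    print(decide_onboarding(VerificationResult(True, 0.96, False)))
```

The design point is that any single failed check blocks enrollment; a strong face match alone is not enough if the liveness check fails.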
Choosing the right technology
Investing in robust identity verification technology is essential for tackling fraud in the public sector. Not only can it provide the security needed to ensure benefits go to those who need them most, it is also a cost-saving measure for taxpayers: fraud detection and prevention technology typically delivers 10 to 100 times its cost in return. And because generative AI has made creating new, fake identities so easy for fraudsters, manual review is no longer adequate; these synthetic identities are virtually impossible to spot by eye. Choosing a biometric technology solution can be overwhelming, but when looking to combat public sector fraud, organizations should look for three things:
A future-proof approach to security:
A comprehensive solution should protect against all types of attack, from a low-level presentation attack, such as a bad actor holding a mask or photo up to the camera, to more sophisticated digital injection attacks like deepfakes. Digital injection attacks occurred five times more frequently than basic presentation attacks in 2022. They are among the most dangerous and scalable techniques cybercriminals use to undermine systems, and many biometric systems are not equipped to deal with them. Cloud-based, one-time biometric authentication is essential here because it treats every device as if it were compromised and is not susceptible to vulnerabilities on the device itself.
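One way to picture “one-time” biometric authentication is as a single-use, server-issued challenge that every capture must be bound to, so imagery recorded or generated earlier cannot simply be replayed or injected. The sketch below illustrates that idea under those assumptions; it is a simplification, not a description of any specific product’s protocol.

```python
# Simplified sketch: binding each biometric capture to a single-use,
# time-limited server challenge so replayed or injected imagery from an
# earlier session is rejected. Illustrative only, not a vendor protocol.

import secrets
import time

CHALLENGE_TTL_SECONDS = 60
_active_challenges: dict[str, float] = {}  # challenge -> issue time


def issue_challenge() -> str:
    """Server issues a fresh, unguessable challenge for one capture attempt."""
    challenge = secrets.token_urlsafe(16)
    _active_challenges[challenge] = time.time()
    return challenge


def validate_capture(challenge: str, capture_passed_liveness: bool) -> bool:
    """Accept a capture only if its challenge is fresh, unused and liveness passed."""
    issued_at = _active_challenges.pop(challenge, None)  # single use: removed on first check
    if issued_at is None:
        return False  # unknown or already-used challenge (possible replay or injection)
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False  # stale challenge: could be pre-recorded material
    return capture_passed_liveness


if __name__ == "__main__":
    c = issue_challenge()
    print(validate_capture(c, capture_passed_liveness=True))   # True: fresh, first use
    print(validate_capture(c, capture_passed_liveness=True))   # False: challenge reused
```

Because the challenge is consumed on first use and expires quickly, material replayed from a previous session arrives with a stale or already-used challenge and is rejected; liveness detection still has to handle synthetic imagery generated in real time.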
Cloud-based architecture:
Once a criminal has control of a device, they can access the credentials stored on it and change them to lock the legitimate user out of their accounts. With a cloud-based architecture, authentication happens server-side, independently of the device, which circumvents device-level vulnerabilities. A device affected by malware, for example, will not compromise the authentication process, because the verification decision is never made on the device.
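The practical difference can be shown schematically: in the sketch below, the device-asserted path trusts a flag computed on the user’s phone, while the server-side path recomputes the checks itself from the uploaded capture. Function and field names are made up for illustration.

```python
# Schematic contrast between device-asserted and server-side verification.
# Names and fields are invented for illustration only.

def device_side_decision(device_report: dict) -> bool:
    """Anti-pattern: trusting a pass/fail flag computed on the user's device.
    Malware on a compromised device can simply set this flag to True."""
    return bool(device_report.get("device_says_verified", False))


def server_side_decision(raw_capture: bytes, run_checks) -> bool:
    """Cloud pattern: the device only uploads the raw capture; liveness and
    face matching run on the server, so a compromised device cannot skip them."""
    checks = run_checks(raw_capture)  # e.g. liveness + face match, computed server-side
    return checks["liveness"] and checks["face_match"]


if __name__ == "__main__":
    # A tampered device report is accepted by the naive approach...
    print(device_side_decision({"device_says_verified": True}))  # True (unsafe)

    # ...but the server-side path recomputes the checks itself.
    def server_checks(capture: bytes) -> dict:
        return {"liveness": False, "face_match": True}  # stand-in for real server analysis

    print(server_side_decision(b"capture-bytes", server_checks))  # False (safe)
```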
Maximum completion rates:
Completion rate refers to the share of users who are able to follow the verification process through to the very end. Factors such as device accessibility and user experience significantly affect it, and at national scale even a marginal increase in failure rate adds up to a significant number of people: a difference of just 1% in a solution’s completion rate could mean 3.3 million Americans are excluded from verifying their identity.
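The arithmetic behind that figure is straightforward. The sketch below assumes a base population of roughly 330 million people, which is what the 3.3 million number implies; the same calculation scales to whatever population an agency actually serves.

```python
# Worked arithmetic behind the completion-rate figure: a 1 percentage-point
# drop in completion rate, applied to a base of roughly 330 million people,
# excludes about 3.3 million users.

BASE_POPULATION = 330_000_000  # assumption: roughly the U.S. population


def excluded_users(completion_rate_drop_pct: float,
                   population: int = BASE_POPULATION) -> int:
    """People unable to finish verification for a given drop in completion rate."""
    return round(population * completion_rate_drop_pct / 100)


if __name__ == "__main__":
    print(excluded_users(1.0))   # 3,300,000
    print(excluded_users(0.5))   # 1,650,000
```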
In addition to these three factors, a biometric verification solution must provide equal access to as many people as possible and not exclude marginalized communities. There are a number of considerations surrounding inclusivity: digital literacy, internet access, device types, accessibility concerns, biometric bias and more. Ensuring the solution you choose has user-centric design at its core is imperative, so that all users can use the technology easily regardless of ability, education or the technology available to them.
Today’s cybercriminals are constantly finding new opportunities to infiltrate onboarding and security systems. As generative AI-based attack techniques such as deepfakes become easier to create and harder to detect, cybercriminals can scale their attacks faster than ever before and wreak havoc on public sector systems. To combat fraud, government organizations need to invest in biometric verification technology that can ensure users applying for benefits and scheduling appointments are genuinely who they say they are.
Ajay Amlani is senior vice president and head of Americas for facial biometric verification provider iProov.