Login.gov set to embrace facial recognition technology – How GSA and users can mitigate risks

Login.gov's adoption of facial recognition technology is a significant moment for the technology and could spur broader adoption moving forward.

In October, the General Services Administration announced that in 2024 it would add facial recognition technology to Login.gov, the single sign-on service that Americans use for government benefits and services online. GSA said that it would also add another digital identity verification option for those who don’t want to use facial recognition technology.

Though facial recognition technology has been around for decades, Apple's 2017 introduction of Face ID on the iPhone X was a major catalyst for consumer adoption, making the technology visible and practical for everyday use. Several years later, it is reasonable to estimate that nearly one billion consumers have used facial recognition as a form of verification.

The benefits of the technology are clear. It offers a fast, convenient way to verify your identity. It is more secure than traditional passcodes. And it can be more privacy-preserving than fingerprint scanners, since fingerprints are unique identifiers that potentially reveal more than facial features do. It also adds a layer of transparency around who is signing in.

The risks of facial recognition technology

However, while facial recognition technology can increase security, it may also threaten individual privacy and raise accessibility and equity concerns.

The technology has previously been accused of displaying racial bias and of unfair impacts on vulnerable populations. GSA reviewed this area thoroughly over the last couple of years before deciding to use the technology. Still, it is something that will need to be monitored closely and consistently.

Beyond this, without physical presence or presentation of a matching ID, validation carries less certainty. Furthermore, if a data breach does occur, an additional layer of sensitive personal information (SPI) is exposed. There is also the possibility that the system can be spoofed with photos or deepfakes, and that people with similar facial features may gain access to others' accounts (identical twins are the hardest case).

What can be done to mitigate these risks? 

When it comes to people trying to spoof the system with photos or deepfake technology, this is an area GSA will need to work hard to protect against. The understanding is that the facial comparison will be one-to-one, similar to presenting an ID card in person, but there is risk if those trusted images are somehow compromised. Accuracy would be lower if the system instead compared faces on a one-to-many basis, searching a gallery rather than checking against a single enrolled image. Either way, false positives and false negatives are both possible, and algorithm results will vary across demographics.
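GSA has not published the details of its matching pipeline, but as a rough illustrative sketch (all names, embeddings and the threshold below are hypothetical, not Login.gov's), a one-to-one check typically reduces to comparing the live capture's feature embedding against a single enrolled template and applying a similarity threshold:

```python
import math

MATCH_THRESHOLD = 0.80  # illustrative; real systems tune this per algorithm


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def verify_one_to_one(live_embedding, enrolled_embedding):
    """One-to-one check: compare the live capture against the single
    trusted enrolled template, like showing an ID card in person."""
    return cosine_similarity(live_embedding, enrolled_embedding) >= MATCH_THRESHOLD


# Toy vectors standing in for the output of a face-recognition model.
enrolled = [0.9, 0.1, 0.3]
same_person = [0.88, 0.12, 0.28]  # close to the enrolled template
impostor = [0.1, 0.9, 0.2]        # far from it

print(verify_one_to_one(same_person, enrolled))  # True
print(verify_one_to_one(impostor, enrolled))     # False
```

Raising the threshold makes spoofing and look-alike matches harder (fewer false accepts) at the cost of rejecting more genuine users, which is exactly the trade-off GSA will have to tune, and why a compromised enrolled template undermines the whole check.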

To ensure facial recognition technology is living up to its promise, GSA will need to continuously test and re-evaluate its accuracy. As with anything else that presents a potential cyber threat, it is not enough to clear a hurdle once: the technology will need to withstand the bevy of attacks that threat actors mount every day. Even the strongest wall can, over time, develop cracks that invite unwelcome guests.
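That kind of ongoing evaluation boils down to tracking two error rates, false accepts and false rejects, broken out by demographic group. A minimal sketch of such a test harness (the group names and counts below are made up for illustration; a real evaluation needs large, representative samples) might look like:

```python
def error_rates(trials):
    """Compute (false-accept rate, false-reject rate) from labeled trials.

    trials: list of (accepted, is_genuine) pairs, where `accepted` is the
    system's decision and `is_genuine` says whether the attempt really
    came from the account holder.
    """
    impostor_decisions = [a for a, genuine in trials if not genuine]
    genuine_decisions = [a for a, genuine in trials if genuine]
    far = sum(impostor_decisions) / len(impostor_decisions)          # impostors wrongly accepted
    frr = sum(1 for a in genuine_decisions if not a) / len(genuine_decisions)  # genuine users wrongly rejected
    return far, frr


# Hypothetical per-group test results.
by_group = {
    "group_a": [(True, True)] * 95 + [(False, True)] * 5
             + [(False, False)] * 99 + [(True, False)] * 1,
    "group_b": [(True, True)] * 90 + [(False, True)] * 10
             + [(False, False)] * 98 + [(True, False)] * 2,
}

for group, trials in by_group.items():
    far, frr = error_rates(trials)
    print(f"{group}: FAR={far:.2%} FRR={frr:.2%}")
# group_a: FAR=1.00% FRR=5.00%
# group_b: FAR=2.00% FRR=10.00%
```

A gap between groups, like the doubled error rates for "group_b" above, is precisely the demographic disparity that continuous monitoring is meant to surface before it harms real users.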

Users also play a role

In addition to GSA putting safeguards in place to protect users, some of the responsibility will fall on the users themselves.

First, users will want to keep their digital trails as clean as possible. Threat actors often gain access to one account by compromising another. Using a secure password manager to store passwords and being vigilant about the links you click can go a long way toward keeping threat actors at bay. Users would also be wise to sanitize their social media accounts so they are not handing threat actors the information needed to break in. For example, an identification photo shared with Login.gov should be used for that purpose alone; posting it on social media would be unwise.

To take this further, Login.gov could offer safety training or address these risks explicitly on its website. This would ensure that users are familiar with best practices and are not inviting risks beyond those inherent in the technology itself.

Looking ahead

Technology is improving all the time; what was too risky in the past could become your best option with a tweak or two. Login.gov adopting facial recognition is a significant moment for the technology and could spur broader adoption, but embracing it does not mean it is free from risk. Login.gov and its users each need to be smart about managing security and, when appropriate, work together to keep identities safe and protected.

Jennifer Mahoney is manager of data governance for privacy and protection at Optiv.

Copyright © 2024 Federal News Network. All rights reserved.