Some members of Congress want to ban federal spending on facial recognition technology. The bills are still taking shape, but two are apparently scheduled for introduction this week.
One would prevent the Department of Housing and Urban Development from using funds for facial recognition in public housing. According to CNET, it’s to be sponsored by Reps. Yvette Clarke (D-N.Y.), Ayanna Pressley (D-Mass.) and Rashida Tlaib (D-Mich.). But fear of and opposition to facial recognition draw members of both parties.
A second bill, which is only a fragment on Congress.gov, is from Tlaib. H.R. 3857 would “prohibit federal funding from being used for the purchase or use of facial recognition technology, and for other purposes.” Tlaib introduced it Tuesday, but no text is yet available. It’ll first be taken up by the House Oversight and Reform Committee.
So we don’t know the details.
But the whole gambit looks like a clumsy attempt to control something members only vaguely understand. As a surveillance tool, facial recognition has the potential for misuse. China is said to be rapidly expanding its use of facial recognition as an instrument of state control over what little liberty the Chinese people have. Nor is the technology mature enough for face-matching evidence to be considered incontrovertible in criminal trials.
Worries about privacy are equally legitimate. But for federal researchers in facial recognition, that very respect for privacy limits the acquisition of large sets of face data for testing or training algorithms. Lord knows what Facebook does with the billions of facial images in its members’ accounts. No federal entity, though, can legally vacuum them up.
The CNET story and others bring up racial bias in algorithms — again, a legitimate concern. Federal practitioners are well aware of that potential. But context is important here. A given matching algorithm might produce varying error rates according to skin tone. It’s incumbent on systems designers to combine complementary algorithms so that a technical weakness in one doesn’t result in an application that, for example, mismatches members of one racial group at a higher rate than members of other groups. Technically, achieving neutrality is an eminently doable proposition.
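The combining idea can be made concrete with a toy calculation. The sketch below is illustrative only: the two matchers, their group labels and error rates are hypothetical, and it assumes the matchers err independently — a simplifying assumption, not measured data from any real system.

```python
# Hypothetical per-group false-match rates for two independent matchers.
# Matcher A is weaker on group_2; matcher B is weaker on group_1.
fmr_a = {"group_1": 0.01, "group_2": 0.04}
fmr_b = {"group_1": 0.04, "group_2": 0.01}

def fused_fmr(fmr_x, fmr_y):
    """False-match rate when a match is declared only if BOTH matchers
    agree (AND-rule decision fusion), assuming independent errors:
    the system falsely matches only when both matchers do."""
    return {group: fmr_x[group] * fmr_y[group] for group in fmr_x}

fused = fused_fmr(fmr_a, fmr_b)
# Each group's fused rate is far below either matcher's worst rate,
# and the disparity between the two groups disappears.
print(fused)
```

Under these toy numbers, each group ends up with the same fused false-match rate (0.01 × 0.04), so neither group is disadvantaged — the complementary weaknesses cancel. Real systems would verify this empirically rather than assume independence.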
Concomitantly, designers should be prepared to show their work. Transparency in the design and testing of biometrically enabled systems, together with objective third-party review, provides the best strategy for avoiding bias.
On the topic of algorithms, you need only go as far as the National Institute of Standards and Technology’s Information Technology Laboratory to see recent and numbingly detailed results from algorithm tests. Check out Chuck Romine’s testimony from just last month to that same House Oversight and Reform Committee. He noted that NIST has been testing this technology for nearly 20 years, and that it has come a long way.
The Government Accountability Office has also been working in the facial recognition and general biometrics area for many years. Just last month it issued a report warning about the FBI’s less-than-stellar adherence to privacy laws and policies in using its facial databases, which hold something north of 640 million photos. In May it urged NIST to provide guidance on alternative ways, such as facial recognition, of ID’ing people who apply for benefits online.
Customs and Border Protection has one of the most effective and efficient facial recognition programs anywhere. My wife and I benefited from it last fall when returning from Europe. We got through Customs clearance and into baggage claim so fast, it showed what a snail the stupid airline was. The agency doesn’t keep the images used for this purpose once a person is cleared.
CBP has been catching flak over the breach of a contractor’s system, which resulted in the loss of 100,000 pictures of people and license plates. But that’s a cybersecurity and vendor-oversight problem, not a biometric-technology one.
In some ways, facial recognition is a branch of artificial intelligence. Agencies must exercise care in the design and deployment of any biometric. Congressional overseers have a vital and legitimate interest in how it is done. I could spend a thousand words describing potential applications in civilian and military settings; Congress should enable them with good oversight. Members ought to ask tough questions. But banning this technology would be dumb and harmful to modern, digital government.