A top Customs and Border Protection official told members of the House Homeland Security Committee that an investigation into a subcontractor’s handling of images taken of travelers crossing the U.S.-Mexico border could result in criminal or civil charges.
The hearing focused on reports last month that a malicious attack on a CBP subcontractor exposed nearly 100,000 images of travelers and license plates collected at U.S. border checkpoints.
John Wagner, CBP’s deputy executive assistant commissioner of field operations, told lawmakers that the vendor, which the Associated Press identified as Perceptics, violated the terms of its contract by copying those images onto its own network.
“As far as I understand, the contractor physically removed those photographs from the camera itself and put them into their own network, which was then breached. The CBP network was not hacked,” Wagner said Wednesday.
However, CBP did not apply to the pilot program the same level of network security it maintains for its main systems. As a result, no safeguards prevented subcontractor employees from plugging a portable media drive into the network and pulling those CBP images.
The subcontractor’s removal of those photos violated the terms of its contract with CBP, Wagner said, leading CBP to terminate the contract and open an investigation. The Department of Homeland Security’s inspector general’s office will conduct its own review of the breach.
“Depending on the circumstances of how the data was taken and the intentions and why and how it was used, there potentially could be criminal actions,” Wagner said.
That lapse in security procedures sparked bipartisan concern on the committee, and led members to question whether facial recognition pilot programs run by other parts of the Department of Homeland Security can adequately protect sensitive biometric data or provide accurate results.
Most committee members approved of the agencies’ use of facial recognition technology, but called for greater transparency into the scope of those programs.
Committee Chairman Bennie Thompson (D-Miss.) said facial recognition could prove a valuable tool for national security, but said questions remain about the privacy, data security, transparency, and accuracy of agency programs.
“The American people deserve answers to those questions before the federal government rushes to deploy biometrics further,” Thompson said.
Rep. John Katko (R-N.Y.) said commonplace law enforcement tools, such as fingerprint and DNA testing, went through similar vetting procedures before gaining widespread acceptance.
“My concern is not with the efficacy of using it. My concern is that we get it right … I am very concerned about the accuracy, and that was a very big thing with DNA starting out, and now the accuracy is amazing,” Katko said.
The Transportation Security Administration, for example, has partnered with CBP since October 2017 to run a facial recognition pilot at three major airports across the country.
In its current phase, CBP and TSA are running biometric scans at the check-in, bag drop and TSA checkpoints at Hartsfield-Jackson Atlanta International Airport. At the TSA checkpoint, passengers stand in front of a camera that matches their image against photos from other government records, like passport photos.
Passengers can opt out of the biometric scan and instead ask transportation security officers to review their passport and boarding pass.
The pilot stems from CBP’s existing authority to screen the biometrics of non-U.S. citizens as they enter and leave the country.
Wagner said biometric data on foreign nationals would be stored in DHS’s Automated Biometric Identification System (IDENT), while images captured of U.S. citizens or permanent residents are held for 12 hours and then deleted from the system.
“The only reason we hold it for that short period of time is just in case the system crashes and we have to restore everything,” he said.
The Secret Service is also running a facial recognition pilot on the grounds outside the White House. The program looks to match images of Secret Service employees, who have volunteered for the pilot, as they move around the White House grounds.
Joseph DiPietro, the Secret Service’s chief technology officer, said the agency retains 30 days’ worth of images at a time under the pilot, and will delete all of its images when the pilot ends.
“We’re trying to match the individuals that are in the pilot, the volunteers, to the people who we’re seeing in those cameras. If there’s no match, there’s no record. If there is a match, then there’s a record,” DiPietro said.
The Secret Service launched the pilot last December and will conclude it in August. That window gives the algorithm an opportunity to test whether it can pinpoint the same Secret Service volunteers in summer clothes and heavy winter coats.
As for concerns about accuracy, Charles Romine, director of the Information Technology Laboratory of the National Institute of Standards and Technology, said the “very best” facial recognition algorithms that NIST has tested boast a 99.7% accuracy rate, and performed within the range of the best human examiners.
But challenges still remain in getting these facial recognition systems to accurately identify women and people of color.
Romine said those discrepancies will shrink as the facial recognition technology improves, especially if it continues at the rate that it’s progressed over the past five years, but said those challenges won’t go away entirely.
“It is unlikely that we will ever achieve a point where every single demographic is identical in performance, whether that’s age, race or sex. But we want to know just exactly how much the difference is,” Romine said.
NIST will further examine these challenges in a report it will release this fall.
But despite NIST’s rigorous testing of off-the-shelf facial recognition products, the current state of artificial intelligence doesn’t allow algorithms to show how they arrived at their answers.
“We have no direct knowledge of the convolution neural networks or machine learning, because these are submitted to us as black boxes and we don’t examine that,” Romine said.