
NEWS/COMMENT regulation

Tech firms unite to fight face checks ban

A coalition of technology industry groups – whose members include Facebook, Google and specialist biometrics firms – is urging US lawmakers to go easy on facial recognition technology (FRT).

The Security Industry Association last month joined forces with the US Chamber of Commerce, the IBIA (International Biometrics + Identity Association) and others to write to Congressional leaders, asking them to resist imposing a blanket moratorium on FR technology because it “insufficiently protects civil rights and liberties and individual privacy”. Noting that some US cities have already banned the public use of FRT, the coalition asked influential House Speaker Nancy Pelosi and other leaders to instead provide a consistent set of rules for FR across the US.

They wrote: “As organisations representing users, developers and vendors of FRT, we all agree that the technology should be used carefully with proper guardrails balancing privacy and civil liberties considerations. However, we are concerned that a moratorium on the use of FRT would be premature and have unintended consequences not only for innovation, safety and security but for the continued improvement of the technology’s accuracy and effectiveness.”

They pointed out that biometric facial verification, done properly, is much more accurate than human facial verification. And with FRT used across airline passenger checks, criminal investigations and fraud detection, they warned: “A moratorium on federal use would create uncertainty and have a chilling effect on innovation throughout the facial recognition ecosystem.”

Meanwhile trade association NetChoice – whose members include Facebook, Twitter, eBay, Google and PayPal – last month launched its own campaign to support the public use of FRT in Massachusetts, where a ban is also on the cards. NetChoice is calling on stakeholders, community leaders and citizens to sign a petition urging Massachusetts lawmakers to reject the proposed moratorium.

“Every day, facial recognition technologies help law enforcement to generate leads in cases such as homicide, rape, armed robbery and other violent crime, as well as identifying elderly persons stricken with dementia, finding lost and missing children, identifying homeless persons with mental illness and identifying deceased persons,” said NetChoice VP Carl Szabo.


“A moratorium on FRT not only goes against what Bay Staters want, it denies law enforcement tools needed to help keep our communities safe.”

NetChoice said survey data from Pew shows that most Americans (56%) trust law enforcement to use FRT responsibly, and that a new poll by Savanta found Massachusetts residents – Bay Staters – are even more supportive of using FRT responsibly for law enforcement than the general population.

Szabo added: “The results confirm that despite calls by some for a moratorium, people across Massachusetts value this technology to keep their communities safe and help law enforcement do their jobs more effectively.”

These moves come in the same month that Berkeley in California became the latest US city to ban police and other governmental use of FRT – joining San Francisco and Oakland in California, and Somerville, Massachusetts. The state of California also recently banned its police from using body-worn facial recognition for at least three years.

Meanwhile New York City councillors are currently debating a new law that would regulate the use of FRT by landlords and business owners. Building owners using FRT would have to register with the city authorities, and business owners would have to post signs alerting customers to the fact that face surveillance is being used.

• On a more positive note, face recognition platform provider FaceFirst has announced that it is partnering with Congressional lawmakers to influence future facial recognition privacy legislation as it affects retailers. A team of FaceFirst executives, led by CEO Peter Trepp, is briefing lawmakers to help push through a bipartisan Commercial Facial Recognition Privacy Act which would protect consumer privacy while allowing retailers to use FR technology.

COMMENT

Sometimes biometric tech companies can be their own worst enemy. Take the story of Google’s efforts to improve the facial recognition on its latest Pixel 4 smartphone.

Google wanted to ensure the phone’s software was as racially unbiased as possible, given the notoriously poor past performance of FR systems in this crucial area. So the company made efforts to gather as much appropriate image training data as possible, and asked outside contractors to source facial images of people with darker skin tones. So far so good.

But what transpired, according to reports in the New York Daily News, the New York Times and Business Insider, is that for whatever reason Google’s contractors set about gathering the images by targeting homeless black people and students. Researchers said they were told to rush subjects through consent forms and obfuscate exactly what the photos were being used for. According to one source, the subjects were offered a $5 gift card, and the researchers were specifically told to target the homeless because they were less likely to talk to the press.

As a result of the ensuing bad publicity, a Google spokesperson told the New York Times and Business Insider that the company was suspending its facial recognition research while it investigated the matter. What a shame if that damages Google’s efforts to combat algorithmic bias and deploy facial recognition that works across different skin tones and face shapes. (All this is quite apart from the other controversy currently affecting the Pixel 4: that its facial recognition can reportedly unlock the phone even when the user’s eyes are shut, leaving open the possibility that someone could get into their phone while they are asleep.)

Google’s FR saga has echoes of our page 1 report on the continuing ‘war of words’ between Amazon and the American Civil Liberties Union (ACLU) campaign group over Amazon’s Rekognition face ID software. The ACLU says Rekognition falsely matched 27 sports stars to criminal mugshots. Amazon says the ACLU only got these matches because it used an 80% confidence threshold (meaning two faces judged 80% similar are declared a match). Instead, Amazon says, the ACLU should have used the 99% confidence setting it recommends for law enforcement users. The ACLU hit back and said 80% is the software’s default setting, so it was a valid test. Amazon in turn wants the ACLU to release its test data so the results can be checked, but the ACLU has refused.

Amid all this sparring, two thoughts occurred: why doesn’t Amazon sell Rekognition to its law enforcement customers with a default 99% confidence threshold? And why won’t the ACLU release its test details if it has nothing to hide? We asked both organisations those questions, and both of them ‘declined to comment’. Why not simply be upfront and answer each other’s accusations? Maybe Amazon and the ACLU are, like Google, being their own worst enemy – and none of that helps the advancement of FR software.
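To put that threshold argument in concrete terms: in Amazon’s Rekognition API, the caller supplies a similarity threshold when comparing faces, and only pairs scoring at or above it are returned as matches. Below is a minimal sketch using the boto3 SDK’s compare_faces call; the file names and region are placeholders, and the 99 figure simply reflects the setting Amazon says it recommends for law enforcement, against the service’s 80% default.

import boto3

# Rekognition client – region is a placeholder, not taken from the article.
client = boto3.client("rekognition", region_name="us-east-1")

def faces_match(source_path, target_path, threshold=99.0):
    """Return True if Rekognition reports a face match at or above `threshold`.

    If SimilarityThreshold is omitted, Rekognition falls back to its own
    default of 80% – the setting at the heart of the ACLU/Amazon dispute.
    """
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # only matches scoring >= threshold are returned
        )
    return len(response["FaceMatches"]) > 0

# Hypothetical usage: compare a probe photo against a mugshot at 99% rather than 80%.
# print(faces_match("probe.jpg", "mugshot.jpg", threshold=99.0))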

Tim Ring

Biometric Technology Today, November/December 2019