NEW ORLEANS – Ahead of tomorrow’s Smart and Sustainable Cities Committee hearing on an ordinance to ban the use of facial recognition technology and increase oversight of the city’s use of surveillance tools, the ACLU of Louisiana and the Eye on Surveillance Coalition pointed to new evidence demonstrating the inherent racial bias of facial recognition technology. The ordinance is scheduled to go before the full City Council on July 16. 

Last month, the ACLU of Michigan filed a complaint against the Detroit Police Department for the wrongful arrest of Robert Williams, a Black man who was misidentified by facial recognition software as the perpetrator of a crime. Detroit police handcuffed Robert on his front lawn in front of his family and took him to a detention center, where he was locked up overnight. After an officer acknowledged during an interrogation that “the computer must have gotten it wrong,” Robert was finally released — nearly 30 hours after his arrest. 

“When a father is wrongfully arrested and jailed on the basis of a false facial recognition match, that should be a wakeup call to every community about the risks and inherent racism of facial recognition technology,” said Alanah Odoms Hebert, ACLU of Louisiana executive director. “Banning the use of facial recognition technology is a concrete step New Orleans can take to divest from dangerous policing tactics that target Black and Brown communities.” 

A 2018 study by Joy Buolamwini of the MIT Media Lab and computer scientist Timnit Gebru found “substantial disparities” in facial analysis accuracy: some algorithms misclassified Black women nearly 35 percent of the time, while nearly always classifying white men correctly. A subsequent study by Buolamwini and Deb Raji at the Massachusetts Institute of Technology confirmed that these problems persisted with Amazon’s software.

Lucy Blumberg of Jewish Voice for Peace New Orleans said: “We refuse to let New Orleans, a predominantly Black city, be used as a testing ground for invasive and racially biased surveillance tools like facial recognition that have no record of reducing crime, even as they increase incarceration. Tools like stingrays, automatic license plate readers, tracking technologies like BriefCam, and facial recognition are often implicated in human rights abuses nationally and around the world. Instead of investing in surveillance tools that generate profit for private companies using our taxpayer dollars, let's invest in community safety through things like job training, affordable housing, and mental health care that keep all of us safe.” 

In December 2019, the non-partisan National Institute of Standards and Technology (NIST) published its own damning report on bias in face recognition algorithms, finding that the systems generally work best on middle-aged white men’s faces and perform far worse for people of color, women, children, and the elderly.