Authorities announced that they will begin using live facial recognition cameras in London, a move that is being criticized by civil liberties groups.
The Metropolitan Police Service announced today that it will begin using live facial recognition technology in “specific locations in London.” Authorities said the cameras will be focused on a “small, targeted area to scan passers-by.”
The BBC reports that the technology was previously trialed at Stratford’s Westfield shopping center and in the West End of London.
“The Met will begin operationally deploying [live facial recognition] at locations where intelligence suggests we are most likely to locate serious offenders,” Assistant Commissioner Nick Ephgrave said in a statement. “Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences.”
According to London authorities, the areas where the technology will be deployed will be marked with signs.
The use of facial recognition technology by law enforcement has been criticized in the United States, with a coalition of nearly 40 digital rights and civil rights groups calling for a ban on the government’s use of the technology.
Some U.S. cities, including Oakland, California, and San Francisco, have banned the technology. Recent studies have shown that the technology exhibits racial bias, and others have found it prone to misidentifying people as criminals.
The decision by the Metropolitan Police Service to use facial recognition technology in London is facing similar criticism.
Big Brother Watch, a privacy and civil liberties group in the United Kingdom, called the announcement a “breath-taking assault on our rights.”
“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the U.K.,” Silkie Carlo, director of Big Brother Watch, said in a statement. “It flies in the face of the independent review showing the Met’s use of facial recognition was likely unlawful, risked harming public rights and was 81% inaccurate. This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”