Facebook isn’t just uncannily good at guessing which friends to tag in your photos. The social media site uses saved facial structures to suggest tags, and some people see that as an invasion of privacy.
In a suit settled last month, people in Illinois accused Facebook of violating the Illinois Biometric Information Privacy Act, a law created to block companies from gathering biometric data from Illinois residents without their consent. The plaintiffs alleged Facebook did exactly that by using their personal data without permission through the suggested-tag feature.
Facebook agreed to pay $550 million to settle the lawsuit. It also now allows users to opt out.
Any James Bond fan has seen the spy use a facial scanner to unlock a secret room. But what was once only featured in futuristic movies has increasingly become a tool for companies across a cross-section of industries.
It’s also raising plenty of concerns.
Biometric systems scan a physical feature, such as a person’s fingerprint or face, and cross-check it against a database. When it’s used to unlock a device or app, the technology is widely liked.
But it is also revolutionizing the information companies and governments store about their users and citizens.
What is biometric data?
Biometric data is a method of identification that uses unique physical characteristics. And it’s more commonplace than you think: Apple users, for example, use it every day to unlock their devices with a fingerprint.
Biometric data, a term that essentially boils down to “life measurement,” gives companies newfound ways of securing accounts, but also of tracking people.
When a user opts into biometrics, a physical feature like their facial structure, fingerprint, or voice is stored on a server. The biometric data acts like a password for that person. The company recording the biometric then matches the characteristic to the user in a process of identification and authentication.
Your account is then unlocked, if that’s the option you’ve chosen, or your identity is sent to the police, who use the data to target individuals.
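At its core, that matching step compares a stored template against a fresh scan and decides whether they are close enough. Below is a minimal Python sketch of the flow, not any vendor’s actual API: the feature-vector template format, the cosine-similarity comparison, and the 0.90 threshold are all illustrative assumptions.

```python
# Rough sketch (not any vendor's real API): biometric templates modeled as
# numeric feature vectors and matched with a similarity threshold rather than
# exact equality, since no two scans of the same face are ever identical.
import math

THRESHOLD = 0.90  # assumed cutoff; real systems tune this to balance
                  # false accepts against false rejects

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

enrolled = {}  # user_id -> stored template (feature vector)

def enroll(user_id, template):
    """Enrollment: store the user's template on the server."""
    enrolled[user_id] = template

def verify(user_id, fresh_template):
    """Authentication: does a fresh scan match this one user's template?"""
    stored = enrolled.get(user_id)
    return stored is not None and cosine_similarity(stored, fresh_template) >= THRESHOLD

def identify(fresh_template):
    """Identification: search every stored template for the closest match."""
    best_id, best_score = None, 0.0
    for user_id, stored in enrolled.items():
        score = cosine_similarity(stored, fresh_template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= THRESHOLD else None
```

The threshold is the key design choice: because two scans of the same person never match exactly, every biometric system trades off false accepts against false rejects.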
While fingerprint data is considered mainstream, voice and facial recognition are a bit more complex. With voice biometrics, the user’s speech is digitized based on vocal tones. Facial recognition operates on a similar idea: it maps up to 80 unique characteristics of a face, such as the distance between the eyes, the depth of the eye sockets, and the curve of the jaw, to create a set of nodal points it then references.
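To make the nodal-point idea concrete, here is a toy Python illustration rather than a production algorithm: given (x, y) coordinates for a few facial landmarks, it derives a handful of the distance measurements a recognizer might use and compares two faces by how far apart their measurement vectors fall. The landmark names and the three measurements are assumptions for illustration; real systems extract dozens of features automatically from the image.

```python
# Toy illustration of nodal-point measurements from facial landmarks.
# The landmark coordinates are assumed to come from some face-detection step;
# real recognizers use up to ~80 such measurements, not three.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nodal_points(landmarks):
    """Turn raw (x, y) landmarks into a small measurement vector."""
    eye_distance = dist(landmarks["left_eye"], landmarks["right_eye"])
    nose_to_chin = dist(landmarks["nose_tip"], landmarks["chin"])
    jaw_width = dist(landmarks["left_jaw"], landmarks["right_jaw"])
    # Normalize by eye distance so the vector doesn't depend on image scale.
    return [1.0, nose_to_chin / eye_distance, jaw_width / eye_distance]

def face_difference(landmarks_a, landmarks_b):
    """Euclidean distance between two faces' measurement vectors."""
    a, b = nodal_points(landmarks_a), nodal_points(landmarks_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Example: two scans of the same face should produce a tiny difference.
scan_1 = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose_tip": (50, 60), "chin": (50, 95),
          "left_jaw": (25, 80), "right_jaw": (75, 80)}
scan_2 = {k: (x + 1, y - 1) for k, (x, y) in scan_1.items()}  # shifted copy; distances unchanged
print(face_difference(scan_1, scan_2))  # 0.0 -> same measurements, likely the same person
```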
Having a password that is entirely unique to each user is one of the safest forms of account security, according to Richard Bird, chief customer information officer at Ping Identity, which has researched the adoption of biometrics.
“IT and security professionals also see identity federation (single sign-on) and biometric authentication as two of the top five most effective security controls,” Bird’s research stated.
Most companies are predicted to move toward implementing biometrics. Spiceworks, another IT company, reports that 86% of organizations will adopt some form of biometric technology by 2020.
Privacy is the major factor deterring companies from using biometrics. Laws like the one in Illinois discourage many companies from developing this form of data collection because the consequences of misuse are high and, as Facebook learned, can include costly penalties.
Who is using biometric data?
The oldest use of biometrics was in law enforcement, with forensics in criminal trials. But recently, biometrics have been implemented across industries from banking to tech.
PayPal, Facebook, Amazon, and Bank of America are just a few of the major companies that use biometric data in their mobile apps. While Facebook scans facial features to suggest tags, financial firms like PayPal and Bank of America use biometrics to let customers unlock their apps.
Even at the airport, biometric scanners are popping up. Delta Air Lines was the first to implement facial scanners, in Georgia this past December, CNBC reports. Instead of presenting tickets, passengers scan their faces, with the end goal of shortening lines. Participation is still optional, however.
Biometrics are not just expanding in the private sector. The biometric most commonly being used by governments worldwide is facial scanning and recognition.
In the United States, the FBI and state law enforcement use facial recognition to identify suspects in video footage. They then use a combination of artificial intelligence and algorithms to find a person’s face within databases that store millions of faces.
Georgetown Law’s Center on Privacy and Technology reported in 2016 that half of Americans’ biometrics are in the FBI’s facial recognition databases. But the agency is only allowed access to the biometrics from 16 states, with the data mostly pulled from state IDs.
The U.S. is not the only country to use biometrics to surveil citizens. Russia, China, Japan, Israel, and much of Europe have also adopted facial recognition, according to the Guardian. China leads the pack in live facial recognition of public spaces.
Why should you be concerned about the use of your biometrics?
Anyone who is apprehensive about giving companies or the government their personal data has valid reason to be concerned about the adoption of biometrics.
A server holding biometric data is hackable just like any other server with private information. But a biometric server holds deeply personal information compared to a mere eight-character password, and unlike a password, a face or fingerprint can’t be changed. Hackers who steal biometric data open new doors for identity theft.
Numerous companies, like Clearview AI, have also scraped people’s images without their consent.
More concerning is the threat biometric systems pose to people of color.
Joy Buolamwini, a researcher at the MIT Media Lab’s Civic Media group, studies how racism gets coded into facial recognition. Her study of facial recognition systems found an error rate of 0.8% for light-skinned men compared to 34.7% for dark-skinned women.
To demonstrate this, Buolamwini shows the blatant inaccuracies that facial recognition engines like Amazon’s Rekognition and Microsoft’s Azure produce when applied to famous women of color. In her video, prominent women including Shirley Chisholm, the first Black congresswoman; former first lady Michelle Obama; champion tennis player Serena Williams; and philanthropist Oprah Winfrey were all inaccurately characterized.
Most were classified as male by the AI systems.
“Old burns, new urns collecting data chronicling our past, often forgetting to deal with gender, race, and class,” Buolamwini said. “Again, I ask, ‘Ain’t I a woman?’”
In the video, facial recognition fails to make sense of the hairstyle Obama wore as a child. Later on, it mistakes a photo of Oprah Winfrey for the former first lady with 52% certainty.
In its current state, facial recognition misidentifies people of color based on their biometric data. It not only reinforces racial biases against women of color; the errors also perpetuate racial inequalities in the justice system.
ProPublica found that a risk assessment program used in U.S. courts inaccurately flagged Black defendants as more likely to re-offend than white defendants. The Correctional Offender Management Profiling for Alternative Sanctions program falsely flagged Black defendants 45% of the time, compared with 24% for white defendants.
Biometric data continues to penetrate both the private and public sectors. But proper regulations have yet to be implemented to prevent racial biases from hitching a ride on code that has the power to identify everyone in the world.
Although members of Congress have spoken out about facial recognition, nothing is in the works that might halt the technology’s spread.