
Facebook is asking users to upload photos to prove they’re not a bot (or worse)

Is it a smart idea—or a potential privacy violation?


Christina Bonnington


In an effort to crack down on bots, spammers, and impersonators, Facebook is introducing an unusual new CAPTCHA-style form of identity verification. When the validity of your account is in question, Facebook may ask you to upload a photo of yourself.


To check whether it’s really you using your account, Facebook is prompting some users with a new message. “Please upload a photo of yourself that clearly shows your face. We’ll check it and then permanently delete it from our servers,” the prompt reads. Reports of Facebook testing this feature first appeared on Reddit as early as April of this year.

It’s one of several techniques Facebook uses as a check against suspicious activity.

https://twitter.com/flexlibris/status/935635282564734977


A Facebook spokesperson confirmed to Wired that this feature is designed to help Facebook “catch suspicious activity at various points of interaction on the site, including creating an account, sending Friend requests, setting up ads payments, and creating or editing ads.” Suspicious activity includes red flags like a login from Turkey just hours after you posted from California.
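
Facebook hasn’t published how it detects that kind of anomaly, but one common industry approach is an “impossible travel” check, which flags a login when the implied travel speed between two locations isn’t physically plausible. The sketch below is a minimal, hypothetical illustration of that idea in Python; the coordinates, speed threshold, and function names are placeholders, not Facebook’s actual system.

```python
# Hypothetical "impossible travel" check, not Facebook's implementation.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class LoginEvent:
    lat: float        # latitude in degrees
    lon: float        # longitude in degrees
    timestamp: float  # seconds since epoch


def haversine_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def looks_suspicious(prev: LoginEvent, curr: LoginEvent,
                     max_speed_kmh: float = 900.0) -> bool:
    """Flag the new login if the implied travel speed exceeds a plausible
    limit (900 km/h is roughly airliner speed; the threshold is arbitrary)."""
    hours = max((curr.timestamp - prev.timestamp) / 3600.0, 1e-6)
    return haversine_km(prev, curr) / hours > max_speed_kmh


# Example: a post from California followed three hours later by a login from Turkey.
california = LoginEvent(lat=36.78, lon=-119.42, timestamp=0)
turkey = LoginEvent(lat=38.96, lon=35.24, timestamp=3 * 3600)
print(looks_suspicious(california, turkey))  # True: ~11,000 km in 3 hours
```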

According to people who’ve posted on Twitter, if Facebook calls your account’s validity into question with this prompt, you’ll be locked out of your account until you upload the photo and Facebook confirms it’s you. Both the detection of suspicious activity and the verification that the uploaded photo actually shows you are handled algorithmically.
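
Facebook hasn’t said how that automated photo check works. As a rough illustration of what an algorithmic face match can look like, the hypothetical sketch below uses the open-source face_recognition library to compare an uploaded photo against a reference image; the function name, file paths, and tolerance value are assumptions, not Facebook’s implementation.

```python
# Hypothetical face-match check using the open-source face_recognition library
# (pip install face_recognition); this is not Facebook's verification pipeline.
import face_recognition


def photo_matches_profile(profile_photo_path: str, uploaded_photo_path: str) -> bool:
    """Return True if the uploaded photo appears to show the same person
    as the reference profile photo."""
    profile_img = face_recognition.load_image_file(profile_photo_path)
    uploaded_img = face_recognition.load_image_file(uploaded_photo_path)

    profile_encodings = face_recognition.face_encodings(profile_img)
    uploaded_encodings = face_recognition.face_encodings(uploaded_img)
    if not profile_encodings or not uploaded_encodings:
        return False  # no face detected in one of the images

    # compare_faces returns one boolean per known encoding; 0.6 is the library default tolerance
    return bool(face_recognition.compare_faces(
        [profile_encodings[0]], uploaded_encodings[0], tolerance=0.6
    )[0])
```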

This is the second time this month that Facebook has turned to user-submitted photos to fight inappropriate behavior on its site. Earlier in November, Facebook announced a new tool for fighting revenge porn that, counterintuitively, asks users to upload the photos themselves. In the tool, currently being tested in Australia, you upload the photo yourself via Messenger and flag it as a “non-consensual intimate image.” Facebook then hashes the image and deletes it from its servers. If someone later tries to upload the same image to the site, Facebook can recognize it and prevent it from being posted.
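
Facebook hasn’t specified which hashing algorithm the revenge-porn tool uses. The hypothetical sketch below illustrates the general idea with a perceptual hash (via the open-source imagehash library), which can recognize a re-upload even after resizing or recompression, unlike a plain cryptographic hash; the file names and distance threshold are placeholders.

```python
# Hypothetical perceptual-hash matching sketch; not Facebook's actual system.
from PIL import Image
import imagehash

# Hash of the image the user flagged as a "non-consensual intimate image."
# Only this short fingerprint would need to be stored, not the photo itself.
flagged_hash = imagehash.phash(Image.open("flagged_photo.jpg"))  # placeholder path


def blocks_upload(candidate_path: str, max_distance: int = 5) -> bool:
    """Return True if a newly uploaded image is close enough to the flagged
    hash to be treated as the same picture (threshold is arbitrary here)."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two ImageHash objects gives the Hamming distance between them.
    return flagged_hash - candidate_hash <= max_distance
```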

In both of these cases, Facebook says it won’t store your image permanently on its servers. Still, this CAPTCHA-style security measure seems like a way to kill two birds with one stone: it verifies a user’s identity, and it gives Facebook an opportunity to test and hone its facial recognition algorithms.


H/T Wired

 