
Meta’s new plan to battle fraudulent reviews could still leave businesses languishing

Whether it changes anything is an open question.

Connor Bulgrin

Microsoft Advertising, Facebook Ads Manager, and Google Ads app icons are seen on an iPhone.

Meta recently launched an updated Community Feedback Policy detailing how it will moderate reviews for the 200 million businesses and countless products for sale on Facebook and its sister sites. 

The decision comes after years of complaints from business owners concerned about fake negative reviews that sully their reputations, as well as false or manipulated positive reviews (often bought by offering incentives to customers) that boost the profiles of competitors.

And thanks to a Federal Trade Commission (FTC) that is interested in reining in big tech, fake reviews have become an object of increasing regulatory scrutiny. 

In October, the FTC put Facebook and hundreds of other companies legally on notice about the consequences of doing too little to combat fake and manipulated reviews, a step that makes it easier for the agency to file civil suits and issue penalties.

“Fake reviews and other forms of deceptive endorsements cheat consumers and undercut honest businesses,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in an accompanying statement. “Advertisers will pay a price if they engage in these deceptive practices.” 

Facebook did not respond when asked if its policy update was motivated by the FTC’s recent actions. While many of the companies on the FTC’s list were chain restaurants and clothing stores that might benefit from fake reviews, Facebook is in the unique position of running the platform where those problematic reviews live.

Facebook has long moderated reviews, but its process has not been transparent, and businesses have complained they have no recourse when flooded with fake reviews. In 2018, Alta Strada Mosaic, an Italian restaurant in Virginia, was spammed with 71 one-star Facebook reviews in a single day from accounts largely based out of town. Only after a local newspaper picked up the story, and a seven-month back-and-forth with Facebook, were the reviews finally removed.

Facebook’s tens of thousands of moderators are spread across the globe and highly compartmentalized. A former Facebook moderator and an organization that works closely with online moderators said they were unfamiliar with any previous human efforts to moderate reviews on Facebook. 

“I have no idea where they even do product review stuff,” the former moderator told the Daily Dot. “They don’t like to share any information about where the work is done.”

When reached by the Daily Dot, a Facebook spokesperson did not respond to questions about moderating reviews but did link to a March press release in which Facebook detailed a lawsuit it had filed against an individual who provided a fake review service intended to artificially boost businesses’ ratings.

According to Aaron Minc, an attorney who helps clients deal with fake reviews and negative publicity, prior to the updated policy, Facebook would only remove a review that was “a flagrant and/or obvious violation of Facebook’s community standards.” 

Until the recent policy update, Facebook’s community standards did not explicitly forbid fake reviews, only the buying or selling of them.

Facebook’s new Community Feedback Policy forbids reviews meant to defraud, deceive, or exploit others. Reviews must now be based on a reviewer’s direct experience, and incentivized reviews must comply with Facebook’s disclosure rules. The new rules are meant to promote honesty and transparency in Facebook’s review system.

Reviews have serious consequences for businesses and products on Facebook. Consumers are more likely to buy products on Facebook than on any other social media platform, and 80% of them are more likely to shop at a local business with positive reviews on its Facebook page. Facebook’s algorithm is also influenced by the quality of reviews a business receives: businesses flooded with negative reviews often have to pay more to reach a wide audience through Facebook’s advertising services.

Facebook also announced that its enforcement procedures will utilize a combination of artificial intelligence and human review. In a statement, Facebook said that “it can take time for the various parts of our enforcement mechanisms to learn how to correctly and consistently enforce the new standard [for moderating reviews]. But as we gather new data, our machine learning models, automated technology and human reviewers will improve” in their accuracy.

Customers and businesses accused of violating Facebook’s fake review policy will face punishments meant to deter attempts to game the system. They could lose access to their product listings and advertising services. In extreme cases, they could lose “access to any or all Meta products or features.”

(Facebook recently rebranded as Meta.)

But many critics are not convinced. Facebook whistleblowers Frances Haugen and Daniel Motaung have critiqued Facebook for its reliance on machine learning to moderate content, arguing that AI will never be able to replace human judgment. Regulators at the FTC issued a similar warning, arguing that an overreliance on AI can promote inaccurate, biased, and discriminatory enforcement while leading to increased consumer surveillance. 

Business owners have already run into problems with the AI that moderates another of Facebook’s business-facing products: its advertising services. Many companies have reported trouble regaining access to Facebook’s ad tools after the company’s AI mistakenly flagged their content, leaving account owners with sudden and severe revenue losses while they waited for a human reviewer to process their appeals, a process that can take months.

But just announcing a new plan to combat fake reviews isn’t enough. It has to work, too. If it doesn’t work as intended, the tech giant could still face lawsuits.

Last month, the Department of Justice reached a settlement with Facebook after the company engaged in discriminatory algorithmic advertising that violated the Fair Housing Act.

And Facebook faces the unenviable task of handling moderation amid businesses that wish to sabotage each other, angry customers who can buy reviews in spades, and bot programs that can flood a page. Sorting through all that, alongside genuine reviews, is a massive undertaking.

But while a new policy addressing a problem that’s long plagued the site is a good start, there’s no guarantee it will work with the kind of efficacy a small business might need when its livelihood is at stake. 

