
FTC says AI companies need proof before they can claim their products work better than humans

‘You don’t need a machine to predict what the FTC might do when those claims are unsupported.’


Jacob Seitz


The Federal Trade Commission posted guidance on Monday warning companies that build artificial intelligence products against using false or misleading claims in their marketing.


The guidance, which was posted on the FTC’s website, comes amid a surge in products marketed under the AI label. From art-generating bots like DALL-E 2 and DALL-E Mini to writing and search tools like ChatGPT, AI products are cropping up everywhere, and other companies are looking for a piece of the pie.

Just today, Snapchat announced its own AI chatbot, which will be incorporated into the app. Last week, the popular note-taking and productivity app Notion launched an AI assistant that helps users write and edit notes.

But not every AI has been useful. Earlier this month, Microsoft launched a ChatGPT competitor built into Bing, and the chatbot has already had some embarrassing exchanges with users, including threatening to expose a reporter for murder.


Meanwhile, far-right social media site Gab also pledged to build its own AI.

The FTC guidelines urge companies to rein in their often lofty claims about what their AI can do.

“We’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior,” the guidelines said. “Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.”

The Commission also urged developers to be aware of the risks they take on by creating AI products and releasing them into the world, including absorbing the blame if something goes wrong.


“If something goes wrong—maybe it fails or yields biased results—you can’t just blame a third-party developer of the technology,” the FTC said. “And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.”

Other guidelines include providing proof that a given AI product can do something better than a non-AI product, as well as ensuring that products billed as AI actually use artificial intelligence.

“Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it,” the FTC said. Some chatbots have come under scrutiny for saying they utilize AI, when in actuality they are just rules-based systems.

The FTC added that developers “don’t need a machine to predict what the FTC might do when … claims are unsupported.”
