Passionfruit

Should creators be concerned about Instagram’s AI developments?

Instagram seems more interested in creating bad AI influencers than fixing its broken system.


Grace Stanley


On April 22, journalist Emanuel Maiberg of 404 Media published a disturbing investigation into Instagram’s ad ecosystem. The report revealed that Instagram is profiting from multiple ads that explicitly invite people to create nonconsensual deepfake nudes with AI apps. One particularly egregious ad had a picture of Kim Kardashian with text that read, “Undress any girl for free. Try It.” 

These ads ran on Facebook, Instagram, Facebook Messenger, and Meta’s in-app ad network from April 10 to 20. The report is particularly troubling given that deepfakes are an issue specifically impacting teenage girls — a demographic Meta has repeatedly been criticized for harming.

Only a few states have any laws addressing deepfakes. However, Meta representatives have said the company bans nonconsensual deepfakes. The company also bans ads containing adult content, saying it uses a mix of AI detection systems and human review to identify this content. Meta deleted these deepfake ads after 404 Media ran its story. But it clearly failed to detect and address them properly on its own.


So why did this happen? Well, as 404 puts it, it’s clear Instagram is either “unwilling or unable” to enforce its own policies about AI. And perhaps that’s because it has some twisted priorities when it comes to the AI tools it is investing in.

How is Instagram enforcing its AI policies?

“Instagram and Facebook are both decaying platforms that don’t just enable criminal behavior but actively profit from it and either don’t know or don’t care how to stop it,” Jason Koebler of 404 Media said on X. “Content moderation is hard. But Meta is at a point now where they are regularly unable to find obvious illegal content unless a journalist sends them a direct link to it.”


This is hardly the first time Meta has failed to enforce its own policies when it comes to AI. Back in March, NBC News found that it was running other ads for sexually explicit deepfake services. One even advertised a blurred nude image of an underage actress …


The Daily Dot