A change to TikTok’s deepfake policies this week places stricter rules on the spread of AI-generated content.
In an update to the platform’s Community Guidelines, TikTok said that moving forward all such videos must clearly disclose that they were created with the aid of AI.
Prior to the update, TikTok’s rules regarding synthetic and manipulated media only stated that users must not spread content designed to “distort the truth of events” or “cause significant harm” to any individuals featured in the video.
TikTok also added that all synthetic media containing “the likeness of any real private figure” will now be outright banned. Deepfakes that show public figures violating any of the app’s policies, such as uttering remarks deemed hate speech, are also banned.
Synthetic media that depicts a public figure endorsing a product without their consent is against the guidelines as well. The decision comes shortly after a deepfake advertisement of podcaster Joe Rogan endorsing a supplement went viral online.
With the spread of easy-to-use AI tools, deepfakes and other forms of manipulated media have skyrocketed in popularity on TikTok. Videos that use AI voice technology to make former President Donald Trump and President Joe Biden argue over video games have gone especially viral in recent weeks.
TikTok’s policy change also comes amid an uncertain future for the app, as U.S. politicians attempt to ban it. The social media platform was recently banned on government devices in the U.S. as well as in a multitude of other countries.
Also this month, Republicans on the U.S. House Foreign Affairs Committee voted in favor of legislation that would give Biden the ability to ban foreign technology deemed a threat to national security, including TikTok.
TikTok’s CEO, Shou Zi Chew, is set to testify before Congress this week.