Right-wing commentator Jack Posobiec aired a deepfake video of President Joe Biden this week announcing the return of the military draft and the beginning of a third world war.
Experts are split on whether it was a dangerous misuse of the technology or an example of how the medium can be handled responsibly.
The video, which Posobiec described as a “sneak preview” of things to come, was shared with his more than 2 million followers on Twitter.
The footage features a chyron that references “the realities of nuclear war” and depicts Biden invoking the Selective Service Act due to growing tensions with Russia and China.
“The illegal Russian offensive has been swift, callous, and brutal. It’s barbaric,” the AI-generated Biden states. “Putin’s illegal occupation of Kyiv and the impending Chinese blockade of Taiwan has created a two-front national security crisis that requires more troops than the volunteer military can supply.”
Once the clip finishes, Posobiec appears on screen to offer commentary on the AI-generated scenario.
“What we just played for you was a sneak preview, coming attractions, a glimpse into the world beyond,” he says. “Now that was an AI, I don’t want to say a recreation but maybe a pre-creation… a pre-creation of President Biden designed and scripted by our producers here for the show of what could happen if President Biden were to declare and activate the Selective Service Act and begin drafting 20-year-olds here in the United States.”
The segment proved controversial online, with some referring to the display as “nauseatingly irresponsible.”
“Creating something like this is so nauseatingly irresponsible, in such bad faith, and such a waste of resources I can only be completely unsurprised,” one critic tweeted.
Concerns were also raised over the ability of people watching to differentiate between legitimate and manipulated media.
Despite both the tweet and Posobiec himself stating that the video was created with AI, one of the first commenters nonetheless asked whether what they were seeing was real.
“Is this legit, or some sort of AI deepfake clips?” one user asked.
Speaking with the Daily Dot, author and generative AI expert Nina Schick argued that advances in AI-related technologies will further erode trust in online content, even content explicitly labeled as manipulated.
“We’re going to see a lot more AI-generated content that will be in the context where it is marked as being AI-generated but people are still fooled,” she said.
Twitter initially responded to the video by placing a fact-check notice on the tweet. The notice was removed soon after, however, likely because the footage clearly noted that it had been created with AI.
“They took the fact-check off of this tweet and BlueAnon is melting down saying I am now attempting to create a domestic insurrection,” Posobiec tweeted. “Stay mad, freaks!”
The Daily Dot reached out to Posobiec via the contact form on his media website Human Events but did not receive a reply by press time.
On the other hand, Sam Gregory, a deepfake expert and the program director at the human rights organization WITNESS, said that the video was an example of a “legitimate use of deepfakes.”
“The video is a visualization of a hypothetical or a potential future to further a discussion, and importantly it’s contextualized as such,” Gregory told the Daily Dot. “We’ve already seen AI-generated imagery used to present potential climate futures, offer unexpected satirical voices of politicians and ‘deepfake it till they do it’, and offer perspectives on current events and realities from people who have died with resurrection deepfakes.”
Gregory also said he appreciated that Posobiec didn’t use the video to warn about the dangers of deepfakes, which he described as “an over-used technique” that “seems to contribute to undermining trust in real media,” but to focus on a political hypothetical.
“As we approach deepfakes and synthetic media regulation and norms it’s important we protect expressive speech, political speech, and satire,” he added. “However, I do think that it’s key that we also think how—particularly now when people are not clear what can and can’t be faked—that we also emphasize visible disclosure and labeling in places like TV and web use of ‘news-like’ deepfakes.”
While reactions to Posobiec’s video fell largely along political lines across social media, experts continue to argue that the issues it raises are far more complex.
Synthetic media expert Henry Ajder likewise noted that although the video clearly labels itself as AI-generated, even honestly disclosed deepfakes can produce the same problems as those made to deceive.
“This deepfake is clearly designed to grab viewers’ attention before being clearly disclosed as fake, but if taken out of context in our fractious information ecosystem, the audience response could be very different,” Ajder said in a statement to the Daily Dot. “In this respect, artistic or well-meaning deepfake content can still fall victim to the crude forms of media manipulation that are much more prominent online. Creators need to consider not just how they intend generative content to be received, but how it could be misrepresented when released ‘into the wild.’”
Such content will only become more commonplace as the barrier to entry for AI-related technologies continues to drop. Whether the public is ready for it remains genuinely unclear.