The ongoing conflict between Israel and Palestine has led to an unprecedented wave of misinformation on X, the social media platform that owner Elon Musk claimed was working towards becoming “the most accurate source of information about the world.”
With each change from Musk, most notably the dismantling of the verification system, X has not only permitted the spread of fake news but has actively amplified it to the platform’s hundreds of millions of users.
But the problem isn’t just fake news. Now, X is even casting doubt on legitimate news.
The issue became apparent last week after Israeli government accounts on X shared an image of the charred remains of an Israeli baby killed following the attack by Hamas. The picture drew even more attention after conservative pundit Ben Shapiro distributed the image to his more than 6.2 million followers.
Shapiro, though, was quickly hit with claims that he had been spreading Israeli propaganda. The evidence? An X user who ran the content through an AI image detector.
“Ben Shapiro used an AI generated image to try and whip people into pro-war frenzy,” one user wrote. “Falsifying evidence and outright lies like this is exactly how we got into the Iraq war. Thankfully this time we have the technology to call out their lies.”
It turns out, however, that the image wasn’t fake and that users on X had relied on a shoddy and unproven tool for detecting AI-generated media.
To highlight the tool’s issues, others began showing how legitimate images, including the profile pictures of those who pushed the AI claim, were falsely flagged as computer-generated.
Even with pushback from forensics experts, the narrative surrounding AI-generated images has led many to deny the harm caused to both Israelis and Palestinians.
Musk’s takeover of X, as well as the proliferation of easy-to-use AI tools, has created a perfect storm for misinformation. The mere existence of technologies such as deepfakes has given many the ability to cast doubt on real media content. And X’s Community Notes feature, designed to tackle misinformation, is already being abused to label accurate reporting as fake.
An image of Israelis in Panama preparing to head home to enlist in the Israeli military was also accused of being AI-generated after being shared on X by Israel’s embassy in the country. X users pointed to perceived distortions in the photo as proof, but once again, the image was legitimate.
Shapiro’s post featured a Community Note endorsing the AI claim before the note was eventually pulled down. But Shapiro’s post is just one of many. A legitimate image showing blood on the floor of a home in Israel was also labeled as “staged” despite experts arguing to the contrary.
Clips from video games are fooling users into believing that they’re seeing legitimate combat footage. Even years-old videos out of Syria are being passed off as coming from current-day Palestine. In an example similar to Shapiro’s, Donald Trump Jr. had a post flagged as containing years-old footage even though it accurately showed a current attack by Hamas.
Prior to the war last month, a top European Union official labeled X as the biggest purveyor of fake news among all major social media sites. Since then, Musk has only solidified those concerns by promoting accounts known for spreading false information and failing to properly implement the site’s Community Notes.
With each passing day, the list of fake posts being promoted by X users who have purchased verification grows longer. And with X now paying users for going viral, the incentive to post the most inflammatory and outrageous claims is higher than ever.
Despite the backlash, Musk has continued forward unabated. Based on Musk’s openly conspiratorial views, it remains unlikely that X will become a trusted source for news anytime soon.