
The 8 biggest AI flops of 2023

AI won’t be taking all of our jobs—yet.


Tricia Crimmins


2023 was deservedly dubbed “the year of AI”: Adobe Photoshop unveiled its generative fill feature, the world got more comfortable with ChatGPT, and workers across the globe wondered if artificial intelligence would take their jobs.


Luckily for those who see AI as a threat, the technology reminded us this year that it makes mistakes just like we do. Read on for the 8 biggest AI flops of 2023.

8 biggest AI flops of the year

1) Chatbot advises people to lose weight

In March of this year, the National Eating Disorders Association (NEDA) abruptly fired its helpline staff and replaced them with Tessa, a chatbot. But when survivors of eating disorders and individuals in recovery from disordered eating tried out Tessa, they found that the chatbot advised them to lose weight. Intentional weight loss during recovery from an eating disorder or disordered eating can exacerbate harmful symptoms.


In a May interview with the Daily Dot, activist Sharon Maxwell described the potential consequences of the chatbot’s recommendations. She said if she had talked to Tessa while she was struggling with an eating disorder, “I don’t believe I would be here today.”

2) AI creates inappropriate headshots for a Black woman

Creating professional headshots with AI was all the rage in July, but it turned out that Remini, the application making the photos, didn’t treat all headshots equally. Lana Denina, a Black artist, shared the headshots Remini had created for her on X: The photos show Denina’s cleavage and depict her wearing a blazer with nothing underneath. Denina said the reference photos she submitted to Remini showed her fully clothed.

Denina said that the AI oversexualized her for having the features of a Black woman, which have been “fetishized for centuries.” Her experience with AI highlights the racial biases baked into its systems.  


3) AI creates 100+ homogeneous images of autistic people

Artificial intelligence has demonstrated its biases and narrow view of the world time and again. When Jeremy Davis, who is autistic, asked AI to generate photos of “an autistic person” in a “lifelike,” “photoreal” style over 100 times, it produced images of white, able-bodied, thin men every time. In a TikTok about the experiment, Davis said the system only produced two photos of women—who were also white.

He used his experiment as a public service announcement to anyone hoping to use AI in their work.

“In order for AI to be an effective tool, you have to be smarter than the AI,” Davis says in his TikTok. “We must be aware of its limitations and pitfalls.”


4) AI creates bikini-clad women with deformed limbs and teeth 

Many became more familiar with the concept of the uncanny valley after it trended on TikTok at the beginning of November. But at the start of this year, the uncanny valley was on full display in viral, AI-generated photos of women in bikinis. When viewers looked closer, they noticed that the women had extra limbs or strange-looking teeth.

Some of the women were more obviously fake, like one image that shows a woman’s head and shoulders growing out of a pair of legs. 

A meme even grew out of the bikini-wearing AI women, as people made fun of men who said the photos signified that “it’s so over” for real women.


5) AI can’t make an orange slice look like a lime slice

There have been a couple of success stories featuring generative AI imaging this year, but the technology isn’t all-powerful. Sometimes, it can’t even make a photographed orange slice look like a lime slice.

A TikToker tried out Canva’s AI-powered generative fill tool to change an orange slice into a lime slice in a photograph, but the tool made the lime slice far too small and out of proportion with the hands in the original photo.

“I was testing what Canva’s AI could do,” the TikToker wrote in a comment on their video. “Which is [quite a] lot and not much at the same time.”


6) AI unsuccessfully tries to animate a photo to create a talking person

Deepfakes, AI-generated videos in which people are digitally altered to look like someone else or to say something they never said, can look and sound convincing enough to spread misinformation and erode trust in news outlets. But other AI-generated videos don’t pass the smell test.

In June, a scammer using AI tried to animate a photo of a woman to make it look like she sent a video of herself speaking—but the woman’s mouth doesn’t even open while she speaks. 

“Hi baby! How are you? It’s me, Meme Heater. This is real, legit,” the voice in the AI-generated video says. “Once you pay your fee, you get your first payment immediately!”


7) AI fast-food drive-thrus make ordering a hassle

There’s collective uncertainty about whether humans will lose their jobs to AI, though in most cases the answer seems to be no. In one case, AI was put in charge of a fast-food drive-thru and did far more harm than good.

“Julia,” an AI voice assistant, was “hired” at White Castle in August to take customers’ orders at the drive-thru, an already tedious process. But when customers arrived at the ordering screen, there was an extra step: They had to verbally accept Julia’s terms and conditions.

And McDonald’s has been incorporating AI into its in-store operations over the past few years, but that doesn’t mean non-human “employees” always get things right. In February, one McDonald’s customer said that an AI drive-thru worker added an extra fountain beverage, plus eight additional sweet teas, to her order. The customer didn’t pick up—or pay for—the food.


8) Google’s ChatGPT competitor gets basic facts wrong

A huge issue with AI language models is their hallucinations, or their habit of presenting falsehoods as facts and even backing them up with fake references. That’s why it’s so important to fact-check any work that was created (in part or entirely) using AI. Google, however, failed to do this in an advertisement for its ChatGPT competitor, Bard.

In a February tweet showcasing the possibilities of Bard, the AI model said that the James Webb Space Telescope (JWST) took the very first pictures of a planet outside our solar system. But that’s not true: Astronomers photographed exoplanets before the telescope even launched. One of those astronomers responded to Google’s tweet about Bard.

“Speaking as someone who imaged an exoplanet 14 years before JWST was launched,” Bruce Macintosh tweeted, “it feels like you should find a better example?”
