Navigating disagreements in a relationship can be fraught with challenges, from misunderstandings to an unwillingness to see the other person’s point of view. That is hard enough, but Redditor u/drawss4scoress’s girlfriend has decided to add the large language model (LLM) chatbot ChatGPT into the mix as a mediator, rather than asking a friend or therapist for advice.
“My girlfriend uses Chat GPT every time we have a disagreement. AITAH for saying she needs to stop?” the OP wrote in their headline on r/AITAH.
“Me (25) and my girlfriend (28) have been dating for the past 8 months. We’ve had a couple of big arguments and some smaller disagreements recently. Each time we argue my girlfriend will go away and discuss the argument with chat gpt, even doing so in the same room sometimes,” they explained.
“Whenever she does this she’ll then come back with a well-constructed argument breaking down everything i said or did during our argument. I’ve explained to her that i don’t like her doing so as it can feel like i’m being ambushed with thoughts and opinions from a robot. It’s nearly impossible for a human being to remember every small detail and break it down bit by bit but AI has no issue doing so.”
“Whenever i’ve voiced my upset i’ve been told that ‘chat gpt says you’re insecure’ or ‘chat gpt says you don’t have the emotional bandwidth to understand what i’m saying’,” u/drawss4scoress added, sharing how uncomfortable it makes them to have a robot regurgitate their girlfriend’s point of view back at them. “My big issue is it’s her formulating the prompts so if she explains that i’m in the wrong, it’s going to agree without me having a chance to explain things.”
“Am i the *sshole for asking her to stop using chat gpt in this context?” they asked.
Folks on Reddit had mixed feelings about the use of an AI like ChatGPT to try to bolster her side of the argument.
“Respond with ChatGPT until she gets the point,” u/Tangential-Thoughts recommended.
u/annebonnell and u/SnooMacarons4844 disagreed and thought OP should simply break up with their girlfriend, sharing the sentiment of “Who wants a robot for a girlfriend?”
Other Redditors pointed out that the girlfriend had fallen into a common trap with LLMs: they are inherently biased toward the end user’s input, tending to agree with however the situation is framed.
“Show her how biased it is to the user’s input, it’s literally programmed to tell you exactly what you want to hear. Discuss her actions with ChatGPT from your perspective and it’ll do the exact same thing to her. Show her how it’s biased and only serves as an artificial form of self-validation,” u/Professional-Ear5923 said.
u/Kopitar4president shared their personal experience with testing out ChatGPT, saying, “I noticed that very quickly f*cking around with it that it is programmed to reinforce your position. It’s machine learning to an absurd degree, but still machine learning. It asks people to rate the responses. She thinks it’s impartial because it’s a robot, but it’s a robot programmed to tell people what they want to hear.”