
Facebook’s AI suicide prevention tool raises concerns for people with mental illness, disabilities

Is Facebook doing more harm than good?


Samantha Grasso


Warning: This post discusses suicide and thoughts of suicide.


Around Thanksgiving, Thea, 25, shared a post on Facebook expressing that she felt like dying.

She wrote she was “tired of living” and “exhausted.” Within 20 minutes, Thea told the Daily Dot, her post was either flagged or reported for expressing thoughts of suicide.

When she returned to Facebook, the platform showed her a “resources” message suggesting she reach out to a friend, or contact a helpline or local paramedics. She said Facebook also displayed messages on the home pages of her partner and an acquaintance alerting them that she needed help.


As Thea scrolled through her friends list to message her partner, she quickly grew annoyed at the ordeal. It felt intrusive and embarrassing that Facebook friends she doesn’t interact with were notified. Thea said she thought her post was flagged by machine learning algorithms Facebook developed to detect posts that indicate a person could be contemplating suicide.

Last week, Facebook announced a global rollout of this tool, developed using artificial intelligence (AI), which aims to identify posts potentially expressing suicidal thoughts that would otherwise go unnoticed. (A spokesperson for Facebook disputes parts of Thea’s account, saying Facebook doesn’t contact friends of someone possibly contemplating suicide unless these friends were the ones who first alerted the platform to the concerning post, and that “resource” responses to posts such as Thea’s don’t suggest that users contact paramedics. Thea maintains her friends saw these messages and that contacting paramedics was one of the options suggested to her.)

Facebook’s new tool picks up posts by identifying concerning Facebook Live videos, as well as text or key phrases in a post’s comments that might show concern for the poster, such as “Are you OK?” and “Can I help?” It then flags the post to a member of the Community Operations team trained in suicide and self-harm, who reviews the content and determines what kind of resources to send to the person.

Resources range from the suggestions Thea received to contacting local first responders if a team member deems the person needs immediate attention. A spokesperson for Facebook said that the process is the same regardless of whether the post is reported by another Facebook user or picked up by the AI, and that first responders are only contacted if the team member determines the user needs assistance immediately.
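Facebook has not published the system’s internals, but the flow described above amounts to a signal detector feeding a human review queue. The sketch below is a toy illustration of that general pattern only; the phrases, threshold, and function names are assumptions made up for this example, and the real system reportedly relies on trained machine-learning classifiers rather than simple keyword matching.

```python
# Illustrative sketch only: a toy version of "flag possibly concerning posts
# for human review." All phrases, scores, and names here are assumptions,
# not Facebook's actual system.

from dataclasses import dataclass, field

# Hypothetical signal phrases a commenter might leave on a worrying post.
CONCERN_PHRASES = ["are you ok", "can i help", "i'm here for you"]

@dataclass
class Post:
    author: str
    text: str
    comments: list = field(default_factory=list)

def concern_score(post: Post) -> float:
    """Crude 0-1 score: the share of comments containing a concern phrase.
    A real system would use trained classifiers over post text, comments,
    and video signals instead of keyword matching."""
    if not post.comments:
        return 0.0
    hits = sum(
        any(phrase in c.lower() for phrase in CONCERN_PHRASES)
        for c in post.comments
    )
    return hits / len(post.comments)

def route(post: Post, threshold: float = 0.3) -> str:
    """Send high-scoring posts to a human review queue; the reviewer,
    not the model, decides whether to show resources or escalate."""
    if concern_score(post) >= threshold:
        return "human_review_queue"  # trained Community Operations reviewer
    return "no_action"

if __name__ == "__main__":
    post = Post(
        author="example_user",
        text="so tired of everything",
        comments=["Are you OK?", "Can I help?", "sending love"],
    )
    print(route(post))  # -> human_review_queue
```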


Thea had originally shared the post because she knew she had Facebook friends who could relate to how she was feeling and might be able to take comfort in knowing they weren’t alone. But after this experience, she’s not sure she can continue to express these feelings on Facebook.

“I post these things because…it makes it easier for those people to talk with you and help, because they know how to handle it,” Thea said. “With the way the AI functions, it feels like we can’t trust anything anymore, and now less people are going to speak up about their suicidal thoughts, which is more dangerous to that person.”

Across mental illness and disability communities, Facebook users have shared similar concerns. In the spaces of the platform where people have built support systems with others in the disability community, or with people who also have anxiety, depression, or thoughts of suicide, the thought of having Facebook comb through posts and flag them is daunting.

Some have expressed fear that Facebook Community Operations team members could request that police check up on them over a post’s content. Claire Kooyman, a 34-year-old resident of Boulder, Colorado, told the Daily Dot that she’s previously used her Facebook page as a way to reach out to friends when she’s struggling with her depression and anxiety. She’s found it to be a way to get positive reinforcement from others who experience the same symptoms, but worries about what she’ll share next and whether it’ll get flagged by Facebook’s new system.


Kooyman also expressed concern for flagged users who are then placed under a mental health hold—an involuntary detention of someone who hasn’t committed a crime but is considered a danger to themselves or others. In some states, if psychiatric facilities are unavailable, patients can be held in jail.

“I would be afraid, to be completely honest, for fear of my words flagging some stranger to read my private thoughts, which I purposely only share with some people in the first place,” Kooyman said. “Until the recent law was passed…if someone in my town said they felt suicidal, they’d be held in jail. That is a definite reason to keep your thoughts to yourself, if they start flagging.”

https://twitter.com/InkInOrbit/status/935380791546101761

https://twitter.com/SunDappledAsh/status/935278160312176640


Contacting first responders is a move that mental health peer specialist Chris Ditunno said could be harmful for people who have certain mental illnesses or suicidal thoughts in the first place—and potentially dangerous for queer or transgender people of color. People with cognitive or developmental disabilities may have difficulty responding to police showing up at their doorstep, Ditunno said, and could be misunderstood by first responders who don’t regularly work with people with disabilities.

Ditunno works at the Cambridge Somerville Recovery Learning Center in Boston, where the professionals are also people who have mental illnesses and can relate to their patients. The facility’s patient-driven model for addressing mental illness differs from a patient-centered model in that it puts the patient in control of their own decisions: the patient chooses how they want to be supported, rather than being told by a doctor how to go about treatment.

Ditunno told the Daily Dot that the kind of wording Facebook says the AI is looking to detect, such as “How can I help you?”, is often used within the peer-driven community to help assess how to best assist someone. In other words, whoever wrote that comment is likely there to listen and help—it’s the kind of response the poster is looking for. Facebook’s technology appears to be “consumer-driven,” Ditunno said, but calling upon first responders assumes that the consumer isn’t capable of making their own decisions.

“Does [Facebook] not get how ‘big brother’ this feels to everybody?” Ditunno said. “A lot of people with serious mental health issues—ones that are very symptomatic and potentially in danger, but not necessarily about to kill themselves—they distort reality. So this notion that Facebook has this AI algorithm and people are spying on them and calling the police can be really, really damaging.”


For Dr. Dan Reidenberg, the executive director of suicide prevention nonprofit Suicide Awareness Voices of Education, Facebook’s method for suicide prevention is more comforting than concerning. Reidenberg has worked in mental health for nearly 30 years and has helped Facebook and other technology platforms improve their suicide prevention strategies for the past decade.

Reidenberg told the Daily Dot that while these concerns—that Facebook’s AI may pick up “false positives,” and how first responders may treat people with disabilities and mental illness—are valid, Facebook and companies taking similar steps shouldn’t be feared, but should be commended for developing the technology to better assist someone who’s suicidal and whose post might not be detected otherwise. Reidenberg said Facebook’s Community Operations team works with different suicide outreach organizations and is trained on the science of suicide and mental illness, as are the employees who help develop the suicide prevention technology.

While Reidenberg agreed that law enforcement should have better training for assisting people with disabilities or mental illness, he said Facebook’s main concern is preventing suicide and, outside of education and training, doesn’t have control over law enforcement’s resulting actions. Additionally, Reidenberg said the field of suicidology hasn’t spent enough time overall addressing and researching the concerns of people with disabilities, but that this was an opportunity for people in the suicide prevention community to get them involved in the conversation. (Reidenberg encouraged people with similar concerns to email him at dreidenberg@save.org.)

Even though research takes an average of 17 years to go from conception to publication, Reidenberg said it’s important that Facebook and similar companies are making an effort to help suicide prevention catch up, particularly when technological advancements develop over the much shorter spans of days and weeks.


“All of these [monitoring technologies] are new, and they’re not going to be foolproof, but if you look back over 50, 60 years, we haven’t been able to drop the suicide rate in the U.S. [even with] new treatments and psychological assessments,” Reidenberg said. “Yet, these companies are committed to this, are designating time and resources to develop tools based on the best-known science, and hopefully that will ultimately make a huge impact.”

For more information about suicide prevention or to speak with someone confidentially, contact the National Suicide Prevention Lifeline (U.S.) or Samaritans (U.K.).

Editor’s note: This story has been updated for clarity.

 