Suicide prevention experts are generally supportive, but concerns remain.
Earlier this week, Facebook announced plans to expand its artificial intelligence-based suicide prevention campaign. The company shared how the program has worked so far: "proactive detection" technology scans posts for signs that the user might be suicidal and flags them to Facebook moderators for next steps, which range from offering links to online resources all the way up to contacting first responders on the user's behalf.
The goal is to shorten the time between a concerning post and Facebook being brought into the conversation. Previously, Facebook wouldn't get involved unless a friend manually flagged a post as seeming suicidal.
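For readers curious what that triage flow might look like, here is a minimal, purely hypothetical Python sketch. Facebook hasn't published how its system actually works, so the keyword scoring below is a toy stand-in for whatever trained model the company uses, and the thresholds and escalation tiers are invented for illustration.

```python
# Purely illustrative: Facebook has not published its implementation.
# The phrase list is a toy stand-in for a trained classifier, and the
# thresholds and escalation tiers are invented for this sketch.

CONCERNING_PHRASES = [
    "want to die",
    "end it all",
    "no reason to live",
]

def risk_score(post_text: str) -> float:
    """Toy stand-in for a trained model: return a risk score in [0, 1]."""
    text = post_text.lower()
    hits = sum(phrase in text for phrase in CONCERNING_PHRASES)
    return min(1.0, hits / 2)

def triage(post_text: str) -> str:
    """Map a post's score to a hypothetical escalation tier."""
    score = risk_score(post_text)
    if score >= 0.9:
        return "escalate to a human reviewer for possible first-responder contact"
    if score >= 0.5:
        return "flag for human review and surface support resources to the user"
    return "no action"

if __name__ == "__main__":
    print(triage("I just want to die. There's no reason to live."))  # escalates
    print(triage("this homework is going to end me lol"))            # no action
```

The point here is only the shape of the pipeline: score a post, compare against thresholds, escalate. A real classifier would need far more context than keyword matching; telling dark humor apart from genuine crisis is exactly the challenge experts raise below.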
"When someone is expressing thoughts of suicide, it’s important to get them help as quickly as possible," wrote Facebook Vice President of Product Management Guy Rosen, on the company's blog.
Experts in suicide prevention are pretty excited about this plan, though some have minor reservations. Facebook's plans are "important and groundbreaking," Dr. Christine Moutier, chief medical officer at the American Foundation for Suicide Prevention, writes in an e-mail, lauding this type of "creative and innovative solution."
"With the help of large tech companies like Facebook, we can reduce the suicide rate in the United States, furthering AFSP's Project 2025 goal: to reduce the rate 20% in the US by 2025," she writes.
AI has "demonstrated itself as an effective tool for identifying people who may be in crisis," writes National Suicide Prevention Lifeline Director John Draper, noting there are still some challenges when it comes to machine learning technology understanding the difference between someone joking with their friends and someone who may actually be on the verge of self-harm.
"We’ve been advising Facebook on how to provide assistance for persons in crisis through creating a more supportive environment on their social platform so that people who need help can get it faster, which ultimately, can save lives," he writes, adding that "notifying police is an absolute last resort."
Joining the chorus of supporters of Facebook's plan is Dr. Victor Schwartz of The Jed Foundation, who has two caveats related to transparency and training.
"We applaud Facebook's efforts to enhance their ability to identify and respond to users who may be at increased risk for self-harm," Schwartz writes in an e-mail. "We hope that Facebook users would be made aware of this new protocol and would be alerted to the impending intervention, and that first responders would be properly trained to respond to those in possible crisis."
To the latter point, first responders should be properly trained to respond to suicidal individuals and others experiencing a mental health crisis, but often they are not. Earlier this month, for example, a suicidal woman in Cobb County, Georgia, was shot and killed by police after she grabbed a gun. In August, a similar scene unfolded in Florida.
There is no doubt that de-escalating this type of situation, especially when the person involved is armed, is difficult. That's exactly why proper training is so important.
Skepticism remains high online, with some worrying that this life-saving tech could be repurposed for more nefarious pursuits down the road. Though Facebook has provided a pretty decent high-level overview of the new tool (which users apparently won't be able to opt out of), the company has been extremely light on details of how it actually works.
Responding to critics worried about how Facebook may use this technology in the future, the company's chief security officer, Alex Stamos, stressed that "it's important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in."
It's perfectly rational to have concerns about the use and misuse of AI, but the truth is that technology like this is going to play an ever-larger role in the coming years.
Between those worries and the need to ensure first responders are properly trained, Facebook's new approach to suicide prevention is something to be cautiously optimistic about. Only time will truly tell.