Facebook using Artificial Intelligence to search for suicidal posts
Facebook is rolling out a new automated effort to hopefully save lives.
The social media platform said this week it is using artificial intelligence technology to scan and flag text and video posts for patterns of suicidal thoughts or self-harm.
"Starting today we're upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly," Facebook chief executive Mark Zuckerberg said in a blog post Monday. "In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times."
In addition to searching for words and phrases in posts, the AI will scan the comments, looking for reactions such as "Are you OK?" and "Can I help?"
"With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today," Zuckerberg said. "There's a lot more we can do to improve this further."
The thought of Facebook scrolling through people's posts might cause some distress among users, but it's already a reality in the world of social media, said Dr. Gary Swanson, a psychiatrist at Allegheny Health Network.
He said he was more concerned about false alarms.
"It's tricky when you think about AI, because if you say, 'I want to kill myself,' people may understand you're not serious," he said. "But AI may not be able to sort that out. Searching for comments like 'are you OK,' sounds awfully generic. There are going to be false positives. But I imagine Facebook feels a responsibility to closely monitor this. I'm sure it will help in some circumstances."
Hollie Geitner, vice president of client services for WordWrite Communications public relations firm in Pittsburgh, said Facebook is where people spend much of their time and communicate with others.
"Never before have we had such precise demographic data about target audiences, their buying preferences and how they consume information," she said. "We also know how they are feeling. Posts, emojis, memes, photographs, videos and quotes all tell us where a person is from an emotional standpoint. If we use technology to influence buying decisions, it would be irresponsible if we didn't use it to identify people in distress and offer help."
Facebook said it's dedicating more moderators to suicide prevention and working closely with the National Suicide Prevention Lifeline and Suicide Awareness Voices of Education.
Suicide is the 10th-leading cause of death in the United States, according to the American Foundation for Suicide Prevention. Each year, about 43,000 Americans die by suicide.
The human factor cannot be forgotten in helping people who are troubled, Geitner said.
"Algorithms are a start, but I'd really like to see empathy and compassion be our primary focus online and in person," she said. "Technology has walls and screens that separate us from the real person and that can be very dangerous."
Ben Schmitt is a Tribune-Review staff writer. Reach him at 412-320-7991, firstname.lastname@example.org or via Twitter at @Bencschmitt.