
Facebook using Artificial Intelligence to search for suicidal posts

Ben Schmitt | Wednesday, Nov. 29, 2017, 4:30 p.m.
The Facebook logo is displayed on the company's website in an illustration photo taken in Bordeaux, France, Feb. 1, 2017. (REUTERS)

Facebook CEO Mark Zuckerberg speaks June 21, 2017, in preparation for the Facebook Communities Summit in Chicago, ahead of an announcement of a new initiative designed to spur people to form more meaningful communities with Facebook's groups feature.

Facebook is rolling out a new automated effort it hopes will save lives.

The social media platform said this week it is using artificial intelligence technology to scan and flag text and video posts for patterns of suicidal thoughts or self-harm.

"Starting today we're upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly," Facebook chief executive Mark Zuckerberg said in a blog post Monday. "In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times."

In addition to searching for words and phrases in posts, the AI will scan the comments, looking for reactions such as "Are you OK?" and "Can I help?"
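Facebook has not published the details of its system, which is a trained classifier rather than a simple keyword match. But the two-signal approach the article describes, concerning phrases in the post combined with worried replies in the comments, can be illustrated with a toy heuristic. The phrase lists, scoring, and threshold below are invented for illustration only and are not Facebook's actual rules:

```python
import re

# Hypothetical phrase lists for illustration; Facebook's real model
# and training data are not public.
POST_PATTERNS = [r"\bkill myself\b", r"\bend it all\b", r"\bwant to die\b"]
COMMENT_PATTERNS = [r"\bare you ok\b", r"\bcan i help\b"]

def risk_score(post_text: str, comments: list[str]) -> int:
    """Count pattern hits in a post and its comments (toy heuristic)."""
    score = sum(bool(re.search(p, post_text, re.IGNORECASE))
                for p in POST_PATTERNS)
    for comment in comments:
        score += sum(bool(re.search(p, comment, re.IGNORECASE))
                     for p in COMMENT_PATTERNS)
    return score

post = "I just want to end it all."
comments = ["Are you OK?", "Can I help?"]
if risk_score(post, comments) >= 2:  # threshold chosen arbitrarily here
    print("flag for human review")   # moderators, not the AI, decide next steps
```

In the real system, flagged posts go to Facebook's human moderators, who can reach out to the user or contact first responders.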

"With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today," Zuckerberg said. "There's a lot more we can do to improve this further."

The thought of Facebook scrolling through people's posts might cause some distress among users, but it's already a reality in the world of social media, said Dr. Gary Swanson, a psychiatrist at Allegheny Health Network.

He said he was more concerned about false alarms.

"It's tricky when you think about AI, because if you say, 'I want to kill myself,' people may understand you're not serious," he said. "But AI may not be able to sort that out. Searching for comments like 'are you OK,' sounds awfully generic. There are going to be false positives. But I imagine Facebook feels a responsibility to closely monitor this. I'm sure it will help in some circumstances."

Hollie Geitner, vice president of client services for WordWrite Communications, a Pittsburgh public relations firm, said Facebook is where many people spend much of their time and communicate with others.

"Never before have we had such precise demographic data about target audiences, their buying preferences and how they consume information," she said. "We also know how they are feeling. Posts, emojis, memes, photographs, videos and quotes all tell us where a person is from an emotional standpoint. If we use technology to influence buying decisions, it would be irresponsible if we didn't use it to identify people in distress and offer help."

Facebook said it's dedicating more moderators to suicide prevention and working closely with the National Suicide Prevention Lifeline and Suicide Awareness Voices of Education.

Suicide is the 10th-leading cause of death in the United States, according to the American Foundation for Suicide Prevention. Each year, about 43,000 Americans die by suicide.

The human factor cannot be forgotten in helping people who are troubled, Geitner said.

"Algorithms are a start, but I'd really like to see empathy and compassion be our primary focus online and in person," she said. "Technology has walls and screens that separate us from the real person and that can be very dangerous."

Ben Schmitt is a Tribune-Review staff writer. Reach him at 412-320-7991, bschmitt@tribweb.com or via Twitter at @Bencschmitt.
