Facebook is upgrading its AI tools to help prevent suicide, the company announced on Monday. “Starting today we’re upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly,” Mark Zuckerberg wrote in a post on Facebook.
In March, Facebook began a limited test of AI-based suicide prevention on text-only posts in the US. Now, however, the social network will scan all types of content worldwide with this AI, except in the European Union, where General Data Protection Regulation privacy rules on profiling users based on sensitive information complicate the use of the technology.
In a blog post, the company detailed how the AI looks for patterns in posts that may contain references to suicide or self-harm. In addition to searching for words and phrases in posts themselves, it scans the comments: according to Facebook, comments like "Are you ok?" and "Can I help?" can be indicators of suicidal thoughts. For live video, viewers can report the video and contact a helpline to seek aid for their friend, and Facebook will also give broadcasters the option to contact a helpline or another friend.
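Facebook has not published the details of its models, but the basic idea of combining signals from a post with signals from its comments can be sketched in a few lines. The Python snippet below is a minimal, hypothetical illustration only: the phrase lists, weights, and threshold are invented for the example, and Facebook's production system reportedly uses trained classifiers rather than keyword matching.

```python
import re

# Hypothetical signal phrases for illustration; a real system would use a
# trained text classifier, not hand-written keyword lists.
POST_SIGNALS = [
    r"\bwant to die\b",
    r"\bkill myself\b",
    r"\bend it all\b",
]

# Concerned-sounding comments can also be a signal, per Facebook's blog post.
COMMENT_SIGNALS = [
    r"\bare you ok\b",
    r"\bcan i help\b",
]

def score_post(post_text: str, comments: list[str]) -> float:
    """Return a rough 0-1 risk score from a post's text and its comments."""
    post_hits = sum(
        bool(re.search(p, post_text, re.IGNORECASE)) for p in POST_SIGNALS
    )
    comment_hits = sum(
        bool(re.search(p, c, re.IGNORECASE))
        for c in comments
        for p in COMMENT_SIGNALS
    )
    # Weight direct statements in the post more heavily than comments.
    raw = 0.6 * post_hits + 0.2 * comment_hits
    return min(raw, 1.0)

if __name__ == "__main__":
    post = "I just want to end it all."
    comments = ["Are you ok?", "Can I help? Please message me."]
    score = score_post(post, comments)
    if score >= 0.5:  # threshold chosen arbitrarily for this example
        print(f"Flag for human review (score={score:.2f})")
```

In a real pipeline, a score above the threshold would route the post to a human reviewer rather than trigger any automatic action, consistent with Facebook's description of dedicated moderators making the final call.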
The social network says it is now dedicating more moderators to suicide prevention, training them to handle these cases 24/7, and it now has 80 local partners, such as Save.org, the National Suicide Prevention Lifeline, and Forefront, through which it can provide resources to at-risk users and their networks.
Guy Rosen, Facebook’s vice president for product management, said the company was beginning to roll out the software outside the United States because the tests had been successful. During the past month, he said, first responders checked on people more than 100 times after Facebook’s software detected suicidal intent.