Press release by Adam Mosseri, Head of Instagram
Instagram is working with experts to better understand the deeply complex issues of mental health, suicide and self-harm, and how best to support those who are vulnerable.
No one at Instagram takes these issues lightly, including me. We’ve made progress over the past few years, and today we’re rolling out more technology in Europe to help with our efforts. But our work here is never done, and we need to constantly look for ways to do more.
We recognize that these are deeply personal issues for the people who are affected. They are also complicated and always evolving, which is why we continue to update our policies and products so we can best support our community. We’ve never allowed anyone to promote or encourage suicide or self-harm on Instagram, and last year we updated our policies to remove all graphic suicide and self-harm content. We also extended our policies to disallow fictional depictions like drawings, memes or other imagery that shows materials or methods associated with suicide or self-harm.
It’s not enough to address these difficult issues through policies and products alone. We also believe it’s important to provide help and support to the people who are struggling. We offer support to people who search for accounts or hashtags related to suicide and self-harm and direct them to local organizations that can help. We’ve also collaborated with Samaritans, the suicide prevention charity, on their industry guidelines, which are designed to help platforms like ours strike the important balance between tackling harmful content and providing sources of support to those who need it.
We use technology to help us proactively find and remove more harmful suicide and self-harm content. Our technology finds posts that may contain suicide or self-harm content and sends them to human reviewers to make the final decision and take the right action. Those actions include removing the content; connecting the poster to local organizations that can help; or, in the most severe cases, calling emergency services. Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close to 100% as we possibly can.
Until now, we’ve only been able to use this technology outside the European Union, which made it harder for us to proactively find this content and send people help in the EU. I’m pleased to share that, today in the EU, we’re rolling out some of this technology, which will work across both Facebook and Instagram. We can now look for posts that likely break our rules around suicide and self-harm and make them less visible by automatically removing them from places like Explore. And when our technology is really confident that a post breaks our rules, we can now automatically remove it altogether.