By Monika Bickert, Facebook Vice President of Global Policy Management
Today, The Washington Post published a story about people attempting to illicitly sell drugs on Instagram. I joined Facebook after a career as a prosecutor, where I saw firsthand the damage these drugs can inflict on communities and families.
So let me start with the obvious: there is no place for this on our services. It’s bad for society, bad for people, and against our values.
It’s why we’re working hard not just to find and remove this kind of harmful content, but also to help those suffering from addiction get the support they need. And while we still have work to do, we’re making progress:
People mostly find this content by searching for it, so we block and filter hundreds of terms associated with drug sales. We prevent these terms from being suggested in search and limit the discovery of hashtags related to drug sales on Instagram. If someone begins to type a hashtag that might be related to drugs, we block the remainder of it.
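To illustrate the idea, here is a minimal sketch of how blocklist-based hashtag autocomplete suppression might work. The term list and function names are hypothetical examples for illustration only; Instagram's actual systems are far more sophisticated and are not public.

```python
# Illustrative sketch of hashtag autocomplete blocking.
# BLOCKED_TERMS is a made-up example list, not Instagram's real blocklist.
BLOCKED_TERMS = {"oxy", "fentanyl", "percs"}

def suggest_hashtags(prefix: str, known_hashtags: list[str]) -> list[str]:
    """Return autocomplete suggestions for a typed prefix, suppressing
    any hashtag that contains a blocked term."""
    prefix = prefix.lstrip("#").lower()
    return [
        tag for tag in known_hashtags
        if tag.lower().startswith(prefix)
        and not any(term in tag.lower() for term in BLOCKED_TERMS)
    ]

# A benign hashtag completes; one containing a blocked term never appears.
print(suggest_hashtags("#ox", ["oxtailrecipes", "oxyforsale"]))  # → ['oxtailrecipes']
```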
We’ve made it easy for people to flag bad content so we can quickly review and remove it. In the case of the Instagram accounts The Washington Post flagged to us, all but one were disabled within two days of being created.
We are able to proactively investigate profiles, Pages, Groups, hashtags and accounts associated with bad content we’ve already removed. For example, one of the accounts mentioned in The Washington Post’s story was independently reported to us; we not only removed it, but also took down three more associated accounts that had yet to be reported.
We require certification for addiction treatment services that want to advertise.
We work with outside organizations and experts to understand the latest tactics these bad actors use to mask their activity. We’re a member of the Center for Safe Internet Pharmacies and have partnered with LegitScript since 2015, so we stay up to date on the latest street names for drugs and developments in new pharmaceuticals.
We’re developing new technology to proactively identify when someone may be attempting to sell drugs. For example, we can now understand text that appears in images, which is important because bad actors share their contact information in images to evade detection.
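As a rough illustration of the second step of that pipeline, the sketch below scans text already extracted from an image for embedded contact details. The OCR step itself is assumed to have happened upstream, and the patterns and function name are hypothetical; Facebook's actual detection models are not public.

```python
import re

# Illustrative patterns for contact details hidden in image text.
# These are simple examples, not Facebook's real detection rules.
CONTACT_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone number
    re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),             # email address
]

def flag_image_text(extracted_text: str) -> bool:
    """Return True if OCR-extracted image text appears to embed contact
    information, a common tactic for evading text-based detection."""
    return any(p.search(extracted_text) for p in CONTACT_PATTERNS)

print(flag_image_text("DM me: 555-123-4567"))   # → True
print(flag_image_text("Sunset over the lake"))  # → False
```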
We’re working to ensure that our services are also used to help educate people about the risks, prevent drug misuse, and raise awareness of the opioid epidemic. We work with the Substance Abuse & Mental Health Services Administration to provide addiction support when people look for help or try to find drugs on Facebook and Instagram. We also work directly with law enforcement and emergency responders, routinely responding to valid law enforcement requests for information, and we notify them of emergencies whenever someone may be at serious risk of harm.
An important thing to note is that The Washington Post’s story is based on findings from a research company called GIPEC, and we think those findings are misleading.
GIPEC created an artificial Instagram feed by following only objectionable content and some brand accounts. And while the fact that they were able to create it in the first place shows we still have work to do, this kind of manufactured feed is not a real representation of what most people see on Instagram.
This kind of content is also not in our commercial interests. Our business is advertising, and no advertiser wants their ads shown alongside these types of posts. So we are investing heavily in more people and better technology to create a safe place for everyone.
We’ve made progress in the fight against illicit drug sales on our platforms, but we have more to do. We’re committed to making sure we do everything we can to prevent this kind of abuse.