The House Intelligence Committee published the 3,000 ads the Russia-based Internet Research Agency ran on Facebook and Instagram between 2015 and 2017.
Facebook gave these ads to Congress so they could better understand the extent of Russian interference in the last US presidential election.
In the run-up to the 2016 elections, we were focused on the kinds of cybersecurity attacks typically used by nation states, such as phishing and malware attacks. We were too slow to spot this new type of information operations interference. Since then, Facebook has made important changes to prevent bad actors from using misinformation to undermine the democratic process.
This will never be a solved problem because we’re up against determined, creative and well-funded adversaries. But we are making steady progress. Here is a list of the 10 most important changes we have made:
1. Ads transparency. Advertising should be transparent: you should be able to see all the ads an advertiser is currently running on Facebook, Instagram and Messenger. And for issue and political ads, we’re creating an archive so you can search back seven years — including for information about ad impressions and spend, as well as demographic data such as age, gender and location. People in Canada and Ireland can already see all the ads that a Page is running on Facebook — and we’re launching this globally in June.
2. Verification and labeling. Every advertiser will now need to confirm their identity and location before being able to run any political or issue ads in the US. All political and issue ads will also clearly state who paid for them.
3. Updating targeting. We want ads on Facebook to be safe and civil. We thoroughly review the targeting criteria advertisers can use to ensure they are consistent with our principles. As a result, we removed nearly one-third of the targeting segments used. We continue to allow some criteria that people may find controversial, because businesses also use them in legitimate ways — for example, to market historical books, documentaries or television shows.
4. Better technology. Over the past year, we have become increasingly effective at finding and disabling fake accounts. We now block millions of fake accounts each day as people try to create them — and before they've done any harm. This is thanks to improvements in machine learning and artificial intelligence, which can proactively identify suspicious behavior at a scale that was not possible before — without needing to look at the content itself.
5. Action to tackle fake news. A key focus is working to disrupt the economics of fake news. For example, we prevent the creation of fake accounts that spread it, ban sites that engage in this behavior from using our ad products, and demote articles rated false by fact checkers in News Feed — which typically causes such an article to lose 80% of its traffic. We now work with independent fact checkers in the US, France, Germany, Ireland, the Netherlands, Italy, Mexico, Colombia, India, Indonesia and the Philippines, with plans to scale to more countries in the coming months.
6. Significant investments in security. We’re doubling the number of people working on safety and security from 10,000 last year to over 20,000 this year. We expect these investments to impact our profitability. But the safety of people using Facebook needs to come before profit.
7. Industry collaboration. Last month, we joined 34 global tech and security companies in signing the Cybersecurity Tech Accord, a pact to help improve security for everyone.
8. Intelligence sharing with government. In the 2017 German elections, we worked closely with the authorities there, including the Federal Office for Information Security (BSI). This gave them a dedicated reporting channel for security issues related to the federal elections.
9. Tracking 40+ elections. In recent months, we’ve started to deploy new tools and teams to proactively identify threats in the run-up to specific elections. We first tested this effort during the Alabama Senate election, and plan to continue these efforts for elections around the globe, including the US midterms. Last year we used public service announcements to help inform people about fake news in 21 separate countries, including in advance of French, Kenyan and German elections.
10. Action against the Russia-based Internet Research Agency. In April, we removed 70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — controlled by the Internet Research Agency that targeted people living in Russia, or Russian speakers in Azerbaijan, Uzbekistan and Ukraine. The IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people in the US, Europe and Russia — and we don't want them on Facebook anywhere in the world.
As our CEO and founder, Mark Zuckerberg, told Congress last month, we need to take a broader view of our responsibilities as a company. That means not just building products that help people connect — but also ensuring that they are used for good and not abused. We still have a long way to go, and we will keep you updated on our progress.