Protecting Elections: Empowering People’s Voice through Authenticity, Transparency and Accountability
August 2020

Give people the power to build community and bring the world closer together. This mission depends on people’s ability to express their unique voice, be heard, and exchange diverse ideas, experiences and information. Three billion people can do just that. We are committed to building a platform where you can own your voice and engage on the important issues your audience cares about most.

With three billion people on our platform, we are not naive: a tiny fraction may use their voice to cause harm or division, spread hate or try to undermine elections. We have a responsibility to address these risks. Bad actors used divisive, polarizing issues on social media platforms to seed discord and interfere in the 2016 US election. While election interference is not new in politics, 2016 brought a more sophisticated attack carried out through social media. These influential messages didn’t always mention elections or candidates. They focused primarily on social issues like civil and social rights, the environment and immigration.

SOCIAL ISSUES: Heavily debated and highly politicized sensitive topics that can influence many people and may impact the outcome of an election or result in legislation.

There are a variety of methods that people use to address social issues:
• People vote for leaders to advance their ideals.
• People speak out and call for change on important issues.
• Leaders build their agenda around important issues to inspire voters and drive change.
• Brands, advocacy groups and organizations share their voice on issues to influence the sentiment of people and shape culture.

We realize some might not see how their social issue ads could impact people and elections.
While one ad from one advertiser is important and can impact an individual, the collective discussion of a sensitive topic at scale can be more persuasive and have a greater impact on society. The discussion of social issues by many can influence the way people think about a topic and use their voice to shape culture by:
• helping people change their mind or solidify their point of view
• influencing the way people act
• changing buying behavior
• helping people choose which businesses to support, whom they donate to, or which candidate to vote for, which may impact real-world outcomes, like elections

• 61% of belief-driven buyers bought a brand for the first time because of its position on a controversial issue
• 42% of consumers will conduct further research on a social issue when prompted by a brand or organization
• 67% are likely to vote for a specific candidate when prompted by brands or organizations taking a stand on an issue

Source: 1. Sprout Social, #BrandsGetReal: Brands Creating Change in the Conscious Consumer Era, Nov 2019. 2. 2018 Edelman Earned Brand Report, Oct 2018.

Our increasingly connected world makes it easier for these kinds of messages to spread.
• The vast majority of content on our platform helps people and advertisers express themselves and drive positive change.
• But that same connectivity can also make threats more harmful.

We have learned lessons and taken action since 2016.
2016
• Launched third-party fact-checking program

2017
• Announced new tool to help people spot false news on Facebook
• Banned Pages that repeatedly share false news from running ads
• Removed the first network of accounts for violating our Coordinated Inauthentic Behavior policy
• Announced new authorization requirements before advertisers can run ads about elections or politics on Facebook or Instagram
• Launched ‘Related Articles’ to give people more information on content rated false by third-party fact-checkers

2018
• Announced Election Research Commission to study social media’s role in elections
• Introduced appeals for some content that we may have mistakenly removed
• Updated our Community Standards to include the internal guidelines used for decisions about content
• Launched the Ad Library (with social issue/political ads archived for 7 years) and began requiring that advertisers running social issue/political ads confirm their location in the US and place “Paid for by” disclaimers on these ads
• Began a partnership with the Atlantic Council’s Digital Forensic Research Lab to study and investigate disinformation campaigns around the world, and continued to expand these external partnerships throughout 2018 and beyond
• Allowed people to see the ads a Page is running across Facebook, Instagram and Messenger, even if those ads aren’t shown to them
• Announced new Pages transparency feature showing Page creation and name change dates
• Launched new Pages authorization requirements, beginning with high-reach Pages in the US
• Expanded fact-checking to photos and videos
• Opened the first physical Election Operations Center for monitoring ahead of key elections
• Added “People Who Manage This Page” section to show country locations in Page Transparency
• Expanded voter suppression policies and announced a new way for users to report potential voter suppression
• Rolled out the Ad Library report to make it easier to see who is spending money on social issue/political ads on Facebook
Source: Facebook’s Steps to Protect Elections, Highlights 2016-2020

We are working hard.

2019-2020
• Announced policy to take down misleading manipulated media
• Announced new transparency features and controls for ads about social issues, elections or politics
• Launched Election Operations Center in the US for all primary elections
• Began requiring Pages to designate a “Confirmed Page Owner” to continue running social issue/political ads
• Fact-checking program reached 9 partner organizations in the US
• Started providing the location of certain Facebook Pages and Instagram accounts with high US reach that are based outside of the US
• Fact-checking program reached 60+ partner organizations
• Began verifying the identity of people behind certain high-reach posts on Facebook in the US
• Added a new control that allows people to see fewer political and social issue ads on Facebook and Instagram
• Started labeling state-controlled media on Facebook
• Added US House and Senate ad tracker to the Ad Library
• Launched the largest voting information and registration effort in US history, with a goal to register 4 million people
• Launched fact-checking of Instagram content
• Rolled out our ads authorization process for more advertisers globally
• Launched policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote
• Announced that organizations running social issue/political ads in the US now have to go through a stricter authorization process, including providing information such as a Tax ID or FEC Committee ID number
• Introduced the Deepfake Detection Challenge to develop new ways of detecting and preventing manipulated media
• Expanded policy to ban ads that suggest voting is useless or meaningless, or advise people not to vote
• Launched Facebook Protect to give campaigns, elected officials, their staffs and others increased security protections
• Announced stronger labeling of content rated false or partly false by third-party fact-checkers
• Added Presidential ad tracker to the Ad Library
• Started showing the “Confirmed Page Owner” of US Pages with large followings
• Reached over 50 international fact-checking partners covering over 40 languages around the world

Authenticity + Transparency = Accountability

With every action we have taken since 2016, we still fundamentally believe in freedom of expression for people and advertisers. Our decisions are rooted in transparency, so people can better understand who is trying to influence them with ads and why. Of course, all advertising tries to influence in some capacity. But we have learned that certain types of ads have the most meaningful impact on public opinion and on how people vote at the polls: the ads that discuss, debate or advocate for or against important topics. We developed a policy that creates a higher standard for running ads about social issues, elections or politics.

Building an inclusive society depends on people sharing diverse perspectives. Our policies are based on the idea that people and advertisers can still have a voice to be heard and discuss sensitive topics. But to promote safe and healthy public debate on influential topics, advertisers need to complete more steps to confirm their identity, so people can see who’s trying to influence them.

To build an effective policy, we needed to categorize which ads are required to meet a higher standard. Ads about social issues, elections or politics are:
• Made by, on behalf of or about a candidate for public office, a political figure or a political party, or advocating for the outcome of an election to public office.
• About any election, referendum or ballot initiative, including “get out the vote” or election information campaigns.
• Regulated as political advertising.
• About social issues; this includes content with discussion, debate or advocacy for or against an issue in certain countries.
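This categorization is effectively a disjunction: an ad falls under the higher standard if any one of the four criteria applies. As a rough illustration only (the class and field names below are hypothetical, not Facebook’s actual data model):

```python
from dataclasses import dataclass

@dataclass
class AdMetadata:
    """Hypothetical metadata used to decide whether the higher
    social issue/elections/politics standard applies to an ad."""
    about_candidate_or_party: bool  # by, on behalf of, or about a candidate,
                                    # political figure or party, or advocating
                                    # for an election outcome
    about_election_process: bool    # election, referendum or ballot initiative,
                                    # incl. "get out the vote" / election info
    regulated_as_political: bool    # regulated as political advertising
    about_social_issue: bool        # discusses, debates or advocates on a
                                    # social issue in certain countries

def requires_higher_standard(ad: AdMetadata) -> bool:
    """An ad meets the policy definition if ANY of the four criteria apply."""
    return (ad.about_candidate_or_party
            or ad.about_election_process
            or ad.regulated_as_political
            or ad.about_social_issue)
```

In practice each of these flags is itself the outcome of classification and human review; the point of the sketch is only that the criteria combine as an “any of” test, so a single match is enough to trigger the additional requirements.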
Learn more about our Advertising Policies.

We’ve introduced an unprecedented level of ad transparency and authenticity so people can see who’s trying to influence them. To run ads about social issues, elections or politics, advertisers are required to meet the following:
• Authorization: Complete the authorization process in the country where they want to run ads.
• Identity: Prove who they are and where they are located.
• Disclaimer: Include verified “Paid for by” disclaimers in their ads, to help us confirm the legitimacy of an organization and show people who’s behind influential ads.
• Transparency: Ads marked as ‘about social issues, elections or politics’ are entered into the Ad Library for seven years.

Facebook was built to give people a voice, but this doesn’t mean that people can say whatever they want. For years, we’ve had guardrails in place, in the form of our Community Standards and Advertising Policies, that explain what stays up and what comes down on Facebook. We have rules against certain speech, for instance speech that incites violence or suppresses voting, and no one is exempt from these policies.

Our Community Standards define what’s not allowed on Facebook. We have comprehensive policies that apply to everyone, globally, and to all types of content people post or pay for. We remove content that violates our Community Standards, such as posts or ads that may cause imminent physical harm or contain hate speech. We work hard to apply our policies consistently and fairly to people and their expression. Learn more about our Community Standards.

PROACTIVE DETECTION
We use technology, trained global teams of reviewers, or a combination of the two to determine whether a piece of content (such as posts, photos, videos or comments) violates our policies:
• Our artificial intelligence can proactively detect and remove violating content before anyone reports it, and often when few, if any, people have seen it.
• For example, 99% of the terrorist content and 99% of the graphic violence we take action on is detected by our systems.
• During review, we may disable an account, Page, Group or event for repeated or severe violations.

REPORTS BY PEOPLE
In certain cases, a piece of content that’s already live can be reported by our community. If this happens, the content may be reviewed again, and if it violates our policies, we’ll take action. Learn more about how organic content is reviewed in the Community Standards Enforcement Report.

AD REVIEW
We hold advertisers to an even stricter standard to protect people, in part because ads receive paid distribution, as opposed to content surfacing organically in people’s feeds.
• All ads are subject to our ad review system before they’re shown on Facebook or Instagram, which relies primarily on automated review (artificial intelligence) to check ads against our Advertising Policies.
• During ad review, we check the ad’s image, copy, call to action and positioning, in addition to the content on the ad’s landing page.
• We use manual review to improve and train our automated systems, and in some cases we have trained global teams to review specific ads.
• Ads are also required to follow our Community Standards.

FLAGGED BY AI OR REPORTS BY PEOPLE
In certain cases, a post or ad that’s already running can be flagged by AI or reported by our community. If this happens, the content may be reviewed again; if it is found to be in violation of our policies, or the ad is missing a “Paid for by” disclaimer, we disapprove it. Learn more about how ads are reviewed at the Business Help Center.

Examples of ads with content that discusses, debates or advocates for or against social issues, and therefore requires disclaimers, are shown for illustration purposes only.
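The ad review flow described above can be sketched as a simplified pipeline. This is an assumption-laden illustration, not Facebook’s actual system: the names, the banned-term list, and the single-pass logic are all hypothetical stand-ins for classifiers and human review.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for real policy checks (hate speech, violence, etc.)
BANNED_SIGNALS = {"hate_speech", "graphic_violence"}

@dataclass
class Ad:
    """Hypothetical ad record holding the surfaces checked during review."""
    image_labels: list[str]       # labels an image classifier might emit
    copy_text: str                # the ad's text/copy and call to action
    landing_page_text: str        # content on the ad's landing page
    is_social_issue: bool = False # categorized as social issue/political
    paid_for_by: str = ""         # verified "Paid for by" disclaimer, if any

def automated_review(ad: Ad) -> str:
    """Return 'approve' or 'disapprove' for a single ad."""
    # Check every reviewed surface: image, copy, landing page.
    if BANNED_SIGNALS & set(ad.image_labels):
        return "disapprove"
    for surface in (ad.copy_text, ad.landing_page_text):
        if any(signal in surface for signal in BANNED_SIGNALS):
            return "disapprove"
    # Social issue/political ads must carry a "Paid for by" disclaimer.
    if ad.is_social_issue and not ad.paid_for_by:
        return "disapprove"
    # In a real system, low-confidence cases would be escalated to trained
    # human reviewers, whose decisions feed back into the automated models.
    return "approve"
```

For example, a social issue ad with a verified disclaimer and no violating content would pass, while the same ad without the disclaimer would be disapproved even though its content is clean.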
Ahead of upcoming elections, and amid a global pandemic and protests for racial justice, we remain focused on the important work of removing hate speech from our platform and providing critical voting information. Facebook will take extra precautions to help everyone stay safe, stay informed, and ultimately use their voice where it matters most: voting.

We have no incentive to tolerate hate speech. While the vast majority of our over 100 billion daily interactions are positive, a tiny fraction are hateful. Hate has no place on our platform. Our policies reflect this: we take a zero tolerance approach and remove it. While we believe in free expression, we constantly reassess our policies to draw the right lines. We want to do more to remove divisive and inflammatory language that has been used to sow discord, and we’ll continue partnering with experts and civil rights organizations to adjust our approach as new risks emerge.

As of 2018, our ad policies prohibit attacks directed at people based on their race, religion or other parts of their identity. In June 2020, we expanded this to:
• Prohibit claims that people of a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others
• Better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior, or expressing contempt, dismissal or disgust directed at them

This means we now remove ads like “immigrants are a threat to the safety of our community” or “Muslims want to destroy us from within.” Learn more about our Advertising Policies.
This work is never finished. We’re committed to continuing to make significant investments to tackle hate.
• We have zero tolerance for hate speech, but this doesn’t mean zero occurrence.
• We are listening and learning from employees, communities and small businesses to keep our platform safe.
• We opened ourselves up to a civil rights audit, an independent two-year review by civil liberties and civil rights experts, and have taken action on issues from the audit, such as expanding our policies to prohibit white supremacy.

• 95% removed: We find and remove 95% of hate speech before users report it to us, and a recent EU report found Facebook assessed more hate speech reports within 24 hours than Twitter and YouTube.
• 250 takedowns: We have banned 250 white supremacist organizations from Facebook and Instagram.
• $1B committed: We have a goal of at least $1 billion in spend with diverse suppliers by the end of 2021.

Sources: Facebook Community Standards Enforcement Report, August 11, 2020; Facebook Business Blog, “Sharing Our Actions on Stopping Hate,” July 1, 2020; Facebook’s Civil Rights Audit – Final Report, July 8, 2020; Facebook Newsroom, “New EU Report Finds Progress Fighting Hate Speech,” June 23, 2020.
AUTHORITATIVE VOTING INFORMATION DURING A PANDEMIC
✓ Our new US Voting Information Center will provide accurate information, with a goal to help 4 million people register and vote.
✓ Any post about voting, including from politicians, will link to the center for more context.

STEPS TO FIGHT VOTER SUPPRESSION
✓ Ban content that misleads people about when, how or who can vote, as well as ads that suggest voting is useless or advise people not to vote.
✓ Quickly remove false claims about polling conditions in the 72 hours before an election, or about ICE agents at polling places.

LABELING NEWSWORTHY CONTENT
✓ We now label content we leave up as newsworthy. A few times a year, we leave up violating posts when the public interest outweighs the risk of harm.
✓ There is no newsworthiness exemption for ads, or for content inciting violence or suppressing voting, even from politicians.

We are continuing to update our voter suppression policies and give people accurate voting information.

Our approach to misinformation:
• Remove content that violates our policies.
• Reduce the spread of false content rated by fact-checkers.
• Inform people with more context and labels on misinformation.
Like the other major internet platforms and most media, however, Facebook does not fact-check direct speech from politicians.

FREE EXPRESSION & ACCOUNTABILITY
We believe that, in mature democracies with a free press, political speech is the most scrutinized speech there is. People should be able to hear from those who wish to lead them; by limiting speech, we would leave people less informed about what their elected officials are saying. Accountability only works if we can see what those seeking our votes are saying, and it is scrutinized and debated in public, even if we viscerally disagree with what they say.

EXEMPTING DIRECT POLITICAL SPEECH
If a claim is made directly by a politician on their Page, in an ad or on their website, it is considered direct speech and is ineligible for our third-party fact-checking program.

WHAT CAN’T POLITICIANS SAY?
Politicians can’t violate our Advertising Policies or Community Standards: they can’t incite violence or use hate speech, and they can’t spread misinformation about where, when or how to vote. Their ads are also held to a higher transparency standard; they must include disclaimers and are made publicly available in the Ad Library for seven years. This helps ensure that politicians are held equally accountable for their words at the polls.

Our approach with politicians: we believe that in a democracy, people should decide what is credible, not technology companies; we don’t believe a private company should censor politicians or the news. Our approach is grounded in our fundamental belief in free expression.

Scale opportunities to drive business outcomes. Own your brand voice. Be a part of the conversation.