
Better Late Than Never: Twitter and Facebook Enact New Civic Misinformation Policies

It’s no coincidence we say that content can go “viral” on social media. Due to the instantaneous nature of online interactions and the extremely broad user networks of platforms like Twitter and Facebook, social media has become a super-spreader of information, where one post can go from three shares to a million in just hours. This virality becomes dangerous when lies and unfounded rumors reach millions of users. This year, viral misinformation about the upcoming election has been viewed on social media by millions of Americans, and the president himself has posted Tweets that undermine public trust in the civic process (see example 1, example 2). Misinformation invoking election rigging, ballot tampering, and voter fraud aims to sow fear and uncertainty in citizens who are entitled to cast their ballots in a free and fair election. Finally addressing the threat that online misinformation poses to the electoral process, Twitter and Facebook, two social media giants, are actively making changes to their platforms to curb its spread.
What strategies are Twitter and Facebook using to respond to the ongoing and imminent threat of misinformation ahead of the election? After a conspicuously long wait, they are finally banning groups and content that promote QAnon (a fringe right-wing conspiracy movement that supports Donald Trump), pointing users to authoritative and credible election-related resources, and implementing a series of new policies for labeling civic misinformation. In this article, I’ll focus on the last change, misinformation labeling, which will most directly affect the role viral misinformation will play in the 2020 election.
Flagging and taking down sensitive and problematic material is nothing new to Twitter and Facebook, but the companies’ policies for regulating political content have recently come under fire ahead of the November 3rd election. On Wednesday, October 14th, Twitter blocked links to two New York Post articles on the grounds that they violated two of its policies: its private information policy (the articles contained personal information such as emails and phone numbers) and its hacked materials policy (the articles contained emails given to the Post by Trump’s former attorney Rudy Giuliani, taken from a hard drive purportedly belonging to Hunter Biden, the son of presidential candidate Joe Biden). Additionally, Twitter locked the New York Post’s Twitter account. That Friday, however, Twitter reversed course and lifted the block on both articles, allowing its users to share links to them again. The company’s executive team announced that going forward, it will “label Tweets to provide context instead of blocking links from being shared on Twitter.”
Oddly enough, Twitter did not comment on the unverified status of the claims and evidence put forth in the New York Post articles, or on concerns that they were related to possible foreign interference campaigns, given the Post’s historical bias towards Republican candidates. Regardless of the articles’ questionable credibility, Twitter’s handling of the situation ignited a fiery debate over the validity of content regulation policies. Many conservative voices on Twitter cited the article ban and the continued lock on the New York Post’s account as violations of the First Amendment. (Twitter announced that, due to a technicality in its policies, it will keep the New York Post’s account locked until the Post takes down the articles.) Other users supported Twitter’s limits on sharing the articles on the grounds of possible misinformation, pointing to the unusual circumstances surrounding the articles’ publication and their potential to interfere in the election.
With the upcoming election and the last one in mind, Twitter made the right choice to limit sharing of the articles. The information in the articles cannot be independently confirmed by other news sources, and if they do contain misinformation, they pose a serious threat to the integrity of the election. This year, online interference in the electoral process is not just a vague possibility; it is already happening on a large scale. It is wise of Twitter to learn from its past mistakes. Its platform, along with Facebook’s and others, was shamelessly exploited by malicious actors during the 2016 U.S. presidential campaign. To give one example, Twitter disclosed in January 2018 that its security personnel believe over 50,000 automated accounts linked to Russia were tweeting election-related content during the 2016 election (according to page 18 of the Senate’s investigation into Russian interference in the 2016 election).
Lastly, to set the record straight, Twitter’s policies do not violate the First Amendment, which only requires the government, not private companies, to respect citizens’ right to free speech. As the Washington Post points out, the First Amendment can also be read as protecting Twitter’s right to decide for itself what to publish on its platform. In any case, the New York Post still has the articles up on its own website, and they can be freely shared via any email or texting app. Perhaps the voices crying “censorship” can step back from Twitter and get a little more inventive with their sharing tactics.

Election Misinformation Policies on Twitter and Facebook
Whether out of innate civic duty or immense public pressure, both tech giants have recently updated their policies on what content will receive labels or face removal from their platforms. Based on my research, the flavors of potential misinformation receiving the most attention from Twitter and Facebook include the following: the legitimacy of the voting process, COVID-19 misinformation relating to Election Day, suggestions of civil unrest associated with the election, and certification of election results.
Implementing misinformation labels
Theories suggesting the illegitimacy of mail-in ballots and rumors of widespread voter fraud are currently running rampant on social media. To mitigate the negative effects of these claims, Facebook said that it will attach an informational label to content seeking to delegitimize the outcome of the election or discuss the legitimacy of voting methods, such as claims that lawful methods of voting will lead to fraud. It will also remove posts that use COVID-19 to discourage users from voting. Twitter offered more details on its approach to content labeling. Broadly, it will “label or remove false or misleading information intended to undermine public confidence in an election or other civic process.” In Twitter’s recent policy update, the company specifically mentioned that it will begin to label unverified information about election rigging, ballot tampering, vote tallying, or certification of election results.
How effective will these new policies be at counteracting the spread of misinformation on these platforms? For one, they address one of the underlying reasons misinformation spreads so easily on social media: the lack of fact-checking. On Twitter and Facebook, posts from your raving QAnon uncle and the New York Times receive no differentiation in terms of credibility. The New York Times earns a rating of “High” for factual reporting from MediaBiasFactCheck.com, while your uncle probably does not. (On the same subject, the New York Post earns a rating of “Mixed.”) Twitter and Facebook’s decision to label blatant misinformation brings them one step closer to a system of credibility ratings for social media accounts with large followings, a move that would support the circulation of information from well-vetted, properly-cited sources. However, it’s too early to know whether misinformation labels will dissuade already-polarized users from believing and sharing false content that supports their worldviews.
Certifying election results
Based on the new policies and features outlined in Twitter and Facebook’s recent announcements, misinformation regarding the official election results is a top-priority issue for both companies. Because of this year’s inundation of mail-in ballots, certified results will likely be delayed as votes are counted on and after Election Day. Both companies seem to be bracing for potentially damaging false claims and widespread confusion about the victor of the election.
Facebook’s official announcement:
“If a candidate or party declares premature victory before a race is called by major media outlets, we will add more specific information in the notifications that counting is still in progress and no winner has been determined. If the candidate that is declared the winner by major media outlets is contested by another candidate or party, we will show the name of the declared winning candidate with notifications at the top of Facebook and Instagram, as well as label posts from presidential candidates, with the declared winner’s name and a link to the Voting Information Center.”
Twitter’s official announcement:
“People on Twitter, including candidates for office, may not claim an election win before it is authoritatively called. To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls. Tweets which include premature claims will be labeled and direct people to our official US election page.”
Both Twitter and Facebook’s policies specifically mention the possibility of candidates themselves declaring premature victory. While the policy applies to both presidential candidates, it seems pointed most squarely at President Donald Trump, who in September said “we’re going to have to see what happens” when asked whether he would peacefully transfer power if he loses the election.
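To make the quoted rule concrete, here is a minimal sketch in Python of the race-call test as Twitter’s announcement describes it. Everything in it (the function names, the RaceCall structure, the label text) is a hypothetical illustration of the policy’s logic, not Twitter’s actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RaceCall:
    source: str                           # a news outlet or a state election office
    is_state_official: bool               # announcement from state election officials
    is_independent_national_outlet: bool  # authoritative outlet making independent calls

def result_is_authoritatively_called(calls: List[RaceCall]) -> bool:
    """Per the quoted policy: a result counts as called if state election
    officials have announced it, or at least two authoritative national
    outlets that make independent election calls have projected it."""
    has_official = any(c.is_state_official for c in calls)
    outlets = {c.source for c in calls if c.is_independent_national_outlet}
    return has_official or len(outlets) >= 2

def label_for_tweet(claims_victory: bool, calls: List[RaceCall]) -> Optional[str]:
    # Premature victory claims are labeled and pointed to the official
    # US election page, per the announcement.
    if claims_victory and not result_is_authoritatively_called(calls):
        return "Election results are not yet final. See the US election page."
    return None
```

Note the two-outlet threshold: under this rule, a single projection, however prominent the outlet, is not enough on its own to permit a victory claim.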
Limiting the spread of misinformative posts
Lastly, Twitter has announced new measures for sharing posts labeled as misinformation (Facebook has yet to announce comparable restrictions). Users who attempt to Retweet a post carrying a misleading-information label will see a prompt pointing them to credible information on the topic before they can share the post with their network. Even stronger warnings and restrictions will be placed on U.S. political figures (including candidates and campaign accounts), U.S.-based accounts with more than 100,000 followers, and accounts that “obtain significant engagement.” If one of these accounts has a Tweet labeled for misinformation, Twitter will require users to tap through a warning before viewing the Tweet and will prevent users from liking, Retweeting, or replying to it. These Tweets will also be de-amplified by Twitter’s recommendation algorithm.
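This tiered enforcement is easiest to see as a small decision rule. The Python sketch below encodes the restrictions as the paragraph above describes them; the Account fields and the follower threshold mirror the policy’s wording, and all names here are illustrative assumptions rather than Twitter’s real code.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Account:
    is_us_political_figure: bool      # candidates and campaign accounts
    is_us_based: bool
    follower_count: int
    has_significant_engagement: bool  # "obtain significant engagement"

def is_high_reach(account: Account) -> bool:
    # Accounts singled out by the policy for stricter treatment.
    return (
        account.is_us_political_figure
        or (account.is_us_based and account.follower_count > 100_000)
        or account.has_significant_engagement
    )

def restrictions_for_labeled_tweet(author: Account) -> Dict[str, bool]:
    """Restrictions applied once a Tweet receives a misleading-information
    label, per the policy as described in this article."""
    if is_high_reach(author):
        return {
            "warning_before_viewing": True,  # users must tap through
            "likes_disabled": True,
            "retweets_disabled": True,
            "replies_disabled": True,
            "deamplified_by_recommendations": True,
        }
    # Everyone else: Retweeting prompts the user toward credible
    # information before the post can be shared.
    return {"prompt_credible_info_before_retweet": True}
```

The design choice worth noticing is that the restriction depends on the author’s reach, not just the content of the Tweet: the same labeled claim is merely friction-gated for a small account but locked down and de-amplified for a large one.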
Twitter’s new sharing restrictions are an improvement over its previous policies because they place new responsibilities on users with wide networks. Political figures, pundits, and celebrities can reach millions of followers with their messages, and this year we’ve seen a number of them use their platforms to spread fraudulent claims about the voting process. Twitter’s choice to push back on the exploitation of its wide-reaching networks should help stem the spread of viral misinformation. Is Twitter’s new policy fair? I would say yes. If influencers on Twitter have the power to instantly share content with millions of followers, then they have a civic responsibility to post factual information, and they should be held accountable when they don’t.
Is there still time?
Unfortunately, these steps arrive too late to reverse the harm already inflicted on the democratic process by online foreign interference in the 2016 election, and this year’s new policies combating electoral misinformation have come only at the most dire point for American voters. Misinformative posts have been shared millions of times and have undoubtedly already done some amount of harm to the civic process. While some might argue that the spread of online misinformation is an inevitable, unlucky side effect of the nature of social media platforms, we should remember that these platforms are not natural phenomena. Twitter and Facebook are human-made enterprises with the potential for change and evolution, a point only underscored by their array of recent policy changes.
Corporate responses to the challenges of misinformation are in the hands of the companies’ management and CEOs, people who hopefully realize that they have the power to address the grave threat online misinformation poses to the stability of the American democratic system. More than ever, the country is in critical need of clear, honest, and accurate information about the civic process. At long last, Twitter and Facebook seem to be taking steps with the greater good in mind.