When Twitter banned Donald Trump from its platform, it became clear that the scope and power of social media platforms had far exceeded what anyone could have predicted. The “leader of the free world” was using a free app, on the same smartphones many other Americans carry, as his go-to means of presidential communication. And this only scratches the surface.
Twitter, the company behind the app, effectively had the power to silence, or nearly silence, the “leader of the free world” by deleting his account at will. This realization was humbling not only for Trump but for society at large: a single player, a technology company, had the power to alter the discourse of an entire nation.
These ramifications raise the question: how exactly did Twitter make such a monumental decision?
Though Twitter made the decision at will, it did not make it on a whim. The deliberations were difficult, even painful. It must not be forgotten that the ban came immediately after the January 6 riots. The nation was shaken, as were Twitter’s employees, yet against this backdrop the company was forced to make “the most important decision in the social media service’s 15-year history.”
Twitter was weighing the interests of two camps: one that felt Trump’s rhetoric was dangerous enough to the nation to warrant silencing, and another that felt that, however damaging his rhetoric was, silencing a sitting president was too great a restriction on free speech to be justified.
Originally, Twitter sided with the second camp, more for business reasons than political ones. According to analysis by the Washington Post, Twitter was initially reluctant to truly “police” content because “use of Twitter by public figures to break news allowed the company to punch above its weight against giants like Facebook and YouTube.” As a result, Twitter did not significantly censor political content before 2016. That year, however, the platform would be tested by one major event: the beginning of the Trump presidency.
Once Trump became president, he, the world’s most prominent political figure, began to strain this relatively lax enforcement. Twitter’s strategy of “letting politicians be politicians” put its management under fire from both the general public and its own employees. The Washington Post continues: “It was clear that he would use the platform to harass citizens, including a former adviser who was a Black woman, whom he compared to a dog. He also criticized a Gold Star family. He also shared falsehoods and conspiracy theories.” Yet this same strategy built Trump an audience and made Twitter money.
Ultimately, Twitter had to make a decision. Under fire from the public and its employees, it moved from the second camp (letting politicians say anything) to the first (restricting content at times). Twitter decided to strictly enforce a previously drawn, agreed-upon line. As the Twitter CFO noted: “policies are designed to make sure that people are not inciting violence.”
The line seemed agreeable and clearly delineated. The problem is that “inciting violence” is itself subjective. What some deem to be incitement, others do not. And even when two groups agree on a definition, some may feel a given piece of content does not incite violence explicitly enough to warrant a ban.
This is what happened in the Trump fiasco. Some believed he had crossed the line to a sufficient degree; others did not. As with any issue of political polarization, each side saw this subjective decision as an objective one. Thus, when Twitter decided that Trump’s actions sufficiently incited violence to warrant censorship, the public response was, somewhat unreasonably, one of complete outrage.
There was an outpouring of hatred against Twitter and “big tech” in general from those who supported Trump, as people felt that this subjective content-moderation decision was an objective impediment to free speech. These sentiments are tied to the surreal positioning of social media that Trump’s banning revealed: platforms like Twitter controlled so much of public discourse, and people were scared.
In response, Trump created a new platform, “TRUTH Social,” to cater to those sympathetic to his “anti-censorship” side of the issue. The platform is set for its “full rollout in 2022.” (It has already made the news, however, as “hackers had gained access to a private version of the social network.”) Per Trump, the goal of the platform is to “stand up to Big Tech,” reflecting how it is built on the outrage and fear of “Big Tech” that followed disagreements over censorship. Make no mistake, though: the platform is more a business venture than anything else. In fact, it plans to roll out “a subscription video-on-demand service called TMTG+ that will feature entertainment, news and podcasts, according to the news release.”
It is also important to state the obvious: the division around censorship, and thus the new TRUTH platform, has deep partisan linkages. Trump envisions TRUTH as “a rival to the liberal media consortium,” underscoring that the anti-“Big Tech” group is overwhelmingly conservative.
This raises the question: will more liberal groups direct at TRUTH the same outrage that conservative groups directed at Twitter?
The overwhelmingly likely answer is yes. Already, TRUTH carries a negative connotation among liberal or “established” technology groups and analysts. There may be outrage over the platform spreading conspiracy theories or provocative misinformation. And if the platform does censor, it may end up censoring controversial liberal messaging that it “deems violates its guidelines.” In any case, it seems impossible for the platform to keep everybody happy.
In truth, whenever companies make censorship decisions, they are bound to anger somebody, at the very least the person being censored. As these decisions become less and less clear-cut, more and more people will be angered. So how can companies moderate content in the most agreeable and effective way possible?
The optimal option is to put content moderation in the hands of users. In an abstract sense, content moderation should be democratized, or at least crowdsourced: users vote on whether content violates pre-set guidelines. Though people are bound to disagree, and there may be some “tyranny of the majority,” moderation decisions made this way are far more transparent. Anger can be directed at “society” instead of at specific platforms. Platforms that moderate content in this way are platforms that more people will trust.
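To make the crowdsourced proposal concrete, here is a minimal sketch of what such a voting mechanism might look like. Everything in it is hypothetical and illustrative: the names (`ModerationPoll`, `quorum`, `threshold`), the supermajority rule, and the quorum requirement are assumptions of this sketch, not a description of any real platform’s system.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationPoll:
    """Hypothetical crowdsourced moderation: users vote on whether a post
    violates pre-set guidelines, and the tally, not the platform, decides."""
    post_id: str
    quorum: int = 100          # minimum votes before any decision is made
    threshold: float = 0.66    # supermajority share required to remove content
    votes: list = field(default_factory=list)  # True means "violates guidelines"

    def cast_vote(self, violates: bool) -> None:
        """Record one user's judgment on the post."""
        self.votes.append(violates)

    def decision(self) -> str:
        """Return 'pending', 'remove', or 'keep' based on the current tally."""
        if len(self.votes) < self.quorum:
            return "pending"
        share = sum(self.votes) / len(self.votes)
        return "remove" if share >= self.threshold else "keep"
```

Because the quorum, the threshold, and the tally are all explicit, anyone can audit why a post was removed, which is precisely the transparency argument above: the outcome traces back to the voters rather than to an opaque internal decision.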