“Breaking News: Scientists discover inhaling steam is a cure for coronavirus.” A few weeks ago, this message appeared on my phone’s screen. It was a forwarded message on WhatsApp from a relative in India. A quick Google search revealed that it was a blatantly false statement. However, by the end of the day, I had received that same message twelve times—not a single one of those people thought to confirm the information before forwarding it to me.
Information is more accessible today than ever before, yet at the same time it is becoming harder to discern what is true and what is false. Misinformation is rampant, especially on social media platforms such as Facebook, Instagram, Twitter, and WhatsApp. Sensationalized fake news, for example, was a key factor in undermining truth and sowing chaos in the 2016 US presidential election, the Rohingya genocide in Myanmar, and caste-based killings in India.
Unfortunately, the problem is only getting worse. On February 13, the WHO announced that fake coronavirus claims are fueling an “infodemic”: misinformation about the coronavirus is spreading faster than the disease itself, and WhatsApp is a chief culprit.
Currently, WhatsApp is used by more than a quarter of the global population, including 400 million people in India, its largest market. The service is free, works on nearly every platform, and requires little technological literacy, which is why its usage has exploded in the developing world. One of WhatsApp’s distinguishing features is the ease of message forwarding: with just two taps, a message received in one chat can be sent to any other chat. It is this feature, however, that lies at the heart of the problem. When someone receives a false, sensationalized image, video, or message, they can easily forward it to several groups at once. Much as a disease spreads exponentially through a population, the forwarding feature allows misinformation of every kind to travel quickly.
Agence France-Presse (AFP), an international news agency, has so far identified and debunked more than 348 claims about the coronavirus circulating on WhatsApp, including that garlic is a cure for COVID-19, that face masks should be worn inside out if you’re not sick, that coronavirus is a type of rabies, and that drinking water will prevent coronavirus from sticking to your throat. Unfortunately, people who believe these rumors feel a false sense of security. Convinced that following these steps has made them immune, they go about their lives as usual instead of taking appropriate action to limit their chance of contracting the disease. In India, for example, the entire nation went into a three-week lockdown in response to the coronavirus, and on the first day the government eased the lockdown, the country reported its largest one-day spike so far. Containment in India is driven by people’s fear of the government, not of the disease. It seems apparent that we should aim to stop the spread of misinformation on WhatsApp, just as other social media networks do.
However, content regulation on WhatsApp is not nearly as easy to justify. Unlike a social network such as Facebook, where users post content publicly or semi-publicly, WhatsApp messages are intended to be private. On Facebook, anyone can find your profile through a Google search, and users are more likely to add strangers to their friend network; on WhatsApp, you speak only to people with whom you have accepted or initiated a conversation. There is a higher expectation of privacy on a platform like WhatsApp, and people are more reluctant to have their private messages read than posts they intended for the public. Should WhatsApp be subject to the same level of scrutiny as Facebook even though it is a peer-to-peer messaging platform? This distinct privacy dynamic makes it hard to argue that the same methods should apply to both. Balancing content regulation and privacy is a difficult task, especially for WhatsApp. Nevertheless, regulation seems just as necessary on WhatsApp as on other social networks, if not more so. Even so, it is difficult to achieve.
One of WhatsApp’s defining features is its end-to-end encryption, which prevents any third parties from reading a user’s messages. For some people, such as individuals living under authoritarian regimes, this is the primary reason they choose to use WhatsApp over similar messaging platforms. However, this encryption process makes the detection and regulation of misinformation nearly impossible. Stephanie Hankey, the co-founder and Executive Director of Tactical Tech, explains that “researchers, journalists and Whatsapp themselves can not see the content of messages—they can see who the users are and who they are sending things too [sic] and how often, but they do not know what they are saying. This makes it much more difficult to identify, flag or remove misinformation and to trace its source.”
It appears that the only way to regulate misinformation on the platform effectively is to remove the current encryption system, but WhatsApp does not seem likely to budge anytime soon. Last year, the US and Indian governments called on WhatsApp to make messages traceable to help fight crime and misinformation in their respective countries. In India in particular, WhatsApp was a significant catalyst in mob lynchings across the country. The company refused to take action, stating that creating any “back doors” to its encryption system would put users’ data at risk.
In the wake of the coronavirus infodemic, WhatsApp, despite its obstinacy, has begun to take more aggressive measures to limit the spread of misinformation. Users can now forward a message to only one person or group at a time, a sharp decrease from five (two years ago, the limit was 250).
However, these protocols don’t stop the spread of misinformation. In essence, they are a quick fix that only marginally slows how fast misinformation spreads. In fact, researchers from Brazil’s Federal University of Minas Gerais and MIT found that limits on message forwarding have some impact on delaying misinformation but are “not effective in preventing a message to reach the entire network quickly.” In countries like India, message forwarding has become a normative cultural expectation: it’s an easy way to be active in a group without generating original content. In the case of coronavirus misinformation, many people feel an obligation to share what they believe to be potentially life-saving information with their immediate network, even if it takes a few more taps on their screens. Even more problematically, since users almost always use WhatsApp to converse with people they know (family, friends, colleagues, and peers), they are more likely to believe content they receive on WhatsApp than on other social media networks. It is for these reasons that early last month, Professors Shakuntala Banaji and Ram Bhat of the London School of Economics argued that “without other measures, like being able to report, ban and prosecute users who pass on hateful misinformation, ‘this new measure for much forwarded content will perforce prove to be ineffective.’”
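The researchers’ finding can be illustrated with a toy simulation. The sketch below is a loose model, not a reconstruction of the actual study: the network size, group counts, group sizes, and saturation threshold are all invented for illustration. It models users who belong to chat groups and forward a message to at most `cap` of their groups each round; because even one forward still reaches an entire group, lowering the cap slows saturation by only a few rounds rather than preventing it.

```python
import random

def rounds_to_spread(n_users=2000, n_groups=600, group_size=20, cap=5, seed=7):
    """Toy model of message forwarding through WhatsApp-style groups.

    Starting from a single user, every user holding the message forwards
    it to at most `cap` of their groups per round; all members of those
    groups then receive it. Returns the number of rounds until 95% of
    users have seen the message, plus the final coverage fraction.
    All parameters are hypothetical, chosen only for illustration.
    """
    rng = random.Random(seed)
    # Random group memberships: each group is a random sample of users.
    groups = [rng.sample(range(n_users), group_size) for _ in range(n_groups)]
    member_of = {u: [] for u in range(n_users)}
    for gi, members in enumerate(groups):
        for u in members:
            member_of[u].append(gi)

    start = groups[0][0]   # seed the message with someone who is in a group
    seen = {start}         # users who have received the message
    frontier = {start}     # users who received it in the previous round
    rounds = 0
    while frontier and len(seen) < 0.95 * n_users:
        rounds += 1
        new = set()
        for u in frontier:
            my_groups = member_of[u]
            # The forwarding cap: forward to at most `cap` of your groups.
            for gi in rng.sample(my_groups, min(cap, len(my_groups))):
                new.update(v for v in groups[gi] if v not in seen)
        seen |= new
        frontier = new
    return rounds, len(seen) / n_users
```

In runs of this sketch, dropping the cap from five to one adds only a round or two before the message saturates the simulated network, consistent with the claim that forwarding limits delay misinformation without stopping it.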
Additionally, early last month, a WhatsApp spokesperson stated that the company was testing a new feature in which a button will appear next to frequently forwarded messages (those forwarded at least five times) that allows users to search the information on Google and verify it. This measure is certainly a step in the right direction, but since it hasn’t been implemented yet, we don’t know whether individuals will use it. It is also a reactive measure: the button appears only once content has already spread significantly. WhatsApp should consider trying to stop the spread of fake news earlier in the process, but until then, reactive measures seem to be the best we can get.
Fake news about coronavirus isn’t like regular fake news. For many, believing a false message can be the difference between life and death. WhatsApp has an imperative to both save lives and value its users’ privacy, but in an unprecedented time like this, they can’t do both.