With the world’s battle against coronavirus ongoing, WhatsApp has been fighting a similar battle of its own: fake news. The World Health Organisation described the spread of misinformation online as an “infodemic”.
As more of our communication has moved online in the past year, social media and messaging apps have powered our conversations. In doing so, they have unintentionally enabled the rapid and dangerous spread of misinformation.
Why misinformation is dangerous
Misinformation can cause real-world damage to individuals, and to society as a whole. In 2016, a man walked into a pizzeria with an assault rifle. He believed Hillary Clinton was running a child-trafficking ring in the pizzeria, and he fired several rounds. Thankfully, no one was injured. It was, however, a significant moment, as it demonstrated the real-world consequences of fake news.
Fast forward a few years, and misinformation has spread from just one pizzeria into a global crisis. When the coronavirus pandemic erupted, false claims that linked the virus to 5G and Bill Gates took off online, and many social networks faced scrutiny for allowing the spread of misinformation on their platforms.
This backlash prompted companies to act quickly by enforcing stricter moderation and promoting trusted sources to more users. The strategy worked on social networks, but it couldn't be implemented by messaging apps, which carry additional layers of privacy.
This posed an entirely new challenge for messaging apps, particularly WhatsApp: how to minimise the spread of misinformation without moderating private chats.
How WhatsApp is fighting misinformation
To its credit, WhatsApp didn't just sit idly by while misinformation spread on its platform. As a starting point, the popular messaging app began marking messages that had been forwarded many times with a 'double arrow' icon. These frequently forwarded messages can only be passed on to one chat at a time. WhatsApp also limited all other messages to five forwards at a time.
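The policy above boils down to a simple rule. The sketch below models it in Python; the thresholds and function names are assumptions for illustration, based only on what WhatsApp has described publicly, not its actual implementation.

```python
# Hypothetical model of a forward-limit policy like the one WhatsApp
# described. Thresholds are assumptions for illustration only.

FREQUENTLY_FORWARDED_THRESHOLD = 5  # forwards before a message is "frequently forwarded"
NORMAL_FORWARD_LIMIT = 5            # chats an ordinary message can be forwarded to at once
FREQUENT_FORWARD_LIMIT = 1          # chats a frequently forwarded message can reach at once


def is_frequently_forwarded(forward_count: int) -> bool:
    """Whether the message would carry the double-arrow icon."""
    return forward_count >= FREQUENTLY_FORWARDED_THRESHOLD


def allowed_forward_targets(forward_count: int) -> int:
    """How many chats the message may be forwarded to in one go."""
    if is_frequently_forwarded(forward_count):
        return FREQUENT_FORWARD_LIMIT
    return NORMAL_FORWARD_LIMIT
```

Note that the client can enforce this without reading message content: only a forward counter travels with the message, so end-to-end encryption is preserved.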
WhatsApp can't introduce moderation into chats, especially considering the privacy concerns its users have. Therefore, restricting message forwarding, the most common way misinformation spreads, is one of its only options.
Shortly after the measures were introduced, WhatsApp reported a 70% decline in highly forwarded messages. This approach helped, but it didn’t completely eradicate fake news.
In fact, WhatsApp recently faced a misinformation crisis of its own.
WhatsApp has been sharing data with Facebook for years, but rumours that the company had only just started this practice quickly spread, leading to an exodus of users from the app. Ironically, most of this misinformation spread through WhatsApp itself.
What else can WhatsApp do?
WhatsApp could add secure on-device machine learning that detects whether a message contains fake news and attaches a label refuting it, much like Twitter does. This could be done by analysing keywords and placing a prompt underneath messages containing them.
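To make the idea concrete, here is a deliberately simple sketch of keyword-based matching. The claim list and prompt text are invented for illustration; a real system would be far more sophisticated and, crucially, would have to run entirely on-device so that messages stay end-to-end encrypted.

```python
# Toy sketch of keyword-based misinformation prompts. The claims and
# fact-check text below are hypothetical examples, not a real dataset.
from typing import Optional

MISINFO_CLAIMS = {
    "5g causes coronavirus": "Fact check: there is no link between 5G and COVID-19.",
    "vaccine contains microchip": "Fact check: vaccines do not contain microchips.",
}


def misinformation_prompt(message: str) -> Optional[str]:
    """Return a fact-check prompt if the message matches a known claim."""
    text = message.lower()
    for claim, prompt in MISINFO_CLAIMS.items():
        # Naive matching: every word of the claim appears in the message.
        if all(word in text for word in claim.split()):
            return prompt
    return None
```

Even this crude approach shows why the privacy perception problem arises: the prompt appears under the user's own message, which can feel like the message has been read, even when the analysis never leaves the device.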
While this could help tackle misinformation, it could also cause WhatsApp to face privacy-related backlash. Even if the analysis is done offline and messages remain encrypted, people seeing prompts under their messages could get the impression that the contents of their messages are not private, even though they are. A company already facing immense privacy backlash cannot afford to take such a risk.
In August, WhatsApp announced a feature called Search the Web. It allows users to quickly search the contents of a message online to verify its accuracy. The feature is currently being trialled with some users but has not officially rolled out yet.
WhatsApp is in a tough position, where it is facing scrutiny both for the spread of misinformation and for its lack of privacy. Solving either of these problems will just make the other one more prominent.
My family and friends trust misinformation they receive online – what can I do?
Unfortunately, it seems like there isn’t much more WhatsApp can do to truly combat fake news. So what can we do to help?
The most important thing we can do is to point out misinformation when we see it.
Often, fake news circulates within like-minded communities, making it hard for other people to take notice of it. However, fake news on WhatsApp doesn't discriminate, as it often gets forwarded to everyone.
Therefore, it’s your responsibility to point out misinformation when you receive it and urge the sender to send a follow-up correction to their contacts. WhatsApp has a handy page on tips to deal with misinformation on the app.
We can also share accurate news sources with family and friends who are susceptible to misinformation. One of the reasons people believe fake news is that they see far more inaccurate stories online than accurate ones, leading them to subconsciously believe the false information. For example, someone who receives ten fake messages a day claiming the coronavirus vaccine is harmful, and only one message accurately describing its benefits, is likely to believe the fake ones simply due to quantity.
Therefore, consistently sharing accurate news with family and friends could reverse that effect.
The rise of misinformation online is extremely dangerous. A few malicious people might start the wave, but it is quickly spread by people with good intentions too.
Often, people spread fake news because of real fears they have. Instead of dismissing those fears, we can try to understand them and better empathise with the people who hold them. This guide from USA Today offers great tips on how to achieve this.
If you have any tips on how you combat fake news, share them with us; we'd love to hear them. And if you enjoyed this article and think it will benefit the people you care about, why not share it with them online… or maybe on WhatsApp?