
The Age of Misinformation - Social Media: How It Has Fractured Global Politics

As featured in Edition 39.


BY SÁRA KENDE (3rd year - Politics and International Studies - Hungary)


“Zuckerberg was an incel at 19 and now we can’t have democracy apparently” – reads a post from December 2020, liked by 58,000 users on Tumblr, one of the few social media platforms not owned by Facebook. The post refers to how the social media giant, whose 2003 precursor was a site ranking Mark Zuckerberg’s female classmates by their looks, has since become a powerful force shaping international politics. But is the relationship between social media and the erosion of democracy really that straightforward? Reality might be a bit more complicated.


Around the time of its inception, social media was celebrated as an “inherently liberating” technology: free from centralised control, its communicative power enabled easy access to information, political mobilisation, and free speech. Mass communication, which in its traditional form was controlled by elites who decided what information the public would receive and have access to, was transformed completely when the Internet gave everyone the power to create, share, and find content relevant to them. Conventional wisdom held that the free and open information environment brought about by social networking technologies would promote democratic outcomes and enhance a liberal, deliberative democracy.

Initially, this seemed true. Social media helped protesters organise their activities, mobilise support, disseminate information, and raise awareness during the Arab Spring uprisings in the early 2010s. More recently, social media helped connect pro-democracy activists and protesters in Hong Kong, Thailand, and Taiwan. The ‘Milk Tea Alliance’ started in April 2020, when Chinese social media users attacked a Thai celebrity couple for expressing support for the Hong Kong and Taiwan independence movements, which led to an online ‘battle’ between Thai and Chinese social media users. Since then, the Milk Tea Alliance, named after popular drinks in the region, has evolved from an online meme to a leaderless pro-democracy protest movement across Southeast Asia, spreading to Myanmar, South Korea, the Philippines, India, Malaysia, Indonesia, Iran, and even Belarus. The pro-democracy, youth-led movements in these countries share knowledge, inspire one another, and stand in solidarity with each other online in their similar struggles against censorship and authoritarianism.


There are also other, less direct ways in which social media fosters democracy and the freedom of information. Social media can be used to spread or fact-check information in environments where elites aim to stifle critical voices or deceive the public.


In China, where censorship of both online and offline media is notoriously strict, social media users resort to using code words or images to replace prohibited words and phrases. Critical social media users in China used code words, such as WH (replacing Wuhan), or F4 (the name of a Taiwanese boy band, but also used mockingly to refer to the four politicians who were in charge of the initial cover-up of the coronavirus outbreak), to discuss Covid-19, which was extensively censored on Chinese social media at the beginning of the outbreak. They also use emojis, for example the rice (pronounced “mi”) and bunny (pronounced “tu”) emojis, which together sound out “me too”, to address the #MeToo campaign and issues of sexual harassment.


Bellingcat, the online investigative website specialising in open-source intelligence, is the prime example of ‘online sleuthing’: the practice of making use of social media to verify claims made by powerful actors. Open-source investigators use publicly available data on the Internet, including photos and videos shared on social media, to fact-check official narratives. Bellingcat attracted attention with its investigations into the use of chemical weapons in the Syrian Civil War, the Skripal poisoning in Salisbury, and the downing of Malaysia Airlines Flight 17 in Ukraine. In all of these cases, Bellingcat investigators worked with open-source information, such as pictures and videos uploaded by protesters and soldiers, satellite imagery, and public databases, to identify who was responsible for the attacks and to debunk disinformation campaigns aimed at covering up these attacks.


However, social media is just as often used with anti-democratic intentions. This came to light most prominently following the 2016 US Presidential elections. In the run-up to the elections, from June 2015, approximately 11 million American social media users were exposed to Facebook advertisements generated by Russian agents who aimed to influence the outcome of the elections. Facebook’s advertising tools, which collect and aggregate data about users’ habits and interactions online, allowed Russian agents to target American citizens with specific messages appealing to their values and identities. Even more significant was the impact of the free posts and pages created by agents and bots, which reached more than 126 million users on Facebook, Twitter, and YouTube. The case underlines how social media platforms are designed in a way that makes them vulnerable to exploitation for spreading propaganda; Russian agents did not need to hack any of these sites, they simply exploited the infrastructure already in place. However, the 2016 election is not the only example of social media being used to push fringe and populist political ideologies.


By now, it is common knowledge that the algorithms underlying the most popular social media sites nurture addiction to ensure that users spend as much time online as possible. To maximise the time users spend on the sites, social media algorithms track what content users engage with and how they engage with it: what photos they like and comment on, who they follow or view, and even how long they spend looking at certain videos or posts. Algorithms gather this information, rank what content is most ‘relevant’ to an individual, and deliver tailored content based on the user’s preferences. This creates ‘echo chambers’ or ‘filter bubbles’, where users are exposed to more people, news, and information that conforms with what the algorithm has identified to be their pre-existing values or political ideology. Engagement-based ranking creates an environment that is especially favourable to sensational, extreme, or polarising content.
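The feedback loop described in this paragraph can be sketched in a toy model. This is purely illustrative: the topics, scores, and the assumption that a user engages only with the top-ranked post are inventions for the sketch, not a description of any platform’s actual system.

```python
from collections import Counter

def rank_feed(posts, interest_counts):
    # Score each post by how often the user has previously engaged with
    # its topic, plus a base score for how sensational the post is;
    # the feed shows the highest-scoring posts first.
    def score(post):
        return interest_counts[post["topic"]] + post["sensationalism"]
    return sorted(posts, key=score, reverse=True)

def simulate(posts, rounds=5):
    # Each round, the user engages with the top-ranked post, and that
    # engagement feeds back into the next round's ranking.
    interests = Counter()
    for _ in range(rounds):
        feed = rank_feed(posts, interests)
        interests[feed[0]["topic"]] += 1
    return interests

posts = [
    {"topic": "politics", "sensationalism": 2},
    {"topic": "sports",   "sensationalism": 1},
    {"topic": "cooking",  "sensationalism": 0},
]
```

Running `simulate(posts)` shows the loop closing on itself: the most sensational topic wins the first round, and every engagement after that only strengthens its lead, so the feed converges on a single topic. That is the ‘filter bubble’ dynamic in miniature.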

Engagement-based ranking of content and one-sided opinions can be a dangerous combination, as Facebook whistleblower Frances Haugen warned in her testimony to US Senators in October 2021. Haugen accused Facebook of fanning ethnic violence in Ethiopia and Myanmar, where posts inciting and glorifying violence against ethnic minorities gained traction and led to real-world violence. She claimed that the social media giant’s strategy for combating misinformation was ineffective because 87% of the funds were spent on monitoring English-language content, even though only 9% of the site’s users are English speakers. Shortly after Haugen’s testimony, in December 2021, survivors of the Rohingya genocide sued Facebook for facilitating violence by allowing its algorithms to amplify hate speech and failing to delete posts calling for violence. Facebook admitted in 2018 that it had not done enough to prevent violence against the Rohingya minority, stating that “Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence”.

States have delegated significant power to companies to moderate speech and information online for the past two decades. However, growing concerns over the spread of disinformation, propaganda, extreme and polarising content, and online hate, as well as the recognition of the immense power social media platforms have in shaping the global flow of information, might push states to reassert their authority in this space.

This will not be without problems, though. State regulation of what can and cannot be posted online is a slippery slope towards censorship. However, it is also clear that without any regulation, social media is a breeding ground for radicalism, fake news, and conspiracy theories. One possible solution could be content labelling. In 2018, YouTube became the first social media platform to introduce a label for state-sponsored media channels. During the 2020 US Presidential elections, Twitter also introduced a warning label, with a link to further information, for posts containing misleading content. To curb rapidly spreading misinformation around the Covid-19 pandemic and vaccine scepticism, Instagram also labels posts containing related keywords with a link to information confirmed by the WHO or the CDC.

This is a good start; however, it may still leave social media platforms with too much power to regulate information. Draft legislation in the EU would require platforms to assess and mitigate the risks posed by their algorithms, while the UK’s Competition and Markets Authority has proposed requiring Facebook to give users a choice over whether to accept targeted advertising. The Covid-19 pandemic, among other examples mentioned in this article, has shown that online misinformation has grave real-world consequences, and there is thus an increasing push for change.

It remains to be seen whether social media can be reformed in a way that allows democracy to flourish.



Image 1: Flickr (Alisdare Hickson)

Image 2: Flickr (Anthony Quintano)

