They might refashion their logos for a short stint, but social media sites have long been a haven for unabashed bigotry and racist rhetoric. Facebook and WhatsApp groups championing hatred and peddling propaganda have even played decisive roles in shaping governments and tipping the consensus in favour of fascist ideals. Unfortunately, the belated and modest regulation of misinformation and hate speech on social media in recent times might prove to be too little, too late.
Under the promise of “free speech”, racism has lost all its veiled nuance: bigots can spew vitriol online under the cloak of anonymity. In the past few years, however, even anonymity has ceased to be a prerequisite for espousing problematic opinions; many have embraced their hateful views publicly and wear them like a badge of honour.
Yet social networking sites have done little to upset the alt-right and its ilk, and have instead lent them a tacit seal of approval. Giving unbridled xenophobia and bigotry a platform is by no means a display of neutrality. A bid to listen to “both sides” of the argument is disconcerting and even dangerous when it grants equal credence and plausibility to a side vehemently arguing for the denial of the other’s basic human rights. A lack of condemnation and feeble reprimands for prejudice only embolden people entrenched in these views, who then proudly indulge in them in the real world.
In many countries, perpetrators of white supremacist attacks have found their roots in online communities, and some have even taken to the web to broadcast their atrocities. Social media was meant to be a tool to mobilise change and broaden freedom of expression. Instead, it went from a means of emancipation to a hostile environment replete with trolls spewing venom, a microcosm of the real world.
While lax moderation of hate speech can be attributed to prolonged indifference by private companies, the patchwork of international purview and jurisdiction cannot be absolved of blame. Social media platforms currently police content through a combination of artificial intelligence, user reporting and a dedicated staff of content moderators, who are encumbered by a high volume of disturbing posts. The opacity of social media companies regarding censorship and inappropriate content is matched by divergent global policies for tackling this contemporary evil.
In the United States, internet companies are largely protected from liability for harmful or objectionable speech by their users, owing to the broad umbrella of the First Amendment and Section 230 of the Communications Decency Act (CDA) of 1996.
In contrast, Germany, mindful of its history, has implemented some of the most stringent laws against online hate speech in the world. Many have criticised the German model as hyper-vigilant and a form of overzealous policing, as it supersedes public authority and deputises private corporations as arbiters when it comes to doling out censorship. Such complications have made it difficult to implement a uniform framework and safeguard users from threats of conservative extremism.
Despite the demonstrable role played by social media in catalysing violence against immigrants and minorities, the lack of global cooperation and the reluctance of social media sites to alienate authoritarian governments have been a bitter betrayal of the camaraderie and global citizenry these platforms claim to serve. While these sites have drawn public ire for their ignominious privacy violations, the same cannot be said of their role in galvanising racial and religious hatred.
However, with a recent surge in the anti-racism movement, several companies have pulled their advertisements from Facebook, demanding greater regulation of hate speech, while Twitter has begun to flag tweets that spread misinformation, not even sparing United States President Donald Trump. These changes are welcome and a hopeful reckoning, ushering in an era of accountability and a virtual experience free from venomous tirades and hate-mongering.
Social media has played an increasingly crucial role in shaping public perception, one that often overshadows the world’s major parliaments. The unchecked, consequence-free dissemination of incendiary content and death threats is not a fringe phenomenon but a threat to democracy and safe expression.
World governments need to work in tandem with these sites to establish a legal framework that protects nearly 2.95 billion users while upholding electoral integrity, user privacy and democratic principles.