Content Moderation and the Right to Privacy Cannot Co-Exist
In this article, I explore the inherent tension between content moderation and the right to privacy, as well as the challenges posed by proposed solutions for ethical content moderation. The core issue lies in the dissonance between our digital rights and the demands we place on social media platforms. To moderate content effectively, these platforms must access and analyse everything we post—whether it is a public comment, a private message, or a story. Moderation at that scale means surrendering a significant portion of our privacy: content moderation requires us to compromise our right to privacy, placing the two in fundamental conflict.
The Profit-Driven Nature of Social Media Platforms
Social media platforms are private companies designed primarily to maximise profits for themselves and their investors. They are not inherently structured to protect political expression or uphold democratic values. Our freedom of expression is a right guaranteed by the state, not by private entities. Platforms like X (formerly Twitter), Facebook, Reddit, and Instagram rely heavily on advertising revenue, as most users do not pay for these services. This business model incentivises moderation practices that maximise ad engagement over ethical considerations.
The Role of AI in Content Moderation
The use of artificial intelligence (AI) for content moderation introduces another layer of complexity. AI systems require vast amounts of data to function effectively—data drawn from our private messages, posts, stories, and other interactions. The more data these systems consume, the greater the invasion of privacy. Additionally, the infrastructure needed to support these systems—massive data centres—contributes significantly to climate change, adding an environmental cost on top of the privacy cost.
Moreover, language is dynamic and ever-evolving. Words and phrases that were harmless a decade ago may now carry harmful connotations. The language in digital spaces shifts rapidly, often influenced by real-world events, hate incidents, or cultural changes. How do we ensure that AI systems keep up with these changes? And in how many languages? The challenge and environmental cost of updating and maintaining these systems across diverse linguistic and cultural contexts are immense.
The Human Cost of Content Moderation
A commonly proposed solution is a hybrid approach combining AI and human moderators. However, this approach often overlooks the significant psychological toll on human moderators. While low pay and lack of mental health support are frequently cited as issues, the deeper problem is the unavoidable trauma these moderators endure. Viewing and filtering harmful content daily can lead to severe psychological consequences, including post-traumatic stress disorder (PTSD). Would you willingly subject yourself to such work?
Hate Speech: A Societal Problem, Not a Technological One
Hate speech is not a product of technology; it is a reflection of societal fractures. Throughout history, hate speech has been disseminated through pamphlets, newspapers, radio (e.g., Radio Télévision Libre des Mille Collines during the Rwandan genocide), television, and now social media. Each technological advancement has amplified the scale and speed of its spread, but the root cause remains societal division. Addressing hate speech requires tackling the underlying societal issues.
The Challenge of Defining Harmful Content
Defining what constitutes harmful content is neither easy nor straightforward. Who gets to decide? Governments, tech companies, or civil society? And which segment of civil society? Civil society itself is diverse, with groups holding opposing views on what is harmful or acceptable.
The Profit Motive and the Limits of Accountability
Any strategy that reduces the profitability of social media platforms is unlikely to succeed, as these companies have no incentive to adopt such measures. The destruction or overhaul of these platforms would require a complete reimagining of the digital economy—a system deeply entrenched in capitalism. Holding private companies accountable is insufficient, as their priorities will always align with profit-making, shifting only in response to government pressures or market demands.
Rethinking Our Approach
Given these challenges, I believe it is time to go back to the drawing board. We need to rethink our approach and move beyond content moderation.
I would love to hear your thoughts on the piece and on how we can rethink our approach.
Featured image from Canva’s royalty-free image gallery