In a bold move, Pinterest has blocked search results for terms that are known to proliferate anti-vax propaganda. It’s refreshingly no-nonsense, without the cringeworthy dance that websites usually do (“we’re a platform, not a publisher,” they echo).
While taking away a channel for misinformation is good, it spurs a natural discussion around censorship and expression, and the role that “private” platforms increasingly must play in moderating their content.
For the past decade, websites have been forced to grapple with a growing concern: misinformation campaigns. As these sites increasingly become a breeding ground for bad-faith actors who use the ultra-connectedness of these systems to mainstream fringe ideas, we the public are starting to have the uncomfortable conversation: should sites ban specific content?
More generally: should we censor some forms of speech?
I want to propose something that I’m a little afraid to propose: that this isn’t even the question we should be asking. The question we should be asking is: which kinds of speech should be limited, in what contexts, and how do we decide that in a fair and just way?
I’ll be honest: I’m nervous to try and navigate a discussion around free speech. It’s tricky to pick apart the nuances of a value so near and dear to our hearts. But if the state of the internet today is anything to go by, we cannot keep shying away from these hard questions. It’s time to look the gift horse in the mouth.
In the western world, the concept of “freedom of speech” is a cow so sacred, it feels like the last idea valued across the political spectrum. But the digital age, like a magnifying glass upon society, has brought the nuances of this concept into the foreground.
In meatspace, the power of speech has historically been limited to those with access to (or the means to reach) large audiences. And even now, in our “free” country, speech is often policed. In this kind of world, “freedom of speech” is an important right because it is so easily taken away.
In the digital realm, the barrier for entry for speech has been lowered. This is the great “democratisation of media” — one that has lifted marginalised communities and niche groups out of the shadows. It has enabled isolated people to find their communities, and stand up for their own humanity without fear of physical violence. This effect is objectively good.
Accessibility to speech is a neutral feature, though. The downside is that with a lower barrier to entry, non-human actors like bots are now in play. This is where our seemingly tidy threads start to unravel.
As the technology for mimicking humans on the internet gets better, this great democratisation of media can be exploited for fringe and extremist propaganda. One can now force ideas into the mainstream, by creating hundreds of thousands of bot accounts, indistinguishable from real people, to proliferate and fake the popularity of their own speech. One can enter into discussions in bad faith — not due to one’s true beliefs, but as a way to radicalise different groups against each other.
The democratisation of media is important, but easily exploited for nefarious ends.
What are we to do?
“Freedom” is a vague, ambiguous concept. It’s easy to find multiple contradictory definitions. For this piece of writing, I will take freedom in its purest form: without limitation and without constraint. (More on why later.)
Within this framework, it’s easy to see how maximising one person’s freedom necessarily minimises another’s. Giving absolute freedom to any one person limits the freedoms of others around them. For example, if Alice is absolutely free to commit acts of violence, it impedes Bob’s ability to move freely around her.
Therefore, true, absolute freedom cannot exist.
While “freedom” as an idea seems like an untouchable, inalienable right, it’s important to realise that we already limit freedoms all the time. We institute laws; we tax certain goods and services. And if someone tries to break these limits (crime), they might have more freedoms taken away (movement, possessions). We knowingly and happily sacrifice some freedoms in exchange for stability and prosperity.
When it comes to free speech, the same ideas apply. One cannot create a world where everyone’s speech is absolutely free. Hate speech, for example, is a type of speech that limits the free speech of others. And we already impose limits on speech with content moderation and community guidelines.
The free marketplace of ideas
If one were to imagine a utopia where every individual’s speech is maximally free, what would it look like?
It would look a lot like 4chan.
This is the true “free marketplace” of ideas. The barrier to entry is essentially zero, and truly nothing is off limits.
In more familiar communities outside of anonymous boards like 4chan, conversations are loosely bounded by an Overton window: what is inside is accepted, what is on the border is debated, and what is outside is extreme. This gives our conversations with each other a common foundation. There’s a lot to be said about how the norms of the Overton window are being destroyed — often by tactics reminiscent of chan culture.
One defining feature of chan culture is arguments in bad faith. This is when the person speaking does not have to believe the contents of what they are speaking. What matters is who “wins”, who lasts, and who gets the most attention.
When speech and ideas are untethered from the consequences and limitations of reality, the ideas themselves start to become meaningless. Ideas are no longer values or ways to live by, but weapons to don and discard for eyes and ears in the online noise machine.
Back to basics
To thoroughly interrogate the concept of free speech, we must go back to asking: Why do we value freedom of speech at all?
We value freedom of speech because we recognise our thoughts and experiences are incomplete. We recognise our lives and livelihoods are made better by other people’s input, even if it sometimes makes us uncomfortable. In aggregate, hearing diverse ideas progresses our society forward.
We value freedom of speech because we believe there exist ideas and information that are important to the public interest and wellbeing, and that these ideas and information may be censored.
If we value free speech because we value better ideas, then we should stop thinking about maximising individual free speech, and start maximising the speech of the whole populace. One could draw an analogy to competition (“antitrust”) laws, where limits on some things promote more numerous, diverse and better other things.
Definitions, Part II
The common discourse these days holds that censorship by governments is wrong, but that censorship by private companies, even ones with massive user bases like Facebook or YouTube, is not an infringement on free speech.
It seems to me that this is a lazy stance. It’s easy to say that “censorship only counts when it is a state actor doing it”, when it’s right-wing extremist content that Facebook and Twitter are taking down. Ask yourself: would you feel the same way about Facebook removing the posts and accounts of prominent Black Lives Matter activists?
I would argue that censorship is a matter of size and influence, not of whether the censoring body is a public or private entity. And since Facebook is bigger than most countries, their content moderation is as much censorship as the content moderation of a similarly-sized actor; say, China.
Censorship and content-moderation are two sides of the same coin. They are the same act, but will be classified as right (“content-moderation”) or wrong (“censorship”) depending on your opinions towards the content.
This is a hard truth for many of us to face, so take a moment.
In the same vein, one will observe people insisting that freedom of speech does not mean freedom of reach. The act of speech, and the act of speech on a platform, are different — one is guaranteed while the other is not.
It is true that no one is entitled to a large, receptive audience. But before I start muttering “if a tree falls in the forest…“, I would like to point out that once again, this is a stance that sidesteps the problem. “No one is guaranteed a platform” is the same argument that could be made in favour of deplatforming sex workers (which — for the record — is dangerous to their physical safety).
While it is true that no one is entitled to a platform, it may very well be the case that some kinds of expression should have their access protected. Expression that ensures safety, expression that informs, and yes, even expression that (in good faith) challenges ideas in a way that we might not be comfortable with.
The concept of free speech is so dearly held in our hearts, that instead of questioning the nuances of the idea itself, we are defining our way out of the problem. The idea that not all speech is equally valuable is uncomfortable, and so we sidestep the issue by maintaining that we value free speech, but only when it is defined in a certain, special way.
In his pop-psychology book, Thinking, Fast and Slow, Daniel Kahneman describes a commonly used heuristic called substitution.
Humans are naturally economical with our mental resources. When faced with a difficult problem, it is common for us to substitute that problem with a similar, easier problem. Most times we don’t even realise we’re doing it.
When it comes to discussions around free speech and censorship of ideas, we tend to substitute in an easier question: Is freedom of speech a good thing? To which the answer is almost always, “yes, of course it is, you fascist”. This frees us from having to think about the harder question: If moderation of speech and expression is necessary to protect our values and ourselves, how do we best do it?
This is the question that the likes of Facebook, Google and Twitter are trying to answer, and struggling to do so without causing horrible harm.
I’m writing this because I want us as a society to start thinking about the harder problem: not whether speech in general is inherently good or bad, but what kinds of speech are good and which kinds are bad? How do we find consensus in a way that is fair and just? How do we determine these things without our own biases and blind spots getting in the way? How do we navigate this question while being cognisant of the historical horrors committed under this same banner?
I’m writing this because I believe we can hold two ideas in our heads at the same time: one, that the value of “freedom of speech” is incredibly important and large bodies such as governments and corporations should be held to a standard set by these values; and, two, that the practical implementation of pure, absolute “freedom of speech” is not only impossible, it may be undesirable.
Where to from here?
For pretty much my whole life, I subscribed wholly to the idea of absolute free speech. I stood up for the rights of those to espouse speech I vehemently disagreed with. I truly believed in “the marketplace of ideas.”
It is deeply uncomfortable to think about which kinds of speech should be allowed and which kinds should not.
Writing this very essay was an exercise in struggle. I am inescapably aware of the ways that limiting speech and expression for the public good has been corrupted and used unjustly and immorally in the past, and how it could easily be co-opted to do harm in the future. I almost did not write this because of that fear.
But every time I catch up with the news, I am reminded of all the ways our shining platforms for open expression and connection have been co-opted by actors in bad faith. How memes and conspiracy theories have devastating real world consequences. At a point in history where it has never been more important to lucidly care about the world around us (ahem — climate change), the world feels like it’s crashing down into a hole of nihilism.
Repeating the words “free speech matters” is avoiding the problem. Redefining what “free speech” is, over and over again until we can sleep at night, is avoiding the problem. As our AI fails to moderate our content and our humans are being traumatised to fill in the gaps in technology, we need serious, good faith discussions on the categorisation, process and policing of speech and expression.
References and further reading:
- Julia Carrie Wong on Pinterest blocking vaccine-related searches
- Casey Newton on the human cost of content moderation
- An investigation into “data voids” by danah boyd and Michael Golebiewski
- This Radiolab episode about content moderation on Facebook
- Isaiah Berlin’s two concepts of freedom
- Tolerance is not a moral precept, recommended to me by Tom
Here, I use the terms “speech” and “expression” interchangeably. It is worth noting that “speech” is a legal term unique to the United States, and comes with its own history of laws and precedents. In New Zealand, we use the term “freedom of expression”, which is arguably a better term. Shoutout to Oliver and Merrin for useful discussion around this!
Illustration by Pepper Curry. This post was originally published in my newsletter.