
Private censorship is not the best way to fight hate or defend democracy: Here are some better ideas

A disguised protester films an LAPD officer during the Democratic National Convention in Los Angeles, CA, 14 August 2000; as EFF notes, some regulations on violent content have disappeared documentation of police brutality

Dan Callister/Newsmakers

This statement was originally published on 30 January 2018.

From Cloudflare's headline-making takedown of the Daily Stormer last autumn to YouTube's summer restrictions on LGBTQ content, there's been a surge in "voluntary" platform censorship. Companies - under pressure from lawmakers, shareholders, and the public alike - have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms, and hiring more staff to moderate content. They have banned ads from certain sources and removed "offensive" but legal content.

These moves come in the midst of a fierce public debate about what responsibilities platform companies that directly host our speech have to take down - or protect - certain types of expression. And this debate is occurring at a time in which only a few large companies host most of our online speech. Under the First Amendment, intermediaries generally have a right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn't mean they should.

To begin with, a great deal of problematic content sits in the ambiguous territory between disagreeable political speech and abuse, between fabricated propaganda and legitimate opinion, between things that are legal in some jurisdictions and illegal in others. Or it's content that some users want to see and others don't. If many cases fall into grey zones, our institutions need to be designed for them.

We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the ways that technology now makes possible. No one likes being lied to or misled, seeing hateful messages directed at them, or having those messages flood their newsfeeds. We want our elections to be free from manipulation, and the speech of women and marginalized communities not to be silenced by harassment. We should all be able to exercise control over our online environments: to feel empowered by the tools we use, not helpless in the face of others' use of them.

But in moments of apparent crisis, the first impulse is always to reach for simple solutions. In particular, in response to rising concerns that we are not in control, a groundswell of support has emerged for even more censorship by private platform companies, including pushing platforms into ever-increasing tracking and identification of speakers.

We are at a critical moment for free expression online and for the role of the Internet in the fabric of democratic societies. We need to get this right.

Platform censorship isn't new, hurts the less powerful, and doesn't work

Widespread public interest in this topic may be new, but platform censorship isn't. All of the major platforms set forth rules for their users. They tend to be complex, covering everything from terrorism and hate speech to copyright and impersonation. Most platforms use a version of community reporting. Violations of these rules can prompt takedowns and account suspensions or closures. And we have well over a decade of evidence about how these rules are used and misused.

The results are not pretty. We've seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech.

Platform censorship has also swept up images and videos that document atrocities and make us aware of the world outside our own communities. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam War photo.

These takedowns are sometimes intentional and sometimes mistakes, but like Cloudflare's now-famous decision to boot the Daily Stormer off its service, they are all made without accountability or due process. As a result, most of what we know about censorship on private platforms comes from user reports and leaks (such as the Guardian's "Facebook Files").

Given this history, we're worried about how platforms are responding to new pressures. Not because there's a slippery slope from judicious moderation to active censorship - but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives and is enforced by minimally trained, overworked staff and hidden algorithms. Doubling down on this approach will not make it better. And yet, no amount of evidence has convinced the powers that be at major platforms like Facebook - or in governments around the world. Instead many, especially in policy circles, continue to push for companies to - magically and at scale - perfectly differentiate between speech that should be protected and speech that should be erased.

If our experience has taught us anything, it's that we have no reason to trust the powerful - inside governments, corporations, or other institutions - to draw those lines.

As people who have watched and advocated for the voiceless for well over 25 years, we remain deeply concerned. Fighting censorship - by governments, large private corporations, or anyone else - is core to EFF's mission, not because we enjoy defending reprehensible content, but because we know that while censorship can be and is employed against Nazis, it is more often used as a tool by the powerful, against the powerless.

First casualty: Anonymity

In addition to the virtual certainty that private censorship will lead to takedowns of valuable speech, it is already leading to attacks on anonymous speech. Anonymity and pseudonymity have played important roles throughout history, from secret ballots in ancient Greece to 18th century English literature and early American satire. Online anonymity allows us to explore controversial ideas and connect with people around health and other sensitive concerns without exposing ourselves unnecessarily to harassment and stigma. It enables dissidents in oppressive regimes to tell their stories with less fear of retribution. Anonymity is often the greatest shield that vulnerable groups have.

Current proposals from private companies all undermine online anonymity. For example, Twitter's recent ban on advertisements from Russia Today and Sputnik relies on the notion that the company will be better at identifying accounts controlled by Russia than Russia will be at disguising those accounts to promote its content. To make the ban truly effective, Twitter may have to adopt new policies to identify and attribute anonymous accounts, undermining both speech and user privacy. Given the problems with attribution, Twitter will likely face calls to ban anyone from promoting a link to suspected Russian government content.

And what will we get in exchange for giving up our ability to speak online anonymously? Very little. Facebook for many years required individuals to use their "real" name (and continues to require them to use a variant of it), but that didn't stop Russian agents from gaming the rules. Instead, it undermined innocent people who need anonymity - including drag performers, LGBTQ people, Native Americans, survivors of domestic and sexual violence, political dissidents, sex workers, therapists, and doctors.

Study after study has debunked the idea that forcibly identifying speakers is an effective strategy against those who spread bad information online. Counter-terrorism experts tell us that "Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses."

We need a better way forward.

Step One: Start with the tools we have and get our priorities straight

Censorship is a powerful tool and easily misused. That's why, in fighting back against hate, harassment, and fraud, censorship should be the last stop. Particularly from a legislative perspective, the first stop should be looking at the tools that already exist elsewhere, rather than rushing to exceptionalize the Internet. For example, in the United States, defamation laws reflect centuries of balancing the right of individuals to hold others accountable for false, reputation-damaging statements against the right of the public to engage in vigorous public debate. Election laws already prohibit foreign governments or their agents from purchasing campaign ads - online or offline - that directly advocate for or against a specific candidate. In addition, for sixty days prior to an election, foreign agents cannot purchase ads that even mention a candidate. Finally, the Foreign Agents Registration Act requires informational materials distributed by a foreign entity to include a statement of attribution, and requires the entity to file copies with the U.S. Attorney General. These are all laws that could be better brought to bear, especially in the most egregious situations.

Read the full statement on EFF's site.

