Censoring Social Media: Free Speech in the Digital Age

National Banned Books Week concluded on Sunday, October 1. The Banned Books Week Coalition, a group of literary organizations, started the tradition in 1982 to raise awareness about the censorship of literature in American schools and libraries. However, as the world’s news and information move increasingly into digital spaces, conversations about censorship are changing, turning more and more toward social media and other online platforms.

Social media occupies an undeniably large part of our lives. Across all demographics, its use has skyrocketed in the past decade. Recent social movements, from the Arab Spring to #BlackLivesMatter, have used social media to connect people and spread their messages. However, the same open platforms that empower social justice movements also carry hate-filled rhetoric to wide audiences. Revolutionary movements like the Arab Spring gained momentum through social media, but so did terrorist organizations like the Islamic State (ISIL) and other extremist groups, which used it to recruit members and expand their followings. In other words, social media has revolutionized the way society interacts with news, knowledge, and ideologies, and those who can use it to their advantage have enormous power at their fingertips.

Because social media is now one of the most powerful tools available for provoking social change, it raises a question: how do we censor it?

A key issue in much of the Western world is the rise of terrorism. Growing fear of terrorist attacks, combined with ISIL’s use of social media, makes online censorship of terrorist groups a priority for many countries. Unlike other extremist organizations, the Islamic State posts much of its propaganda and ideological discussion in public spheres, on wide-reaching networks such as Twitter, Tumblr, and Facebook. ISIL releases approximately 38 pieces of propaganda every day, ranging from videos of executions and battle victories to promotional content about its supposed utopia in the places where it has gained power.

While propaganda is relatively easy to find on social media sites (content simply needs to be reported by other users), removing it from the Internet is much more difficult; suspending accounts does little besides temporarily removing ISIL’s most vocal supporters.

This past May, the European Union created an online code of conduct that requires hate speech to be removed within 24 hours of a valid notification. Major technology companies Facebook, Twitter, Google, and Microsoft all committed to the code.

The EU’s code defines hate speech as “all conduct publicly inciting violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.” The code aims to combat the radicalization of young Muslims in European countries while also addressing the growing xenophobia (namely Islamophobia) that has accompanied the influx of refugees from the Middle East. Hate speech directed at people based on gender or sexuality is notably left out.

Negative uses of social media are not limited to extremist organizations. While some instances of hate speech are isolated incidents of harassment, many are part of a growing ultra-conservative presence on the Internet.

The most prominent of these movements is the alt-right, which gained much of its following through online networks like Reddit and 4chan. The alt-right is a recent strain of conservatism built around the idea that “white society” is under attack; it emphasizes a return to traditional values while embracing modern science and technology. Generally speaking, members of the alt-right are anti-multiculturalism, anti-political correctness, and anti-feminism (intersectional or otherwise).

The alt-right is notorious for its presence in online spheres, sharing much of its ideology through memes (just this past month, Pepe the Frog was added to the Anti-Defamation League’s list of hate symbols due to the alt-right’s appropriation of the meme) and code words on platforms across the Internet.

A recent example of the power of alt-right trolls is their abuse of Leslie Jones on Twitter. The abuse came after Milo Yiannopoulos, an alt-right leader and editor for the conservative news outlet Breitbart, released a racist, misogynistic review of her film “Ghostbusters.” In it, he called Jones “spectacularly unappealing… [with] flat-as-a-pancake black stylings,” among other derogatory comments. Yiannopoulos’ large following then took to Twitter, sending Jones hundreds of racist and misogynistic tweets.

Jones attempted to defend herself, retweeting many of the trolls to expose them and then blocking and reporting them. She argued that simply ignoring the harassers was ineffective and that they needed to be held accountable for their actions.

Yiannopoulos responded by making more misogynistic, transphobic comments about Jones.


Ultimately, Twitter permanently suspended Yiannopoulos. However, his suspension did not undo the harm Jones had experienced during the tirade of hate. After hours of fighting the trolls, Jones signed off Twitter, expressing her frustration with the company for failing to act sooner.

Yiannopoulos’ suspension exemplified Twitter’s difficult responsibility to create a harassment-free space without censoring the free speech of its users. Following the incident, many criticized Twitter for not doing enough to protect Jones from the harassment. Yet Yiannopoulos’ alt-right supporters claimed the suspension was just another instance of the liberal establishment silencing conservative voices, starting a #FreeMilo hashtag in protest.

Live streaming features on social media sites raise further questions about censorship. The fatal police shooting of Philando Castile a few months ago was live streamed on Facebook by his girlfriend. Widely shared, the video illustrates yet another fine line that social media sites walk between censorship and freedom of expression.

In the case of Castile and the #BlackLivesMatter movement, the video was a clear example of the trauma and discrimination that black people face at the hands of the police. It was one instance of members of an oppressed group using the tools provided by social media to expose the reality of their experiences to a wide audience. Similar instances include the video of a police officer choking Eric Garner to death and the more recent police shooting of Terrence Crutcher in Tulsa.

However, as such videos gain popularity, some activists argue that they actually desensitize the public to violence against black bodies and normalize black death and trauma. When the most common representation of black people in the media is of them being shot or harassed by police, state violence against people of color becomes the norm. Especially given that most people of color feel justice often fails to be served in these situations, the videos become disheartening, even traumatizing, reminders of systemic issues in the American criminal justice system.

Illustrated by M-A Staff, Caroline Fenyo

Currently, Facebook’s censorship policy concerning violence prohibits graphic content only if that content celebrates or glorifies violence. The video of Castile’s death does not violate this policy, since it was posted to call attention to violence as an issue that needs to stop. However, Facebook’s community standards do require that a poster warn his or her audience before sharing graphic content.

In general, social media sites aim to create a safe, inclusive environment. Facebook’s mission statement is “to give people the power to share and make the world more open and connected.” Facebook’s specific policy regarding hate speech prohibits content that attacks a person or group based on race, ethnicity, religion, nationality, sexual orientation, sex or gender, or a serious disability or disease.

Twitter shares a similar mission: “[to] give everyone the power to create and share ideas and information instantly, without barriers.” Regarding abusive behavior, Twitter adds, “We believe in freedom of expression and in speaking truth to power, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up.”

While all social media sites have community guidelines and policies like these, it is impossible to weed out every hateful account or post among billions of users. And perhaps it is not necessary; just as in the real world, there will always be bigots on social media.

Ultimately, at the core of any controversy over online censorship is the question: who should have an online voice? Posting hate speech, propaganda, or violence in the digital world amplifies its real-world effects by making it instantaneous, shareable, and quasi-permanent. In a world where online influence increasingly translates to real-world influence, online censorship can play as key a role as legislation in guarding against “fighting words” and other forms of speech not protected by the First Amendment.

Emma Dewey is a senior in her second year on the Chronicle staff and her first year as an editor. She enjoys working with other writers to make the Chronicle the best it can be. She is most interested in using journalism to connect with her community and effect social change.
