
BY SKYE GRAYSON
Arguments surrounding online content and “free speech” are common. On one side, appeals to “free speech” become an excuse to disseminate any sort of disturbing or harmful content; on the other, efforts to limit “free speech” become an excuse for despotic censorship.
But between total transparency and total obscurity lies “moderation,” an effort to manage information in such a way that there is still a sense of freedom and truth. Moderation often depends on a clear set of localized, ethical standards enforced by a defined authority. Of course, on a global and online scale, this is hardly possible. So, how to moderate the World Wide Web is a hotly debated topic. But in order to come to useful conclusions about moderation online, one must understand moderation as a concept. One must understand moderation’s history, its effects—and its arbiters. Once these conditions are met, it becomes clear that moderation is fundamentally amorphous, which complicates how to implement it today. Thus, we must restructure our assumptions about moderation to healthily implement it on a global scale.
Moderation may be defined as an authority’s effort to manage the public visibility of information in a widely satisfactory way. One may think of moderation as a valve that tightens or eases around a liquid flow of information according to what the social body indicates is generally morally appropriate. This flow of information is commonly held as “expression,” and moderation allows enough expression into the communal sphere to satisfy a desire for truth while also being socially acceptable. Socially acceptable expression is commonly held as “speech,” and according to the U.S. Constitution’s First Amendment, “speech” is to be free.1
The First Amendment states that the U.S. government shall respect the freedom of “speech” and of “the press,” meaning that excessive moderation (i.e., censorship) of this kind of content by the government is prohibited. But one must remember that the terms “speech” and “the press” cover only a subset of expression. “Speech” and “the press” include certain acts and entities while excluding others, and the boundary between what is included and what is excluded varies over time and place. The U.S. courts have held that “speech” includes acts of speaking, wearing protest signs or materials, using offensive words to convey political meaning, contributing money to political campaigns, advertising products and services, and engaging in other symbolic acts. (And, as of 2001, speech also includes computer code.2) “Speech” excludes harmful verbal acts (e.g., shouting ‘fire’ in a theatre), making or distributing obscene materials, engaging in obscene acts, and certain protests or distributions in an educational setting.
As for “the press,” the Newseum Institute states that the term refers to the “mainstream” media (i.e., the “news” pipeline), but also to any “publisher” who facilitates “information and opinion.”3 Lately, there is debate over whether social media witnesses and online bloggers count as “the press.” If they do, they are protected by the First Amendment, and thus from excessive moderation. If they do not, then like any other acts or entities falling outside these general provisions about “speech” and “the press,” they may be moderated, or even censored, without constitutional recourse. The fact that what counts as “speech” and “the press” varies makes it harder for authorities to pinpoint what shall be censored and what shall be left alone.
And like what to moderate, who gets to moderate (or who should get to moderate) is also unclear. Before the World Wide Web, U.S. speech and press information largely flowed through local, mainstream “news pipelines” (e.g., the Associated Press, CNN, The New York Times).4 Any submission or other content passing through these pipelines was moderated by the news network, the publisher, or a designee. (Note: an archive of such publications, like the local library, was classified as a “platform,” which did not claim responsibility to moderate.) The publisher would make marked efforts to uphold the press’s right to freedom and the people’s right to speech, but censor any expression deemed to fall outside of the law. The press and the people exercising speech would report or complain to the publisher, the publisher would answer to the courts, and everything would be more or less under control.
But in the global Age of the Internet, speech and the press, themselves rapidly changing, no longer report to the local, mainstream media. Professor Zeynep Tufekci states, “Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure.” Anyone can submit information to countless ‘news’ outlets, including social media like YouTube, Twitter and Facebook. Must all of these novel outlets answer to the courts as “publishers”? It remains to be seen. There is debate as to whether websites like YouTube, Twitter and Facebook are “publishers” at all, since in some ways they act as “platforms.” (There is similar debate as to whether ISPs, which function more like “platforms,” should answer to the courts as “publishers.”)
If these novel outlets do not have to answer to the courts as “publishers,” then they do not bear the responsibility of moderating between what is “free” and what is to be censored. Without the responsibilities of traditional publishers, these sites may allow any expression to become visible in the public sphere; what would normally be censored may be treated as free. These recent changes in how information is disseminated thus change the traditional definition of “free speech,” and the lines between free speech and general expression are blurred. (At least, so long as “internet neutrality” is the norm and responsibilities remain uncertain.)
These pseudo-publishers’ lack of responsibility as moderators was formally codified in Section 230 of the Communications Decency Act of 1996, which absolves websites of responsibility for user-posted content unless they are proven to be actively involved in its creation and distribution. In other words, websites do not have to moderate, and any disturbing and/or harmful expression that appears on such websites cannot be blamed on the website. This lack of liability has arguably made possible the very existence of sites like YouTube, Twitter and Facebook; but it has also resulted in countless complaints of disturbing and/or harmful expression.
It has also resulted in many complaints of “false” or “fake” news. Tufekci sums up this problem very well: “In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of ‘free speech.’ And sure, it is a golden age of free speech—if you can believe your lying eyes.” In other words, under a new information outlet structure, free speech has now come to include all forms of expression—including false forms. These false forms of expression have been used to influence elections and sell us things we might not want.5 And the voter/consumer is far from protected: if a website does happen to moderate, that website might moderate under biased ethical and/or moral standards.
Since websites today are often not legally obligated to moderate expression, the responsibility is divided among a growing group of sub-authorities. Buni and Chemaly (2016) remark, “Content moderation is fragmented into in-house departments, boutique firms, call centers, and micro-labor sites, all complemented by untold numbers of algorithmic and automated products.” Estimates of the number of people working in moderation run from “well over 100,000” to many times that.6 These sub-authorities each operate under their own agendas, principles and interpretations of the law, and oftentimes those agendas, principles and interpretations are vividly colored by the need to profit.
At some level, these websites and their companies do have to moderate according to what consumers collectively deem morally acceptable, lest consumer grievances cost them their income stream. But as Tufekci notes, these companies have cleverly found that it is more profitable to moderate according to what keeps eyeballs on a screen, not according to collective moral sentiments. These companies have discovered that there is a fine line between what people ought to see and what people want to see.
Returning to the definition of moderation, authorities must oftentimes restrict more expression than people may want. Traditional authorities must distinguish between what will keep the public sphere peaceful and content, and what will merely indulge it, because some indulgences may cause harm. (For example, people are drawn to violence. Traditional authorities might want to moderate violent content to avoid normalizing crime, despite the public’s underlying desire to see it.) Companies today loosen the grip on potentially harmful content in order to reap a profit. (So, for example, if violent content keeps visitors on a website longer, long enough for an advertiser to effectively sell the visitor something and then pay the website for that successful transaction, then the owners of that website have a financial incentive to keep violent content on the website. And that website will likely cite “free speech” in order to do so.) Sometimes, when public backlash mounts, it becomes in a website owner’s best interest to form an ethics board (like Facebook’s Advisory Board or Twitter’s Trust and Safety Council). But at these board meetings, “corporate and civil society participants remain nearly silent about the deliberations” (Buni and Chemaly, 2016). In other words, these boards are a cover, and economic incentives still take precedence.
In sum, today’s authorities on moderation do not give ethics much weight. First, many companies’ staff are not clear on the ultimate moral grounding of their moderation policies. A Facebook moderator remarks, “‘Yes, deleting Hitler feels awesome,’ (…) ‘But, why do we delete Hitler? If Facebook is here to make the world more open,’ (…) ‘why would you delete anything?’” (Buni and Chemaly, 2016). This uncertainty results in vague moderation policies. Second, many companies’ staff are deployed for functions other than ethical deliberation. With costs in mind, companies typically automate what they can, and then employ a “relatively low-wage, low-status sector, often managed and staffed by women” to take care of simple matters that cannot be automated. This is a time- and cost-efficient business operation. Deliberation over ethics, by contrast, is time- and cost-inefficient. When such deliberation becomes necessary, the company will employ that same low-wage, low-status sector to keep costs down. An unskilled labor pool effectively decides exceedingly complex global matters, with only a small administrative staff and vague policies to provide guidance.
For example, Reddit, a website host to millions of visitors, has a full-time staff of only about 75. Rules about moderation can be found in Reddit’s “reddiquette,” a page-long document that instructs users to tag “posts containing explicit material such as nudity, horrible injury, etc.” as NSFW (Not Safe For Work), to tag posts that are safe for work but have a risqué title as SFW (Safe For Work), and to “use your best judgment when adding these tags, in order for everything to go swimmingly.” Terms like “horrible injury” and “risqué” are never expounded upon, and the command to use one’s “best judgment” assumes everyone has similar, and sound, judgment.
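To see how quickly such guidance runs out, consider a minimal, purely hypothetical sketch (in Python) of what the reddiquette tagging advice looks like when forced into literal rules. Nothing here reflects Reddit’s actual tooling; the keyword lists are invented stand-ins for the terms the policy never defines, which is precisely the point: someone, somewhere, has to decide what “horrible injury” or “risqué” means.

```python
# Hypothetical illustration only -- not Reddit's actual system.
# It encodes the reddiquette tagging guidance as literal rules to show
# where the undefined terms ("horrible injury", "risqué") force arbitrary choices.

from dataclasses import dataclass


@dataclass
class Post:
    title: str
    body: str


# Invented stand-ins for "explicit material such as nudity, horrible injury, etc."
EXPLICIT_TERMS = {"nudity", "gore", "graphic injury"}
# Invented stand-ins for what might make a title "risqué."
RISQUE_TERMS = {"naughty", "steamy", "xxx"}


def contains_any(text: str, terms: set) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in terms)


def suggest_tag(post: Post) -> str:
    """Return the tag the guideline appears to require, or defer to 'best judgment'."""
    if contains_any(post.title + " " + post.body, EXPLICIT_TERMS):
        return "NSFW"  # explicit material -> Not Safe For Work
    if contains_any(post.title, RISQUE_TERMS):
        return "SFW"   # safe content with a risqué-sounding title -> Safe For Work
    return "no tag"    # everything else falls back on the policy's "best judgment"


if __name__ == "__main__":
    print(suggest_tag(Post("Steamy vacation photos", "A beach sunset, fully clothed")))  # SFW
    print(suggest_tag(Post("Accident footage", "Contains graphic injury")))             # NSFW
    print(suggest_tag(Post("My cat", "Sleeping on the couch")))                         # no tag
```

The point of the sketch is not the code but the keyword lists: every boundary the policy leaves to “best judgment” must be filled in by someone, whether a low-paid moderator or an arbitrary word list, and that filling-in is where vague, fragmented standards become a concrete problem.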
An additional explanation for the difficulty in drawing precise but unified moderation policies is that moral and ethical standards vary by time, location, context, etc.; they’re fundamentally subjective. Buni and Chemaly state, “Content flagged as violent—a beating or beheading—may be newsworthy. Content flagged as ‘pornographic’ might be political in nature, or as innocent as breastfeeding or sunbathing. Content posted as comedy might get flagged for overt racism, anti-Semitism, misogyny, homophobia, or transphobia. Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people. Trolls and criminals exploit anonymity to dox, swat, extort, exploit rape, and, on some occasions, broadcast murder.” In other words, what ‘we’ define as ‘bad’ might be what ‘they’ define as ‘good.’ What ‘we’ define as relevant might be what ‘they’ define as arbitrary. And what ‘we’ define as benign might be what ‘they’ define as incendiary.
Differing moral and ethical standards are seen in national responses to online content. For example, in India, citizens were contracted by YouTube to moderate (at lower wages). YouTube subsequently encountered public scrutiny when those Indian moderators allowed videos featuring bullying to stay on the site, “because to them, the people in the video were not children, but adults.” This might suggest differing ethical sentiments about when exactly children become adults, and what responsibilities they assume thereafter.
In the United Kingdom, by contrast, there is no equivalent of Section 230, so websites are required to police harmful expression diligently. This policy effort has largely focused on restricting underage access to pornography (Burgess, 2018). This may reflect differing ethical sentiments regarding a website owner’s responsibility, and perceived underage sensitivity to sexual content.
Meanwhile, in the U.S., where Section 230 does exist, websites face weaker policing obligations. This might logically reflect inverse sentiments regarding website owner responsibility and underage sensitivity to sexual content. More likely, though, the U.S.’s less unified moderation policies reflect uncertainty about website owner responsibility, and a range of sensitivities: various rules and complaints emerging across the U.S. online ecosystem highlight how complicated issues surrounding race, class, and gender may be. Buni and Chemaly cite Facebook’s “cheat sheet” for ethical moderation, which states, “Humor and cartoon humor is an exception for hate speech unless slur words are being used or humor is not evident.” Buni and Chemaly continue in regard to Facebook’s cheat sheet, “As in U.S. law, content referring to ‘ordinary people’ was treated differently than ‘public figures.’ … Things like ‘urine, feces, vomit, semen, pus and earwax,’ were too graphic, but cartoon representations of feces and urine were allowed. Internal organs? No. Excessive blood? OK. ‘Blatant (obvious) depiction of camel toes and moose knuckles?’ Prohibited.” They continue, “The ‘Sex and Nudity’ section perhaps best illustrates the regulations’ subjectivity and cultural biases. The cheat sheet barred naked children, women’s nipples, and ‘butt cracks.’ But such images are obviously not considered inappropriate in all settings and the rules remain subject to cultural contexts.” (It thus appears that the U.K. and the U.S. have arguably similar sentiments about sexuality, but the U.K. reacts through new government regulation while the U.S. reacts through new business policies, which are naturally more fragmented.)
In light of these many contextual differences, the next logical step might be to make all content open and unmoderated, or to make all content closed and heavily censored. But there are dire consequences to both: open and unmoderated content may cause psychological damage; closed and heavily censored content may cause societal damage. (Using the valve analogy, too much ease on the information flow causes a flood; too much grip on the information flow causes a drought.) In the case of open and unmoderated content, online content moderators have recently come forward to describe their longtime psychological struggles with heavy exposure to unmoderated content (Buni and Chemaly, 2016). One Microsoft moderator claimed that he had developed “symptoms of P.T.S.D., including insomnia, nightmares, anxiety, and auditory hallucinations” (Chen, 2014). And SHIFT (Supporting Heroes in Mental Health Foundational Training) supports moderators who work on crimes against children, helping them cope with the effects of “sustained exposure to toxic images: isolation, relational difficulties, burnout, depression, substance abuse, and anxiety.” Facebook moderators also “receive regular training, detailed manuals, counseling and other support” (Buni and Chemaly, 2016). It appears that open and unmoderated content would likely cause psychological struggle, as people would have to wrestle with heavy exposure to the world’s taboos. And according to the Pakistani group Bytes for All, unmoderated content would additionally cause societal decline: it warns, citing case studies, that a sudden influx of open information in a country where information has been historically and strongly moderated may result in outbreaks of violence.7
But this fear of open expression could turn into an excuse to censor expression, and control others. Heavy-handed efforts to censor content correlate with despotic regimes. For example, in recent years, Russia has flooded Twitter with content removal requests in an effort to limit its citizens’ exposure to foreign news (and thus exposure to information that might conflict with national propaganda) (Lokot, 2016). And China’s President Xi Jinping recently announced plans to “officially abolish term limits,” giving him unprecedented power. At the same time, China’s communications platforms have doubled down on surveilling and censoring content (Mozur, 2018).8 (Worse, China has recently strong-armed Facebook, Google and Apple into complying with its censorship policies in exchange for access to its massive user market. Mercedes-Benz recently had to comply with China’s anti-Instagram policy for fear of being labeled an “enemy of the people.”) It appears that neither unmoderated information nor heavily censored information is the answer.
Moderation might be a necessary evil, and one that we must study further. For now, we only have the “gut check”: moderators are “told to take down anything that makes you feel bad, that makes you feel bad in your stomach” (Buni and Chemaly, 2016). But the gut check does not hold universally, nor in complicated contextual cases; a more scientific study of ethical moderation is needed. Efforts are underway. Jenifer Puckett of Emoderation “believes that moderation, as an industry, is maturing. On the one hand, the human expertise is growing, making that tableful of young college grads at YouTube seem quaint.” Julie Mora-Blanco, formerly of YouTube and Twitter, states, “People are forming their college studies around these types of jobs . . . The industry has PhD candidates in internship roles.” Ethical moderation is becoming a big deal.
But at the moment, there is no clear solution to the problems surrounding ethical moderation, and the solution cannot be easily drawn from history. The Founding Fathers wrote the First Amendment when “speech” and “the press” were vastly different things. Today, everything is “speech,” and everyone is “the press.” And as this “high-stakes public domain,” which “did not exist for most of human history,” has changed and continues to change, so must our methods of ethically moderating it. Our assumptions about what is “right” or “wrong,” “free” or “hate” speech must be reevaluated on a planetary scale. As ethics vary across the world, we may have to think seriously about establishing global standards, interconnected local intermediaries, and tangible enforcement mechanisms that everyone is inclined to follow. But that will come as we further interconnect, communicate, and educate ourselves on moderation.
Notes
1. “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
2. In Universal City Studios v. Corley, 273 F.3d 429 (2d Cir. 2001), the court held that functionality and the conveyance of information are not mutually exclusive, rendering code a form of speech.
3. “While no doubt exists that ‘mainstream’ media, such as broadcast stations, newspapers and magazines enjoy the freedom of ‘the press,’ the line gets blurrier in cases involving underground newspapers, freelance writers and pamphleteers. In general, however, courts have defined ‘the press’ so as to include all publishers. The 2nd U.S. Circuit Court of Appeals, for example, has said that First Amendment protections extend to “‘every sort of publication which affords a vehicle of information and opinion.’ von Bulow v. von Bulow, 811 F.2d 136, 144 (2d Cir.) (quoting Lovell v. Griffin, 303 U.S. 444, 452 (1938)), cert. denied, 481 U.S. 1015 (1987).”
4. “Variations on this general playbook for censorship—find the right choke point, then squeeze—were once the norm all around the world. That’s because, until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands” (Tufekci, 2018).
5. “But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As Buzzfeed famously reported in November 2016, ‘top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.’”
6. “Content moderation is fragmented into in-house departments, boutique firms, call centers, and micro-labor sites, all complemented by untold numbers of algorithmic and automated products. Hemanshu Nigam, founder of SSP Blue, which advises companies in online safety, security, and privacy, estimates that the number of people working in moderation is ‘well over 100,000.’ Others speculate that the number is many times that.”
7. “Abusive men threaten spouses. Parents blackmail children. In Pakistan, the group Bytes for All—an organization that previously sued the Pakistani government for censoring YouTube videos—released three case studies showing that social media and mobile tech cause real harm to women in the country by enabling rapists to blackmail victims (who may face imprisonment after being raped), and stoke sectarian violence.”
8. “Facebook created a censorship tool it did not use and released an app in the country without putting its name to it. Apple is moving data storage for its Chinese customers into China and last year took down software that skirts China’s internet blocks from its China App Store. Google recently said it would open a new artificial intelligence lab in the country.” “The Chinese government asked Google’s services to take down 2,290 items in the first half of last year, according to the company’s statistics. That was more than triple the number it requested in the second half of 2016, which itself had set a record. The majority of China’s recent takedown requests focused on videos on YouTube, the data showed.”
Sources
1. Buni, Catherine, and Soraya Chemaly. “The Secret Rules of the Internet.” The Verge, 13 Apr. 2016, www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech.
2. Burgess, Matt, and Liat Clark. “The UK Wants to Block Online Porn. Here’s What We Know.” WIRED, WIRED UK, 13 Mar. 2018, www.wired.co.uk/article/porn-block-ban-in-the-uk-age-verifcation-law.
3. Chen, Adrian. “The Human Toll of Protecting the Internet from the Worst of Humanity.” The New Yorker, 19 June 2017, www.newyorker.com/tech/elements/the-human-toll-of-protecting-the-internet-from-the-worst-of-humanity.
4. Chen, Adrian. “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” Wired, Conde Nast, 23 Oct. 2014, www.wired.com/2014/10/content-moderation/.
5. Mozur, Paul. “China Presses Its Internet Censorship Efforts Across the Globe.” The New York Times, 2 Mar. 2018, www.nytimes.com/2018/03/02/technology/china-technology-censorship-borders-expansion.html.
6. “Section 230: A Key Legal Shield For Facebook, Google Is About To Change.” WBUR News, www.wbur.org/npr/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.
7. Tufekci, Zeynep. “It’s the (Democracy-Poisoning) Golden Age of Free Speech.” Wired, Conde Nast, 15 Feb. 2018, www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship.
8. Lokot, Tetyana. “Twitter Reports Massive Increase in Russian Government’s Content Removal Requests.” Global Voices Advocacy, 7 Mar. 2016, www.advox.globalvoices.org/2016/03/06/twitter-reports-massive-increase-in-russian-governments-content-removal-requests/.
9. “What Does Free Speech Mean?” United States Courts, www.uscourts.gov/about-federal-courts/educational-resources/about-educational-outreach/activity-resources/what-does.
- Skye Grayson is a graduate of Columbia University in the City of New York. She now lives in Los Angeles, California.