Double Standards in Social Media Content Moderation

Thursday, 05 August 2021

Social media plays an important role in building community and connecting people with the wider world. At the same time, the private rules that govern access to these services can produce divergent experiences across different populations. While social media companies dress their content moderation policies in the language of human rights, their actions are largely driven by business priorities, the threat of government regulation, and outside pressure from the public and the mainstream media. As a result, the veneer of a rule-based system conceals a cascade of discretionary decisions. Where platforms are looking to drive growth or secure a favorable regulatory environment, content moderation policy is often either an afterthought or a tool used to curry favor. All too often, the speech of communities of color, women, LGBTQ+ communities, and religious minorities is subject to over-enforcement, while harms targeting those same communities go unaddressed.

This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter. We selected these platforms because they are the largest, because they are the focus of most regulatory efforts, and because they tend to influence the practices adopted by other platforms. Our evaluation compares each platform's policies on terrorist content (which often constrict Muslims' speech) with its policies on hate speech and harassment (which can affect the speech of powerful constituencies), along with publicly available information about enforcement of those policies.


In section I, we analyze the policies themselves, showing that despite their ever-increasing detail, they are drafted in a manner that leaves marginalized groups under constant threat of removal for everything from discussing current events to calling out attacks against their communities. At the same time, the rules are crafted narrowly to protect powerful groups and influential accounts that can be the main drivers of online and offline harms.


Section II assesses the effects of enforcement. Although publicly available information is limited, we show that content moderation at times results in mass takedowns of speech from marginalized groups, while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization. Section II also discusses the current regimes for ranking and recommendation engines, user appeals, and transparency reports. These regimes are largely opaque and often deployed by platforms in self-serving ways that can conceal the harmful effects of their policies and practices on marginalized communities. In evaluating impact, our report relies primarily on user reports, civil society research, and investigative journalism, because the platforms' tight grip on information obscures answers to systemic questions about the practical ramifications of their policies and practices.


Section III concludes with a series of recommendations. We propose two legislative reforms, each focused on breaking the black box of content moderation that renders almost everything we know a product of the information that the companies choose to share. First, we propose a framework for legally mandated transparency requirements, expanded beyond statistics on the amount of content removed to include more information on the targets of hate speech and harassment, on government involvement in content moderation, and on the application of intermediate penalties such as demonetization. Second, we recommend that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data, as well as protections for the journalists and whistleblowers who play an essential role in exposing how platforms use their power over speech. In turn, these frameworks will enable evidence-based regulation and remedies.


Finally, we propose a number of improvements to platform policies and practices themselves. We urge platforms to reorient their moderation approach to center the protection of marginalized communities. Achieving this goal will require a reassessment of the connection between speech, power, and marginalization. For example, we recommend addressing the increased potential of public figures to drive online and offline harms. We also recommend further disclosures regarding the government’s role in removals, data sharing through public-private partnerships, and the identities of groups covered under the rules relating to “terrorist” speech.
