SKeyes Center for Media and Cultural Freedom - Samir Kassir Foundation

Hate Speech in Lebanon: The Shortcomings and Responsibilities of Social Media Platforms

Thursday, 27 July 2023
Design: Mahmoud Younis

This research project evaluates the hate speech policies of four social media platforms: Facebook, Twitter, YouTube, and TikTok, as implemented in Lebanon, using a selection of Ranking Digital Rights’ human rights-based indicators. Facebook, Twitter, and YouTube are owned respectively by U.S. technology companies Meta Platforms Inc. (known as Meta), X Corp., and Google LLC, a subsidiary of Alphabet Inc. TikTok is owned by the Chinese technology company ByteDance.


The Samir Kassir Foundation (SKF) and Media Diversity Institute (MDI) partnered with Ranking Digital Rights, whose methodology is employed to benchmark companies in the ICT sector using indicators that establish high yet attainable standards for corporate transparency and policies that align with internationally recognized human rights standards.


The research focuses on hate speech policies applied to both user-generated and advertising content, shedding light on the policies’ potential effectiveness, transparency, user-friendliness, fairness, and respect for freedom of expression and the right to non-discrimination. We also sought to document any significant disparities in policies’ availability in Arabic and English.


We selected Facebook, TikTok, and YouTube for their widespread usage in Lebanon, with a combined user base nearing 10 million as of early 2023. In 2022, Lebanon had a population of 6.7 million people. Although Twitter has significantly fewer users in Lebanon – 531,000 – we decided to include it for two reasons. Firstly, in Lebanon and elsewhere in the region, Twitter serves as a platform for political debate and mobilization. Secondly, following Elon Musk’s takeover, we were interested in examining any potential changes in the company’s hate speech policies and their implications for a deeply divided country like Lebanon.

Key Findings

● In general, all four platforms have the majority of their available policies translated into Arabic. When an Arabic version of a policy is available, it is typically a direct translation into classical Arabic with little to no difference from the English version. The platforms make the Arabic-language policies accessible (when available) through a simple language switch in the page settings. However, in some cases, key policies are not accessible in Arabic, such as TikTok’s Intellectual Property Policy and Google’s AI principles.

● Twitter’s Terms of Service, which govern users’ access to and use of the service, are not provided in Arabic. This creates a barrier for users in Lebanon (and elsewhere) who are only fluent in Arabic, as they cannot give informed consent when signing up for the service.

● Among the platforms, Facebook exhibits the most inconsistencies between its Arabic and English policies. On six out of 19 indicators, its Arabic-language policies provide less information than their English-language counterparts, or none at all.

Human Rights Practices

● Among the platforms studied, TikTok was the only one that did not explicitly and clearly commit to upholding human rights. Its policy did not encompass freedom of expression and information, and although it expressed a commitment to protecting the right to privacy, this commitment was not grounded in international human rights standards.

● There was no evidence to suggest that any of the companies owning the platforms included in the study conduct human rights due diligence in Lebanon. None of the platforms conduct robust human rights impact assessments to understand how their policy enforcement processes affect the fundamental rights of their users in Lebanon, particularly communities at higher risk of experiencing hate speech, such as migrant workers, refugees, the LGBTQ+ community, and women. Consequently, they fail to address and mitigate the negative impacts that arise from these risks.

Twitter under Musk

● Under the leadership of Elon Musk, Twitter has experienced setbacks in terms of freedom of expression and protection from hate speech. The company has not published its transparency reports on Rules Enforcement and Removal Requests since 2021. These reports provided insights into the actions taken by the company to restrict content and enforce its rules, as well as its response to third-party demands. Furthermore, Twitter disbanded its Trust and Safety Council, which previously brought together civil society representatives from various regions worldwide, including a Lebanese NGO, to provide advice on the platform’s rule development and product enhancements.

● The implications of Musk’s takeover, and of the changes he implemented, for the spread of hate speech in Lebanon remain unclear. However, with the discontinuation of transparency reports on content moderation actions, it has become increasingly challenging for researchers and civil society to monitor how the platform handles hate speech.

● All of the platforms included in the study utilize machine learning algorithms for various purposes, including content ranking and moderation. However, despite the human rights risks associated with these systems, none of the companies explicitly and clearly articulated a policy commitment to human rights in the development and utilization of their algorithmic systems. While Meta and YouTube provided commitments that were not clearly grounded in human rights principles, TikTok and Twitter did not make any commitments at all.

● Platforms lacked transparency regarding their use of algorithms to curate, recommend, and rank content. While TikTok disclosed more details compared to its counterparts, including the variables that influence ranking systems and user options to control those variables, this information was not available in Arabic. Facebook only provided information about how its Feed curates and recommends content using algorithmic systems, without specifying how it uses these systems in other areas such as friend recommendations and search results.

● Platforms exhibited even less transparency concerning their policies governing the use of bots. Twitter was the most transparent, disclosing clear rules and enforcement mechanisms. TikTok and Meta disclosed almost no information, while YouTube disclosed nothing at all about its bot policies.

● Among all the platforms in the study, Twitter and YouTube demonstrated the highest level of transparency regarding their ad content and ad targeting policies.

● TikTok and YouTube were the only platforms that published data about the volume and nature of actions taken to restrict advertising that violated their policies. However, the data they provided was not comprehensive and did not disaggregate advertisements rejected for violating ad content rules from those rejected for violating ad targeting rules.

● Platforms do not explain the processes they follow to handle content restriction requests pertaining to hate speech. We are aware that technology platforms, including Facebook, TikTok, Twitter, and YouTube, have partnerships with civil society organizations under the 2016 “EU Code of conduct on countering illegal hate speech online.” These partnerships involve organizations submitting reports of hateful content. However, it remains unclear how the platforms assess these requests before responding. It is also unclear whether the platforms receive requests from private sector entities in Lebanon to restrict hateful content.

● All platforms publish data on government demands they receive to restrict content and accounts, with YouTube providing the most comprehensive data, including information on the types of subject matter associated with these demands. Facebook, TikTok, and Twitter do not specify the subject matter, making it challenging for researchers, advocates, civil society, and journalists in Lebanon and elsewhere to understand the extent to which these demands are related to hate speech.
