Human Rights Watch Criticizes Meta's Content Moderation Policies

by Faruk Imamovic
© Getty Images/Justin Sullivan

Human Rights Watch has accused Meta, the parent company of Facebook and Instagram, of repeatedly removing or restricting content supporting Palestine or Palestinian human rights, raising concerns over digital censorship and freedom of expression on social media platforms.

Allegations of Unjustified Content Removal

The human rights organization's report highlights instances where pro-Palestine content, described as "peaceful," was allegedly removed or restricted despite not violating Meta's policies.

The report scrutinizes Meta's handling of such content, particularly during heightened periods of the Israel-Hamas conflict. Human Rights Watch has called on Meta to disclose more information about its moderation decisions, including government takedown requests and the criteria for granting "newsworthiness" exceptions to content that otherwise violates its rules.

Human Rights Watch provided broad descriptions of the content in question but offered few specifics, and no screenshots, for the hundreds of posts it said were removed or restricted. According to the organization, more than 1,000 pieces of pro-Palestine content were unjustifiably restricted or removed during October and November 2023.

Meta's Response to the Accusations

In response to these allegations, Meta issued a statement refuting the claims of systemic bias in its content moderation practices. Spokesperson Ben Walters emphasized the challenges of enforcing policies globally during a highly polarized conflict.

"This report ignores the realities of enforcing our policies globally during a fast-moving, highly polarized and intense conflict, which has led to an increase in content being reported to us," the statement read.
Meta asserted that its policies aim to balance free expression with platform safety.

The company acknowledged occasional errors in content moderation but contested the notion that it deliberately or systematically suppresses specific voices. Meta criticized the report for alleging systemic censorship on the basis of a relatively small number of examples, given the vast volume of content related to the Israel-Hamas conflict.

The debate between Human Rights Watch and Meta underscores the complex challenges of content moderation on global social media platforms, especially during contentious political conflicts.
