Controversial Removals: Meta Faces Backlash Over Israel-Hamas Content

by Faruk Imamovic
© Getty Images News/Justin Sullivan

Meta, the parent company of Facebook, has found itself at the center of a content moderation controversy. The company's automated tools, designed to police potentially harmful content, inadvertently removed two videos related to the Israel-Hamas conflict, according to a statement from the Meta Oversight Board released on Tuesday.

Oversight Board's Intervention

This incident marks the first “expedited review” by the Oversight Board, underscoring the intense scrutiny social media companies face over their handling of sensitive content.

The board overturned Meta’s initial decision to remove the two videos, emphasizing the importance of respecting users' rights to freedom of expression, especially in times of crisis.

Michael McConnell, a co-chair of the board, articulated the delicate balance the board aimed to strike: “The Board focused on protecting the right to the freedom of expression of people on all sides about these horrific events, while ensuring that none of the testimonies incited violence or hatred.”

He emphasized the significance of these testimonies for users globally who seek diverse information about such critical events.

Meta's Response and the Challenge of Automated Moderation

In response to the board’s decision, Meta stated that it had already reinstated the two pieces of content before the board's review and would therefore take no further action.

The company reaffirmed its commitment to both expression and safety in a blog post.

The incident sheds light on the challenges of automated content moderation. In response to the outbreak of the Israel-Hamas conflict, Meta implemented temporary measures to more aggressively remove content that could potentially violate its policies on hate speech, violence, and harassment.

While these measures aimed to prioritize safety, they also increased the chances of mistakenly removing non-violating content. As of December 11, Meta had not reverted the content moderation thresholds of its automated systems to normal levels.

The Oversight Board criticized this approach, arguing that excluding from recommendations content that raises awareness of potential human rights abuses or conflicts is an excessive restriction on freedom of expression, given the public interest in such content.
