Meta announces measures to curb disinformation and violence amid renewed Israel-Hamas conflict

Meta, the parent company of Facebook and Instagram, announced measures Friday to address concerns regarding disinformation and misinformation surrounding the recent conflict between Israel and Hamas. The announcement came after the European Union urged social media platforms such as Meta, X, YouTube and TikTok to take more decisive action against misinformation. Meta’s announcement also prompted groups like the Association for Progressive Communications to call for the protection of “Palestinian digital rights.”

Within three days of the Hamas attack on October 7, Meta removed or labeled as disturbing over 795,000 pieces of content in Hebrew and Arabic for violating its policies. This represents a sevenfold increase over the prior two months in the rate of removals of Hebrew- and Arabic-language content violating the Dangerous Organizations and Individuals Policy.

Meta says Hamas, which is designated as a terrorist organization by multiple Western governments, falls under the company’s Dangerous Organizations and Individuals Policy and is banned from its platforms. Meta removes praise or substantive support of Hamas as soon as it is identified, while still allowing open discussion of social and political matters.

The social media giant says that it established a special operations center staffed with experts proficient in Hebrew and Arabic in the wake of the October 7 attack. The center reportedly enables Meta to swiftly remove content that violates its Community Standards or Community Guidelines, providing an additional line of defense against misinformation.

Meta has also introduced additional measures to tackle emerging risks. These include stronger steps to prevent the recommendation of potentially violating content, an expansion of the Violence and Incitement Policy to prioritize the safety of hostages, and restrictions on certain hashtags associated with violating content. The platform will allow content featuring victims of hostage-taking only if the victims’ faces are blurred, and it will prioritize the safety and privacy of kidnapping victims when assessments are unclear.

The company emphasized that it tries to maintain a balance between allowing freedom of expression and ensuring user safety, regardless of a poster’s identity or beliefs.

However, the Association for Progressive Communications, acknowledging the need to combat hate speech on social media, said:

We are also concerned about significant and disproportionate censorship of Palestinian voices through content takedowns and hiding hashtags, amongst other violations. These restrictions on activists, civil society and human rights defenders represent a grave threat to freedom of expression and access to information, freedom of assembly, and political participation. 

Relatedly, Meta’s competitor X took action Friday to curb disinformation after EU Internal Market Commissioner Thierry Breton issued a statement of concern to the company. The European Commission sent X a request for information under the Digital Services Act on Thursday, seeking data related to disinformation on the platform.

Civil society organizations have also spoken out about conflict-related hate speech and disinformation on social media. A review by the Arab Center for the Advancement of Social Media found 19,000 Hebrew-language tweets relating to “hate speech and incitement,” noting that the incidence of such speech increased after October 7. The Anti-Defamation League also called on social media platforms to limit the spread of graphic media posted by Hamas in the wake of the attack.