
Facebook Report Concludes Company Censorship Violated Palestinian Human Rights

The report, due out tomorrow, says Facebook and Instagram showed bias against Palestinians during a brutal Israeli assault on the Gaza Strip last May.

Content moderators work at a Facebook office in Austin, Texas, in 2019. Photo: Ilana Panich-Linsman for The Washington Post via Getty Images

Facebook and Instagram’s speech policies harmed fundamental human rights of Palestinian users during a conflagration that saw heavy Israeli attacks on the Gaza Strip last May, according to a study commissioned by the social media sites’ parent company Meta.

“Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” says the long-awaited report, which was obtained by The Intercept in advance of its publication.

Commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility, or BSR, the report focuses on the company’s censorship practices and allegations of bias during bouts of violence against Palestinian people by Israeli forces last spring.

Following protests over the forcible eviction of Palestinian families from the Sheikh Jarrah neighborhood in occupied East Jerusalem, Israeli police cracked down on protesters in Israel and the West Bank, and launched military airstrikes against Gaza that injured thousands of Palestinians and killed 256, including 66 children, according to the United Nations. Many Palestinians attempting to document and protest the violence using Facebook and Instagram found that their posts spontaneously disappeared without recourse, a phenomenon the BSR inquiry attempts to explain.

Last month, over a dozen civil society and human rights groups wrote an open letter protesting Meta’s delay in releasing the report, which the company had originally pledged to release in the “first quarter” of the year.

While BSR credits Meta for taking steps to improve its policies, the report also blames “a lack of oversight at Meta that allowed content policy errors with significant consequences to occur.”

Though BSR is clear in stating that Meta harms Palestinian rights with the censorship apparatus it alone has constructed, the report absolves Meta of “intentional bias.” Rather, BSR points to what it calls “unintentional bias,” instances “where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users” — a nod to the fact that these systemic flaws are by no means limited to the events of May 2021.

Meta responded to the BSR report in a document to be circulated along with the findings. (Meta did not respond to The Intercept’s request for comment about the report by publication time.) In a footnote in the response, which was also obtained by The Intercept, the company wrote, “Meta’s publication of this response should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions or viewpoints identified by BSR, nor should the implementation of any suggested reforms be taken as admission of wrongdoing.”

According to the findings of BSR’s report, Meta deleted Arabic content relating to the violence at a far greater rate than Hebrew-language posts, confirming long-running complaints of disparate speech enforcement in the Palestinian-Israeli conflict. The disparity, the report found, was perpetuated among posts reviewed both by human employees and automated software.

“The data reviewed indicated that Arabic content had greater over-enforcement (e.g., erroneously removing Palestinian voice) on a per user basis,” the report says. “Data reviewed by BSR also showed that proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.”

BSR attributed the vastly differing treatment of Palestinian and Israeli posts to the same systemic problems rights groups, whistleblowers, and researchers have all blamed for the company’s past humanitarian failures: a dismal lack of expertise. Meta, a company with over $24 billion in cash reserves, lacks staff who understand other cultures, languages, and histories, and is using faulty algorithmic technology to govern speech around the world, the BSR report concluded.

Not only do Palestinian users face an algorithmic screening that Israeli users do not — an “Arabic hostile speech classifier” that uses machine learning to flag potential policy violations and has no Hebrew equivalent — but the report notes that the Arabic system also doesn’t work well: “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects, both because the dialect is less common, and because the training data — which is based on the assessments of human reviewers — likely reproduces the errors of human reviewers due to lack of linguistic and cultural competence.”

Human employees appear to have exacerbated the lopsided effects of Meta’s speech-policing algorithms. “Potentially violating Arabic content may not have been routed to content reviewers who speak or understand the specific dialect of the content,” the report says. It also notes that Meta didn’t have enough Arabic and Hebrew-speaking staff on hand to manage the spike in posts.

These faults had cascading speech-stifling effects, the report continues. “Based on BSR’s review of tickets and input from internal stakeholders, a key over-enforcement issue in May 2021 occurred when users accumulated ‘false’ strikes that impacted visibility and engagement after posts were erroneously removed for violating content policies.” In other words, wrongful censorship begat further wrongful censorship, leaving those affected wondering why no one could see their posts. “The human rights impacts … of these errors were more severe given a context where rights such as freedom of expression, freedom of association, and safety were of heightened significance, especially for activists and journalists,” the report says.

Beyond Meta’s failures in triaging posts about Sheikh Jarrah, BSR also points to the company’s “Dangerous Individuals and Organizations” policy — referred to as “DOI” in the report — a roster of thousands of people and groups that Meta’s billions of users cannot “praise,” “support,” or “represent.” The full list, obtained and published by The Intercept last year, showed that the policy focuses mostly on Muslim and Middle Eastern entities, which critics described as a recipe for glaring ethnic and religious bias.

Meta claims that it’s legally compelled to censor mention of groups designated by the U.S. government, but legal scholars have disputed the company’s interpretation of federal anti-terrorism laws. Following The Intercept’s report on the list, the Brennan Center for Justice called the company’s claims of legal obligation a “fiction.”

BSR agrees the policy is systemically biased: “Legal designations of terrorist organizations around the world have a disproportionate focus on individuals and organizations that have identified as Muslim, and thus Meta’s DOI policy and the list are more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

Palestinians are particularly vulnerable to the effects of the blacklist, according to the report: “Palestinians are more likely to violate Meta’s DOI policy because of the presence of Hamas as a governing entity in Gaza and political candidates affiliated with designated organizations. DOI violations also come with particularly steep penalties, which means Palestinians are more likely to face steeper consequences for both correct and incorrect enforcement of policy.”

The document concludes with a list of 21 nonbinding policy recommendations, including increasing staffing capacity to properly understand and process Arabic posts, implementing a Hebrew-compatible algorithm, increasing company oversight of outsourced moderators, and both reforming and increasing transparency around the “Dangerous Individuals and Organizations” policy.

In its response to the report, Meta vaguely commits to implementing, or considering implementing, aspects of 20 of the 21 recommendations. The exception is a call to “Fund public research into the optimal relationship between legally required counterterrorism obligations and the policies and practices of social media platforms,” which the company says it will not pursue because it does not wish to provide legal guidance for other companies. Instead, Meta suggests that concerned experts reach out directly to the federal government.
