
Meta's Oversight Board grapples with Facebook and Instagram's opaque content guidelines

Categories: Censorship, Technology, Advox

Image courtesy Giovana Fleck

Social media, when used correctly, can provide a vital lifeline for human rights defenders. Through these platforms, users can raise awareness about issues otherwise ignored by the mainstream media and even provide valuable corroboration of war crimes, enabling perpetrators to be brought to justice [1].

In recent months, however, users raising awareness of human rights abuses on Instagram and Facebook (both owned by Meta) have found themselves at the mercy of Meta’s algorithms and opaque content guidelines. As a result, Meta’s Oversight Board, an appeal mechanism established by Meta for users who believe their content has been wrongly removed, has found itself not only reversing Meta’s original decisions but also issuing policy guidance to help the tech giant better protect the rights of its users.

For instance, the board recently reversed Meta’s decision to remove an Instagram video showing the bloody aftermath of a terrorist attack in Nigeria [2]. The post had been taken down under Meta’s policy prohibiting videos of intense, graphic violence and content containing sadistic remarks towards imagery depicting human suffering. It was removed even though, according to a majority of the board, none of the captions or hashtags accompanying the video had been intended to convey enjoyment of, or derive pleasure from, the suffering of others. Rather, the posters were trying to raise awareness within their community. The fact that the Nigerian authorities had sought to suppress coverage of a recent spate of terror attacks was also vital to the board’s decision to uphold the user’s freedom of speech. Importantly, the board noted that Meta’s understanding of the term “sadistic” had not been made publicly available and that the internal guidance provided to the tech giant’s moderators had been overly broad.

In theory, the Violent and Graphic Content policy [3] allows users to condemn and raise awareness about “important issues such as human rights abuses, armed conflicts or acts of terrorism.” However, the board noted that “the Violent and Graphic Content policy does not make clear how Meta permits users to share graphic content to raise awareness of or document abuses.”

This was by no means the first time the board had raised such concerns. Before this decision, Meta had removed a video documenting the Sudanese military’s abusive response [4] to protestors following the army-led takeover of the civilian government. The company then reversed its decision under a “newsworthiness” exception, which allows content to remain visible when it is in the public interest. Though the board approved Meta’s U-turn, it raised concerns that the newsworthiness policy did not make clear when content documenting atrocities would benefit from the allowance: the lack of a definition clarifying the term “newsworthy” left users at risk of “arbitrary” and “inconsistent” decisions. Furthermore, in the 12 months before March 2022, Meta had issued only 17 newsworthiness exemptions under its violent content policy. Given that Meta had removed 90.7 million pieces of content under this community standard in the first three quarters of 2021, there were clear concerns that content in the public interest was not getting through.

Another target of the board’s criticism has been Meta’s policy prohibiting praise of dangerous individuals and organizations [5]. On various occasions, the board has recommended [6] that Meta set out criteria and illustrative examples to increase users’ understanding of the exceptions for neutral discussion, condemnation, and news reporting. The board has stated that even the term “praise” is not properly defined. Meta’s internal guidance instructs reviewers to remove content as praise if it “makes people think more positively about” a designated group, making the meaning of “praise” turn less on the intent of the speaker than on the effects on the audience [7]. As a result, users clearly criticizing human rights abuses [8] or even reporting neutrally on the activities of the Taliban [7] have had their posts removed if they happened to mention an organization targeted by the policy. This is exacerbated by the fact that Meta has kept its list of dangerous organizations hidden from the public.

These concerns are nothing new. In an extensive analysis [9], the NGO Article 19 noted as far back as 2018 that the terms of service of the dominant social media giants — YouTube, Facebook, Twitter, and Google — were characterized by a lack of transparency and by weaker standards of protection for free speech.

To reinforce its content moderation teams, Meta has established a system of “Media Matching Service banks,” which automatically detect and remove images that human reviewers have previously flagged as violating the company’s rules. Initially designed to enable Meta to make decisions at speed and at scale, such systems have ended up amplifying the impact of erroneous human decisions.
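In rough terms, such a bank works like a blocklist of image fingerprints: once a human reviewer bans an image, every later upload that matches its fingerprint is removed without fresh review. The sketch below illustrates that logic only. The class and method names are hypothetical, Meta’s actual systems are not public, and real matching systems use perceptual hashes that tolerate resizing and re-encoding rather than the exact hash used here for simplicity.

```python
import hashlib


class MatchingBank:
    """A minimal, illustrative sketch of a media-matching bank (hypothetical API)."""

    def __init__(self) -> None:
        self._banned: set[str] = set()

    @staticmethod
    def _fingerprint(image_bytes: bytes) -> str:
        # Simplification: an exact SHA-256 digest stands in for the perceptual
        # hash a production system would use to catch resized or re-encoded copies.
        return hashlib.sha256(image_bytes).hexdigest()

    def ban(self, image_bytes: bytes) -> None:
        # A single human decision enters the bank once...
        self._banned.add(self._fingerprint(image_bytes))

    def unban(self, image_bytes: bytes) -> None:
        # ...and stays in force until someone purges it, e.g. after successful appeals.
        self._banned.discard(self._fingerprint(image_bytes))

    def should_remove(self, image_bytes: bytes) -> bool:
        # Every new upload is checked automatically, with no further human review.
        return self._fingerprint(image_bytes) in self._banned


# If the original ban was a mistake, every identical re-upload is removed
# automatically until the entry is taken out of the bank.
bank = MatchingBank()
cartoon = b"<image bytes mistakenly flagged by a reviewer>"
bank.ban(cartoon)
print(bank.should_remove(cartoon))  # True: all matching uploads are auto-removed
bank.unban(cartoon)
print(bank.should_remove(cartoon))  # False: removals stop only after the purge
```

The sketch shows why one wrong entry scales so badly: the check runs on every upload, but the correction only takes effect once the entry is explicitly purged from the bank, which is precisely what went wrong in the case below.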

On one occasion, a user who posted a cartoon criticizing the Colombian [10] police had it removed after it automatically matched an image previously flagged as violating by a human reviewer. In the board’s view, given the cartoon’s clear aim of criticizing state actors or actions, Facebook had been wrong to add it to the bank in the first place. Further exacerbating the board’s concerns was the fact that hundreds of other users had also posted the same cartoon and had their posts removed by Meta’s automated systems. Two hundred and fifteen of these users appealed the removals, and 98 percent of them were successful. Despite this, Meta was slow to remove the cartoon from the media bank.

Conversely, in a decision regarding Ethiopia [11], the board criticized Meta for being too slow to act against content fuelling ethnic tension and armed conflict. According to the board, “more transparency is needed to assess whether Meta's measures are consistently proportionate throughout a conflict and across all armed conflict contexts.” While Meta appeared to take prompt action during the Russia–Ukraine conflict, it has been too slow in other conflict-affected regions such as Ethiopia and Myanmar. The Ethiopian decision laid bare some of the difficulties of policing speech in the context of an ongoing armed conflict. According to a public comment submitted by an academic, there was a “lack of engagement with communities in Ethiopia.” Content inciting violence against Tigrayans had “slipped through the cracks” due to a lack of moderators who spoke the relevant local languages. In one instance, a Facebook post calling for the cleansing of Tigrayans from the Amhara region remained on the platform for more than four months. Citing a report [12] by the Bureau of Investigative Journalism, the commenter noted that online hate campaigns had fuelled abuses against Tigrayans living in the city of Gondar.

Given the sheer volume of content that Meta regulates on its platforms — 23.2 million posts deemed violent and graphic were “actioned” [13] in the third quarter of 2022 — it is inevitable that the company will continue to tread a fine line between protecting users’ freedom of speech and ensuring that Facebook and Instagram do not become safe havens for illegal content. The global drive [14] to push online content providers to take a firmer stance against misinformation and hate speech will only exacerbate the risks inherent in this process. Though the board’s policy recommendations provide much-needed guidance on how to strike that balance, there is a catch: as the name suggests, the recommendations are advisory only. Meta is not obliged to implement them (though it has committed to responding to them within 60 days [15]). If Meta’s platforms are to remain a safe space for human rights defenders, the company will have to engage seriously and in good faith with the Oversight Board.