How COVID-19 is intensifying content moderation’s flaws

Graphic by Mozilla. Used with permission.

The following post was co-authored by guest writers Solana Larsen, Internet Health Report Editor at Mozilla, and Leil Zahra, a Mozilla Fellow embedded at WITNESS.

As people around the world shelter from the COVID-19 pandemic, the internet has become an even more vital resource than before. Millions of us now exclusively connect, commiserate, learn, work, and play online. 

In some ways, the pandemic has revealed the internet’s potential as a global public resource. But it is also revealing what’s unhealthy about the internet: Those without access are now at an even greater disadvantage. And privacy and security shortcomings in consumer technology are now leaving far more people vulnerable.

One troubling issue the pandemic brings to the fore is the internet’s broken content moderation ecosystem. That is, the deeply flawed ways that big tech platforms deal with online hate speech, disinformation, and illegal content. Despite many advancements, Facebook, YouTube, and other platforms often moderate content in ways that are capricious or downright dangerous: harmful content is left to fester, and acceptable content is unfairly removed. These decisions often disproportionately affect people in the Global South and those who speak less widely supported languages.

Right now, we’re at an inflection point. Will content moderation's flaws loom even larger? Or can this crisis be a springboard for positive, lasting change in how we communicate online? 

One of the thorniest problems with content moderation is who — or what — does the moderation. With pandemic-related misinformation overwhelming platforms and many human content moderators unable to work, companies are leaning more heavily on artificial intelligence. The limits of this approach are clear. Automated filtering has failed to prevent COVID-19 misinformation from spreading like wildfire and endangering public health. Platforms are awash in posts suggesting that drinking bleach can cure the virus, or that 5G technology is somehow spreading the disease.

In addition to overlooking misinformation, automated moderation can also accidentally censor quality content. Internet users asking earnest questions, or speaking in local vocabularies or contexts, can be incorrectly labeled as problematic. Mid-pandemic, automated moderation on Facebook resulted in the erroneous takedown of a large amount of content, including links to news articles about COVID-19. In March, Facebook claimed a technical “issue” was behind this, and said it had restored the removed content. But the episode raises serious questions about the reliability of such systems, and, further, casts doubt on Facebook’s transparency. When YouTube announced in March that it had heightened automated filtering due to COVID-19, it wrote: “As we do this, users and creators may see increased video removals, including some videos that may not violate policies.” Similarly, Twitter explained the same month that its automated moderation “can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

It doesn’t help that when content is unfairly removed or an account is suspended, the appeal process can be opaque. In many cases, users are given no explanation of why their content was removed or their account suspended. And even before the pandemic, context was a major issue in discussions of content moderation: for instance, whether a U.S.-centric approach to what counts as acceptable language should be applied internationally.

Flawed technology is one problem with content moderation; inequitable policies are another. In the United States and some European countries, big tech platforms can be fairly vigilant about following local laws and upholding their own internal policy commitments. Elsewhere, this isn’t the case. During Nigeria’s 2018 election, researcher and Global Voices contributor Rosemary Ajayi and a group of colleagues catalogued hundreds of tweets spreading disinformation, and were dismayed by how unpredictably and inconsistently the platform responded to reports of this activity. “If you report something serious on election day, and they get back to you a week later, what’s the point?” said Ajayi. The thought is equally frightening in today’s context: if a platform removes COVID-19 misinformation only after millions of people have seen it, the damage is already done.

These are just two of several long-standing problems in the realm of content moderation. In Mozilla’s recent survey of the social media space, we examined several others. We spoke with SIN, a Polish nonprofit drug harm reduction group, which was suspended by Facebook and unable to appeal the decision. And we spoke with the human rights research group Syrian Archive, which says platforms frequently delete evidence of human rights abuses committed in war. It’s not hard to see how instances like these could be especially grave during a pandemic. What if critical health information disappears, or evidence of lockdown-related human rights abuses is wrongly taken down?

There’s no panacea for these problems. But greater transparency about what is taken down, when, and why, as well as what portion of takedowns are appealed and reinstated, could help researchers and affected communities advise platforms and policymakers far better. The transparency reports of major platforms have become more detailed over the years, thanks in part to pressure from civil society, including the signatories of the Santa Clara Principles. This community initiative, launched in 2018, outlines baseline standards for transparency and accountability in content moderation, and several platforms have since endorsed the principles. In March, noting that the principles would benefit from an update, the Electronic Frontier Foundation (EFF) opened a global call for proposals (the deadline is June 30) on how they could better meet the needs of marginalized voices that are heavily affected.

So much is unknown about moderation and appeal patterns in different global contexts that even anecdotal evidence from affected users is a precious resource. Silenced.online is one new grassroots tool for crowdsourcing and discussing experiences of unfair takedowns around the world. It aims to create a network of organizations and individuals who are working, or want to start working, on content takedowns and moderation.

Other groups agree that it’s crucial for civil society and researchers to be engaged on questions of content moderation and platform regulation. Scandals and crises tend to provoke new rules and regulations, or calls for further automation, that aren’t necessarily based on independent analysis of what works. New approaches to accountability and appeal mechanisms, like Facebook’s new Oversight Board, demand attention from a global public.

As outlined above, the COVID-19 pandemic is bringing content moderation’s flaws into sharp relief, but the problems are long-standing, not least on matters of health and disinformation. They call for a change in how platforms operate day to day, as well as in how they are held accountable. Heightened attention to the issue has the potential to catalyze something good: more transparency, more humane technology, better laws, and a healthier internet.
