Emerging Tech Digest (November 2021)

27 Nov 2021

This extract is taken from the November 2021 issue of The Digest, GPD’s newsletter. Sign up here.

Are online content policies failing marginalised groups?

Last month, Twitter expanded its private information policy to include images and videos, recognising the impact of misuse of private media on ‘women, activists, dissidents, and members of minority communities’ in particular. 

It has long been recognised that marginalised groups suffer a disproportionate amount of harassment, hate speech and incitement to violence while using the internet. Beyond contributing to offline human rights abuses, these harms also impact targeted users’ right to freedom of expression online, often leading to self-censorship out of fear of attack or abuse. Twitter’s latest policy update is just one among a raft of recent measures by tech companies and governments to try to improve the protections and remedies available for vulnerable groups online.

Yet almost as soon as Twitter implemented the change, it was weaponised by users affiliated with far-right movements to report and lock the accounts of anti-fascist activists and journalists. It's a pattern we've seen far too often: new content governance measures having adverse and unintended consequences for those most in need of protection. Earlier this year, a glitch in TikTok's algorithm for the detection and removal of hate speech erroneously prevented creators from showing support for the Black Lives Matter movement. And while Australia made the non-consensual sharing of intimate images a core focus of its Online Safety Act (a positive step for women's safety), it also disempowered sex worker communities.

While discriminatory and hateful online content should not be left to flourish unchecked, blunt or hasty content governance measures and tools developed without proper consultation can cause more harm than good. How can companies and policymakers avoid these damaging clashes, and ensure that marginalised groups can fully exercise their right to freedom of expression? A few basic recommendations, rooted in international human rights law standards:

  • Before developing content governance responses, build a solid understanding of how users are affected and what their needs are, including through engagement with relevant stakeholders and victims of online abuse, and through disaggregated transparency reporting data
  • Ensure that your approach centres the rights of users to freedom of expression without discrimination, while also recognising that measures should be taken to ensure respect for other human rights, such as the right to security
  • Make it easier to report hateful or harassing content or activity, and give users greater control over the content and users that they interact with
  • Ensure that content moderators with local cultural and linguistic expertise are appropriately resourced, educated and supported to make context-sensitive decisions, with automated processes and machine learning used only where there is a sufficiently high degree of accuracy, and always with human oversight (see the illustrative sketch after this list)
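
To make that last point concrete, here is a minimal sketch in Python of how automated screening might be combined with human oversight. Everything in it (the function names, the classifier's output format, the 0.95 threshold) is an illustrative assumption rather than a description of any platform's actual system: the point is simply that automation narrows the human review queue instead of replacing it, and that anything below a measured accuracy bar falls through to a reviewer with the right local expertise.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    # Illustrative value only: in practice a threshold would be set per
    # language and policy area, based on measured model accuracy.
    AUTO_FLAG_THRESHOLD = 0.95

    @dataclass
    class ModerationDecision:
        action: str           # "auto_flag" or "human_review"
        reviewer_locale: str  # route to moderators with local expertise

    def route_content(
        text: str,
        locale: str,
        classifier: Callable[[str], Tuple[str, float]],
    ) -> ModerationDecision:
        # `classifier` stands in for any model that returns a
        # (label, confidence) pair; no specific API is assumed.
        label, confidence = classifier(text)
        if label == "hate_speech" and confidence >= AUTO_FLAG_THRESHOLD:
            # High-confidence hits can be flagged automatically, but the
            # decision still lands in a human audit queue for oversight.
            return ModerationDecision("auto_flag", reviewer_locale=locale)
        # Everything else goes straight to a reviewer with the relevant
        # cultural and linguistic expertise.
        return ModerationDecision("human_review", reviewer_locale=locale)

    # Example: route_content("...", "pt-BR", lambda t: ("hate_speech", 0.4))
    # -> ModerationDecision(action="human_review", reviewer_locale="pt-BR")
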

There are some small but encouraging signs that platforms and governments are starting to pay greater heed to these issues: from the UK's recent announcement of a discovery project into gendered online violence, to Facebook's Women's Safety Hub and TikTok's Black Creatives incubator hub. And a growing bank of high-quality research into online violence, such as Amnesty's Troll Patrol Report or InternetLab's Discourse on Gender-Based Violence in Brazil, is unlocking vital new insight into the structural factors behind online violence.

But it’s clear that the wholesale change that is needed will only come when the rights and needs of marginalised communities are the starting point for content regulation approaches, rather than just an afterthought.

Listening post

Your monthly global update, tracking laws and policies relevant to the digital environment.

For emerging tech:

  • At UNESCO's General Conference, the organisation's member states unanimously adopted the Recommendation on the Ethics of Artificial Intelligence, developed over the last two years. You can see our thoughts on the final text here
  • In Chile, the government has recently adopted its National Artificial Intelligence Strategy (in Spanish, here)