02 Mar 2022

Marginalised languages and the content moderation challenge

Last week saw the launch of a new report on the state of the internet’s languages, timed to coincide with International Mother Language Day on 21 February. The report explores how multilingual the internet actually is, finding considerable inequalities in the types of services, tools and content available online in different languages.

Such inequalities are often discussed through the lens of access and digital divides. But they also have urgent—though underexplored—relevance to current debates around the effective regulation and moderation of online content. 

Major online platforms increasingly rely on Natural Language Processing (NLP) technology to help them detect and remove harmful text-based content, like hate speech or disinformation, at scale and in a timely fashion. Such tools have considerable limitations for mass content moderation: even in widely spoken languages like English, they struggle to interpret surrounding context or adapt to novel cases. YouTube’s algorithms, for example, erroneously classified a discussion about black and white chess pieces as ‘harmful and dangerous content’ last year. NLP tools are also heavily dependent on their training data and whatever biases those data carry: some hate speech classifiers have been shown to be up to twice as likely to label tweets by Black users as offensive as tweets by other users. We’ve highlighted the risks that such automated tools pose to users’ freedom of expression and right to non-discrimination in our responses to platform regulation proposals in Australia, Canada, Ireland and the UK, arguing that governments shouldn’t incentivise or mandate companies to use them without rigorous human rights safeguards, especially smaller companies that lack the resources to develop such tools appropriately.
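To make the dependence on training data concrete, here is a toy sketch in Python (using scikit-learn) of how a simple bag-of-words classifier inherits whatever its training set teaches it. The tiny dataset, its labels and the chess-style test sentence are all invented for illustration; no real platform’s moderation system works this simply.

```python
# Toy illustration (not any platform's real system) of why keyword-driven
# classifiers misfire without context. The dataset below is invented
# purely for demonstration, including one noisy, mislabelled example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "attack and destroy them all",       # labelled harmful
    "they should all be wiped out",      # labelled harmful
    "white pieces always attack first",  # labelled harmful (annotation noise)
    "what a lovely day for a walk",      # labelled benign
    "the new cafe in town is great",     # labelled benign
    "I enjoyed the film last night",     # labelled benign
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = harmful, 0 = benign

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A benign chess discussion shares surface vocabulary ("attack", "white",
# "pieces") with the harmful examples, so the model typically flags it.
print(model.predict(["the white pieces attack the black pieces"]))  # usually [1]
```

Because the classifier sees only surface vocabulary, a benign sentence that happens to share words with the ‘harmful’ examples gets flagged; the same dynamic, at vastly larger scale, underlies both the chess misclassification and the dialect bias described above.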

It’s clear, however, from the sheer scale of the problem of harmful content online, that automated tools will be a necessary part of platform responses. The problem is that vast portions of user activity on major online platforms take place in languages for which such automated tools are not yet available. The report on the state of the internet’s languages highlights that, of the 5 billion people using the internet today, 75% are from the global south, which is home to 90% of the world’s 7,000+ languages. For many of these marginalised languages, most major platforms have few to no moderation tools or oversight mechanisms at all, automated or not. Bodyguard, for example, a third-party tool that protects Twitter users from hate speech, is only available in English, French, Italian, Spanish and Portuguese. Twitch’s Community Guidelines are available in just 28 languages. And 87% of Facebook’s global budget for time spent on classifying misinformation is allocated to US users, who make up just 10% of the platform’s community.

This geographic imbalance in content moderation capacity has already led to tragic offline consequences and grave human rights abuses, in Myanmar in 2018 and in Ethiopia in recent months. Yet the response of major online platforms has been limited. Rather than systematically redesigning their content moderation policies and tools in consultation with local experts and affected communities, they’ve done little more than belatedly hire local content moderators, creating a host of new problems related to working conditions, psychological trauma and underpayment.

These same online platforms often promise that their content moderation failings in particular contexts will all be solved with bigger and better NLP algorithms and more powerful machine learning tools, like Meta’s new AI Research SuperCluster or Cohere’s large language models. Yet these technologies are, by nature, incredibly data-hungry, needing huge amounts of linguistic information in order to learn to make correct inferences. Digital linguistic datasets, such as Wikipedia knowledge ecosystems, legal repositories or even online dictionaries, simply do not exist for most marginalised languages, even those spoken by millions of people. Even where language data is available, for example as a result of state-led initiatives like the South African Centre for Digital Language Resources or the National Library of Finland’s Fenno-Ugrica project, it may not be relevant to the task of a particular NLP algorithm, and must still be cleaned and tokenised before it can be used as training data. There are, of course, researchers and civil society organisations doing important work to rectify this data sparsity, such as HAL’s TraduXio platform, Hatebase’s Citizen Linguist Lab, the Masakhane research community and Translators Without Borders. But without buy-in from both the public and private sector, the existing divide in NLP capabilities for content moderation will only widen.
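For a sense of what ‘cleaned and tokenised’ means in practice, here is a minimal Python sketch of the kind of preprocessing raw text needs before it can serve as training data. The two Swahili example strings are invented for illustration, and real pipelines (corpus-scale deduplication, language identification, trained subword vocabularies) are far more involved.

```python
# A minimal sketch of cleaning and tokenising raw language data.
# Real NLP pipelines are much more involved; this only shows the idea.
import re
import unicodedata

def clean(line: str) -> str:
    line = unicodedata.normalize("NFC", line)  # unify Unicode encodings of the same character
    line = re.sub(r"https?://\S+", "", line)   # strip URLs
    line = re.sub(r"\s+", " ", line).strip()   # collapse whitespace
    return line

def tokenise(line: str) -> list[str]:
    # Naive whitespace tokenisation (punctuation stays attached); many
    # languages need language-aware segmentation or subword models instead.
    return clean(line).lower().split()

# Invented example corpus with noisy spacing and a URL.
corpus = ["  Habari   ya asubuhi!  ", "Soma zaidi: https://example.org  "]

seen, tokenised_corpus = set(), []
for raw in corpus:
    text = clean(raw)
    if text and text not in seen:  # drop duplicates and empty lines
        seen.add(text)
        tokenised_corpus.append(tokenise(text))

print(tokenised_corpus)
```

Even this trivial pipeline involves judgment calls (how to normalise, what to strip, how to segment) that require fluency in the language concerned, which is why data preparation for marginalised languages cannot simply be automated away.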

Rather than relying on shiny new AI tools, which will, in practice, serve only languages of strategic business importance to major companies, at considerable cost to those least likely to benefit from them, or further outsourcing content moderation to poorly paid reviewers in the global south, major online platforms should invest in the expertise, data and safeguards needed to ensure that all of their users, regardless of language background, are afforded the same protections when using their services. This might involve translating their terms of service, providing better training and support for context- and language-sensitive human moderators, and ensuring that grievance mechanisms are accessible to all users, whatever language they speak. Developing these systems and tools will require meaningful consultation with affected groups, particularly those most at risk from human rights harms caused or contributed to by the platform.

In the long run, those same major platforms should consider using their global influence and significant budgets to build the capacity of local researchers and linguistic experts developing open-source datasets in marginalised languages. This is, indeed, one of the goals of the UNESCO Global Action Plan of the International Decade of Indigenous Languages, and public-private collaboration is needed to achieve it. Major platforms should also support civil society groups working to prevent discrimination against marginalised-language users. Only by expanding the digital infrastructure and NLP tools available for marginalised languages, and ensuring that such tools are deployed in a safe, consultative and context-sensitive manner, will such platforms be able to design and implement effective and scalable solutions to illegal and harmful content online for all their users. Ideally, that investment would come before, not after, the next Myanmar.