Unpacking “harmful content”: Extremist content

7 Oct 2019

By Richard Wingfield

The UK government’s Online Harms White Paper proposes placing legal obligations on online platforms to remove or restrict particular forms of illegal or “harmful” content. 

In this series, we’ll be taking a close look at three of those categories of “harmful” content—cyberbullying, extremist content and behaviour, and disinformation—to better understand what impacts they might have, and possible alternatives. In this second post, we look at extremist content.

*

The scenario

In 2022, a prominent public figure posts a long critique of continued British military involvement overseas on their personal Facebook page, calling it “imperialist” and saying that British soldiers “need to face consequences for what they’re doing”. A journalist at a newspaper discovers the post and uses it to launch a public campaign against “anti-military hate speech” on online platforms, calling on OfWeb, the UK’s newly established social media regulator, to punish Facebook for leaving it (and similar posts) up. As a result of public and political pressure, and the threat of an OfWeb investigation, Facebook removes the posts in question, suspends the associated accounts, and commits to tweaking its algorithms to filter out “hateful” content about British soldiers.

*

What does the Online Harms White Paper actually say about “extremist content”?

The UK government’s Online Harms White Paper includes “extremist content and activity” in its list of harms to be tackled. While it doesn’t explain what these terms mean, or provide examples, references to “extremist content” usually appear where the White Paper discusses terrorism and violence.

The government’s Counter-Extremism Strategy is also a likely reference point for the proposals, even though it isn’t explicitly referenced. This Strategy notably sets out a government-developed definition of “extremism” as “the vocal or active opposition to our fundamental values, including democracy, the rule of law, individual liberty and the mutual respect and tolerance of different faiths and beliefs” as well as “calls for the death of members of our armed forces”.

*

How will this change current UK law if implemented?

Extremism, or the expression of extremist views, is not in and of itself illegal in the UK. Instead, the government’s current approach to tackling extremism relies mostly on non-criminal measures which aim to prevent individuals from developing an extremist ideology, and to change people’s ideology when they do. This includes, through its “Prevent” programme, presenting alternatives to extremist narratives, and engaging with communities and vulnerable individuals. Where “extremist language” is seen to amount to a criminal offence—for example, if it includes an incitement to kill, or is directed towards a particular group—the police may become involved and prosecute the proponents of that language under the criminal law, such as legislation targeted at combating hate speech.

The approach of the Online Harms White Paper is not to introduce new laws on extremist content. Rather, it would place a new statutory duty on online platforms to remove or prevent access to online content which constitutes “extremist content”. Failure to comply with the duty of care could lead to civil fines, or potentially more severe sanctions, such as blocking of the platform or criminal liability for senior members of staff.

*

The arguments for including extremist content

While extremist speech can, in and of itself, be criminal in exceptional circumstances, even non-criminal forms of extremism can be socially harmful—and may pose threats to human rights. When people in a society are radicalised by what they see and hear, including online, society as a whole might become less tolerant of difference, and hostile to the free expression of certain ideas. At the same time, individuals holding extremist views may be more inclined to perpetrate hate crimes against marginalised groups (such as women, specific religious groups, or the LGBTQ+ community), or even acts of terrorism, which result in injury or death.

Addressing extremism when it manifests online also makes sense given that the law already requires actors in other spaces to take steps to identify and challenge extremism. Many institutions—including schools, prisons, universities and hospitals—are tasked with identifying extremism and radicalisation, and alerting law enforcement or other authorities as necessary. Why should online platforms be free of these obligations? After all, the online environment is often a gateway for individuals to be radicalised, with extremist and terrorist groups all using platforms like Facebook and Twitter to promote their cause and recruit people. 

If we know extremist activity is taking place online, and we know that online platforms have the technical capability to intervene to stop it (e.g. by monitoring communications and suspending accounts), why shouldn’t they be expected and required to do so?

*

Why this argument doesn’t work

However tempting this argument may be, there are two key problems with such an approach.

First, it is incredibly difficult to define “extremist content” without bringing in huge swathes of legitimate speech. Freedom of speech includes the right to say things which are deeply offensive. Even if you don’t like someone describing Britain’s democratic institutions as a sham, or—for instance—expressing a preference for dictatorship, the expression of these sentiments in and of itself shouldn’t be prohibited, according to the international human rights framework. Similarly, criticism of someone’s religious beliefs might be seen as aggressive, intolerant, and indeed “extreme”; but—again—unless it includes incitement to violence or hatred, it is likely to be legitimate free expression.

Making decisions about when “extreme” forms of speech should lead to action being taken requires finely considered judgement. Even if we believe that online platforms should be allowed to make such judgments (rather than institutions like the courts and the independent judiciary), they simply don’t have the resources to do so—especially given the vast scale of content they host. If they are given the power and incentive to do this (which the Online Harms White Paper, in its current form, would do), we risk creating a situation in which large amounts of speech permitted offline are prohibited online.

A second reason why this approach is a bad idea: there’s very little evidence that restricting “extreme” content is effective as an anti-extremism strategy. While online platforms may have the technical capability to monitor content, remove it, and suspend accounts, there is little else they can do, and there is little evidence that such blunt measures would be effective in tackling extremism. And even if online platforms were not expected to remove extremist content, but simply to pass it on to law enforcement agencies or other authorities, this would be a huge undertaking and would put them in the invidious position of de facto taking on public functions.

*

Alternatives

As the government’s own Counter-Extremism Strategy shows, there are a range of actions that can be taken to help tackle extremism, including its online manifestations. Some alternative measures which pose fewer risks from a human rights perspective include:

  • Digital literacy programmes (led by government and online platforms) which help empower young people and vulnerable users to use the internet safely, including information on how to identify and resist extremist content and views.
  • Making it easier for internet users to report extremist material, either to law enforcement agencies or other authorities so that appropriate interventions can be made. This could be enhanced by providing greater support and resources to law enforcement agencies to investigate online activity, and notifying online platforms of particular pieces of content that constitute criminal offences related to extremism.
  • Encouraging platforms to provide clear terms of service setting out when speech or behaviour violates those terms, as well as accessible and simple means for users to report such behaviour. This will make it easier for users to take action against extremism when it crosses a certain threshold.

*

Following the online content regulation debate? Sign up to GPD’s monthly newsletter for updates on the Online Harms White Paper, Facebook’s Oversight Board, and more.