Unpacking “harmful content”: Cyberbullying

19 Sep 2019

By Richard Wingfield

The UK government’s Online Harms White Paper proposes placing legal obligations on online platforms to remove or restrict particular forms of illegal or “harmful” content. 

In this series, we’ll be taking a close look at three of those categories of “harmful” content—cyberbullying, extremist content and behaviour, and disinformation—to better understand what impacts they might have, and possible alternatives. In this first post, we look at cyberbullying.

*

The scenario

The year is 2031. A woman in Manchester sends a series of tweets to an MP, accusing her of betraying her constituents by participating in the twelfth consecutive government of national unity. In the thread, alongside measured criticism, she tries to send a tweet calling the MP “a sell-out”. Twitter’s newly updated algorithms flag the keywords and preemptively remove it; as a result, her account is suspended. After she appeals, the tweet is referred to a human moderator, who judges that the initial removal was correct, because the UK’s Online Harms Act 2020 requires online platforms to prevent “cyberbullying”.

 

What does the Online Harms White Paper actually say about “cyberbullying”?

Despite referring to “cyberbullying” as a problem over 30 times, the UK government’s Online Harms White Paper never actually defines what it means by the term. Instead, it refers primarily to two surveys as evidence to justify its inclusion as a “harm” to be tackled. 

One of these is a 2017 NHS study of the mental health of children and young people in England, in which one in five children aged 11 to 19 reported having experienced cyberbullying in the past year. The definition of “cyberbullying” used in that survey was extremely broad, and included “unwanted or nasty” emails, texts or messages; “nasty” posts; being “ignored or left out of things online”; and “inappropriate pictures”.

The other is the 2017 Ditch the Label survey, which showed that some groups experience cyberbullying more than others. That survey also used a list of examples of what could constitute cyberbullying, which included: “nasty comments”, sharing private information, impersonating another person, and starting rumours.

 

How will this change current UK law if implemented?

Bullying, whether offline or online, has until now been treated as an issue for those responsible for people’s welfare (such as schools and employers) to deal with by developing and enforcing institutional policies. In more extreme cases, where the bullying amounts to a criminal offence, the police might become involved and a person could potentially be prosecuted.

The Online Harms White Paper takes a different approach, placing a new statutory duty on online platforms to remove or prevent access to online content which is seen to constitute cyberbullying. Failure to comply with the duty of care could lead to civil fines, or potentially more severe sanctions such as blocking of the platform or criminal liability for senior members of staff. 

 

The arguments for including cyberbullying

Bullying can cause serious harm, particularly to children, and cyberbullying is a growing concern in the UK: one in five children aged 11 to 19 reported having experienced it in 2017. This is itself a good reason to take steps to address it.

There’s also the fact that bullying is already prohibited in some physical, “offline” environments in the UK. Schools and employers have legal duties to protect students and employees from bullying, for example; and where bullying reaches a particularly high threshold, it can amount to a criminal offence, and merit police attention.

If we accept that bullying can be prohibited in specific “offline” environments, and that someone should try to prevent it, why shouldn’t it be prohibited and prevented on online platforms? Platforms, like schools or workplaces, are administered, controlled environments in which people interact and express themselves. It might be beyond the capability of a school to stop a student being bullied on a social media platform, but the platform itself has many tools at its disposal to effectively intervene—it can, for example, monitor communications, remove them, and suspend users’ accounts.

 

Why this argument doesn’t work

As logical as it sounds, there are serious risks in giving platforms a mandate to tackle “cyberbullying”.

The first key problem lies in defining “cyberbullying” at all. As we’ve seen, the definitions of “cyberbullying” used in the White Paper’s sources of evidence set a very low threshold for what counts as unacceptable (such as saying things which are “nasty” or even just “unwanted”). This matters, because the right to freedom of expression has a very high threshold: it protects, for instance, speech which is shocking, offensive or disturbing. The White Paper’s implied definition of “cyberbullying” suggests that many of these lawful, if not always savoury, forms of expression would, in practice, be prohibited.

It can’t be emphasised enough how complicated it is to define speech that is harmful rather than merely irritating or hurtful, especially with something as broad and vague as “cyberbullying”. Consider, for example, that under international human rights law, speech is treated differently depending on the circumstances in which it is made and on who it is directed at. In the words of the UN Human Rights Committee, “the value placed (…) upon uninhibited expression is particularly high in the circumstances of public debate in a democratic society concerning figures in the public and political domain”. This means that a very critical tweet might be seen as cyberbullying if sent to a private citizen, but not if sent to a politician.

Would platforms be able to appreciate these nuances? Think about the number of cases they would likely have to handle. Thousands? Hundreds of thousands? Millions? It would be impossible for trained legal experts to review each one personally. And given the volume of cases, and (under the White Paper’s proposals) the threat of a fine from a government regulator for not handling them quickly enough, moderators would have a powerful incentive to “err on the side of caution” and take down ambiguous content rather than leave it up. That is, if a human moderator is even involved: it’s very likely that platforms would turn to artificial intelligence to monitor and remove content, a technology notoriously poor at accounting for nuance and context in human speech.
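To make this failure mode concrete, here is a minimal sketch in Python of the kind of context-blind keyword filter imagined in the opening scenario. Everything in it (the keyword list, the function name, the sample messages) is invented for illustration; real moderation systems are more sophisticated, but automated classifiers share the same underlying weakness: they score the surface of the text, not who is speaking, to whom, or in what context.

    # A deliberately simplified, hypothetical keyword filter.
    # The keyword list and messages below are invented for illustration.
    BULLYING_KEYWORDS = {"sell out", "sell-out", "liar"}

    def flag_for_removal(message: str) -> bool:
        """Return True if the message contains any flagged keyword."""
        text = message.lower()
        return any(keyword in text for keyword in BULLYING_KEYWORDS)

    messages = [
        # Criticism of a politician: strongly protected under human rights law.
        "You are a sell-out for backing this government, Minister.",
        # Abuse aimed at a private individual: far more plausibly bullying.
        "Everyone at school knows you're a liar. Nobody likes you.",
        # An entirely innocuous use of a flagged phrase.
        "Tickets for the gig sell out fast, so grab one today!",
    ]

    for message in messages:
        print(flag_for_removal(message), "->", message)

    # All three messages are flagged as True: the filter cannot distinguish
    # political speech, targeted abuse and a harmless idiom, which is exactly
    # the kind of nuance the proposed duty of care would require.

A regulator-imposed deadline only sharpens the problem: a platform facing fines for under-removal has every incentive to act on a crude signal like this one rather than invest in slower, context-sensitive review.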

 

Alternatives

Worried about this scenario? The good news is that there is a range of ways to address the challenge, many of which mirror approaches taken in the “offline” world without prohibiting speech or sanctioning speakers.

  • The most extreme forms of “cyberbullying”, those which cause serious, objective harm, can still be prohibited through legislation and enforced, regardless of whether the speech or behaviour occurs online or offline. Tightly defined offences such as inciting violence against another person, putting them in fear of violence, or attempting to stir up hatred towards a particular group are all legitimate criminal offences. Where these occur online, law enforcement agencies should be sufficiently resourced to investigate and prosecute them.
  • Most online platforms, as noted above, have policies in place to respond to cyberbullying, such as removing pieces of content or suspending or banning users. Encouraging platforms to set out clearly in their terms of service when speech or behaviour constitutes a violation, and to provide accessible, simple means for users to report it, will make it easier for users to take action against cyberbullying which crosses a certain threshold.
  • Digital literacy programmes (led by government and online platforms) can also help empower children and other vulnerable users to use the internet safely. These programmes could include guidance for schools on how to teach students about cyberbullying as part of the curriculum, and on how to respond, through their own disciplinary processes, to incidents of students cyberbullying each other online.

*

Following the online content regulation debate? Sign up to GPD’s monthly newsletter for updates on the Online Harms White Paper, Facebook’s Oversight Board, and more.