Unpacking “harmful content”: Disinformation

27 Jan 2020

The UK government’s Online Harms White Paper proposes placing legal obligations on online platforms to remove or restrict particular forms of illegal or “harmful” content. 

In this series, we’ll be taking a close look at three of those categories of “harmful” content—cyberbullying, extremist content and behaviour, and disinformation—to better understand what impacts they might have, and to explore possible alternatives. In this final post, we turn to “disinformation”.

*

The scenario

Megan is in a group on an online platform with a few of her close friends from university. Megan and her friends use it to organise social events, share personal updates, and chat about news and current affairs. In the lead-up to a general election, one of Megan’s friends shares a news article about a recent indiscretion on the part of a notable politician. Megan clicks on the link, half-reads the article, and shares it on her timeline.

Two weeks later, Megan’s social media account is deactivated, her friendship group’s chat is shut down, and Megan and the friend that originally shared the post are barred from the platform. When Megan and her friend appeal the decision, they are told that their accounts have been suspended to comply with the online platform’s duty to prevent the spread of disinformation. 

*

What does the Online Harms White Paper actually say about “disinformation”?

Disinformation is one of the harms identified as falling within the scope of the Online Harms White Paper, which describes it as a “threat to our way of life” that “threatens the UK’s values and principles, and can threaten public safety, undermine national security, fracture community cohesion and reduce trust”.

The Online Harms White Paper refers to disinformation 38 times in total, defining it as “information which is created or disseminated with the deliberate intent to mislead… to cause harm, or for personal, political or financial gain”. While acknowledging potential ambiguities within this definition, the White Paper does not attempt to define disinformation further, beyond identifying AI-facilitated manipulation and microtargeting as particular areas of concern.

*

How will this change current UK law if implemented?

Until now, tech companies such as Facebook and Google have faced only minimal regulation of their activities in the UK. It has largely been left up to them to decide how they deal with disinformation on their platforms, if at all.

In this regard, the White Paper would represent an entirely new approach—introducing a legally binding set of obligations on platforms to tackle disinformation. 

Specifically, the White Paper proposes establishing a duty of care for online platforms to take proportionate and proactive measures “to help users understand the nature and reliability of the information they are receiving, to minimise the spread of misleading and harmful disinformation and to increase the accessibility of trustworthy and varied news content”. 

The White Paper also proposes the establishment of an independent regulator that would develop a Code of Practice for disinformation and implement, oversee and enforce the new regulatory framework. This regulator would likely have the power to take action where breaches of the duty of care occur, including through the imposition of substantial fines.

The Department for Digital, Culture, Media and Sport has also been consulting on additional powers that would (in extreme cases) empower the regulator to hold senior management at online platforms liable for major breaches of the duty of care, and require internet service providers to block access to non-compliant websites or apps from within the UK.

*

The arguments for including disinformation

Disinformation spread on social media platforms is widely acknowledged to be a growing problem. The University of Oxford’s Computational Propaganda Project has found evidence of organised social media manipulation campaigns in over 70 countries in 2019—up from 28 countries in 2017.  

And the effects of this disinformation are serious. It has been argued that disinformation erodes public confidence in the media, in public and private institutions, and even in elections themselves, where it has stoked fears of foreign interference, most notoriously in the 2016 US election and the UK’s referendum on EU membership.

Disinformation also has the potential to harm public health, for example by spreading inaccurate information about vaccination, a phenomenon which has been linked to the resurgence of diseases, such as measles, that had once been all but eliminated.

The UK government has a responsibility to protect the integrity of democratic processes, and a mandate to protect public health—which it has invoked in many disruptive and restrictive legislative initiatives, like the ban on smoking in public spaces. If de facto public spaces like Twitter and Facebook are facilitating the spread of expression which poses a risk to the population, shouldn’t they also be subject to regulation?

Besides, the regulation of expression is far from an alien concept in the UK. The current media regulator, Ofcom, is vested with sanctioning powers; the advertising regulator, the Advertising Standards Authority, can demand the removal of misleading or offensive advertisements; and the courts can order the removal of defamatory material. Why would a social media regulator be any different?

*

Why this argument doesn’t work

There are several key reasons why this argument falls down.

The first is the difficulty of distinguishing disinformation from, say, information that is merely inaccurate or out of date. The White Paper attempts to get around this by specifying that such information must be shared with the “deliberate intent to mislead”.

This raises an obvious question: how will platforms be able to determine intent? At present it is entirely unclear, which is troubling, especially given the scale of content platforms will have to review: a task that would almost certainly necessitate the use of algorithms, and perhaps even proactive content moderation. If platforms were being asked to make these sensitive determinations about motive in a neutral, pressure-free context, that would be one thing. But the White Paper would have them do so under the auspices of a highly punitive sanctions regime, which incentivises erring on the side of caution to demonstrate compliance. The likely result? Suspensions or content takedowns imposed on ordinary users who inadvertently share content flagged as disinformation.

Perhaps the most compelling argument for exercising restraint in regulating disinformation is the remarkable poverty of our current understanding of it. Despite heavy media coverage of the issue, there has been little conclusive research into its actual impact, either on human behaviour or on wider structures and institutions (e.g. trust in the media and in democratic processes). Notably, a causal connection between disinformation and individuals’ political opinions and voting behaviour has not yet been established. Would the government attempt to launch a transformative new policy on, say, healthcare or education with such a paucity of evidence?

One area where we do have concrete evidence is the implementation of similar laws in other countries. Only last week, a legal challenge was brought against Singapore’s “misinformation law”, amid concerns it is being used by the government to stifle criticism. Malaysia’s parliament voted to repeal its repressive “fake news law” only a year after its introduction. Closer to home, Germany’s NetzDG law—which imposes a similar sanctions regime on platforms which play host to “hate speech”—is also subject to legal challenge, and has provoked controversy and debate around overblocking and due process without (so far) yielding measurable progress on combating hate speech.

*

Alternatives

The model proposed by the Online Harms White Paper is not the only way we can deal with disinformation. Alternatives include:

  • Media and information literacy programmes;
  • Encouraging independent and verified fact-checking (such as Africa Fact Check);
  • The enforcement of effective data protection legislation which tackles microtargeting and surveillance advertising based on user data; and
  • Promoting transparency around advertising and political campaigning, particularly at election times.

None of these proposals will, alone, entirely stop users spreading disinformation. However, when taken together, these alternative measures would represent a more sustainable, proportionate and holistic response to the challenge of disinformation. Rather than simply tackling the symptoms of disinformation—by trying to suppress, control or predict individual user behaviour—they focus on fostering informed communities of users, organic forms of rebuttal and verification, and a healthy and accurate information ecosystem.

*

Following the online content regulation debate? Sign up to GPD’s monthly newsletter for updates on the Online Harms White Paper, Facebook’s Oversight Board, and more.