27 Sep 2018

The EU’s Code of Practice on Disinformation: First thoughts

A group of online platforms, social networks, advertisers and advertising industry bodies, brought together by the European Commission, has this week agreed a new Code of Practice on Disinformation, setting out a series of commitments to tackle disinformation. While many of these commitments are positive, or at least unproblematic, there are a number of shortcomings that raise concerns from a freedom of expression perspective.


Background

The development of the Code was first announced by the European Commission in a Communication published in April of this year, which proposed the creation of a “multistakeholder forum on disinformation” with two aims: to “provide a framework for an efficient cooperation among relevant stakeholders” and to “secure a commitment to coordinate and scale up efforts to tackle disinformation”. The forum was established shortly afterwards and split into two groups: the first, a working group tasked with drafting a Code of Practice, made up of representatives of major online platforms and their trade associations, as well as advertising agencies and associations; the second, a “sounding board” made up of representatives of the media, civil society, fact-checkers and academia, who would review the draft Code. In July, the draft was made public, and this week the final version – as well as the opinion of the sounding board – was published and opened for signature.

The Code of Practice divides the commitments into five sections:

  • Improving scrutiny of advert placements and avoiding the promotion of websites or adverts that spread disinformation;
  • Ensuring that political advertising and issue-based advertising are clearly distinguished from editorial content and news, and improving transparency about their sponsors;
  • Tackling fake accounts and improving transparency around the use of bots;
  • Empowering consumers by making it easier to find trustworthy and diverse sources of news;
  • Empowering the research community by encouraging independent efforts to track disinformation and to support research into disinformation and political advertising.

Progress made against the commitments will be evaluated through annual reports published by the Code’s signatories and reviewed by a third-party organisation. Separately, both the signatories and the European Commission will undertake an assessment of the Code of Practice after twelve months. Signatories include some of the largest tech companies, such as Facebook, Google, Twitter, and Mozilla.


The Good

There are a number of positive aspects to the Code of Practice, not least the narrow definition of “disinformation”. While this term (and others like it) has been used and misused in many states to discredit information which is simply unpopular or critical of those in power, the Code uses the European Commission’s own definition, namely “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”. “Public harm” is understood to mean “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”. The Code makes clear that disinformation does not include “misleading advertising, reporting errors, satire and parody, or clearly identified partisan news and commentary”.

By restricting the content addressed to that which is verifiably false or misleading, and only where it is created for economic gain or would threaten a legitimate public interest such as public health, security or democratic processes, the definition appears consistent with international standards on when particular forms of expression may permissibly be restricted, such as those set out in Article 19(3) of the International Covenant on Civil and Political Rights.

Second, the Code of Practice itself repeatedly emphasises the importance of the right to freedom of expression and other human rights, and explicitly prohibits any action which would be inconsistent with those rights. The preamble refers to “the fundamental right to freedom of expression and to an open Internet”, recognises the “delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike”, and requires the signatories to ensure that any actions taken comply with, among other instruments, the EU Charter of Fundamental Rights and the European Convention on Human Rights (ECHR). In addition, throughout the Code, a number of specific commitments note that they should be complied with in a manner consistent with particular provisions of the ECHR, including the right to freedom of expression in Article 10.

Finally, the vast majority of the commitments are limited and proportionate responses with a strong focus on greater transparency, providing more information to users, and supporting further research.


The Not So Good

Despite the many positive aspects of the Code, there are three particular areas which are disappointing and could be improved.

First, within the section on empowering consumers, there is a commitment to invest in technology which would “prioritize relevant, authentic and accurate information” in search feeds and other automatically ranked distribution channels. It is true that affected content would only be made harder to find, not automatically removed. But the reliance on technology to make these determinations is troubling. Algorithms and other forms of automated decision-making are not infallible when it comes to identifying content which is “disinformation”, and their limitations with respect to other forms of content are well documented. With no guarantee of human oversight or involvement in the prioritisation process, it’s extremely likely that at least some information will be incorrectly determined by a machine to be “disinformation”.

Secondly (and relatedly), there is nothing in the Code which would enable a producer or publisher to be made aware that their content had been identified as “disinformation” and thus deprioritised or labelled in a particular way. Nor has any provision been made for users to appeal against such decisions and obtain a remedy where appropriate. Without these critically important features (the latter in particular being consistent with companies’ responsibilities under the United Nations Guiding Principles on Business and Human Rights), there would be a worrying lack of transparency and accountability.

Finally, there’s the weakness of the assessment and evaluation process, which essentially amounts to self-reporting by each signatory, reviewed by a third-party organisation chosen by the signatory itself. Under this system, there will be no meaningfully independent assessment or review of the signatories’ actions, meaning that shortcomings or adverse impacts stemming from those actions – which may include risks to freedom of expression – could remain unknown. Such an approach has been a source of contention in other, similar processes which allow companies to determine who will be marking their homework.

Overall, the Code of Practice contains a number of positive aspects, and the majority of its commitments are limited and proportionate. Indeed, some – including the sounding board – have criticised the Code for not going far enough; and even the Commissioner for Digital Economy and Society, Mariya Gabriel, while welcoming the Code as an “important step”, noted that the European Commission may still propose further actions, including regulatory measures, should the Code prove ineffective. GPD would caution against any further measures which incentivise the removal or deprioritisation of content, particularly where automated processes are involved. Instead, we urge the European Commission and the signatories to the Code to focus their efforts on ensuring human involvement in any content review processes; clear and effective grievance and remedial processes to challenge decisions; and transparent and independent assessments of companies’ policies, processes and actions. For more detail on how such a model might work in practice, see our recent white paper on online content regulation by platforms.