Approaches to content regulation – #1: A Code of Practice

1 Feb 2019

By Amy MacKinnon and Richard Wingfield

In recent months, the global debate around harmful and illegal online content has noticeably shifted. If 2018 was characterised by a spate of self-regulatory initiatives by platforms, in 2019 there’s a growing focus on the role governments should play on this issue. Nowhere is this turn more evident than in the United Kingdom, where government consultations have provoked a flurry of competing models for the regulation of online content from actors as diverse as parliamentary committees, telecommunications regulators, academic institutions, corporations and civil society organisations.

To date, four proposals have emerged as serious contenders for inclusion in the government’s upcoming Online Harms White Paper, each of which holds the potential to fundamentally alter the regulation of online content in the UK. They are:

  1. the introduction of a mandatory code of practice for social media platforms;
  2. the imposition of a duty of care owed by social media platforms to their users;
  3. the creation of a regulator with a mandate to oversee social media platforms; and
  4. the introduction of transparency reporting requirements for social media platforms.

In this blog series, we’ll take a forensic look at each proposal: what’s being proposed, how it differs from the current state of play, and its potential implications for the enjoyment of human rights online. In this first post, we look at the concept of a code of practice.

***

What is a code of practice?

A code of practice is a document or legal instrument which sets out standards of appropriate behaviour and conduct for a particular sector, industry or profession. Codes of practice typically take one of two forms:

  • A mandatory (or binding) code of practice – these are imposed and enforced by government or an independent regulator;
  • A voluntary code of practice – these are effectively a form of industry self-regulation that can occur independently of, or in partnership with, government.

If developed and used appropriately, codes of practice can establish clear guidelines governing the relationship between regulated parties and consumers, guiding the actions of organisations and individuals in a way which safeguards important public interests such as consumer protection, health and safety, and protection from financial loss.

When compared to more traditional forms of legislation, codes of practice are generally considered a more ‘light touch’ form of regulation and are usually more principle- or outcome-based than standard legislative rules, which tend to be concrete and inflexible.

Codes of practice are also quicker and easier to revise and update than standard pieces of legislation and may therefore be particularly well-suited to the regulation of new or rapidly evolving sectors.

*

The current situation in the UK

Section 103 of the Digital Economy Act 2017 requires the government to publish a code of practice for “social media providers”. Under the Act, adherence to the code is voluntary, and there is no mechanism for monitoring compliance (or indeed a sanctions regime for non-compliant parties).

Alongside its May 2018 response to the Internet Safety Strategy Green Paper, the government published a draft code of practice (Annex B). This short document outlines the internal procedural mechanisms social media platforms should have in place and provides examples of best practice, organised under the structure set out in section 103, namely:

  • ensuring arrangements exist which enable a user to notify platforms about conduct which involves bullying or insulting that user, or other behaviour likely to intimidate or humiliate the user;
  • maintaining processes for dealing with such notifications;
  • including provisions on matters relating to the above in platforms’ terms and conditions; and
  • giving information to the public about action providers take against the use of their platforms for the conduct listed in the first bullet point.

A final, authoritative code of practice has not yet been published.

*

Government proposals for reform

In its response to the Internet Safety Strategy Green Paper in May 2018 (page 15), the government stated that further legislation relating to the code of practice was under consideration and that proposals for such legislation could be included in a planned White Paper, due to be released in Winter 2019.

This move has been widely viewed as a response to pressure from some civil society organisations, particularly those working with children, who – in their responses to the Green Paper – demanded that the existing (voluntary) code of practice become “legally binding, underpinned by an independent regulator and backed up by a sanctions regime”.

At the time of writing, the government has provided no additional detail on its proposed round of “further legislation” – apart from a brief allusion at the Conservative Party Conference by the Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, to a need for “new laws” to apply “normal rules of human behaviour [online]”.

*

Why should human rights defenders care about the introduction of a binding code of practice?

Although the current draft code of practice focuses squarely on establishing best practices, encouraging transparency, and empowering users, a number of trends have emerged, both domestically and internationally, over the last 12 months which are likely to be reflected in any new, binding code of practice. Four key themes of concern to human rights defenders are:

  1. The role of artificial intelligence (AI) in regulating online content

The current draft code of practice refers to “using a mix of human and machine moderation” as an example of best practice for moderating online content. As has been documented in the recent roll-out of AI-mediated content filtering on Tumblr, these tools can pose a significant risk to freedom of expression. Without the ability to accurately determine whether content is harmful, or the capability to consider context in their decision-making processes, AI-mediated content regulation tools risk the removal of legitimate content if appropriate safeguards (such as human oversight) are not put in place, or if the technology is adopted before it is fit for purpose.

  2. Time limits and sanctions

A binding code of practice which includes time limits for content takedowns and a scheme for sanctions for non-compliance would create a clear incentive for platforms to err on the side of caution when removing content. As the implementation of Germany’s Network Enforcement Act (NetzDG) has demonstrated, this is likely to result in platforms removing lawful and legitimate content protected by the right to freedom of expression, particularly in “grey areas” of content or speech which may be critical, offensive or challenging.

  3. Private sector decision-making on the legality of content

Human rights apply online as well as offline, and so the same principles which underpin permissible restrictions on freedom of expression offline also apply to its restriction online.

This means that restrictions, including the removal of content, should only take place following a clear, transparent and rights-respecting process, with appropriate accountability mechanisms and the possibility of an independent appeal process.

A code of practice which contains requirements for online platforms to remove content which is unlawful would shift judicial and quasi-judicial functions to platforms, or their nominees, forcing them to make determinations regarding whether particular forms of content are legal or not. This would include determinations on whether certain content constituted, among other things, hate speech, defamation, and incitement to violence or terrorism. However, unless mandated, there would be no guarantee that there would be mechanisms for accountability or safeguards in place, as there are when decisions are made by public authorities or the judiciary.

Even where a code of practice is limited to “harmful but lawful” content, such as bullying or insulting other individuals, concerns remain. Are platforms best placed to determine what content is harmful, particularly if no precise definitions are provided? Consider, too, the scale of content which would need to be reviewed. Would human review of it all be feasible? Probably not. Platforms would therefore likely turn to automated processes which, as noted above, have a poor record of accurately identifying this kind of content.

  4. The risk of the code of practice being adopted by other jurisdictions

A trend of states passing copycat internet legislation has gathered momentum over the last twelve months – the proposed Russian adoption of the German NetzDG is one notable, and worrying, example. Advocates for any of the proposals put forward in the UK, including a binding code of practice for social media providers, should therefore bear in mind that any content regulation regime instituted in the UK is likely to act as a precedent for legislation in other jurisdictions, including those where freedom of expression is restricted and human rights protections are weak.

*

What next?

There is currently no set timetable for the publication of the final code of practice. If – as has been strongly suggested – the final version of the code of practice ends up being legally binding and backed up by sanctions, it could pose clear risks to freedom of expression. From a civil society point of view, it’s important that we prepare for this eventuality – by establishing clear talking points ahead of a likely further public consultation, building relationships with MPs and other politicians sympathetic to human rights, and forming networks with other human rights defenders.