On Monday, the UK government launched its long-awaited Online Harms White Paper, which sets out proposals to regulate online platforms and content in the UK.
While GPD recognises the legitimate desire of governments worldwide to tackle unlawful and harmful content online, we believe that many of the proposals in the White Paper – including a new duty of care on online platforms, a new regulatory body, and even the fining and banning of online platforms as a sanction – pose serious risks to individuals’ right to freedom of expression online.
There is much to be concerned about in the White Paper – but three particular issues stand out:
1. The scope of harms covered
If speech is going to be restricted, it’s essential that it be clearly defined and objectively harmful. Although the unlawful online content which the White Paper identifies as needing to be tackled is, for the most part, clearly defined in legislation, this isn’t the case for the “legal but harmful” content which is also within its scope. So while it is relatively straightforward to identify child sexual abuse imagery, for example, the same cannot be said for speech which might amount to “coercive behaviour” or “disinformation”, to take two examples from the White Paper. Indeed, such speech is generally lawful offline, risking the creation of two different standards of speech depending on whether it is expressed in person or online.
Furthermore, each type of harm identified in the White Paper requires a different, specific, and tailored response. What might work as an approach against terrorist propaganda would be manifestly inappropriate for cyberbullying. Yet the White Paper proposes only one blanket form of regulation for all these very different harms.
Our recommendation to the government – which we have repeatedly made, alongside other civil society groups in the UK – has been for targeted, evidence-based responses, which ensure consistency between offline and online speech, and which do not impose new restrictions on freedom of expression.
2. Inappropriate forms of regulation
To address this diverse range of “online harms”, the White Paper proposes one blanket solution: a statutory duty of care, accompanied by mandatory codes of practice relating to different harms, all overseen and enforced by a regulatory body. GPD has examined the risks that each element of this model poses to freedom of expression in an in-depth series of articles. We also raised concerns directly with the UK government in a briefing we shared in February. We set out how such a model would give online platforms a strong incentive to err on the side of caution when deciding whether to remove “grey area” content. There is already evidence of this happening in Germany since the implementation of the “NetzDG” law, which similarly imposes liability on large platforms that fail to remove illegal content, with fines for non-compliance.
Our position – shared with Index on Censorship and Open Rights Group in a joint statement – is that any regulation of online platforms should be evidence-based, appropriate, proportionate, and in full conformity with international human rights law and standards. In particular, we caution against any regulation which incentivises the removal of content with strict time limits or the threat of sanctions, due to risks of excessive caution and the removal of lawful and legitimate content.
Unfortunately, the proposals in the White Paper fail these tests. Indeed, the White Paper includes several measures which go beyond those we had expected, and would, in fact, further incentivise removal of content. The most concerning of these is a proposal to impose criminal liability on senior managers of online platforms (p. 60), echoing a law passed in Australia last week which was immediately condemned by the UN Special Rapporteur on freedom of expression. We’re similarly concerned by a proposal in the White Paper to allow the new regulator to compel ISPs to block websites, albeit in extreme circumstances (p. 60).
3. Lack of transparency and accountability
Given the very real risks posed to freedom of expression by measures included in the White Paper, it’s disappointing how few safeguards it proposes. One is a requirement that companies produce transparency reports which include details on how they “uphold and protect” human rights (p. 44) – but little further detail is provided on this.
Another proposed safeguard is a requirement that the regulator “protect users’ rights” (p. 56). But there’s no clear indication of what this would mean in practice or what difference it would make. Indeed, it’s difficult to see how this requirement is not significantly undermined by the remainder of the proposals relating to the regulator and its enforcement powers, which together incentivise the removal of lawful and legitimate content.
Our position – which we set out in our earlier joint statement – is that the starting point should be for online platforms to develop fair, simple and transparent oversight mechanisms under which requests for the removal of content, whether by users or governments, can be challenged by those affected. These mechanisms should include transparent dialogue with users, such as notifying them of the reasons for a decision.
We’ll be responding in full to the proposals in the White Paper in our formal submission to the three-month consultation, which ends on 1 July. That submission will be published on the GPD website in due course.