04 Apr 2022

The UK’s Online Safety Bill poses serious risks to human rights

After a Green Paper, a White Paper and a draft Bill, the UK government has finally published its long-awaited Online Safety Bill, which aims to tackle a range of illegal and harmful online content through a new regulatory framework for online platforms, overseen by the UK’s existing communications regulator, Ofcom. If passed, it will be the most comprehensive and demanding piece of online platform regulation in the world, going far beyond the frameworks proposed in the EU, the US and elsewhere.

At every stage of the development of the Bill, GPD has raised concerns over the risks the new framework poses to freedom of expression and privacy online. Despite warm words from the government about protecting these rights, the Bill contains deeply concerning provisions more comparable to legislation developed in Turkey, Russia and China than to that of more democratic states. Improvements from the draft Bill are minimal, with many new provisions exacerbating risks to human rights.

The Bill is also long and complex, with 194 clauses and 14 schedules spread over 225 pages. It has already drawn criticism from a wide range of commentators, including children’s rights advocates, human rights organisations and academics. In this post, we set out our first thoughts on the Bill, highlighting areas of particular concern as well as aspects of the Bill which we welcome.

Areas of concern

One of our central concerns is that the Bill, like the draft version, places significant pressure on large platforms to remove legal but “harmful” speech. The original definition of “harmful” content was overly complex, and we are glad to see that it has been simplified somewhat to content which poses a “material risk of significant physical or psychological harm”. However, the principle of encouraging companies to remove content which is lawful remains objectionable, particularly since the Bill gives government ministers the power to specify certain types of content as “priority content that is harmful to adults”, which platforms will then face particular pressure to remove. Censorship of legal speech is the inevitable consequence.

We also remain deeply concerned about the duties the Bill will impose on platforms. Under its provisions, platforms of all sizes will be expected to “prevent individuals from encountering” various types of content. The vast scope of content covered by these duties may force platforms to adopt general monitoring of content, as well as automated removal tools, despite the known inaccuracies and biases associated with such technologies. The Bill lists hundreds of criminal offences to which this duty will apply, from fraud and harassment to assisting illegal immigration and provocation of violence. Instead of a court determining whether online content amounts to a criminal offence, platforms will be expected to make this determination themselves, essentially privatising the role of law enforcement. For all other criminal offences where “the victim or intended victim is an individual” (an “outrageously broad” definition), platforms would still have to “swiftly” decide whether content flagged to them by users is illegal or not. Given the significant penalties and sanctions in the Bill, there will be a clear incentive for platforms to “play it safe” and over-remove content which may in fact not be illegal at all.

The Bill also imposes specific additional duties on online platforms which are “likely to be accessed by children”. In practice, this will mean all online platforms, since a platform will be so considered if it is “possible” for children to access it. These additional duties require platforms to prevent content which is harmful to children from being made available to users. By setting such a low bar for these additional duties to apply, the Bill creates a real risk that the entirety of a platform will have to be moderated to meet this child-friendliness requirement, meaning the removal of content which is neither illegal nor even harmful to adults. Alternatively, and as the Bill now explicitly encourages, online platforms will simply prevent children from accessing their services at all. Not only would this undermine children’s own right to freedom of expression, but the Bill’s encouragement of age verification technologies for this purpose carries significant privacy risks.

These privacy risks are exacerbated by the Bill’s complete lack of meaningful safeguards to ensure that encrypted and private communication platforms will not be subject to the same requirements as public platforms. Without such safeguards, private channels will need to be monitored, and even offering end-to-end encrypted services at all could risk non-compliance with the legislation. As we know from recent experience in Ukraine, end-to-end encryption can be a life-saving technology for vulnerable groups, and it is deeply disappointing that the UK’s proposals risk weakening it within the country. As a member of the Global Encryption Coalition, we will continue to defend the availability of strong encryption as a critical means of ensuring the privacy and security of all.

We remain concerned about the degree of government control over the supposedly independent regulator, Ofcom. The Bill gives significant powers to government ministers to determine priority types of illegal or harmful content, to set out what Ofcom’s “strategic priorities” should be, to provide guidance to Ofcom on how it should carry out its duties, and even to direct Ofcom to modify codes of practice. Together, these provisions wholly undermine any suggestion that Ofcom will be a fully independent and impartial regulator of online platforms.

Finally, there are many additional provisions in the Bill which cause us concern. One is a new requirement on larger platforms to offer all users the option of verifying their identity. While some malicious users do hide behind anonymous accounts for harmful purposes, anonymity can also be critical for those who would be at risk if their identity were known, including journalists, minorities and human rights defenders. The Bill would not require users to verify their identities, but it would make content from unverified accounts more difficult to access. Anything which undermines users’ ability to access and impart information via online platforms is troubling.

We are also concerned that the Bill now specifically allows Ofcom to mandate the use of “proactive technology” to identify and remove not only content falling into narrow categories of clearly defined illegal content (such as child sexual abuse imagery), but any kind of illegal content or content which is harmful to children. As noted above, these kinds of proactive technologies often have high rates of inaccuracy and incorporate a range of systemic biases, making them inappropriate tools for identifying illegal or harmful content in contexts where their decisions directly impact individuals’ freedom of expression.

Welcome elements

The Bill is not without positive elements. We welcome the statutory duty on online platforms to consider users’ rights to freedom of expression and privacy when developing and implementing their policies and procedures. In a positive development from the draft Bill, Ofcom will now be required to publish an annual statement setting out the steps it has taken to ensure that those rights are protected, in addition to its duty to consider these rights when developing codes of practice around the new duties the Bill would impose.

Additionally, we continue to support the statutory duty on online platforms to allow users and affected persons to easily make complaints in relation to the removal of content (as well as to report content). We also welcome the provisions requiring online platforms to be more transparent about their content policies and the measures they are taking to address illegal and harmful content.

Despite these positive elements, the concerns set out above go to the very heart of the Bill’s approach and its core provisions. As such, there is a limit to how far the positive provisions will be able to mitigate the Bill’s potential adverse impacts on human rights.

Where greater attention is needed

Two provisions of the Bill may appear at first glance to be beneficial from a freedom of expression perspective, but in practice may do more harm than good. The first is the set of additional duties on larger platforms relating to “content of democratic importance”. Specifically, they will be required to consider “the free expression of content of democratic importance” when making content moderation decisions, and to ensure that their policies “apply in the same way to a diversity of political opinion”. As we noted in our review of the draft Bill, the definition of “content of democratic importance” is vague, referring to content which “is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom”. This requirement is not only confusing, but potentially inconsistent with the legislation’s aim to prevent harm, given that it is sometimes political figures who incite violence on social media.

In addition, the Bill would establish a set of duties in relation to “journalistic content”, defined as content published by a “recognised news publisher”, or such content when it is shared by a user. To qualify as a “recognised news publisher”, an entity must meet a number of criteria, such as having the publication of news-related material as its principal purpose, publishing such material in the course of a business, being subject to a standards code, and having policies and procedures for handling and resolving complaints. But the entity must also be registered in the UK, leaving news publishers from outside the UK without protection, and the definition excludes other forms of journalism, such as citizen journalism. It also creates an inconsistent regulatory approach in which the same words would be protected if contained within a news article, but not if an individual person posts them.

Next steps

The Bill will now undergo scrutiny by the UK Parliament, starting on 19 April and likely lasting until the end of 2022. GPD, as part of the Save Online Speech coalition, will be calling for significant revisions to the Bill to address our concerns and ensure that the final version of the legislation fully respects people’s rights to freedom of expression and privacy. Sign up to our monthly Digest for regular updates and analysis.