20 Dec 2022

The return of the UK’s Online Safety Bill: what’s changed and what’s next

The UK’s Online Safety Bill came back to Parliament on Monday 5 December for the completion of its report stage, having undergone substantive changes since the previous version was released in June 2022 (after passing through the Public Bill Committee). The process of developing the 250-page piece of legislation has outlasted four prime ministers.

The Bill cannot be carried over to the next parliamentary session again, so it must be passed before the current session ends or go back to square one. This is likely one of the government’s motivations for delaying the end of the session from Spring 2023 until Autumn 2023. Yet, instead of passing straight on to the Lords as usual, several clauses and schedules in the new version of the Bill (as well as another list of proposed amendments from DCMS) had to go back to the Public Bill Committee for further scrutiny. The Committee approved all the tabled amendments, and the third and final reading of the Bill is now scheduled for 16 January 2023.

While the breakneck speed of these amendments and approvals is hard to follow, we can draw some initial conclusions about the direction of change from what we currently know, as well as highlight remaining concerns with the proposed legislation as a whole:


Recent amendments 

  • As had been signalled in recent months under new Conservative Party leadership, the provisions in the Online Safety Bill requiring platforms to address and remove content which is legal but harmful to adults have been removed, meaning that, for adult users, platforms will only be required to remove illegal content. The “legal but harmful”—or “lawful but awful”—aspect of the Bill was undoubtedly its most contentious feature, and one about which we have raised significant concerns in the past. The government took the advice of the House of Lords Communications and Digital Committee by instead making targeted types of “harmful” content illegal through new offences, including the promotion of suicide, self-harm or eating disorders. These content types will now clearly fall within the scope of the duties relating to illegal content.
  • While the removal of the “legal but harmful” content requirements for adults does bring much-needed clarity around permissible online speech, as well as greater consistency between what speech is legal online and offline, it has knock-on effects for other aspects of the legislation. Another amendment removes the previously proposed harmful communications offence, which was defined as sending communications likely to cause harm (“psychological harm amounting to at least serious distress”) to another person. While this clause was by no means perfect, it was narrower in scope and more consistent across online and offline speech than Section 127 of the Communications Act 2003, which it replaced. Section 127 made it an offence to send an electronic message for the purpose of “causing annoyance, inconvenience or needless anxiety”, which fails to give individuals clarity over exactly what speech is prohibited online and leaves those interpreting the provision with overly broad discretion as to what content falls within scope.
  • Instead of being required to remove legal but harmful content for adults, platforms are now required under the new version of the Bill to provide adult users with functionality to control their own exposure to legal forms of abuse and hatred by opting out of seeing such content, even if it does not violate the platforms’ terms and conditions. This may be a positive development for user agency, particularly for vulnerable users seeking to shield themselves from harm, but questions remain about its implementation and its long-term impact on public discourse.
  • The tabled amendments also expand platforms’ duties to publish and enforce their terms of service. Previous versions of the Bill required platforms only to consistently enforce terms of service regarding priority content that is harmful to adults; now they are required to consistently enforce all of their terms of service or face sanctions from Ofcom. While users can and should expect platforms to be consistent in how and when they moderate different content types, some have expressed concern that this requirement could, in practice, reinforce T&Cs that go far beyond legal definitions of harmful or illegal content. This would give platforms greater power over what people can and cannot say online, and increase the risk of censorship of speech which should be permissible under UK law.


Ongoing concerns

Besides the amendments, many of our previous concerns remain. The Online Safety Bill still outsources decisions on illegal content to private platforms, essentially privatising the role of law enforcement and incentivising over-removals of legitimate content. It will allow Ofcom to order platforms to use “proactive technology” to identify and remove broad swathes of content that fall in scope of the Bill, despite the fact that these proactive monitoring technologies often have high rates of inaccuracy and incorporate a range of systemic biases. And, in including all user-to-user services as Part 3 regulated services, it risks placing overly burdensome and disproportionate requirements on smaller or start-up platforms, as well as those that currently rely on community-based content moderation, as Wikimedia has cautioned in recent weeks. Overburdening these smaller or public interest platforms with duties and requirements could result in fewer spaces and services available for individuals to communicate and freely express themselves online, and accentuate the existing network effects and monopolies of the most dominant platforms.

At its core, the Bill also requires platforms to know whether or not their users are over the age of 18 in order to fulfil their duties relating to child safety and the removal or restriction of content which could be harmful to children. While it is vital that platforms and regulators take steps to tackle the risks that digital platforms may pose to children, such as adverse effects on mental and psychological wellbeing or exploitation by malicious actors, most age verification tools that purport to provide this assurance are nascent and largely untested at this scale. As argued by 5Rights Foundation, there is a need to establish common definitions, agreed standards and regulatory oversight around the design and deployment of such technologies to ensure that they respect and protect user data and privacy, as well as anonymity online, which underpin free expression.

Finally, Clause 106 (previously Clause 104)—which allows Ofcom to order platforms to use “accredited technologies” to identify and remove child sexual abuse material shared on private channels—has remained. There has been considerable confusion around what this might actually mean in practice. MP Damian Collins, who chaired the Joint Committee on the Draft Online Safety Bill, has said that the clause does not require platforms to decrypt private messages, but is instead designed to have platforms use metadata they already collect about private messages and their senders to detect warning signs of CSAM. The new Secretary of State for Digital, Culture, Media and Sport, Michelle Donelan, has also repeatedly insisted that the Bill does not introduce a requirement for platforms to monitor the content of encrypted messages. It is certainly true that the Bill does not actually mention “encryption” anywhere, and Clause 106 is the only place where it introduces a specific responsibility on platforms with respect to content which is communicated privately.

However, digital rights experts, lawyers, MPs and companies have expressed deep concern that the current framing of Clause 106 will be used to undermine encryption in practice, or to mandate that platforms implement client-side scanning, posing disproportionate risks to individuals’ privacy and creating a chilling effect on freedom of expression. There are safeguards written into Ofcom’s ability to make such an order (Clause 108 requires Ofcom to consider the risk and severity of harm to individuals in the UK posed by the suspected content, the extent to which the use of the specified technology might result in interference with users’ rights to freedom of expression or privacy, and whether less intrusive measures could be taken instead). But if the clause is indeed used to order a platform to access the content of encrypted messages, the risks this poses to encryption itself—and to the rights to privacy and freedom of expression that encryption underpins—would be disproportionate, regardless of the circumstances. GPD has signed a joint letter from the Global Encryption Coalition calling on the government to amend this provision and protect end-to-end encryption in the UK.

The Bill will finish its extra report stage in the coming weeks and is scheduled for its third and final reading—and vote—in the House of Commons on 16 January 2023. Assuming it passes, it will move on to the House of Lords, where more amendments can, and likely will, be made. As the text is finalised, we recommend that the government should, at the very least:

  • clarify the scope of the “accredited technologies” that could be required of platforms under Clause 106, and explicitly exempt the contents of encrypted messages both from the application of such technologies and from any Ofcom order requiring their use;
  • set out a code of practice or other guidance requiring platforms to ensure that their age verification tools, or those of third party providers, respect and protect user privacy and data protection standards; and
  • strengthen requirements for any automated or proactive technologies for moderating publicly-available content to be assessed for risks of bias prior to deployment.

We’ll continue to track developments as the Bill progresses, on this blog and through our monthly Digest.