16 Dec 2020

The UK government’s full response to the Online Harms White Paper: initial thoughts

This week, the UK government published its full response to the Online Harms White Paper consultation, setting out further details on its proposed legal framework to address illegal and harmful content and activity on online platforms.

The government is championing this framework as a proportionate one that will “usher in a new age of accountability for tech companies”. But GPD remain concerned that, despite some clarifications and policy changes, a number of its elements continue to pose serious risks to individuals’ rights to freedom of expression and privacy.

 

New responsibilities for online platforms and search engines

The full response confirms that the legislation will establish a duty of care for companies within its scope, requiring them to prevent the proliferation of illegal content online and ensure that children are not exposed to harmful content (e.g. pornography or violent content). A small group of high-reach and high-risk companies (likely to include entities like Facebook and Twitter) will have additional obligations to tackle content or activity that is legal but still harmful to adults (e.g. abuse and content about eating disorders, self-harm or suicide).

The range of companies covered is narrower than the original White Paper suggested, with internet service providers, web-hosting companies and app stores now excluded. The full response also provides for a list of exemptions, including services used internally by organisations (like cloud storage and conferencing software), product reviews, and content and articles produced and published by news services on their own sites (as well as user comments under articles and blogs). All of these exemptions are welcome, and will help ensure that the legislation is more targeted.

The scope of what is considered “harmful”, however, remains broad. Content and activity will be “harmful” if it gives rise to “a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. While the focus may be on illegal content, all companies will have to ensure children are not exposed to “legal but harmful” content, and larger companies will have to do the same for adults. The full response, as was the case with the White Paper, does not clarify how these types of “harmful” content—like “violent content” or “content about suicide”—will be defined in practice. The right to freedom of expression, for both adults and children, includes expression which may be shocking, offensive or even disturbing, and broad categories like these would easily include much legitimate expression.

This duty of care will be overseen by Ofcom, the UK’s communications regulator, which will issue codes of practice that companies must follow in order to comply. These will require companies to put in place “systems and processes that improve user safety on their services”. We welcome the requirement that companies must “consider users’ rights, including freedom of expression online” when making decisions on what safety systems and processes to put in place on their services. A critical test will be how Ofcom enforces this in practice: we’ll be looking out for whether enforcement is proportionate to the level of harm, supports good content moderation, and ensures that freedom of expression is respected.

Despite this requirement, there remain significant concerns over how the legislation as a whole will impact upon the rights to freedom of expression and privacy:

  • The continuing lack of clarity over “legal but harmful” content, and over exactly what companies are required to do, risks legitimate expression being censored for both adults and children.
  • The duty of care will also force companies into making determinations about what is legal and illegal on their platforms, essentially taking over the functions of law enforcement and courts.
  • The duty of care and codes of practice may encourage or even require companies to proactively monitor all content on their platforms, including private channels, to determine if anything illegal or harmful is being said. Inevitably, platforms will turn to automated content moderation processes, which are prone to error, and—without the capability to consider context in their decision-making—risk increasing the removal of legitimate content.
  • While the legislation will give users the right to challenge content moderation decisions, this is a weak safeguard that places the burden on users, rather than platforms, to ensure that freedom of expression is protected.

We are particularly concerned that the duty of care will apply even to encrypted channels, and that the government will mandate platforms “to use automated technology that is highly accurate to identify illegal child sexual exploitation and abuse activity or content on their services”, even when these services are encrypted. While the government is right to tackle child sexual exploitation and abuse online, its response does not explain how automated technology for identifying particular videos or images can be mandated without creating new vulnerabilities and security risks when applied to encrypted channels.

 

Enforcement 

The full response confirms that Ofcom will be granted powers to sanction and address non-compliance with the duty of care. As well as developing the codes of practice, Ofcom will have the power to issue fines on platforms of up to £18 million or 10% of annual global turnover, whichever is higher. Ofcom will also have the power to engage in a range of “business disrupting measures”, including—as a last resort—blocking UK access to a platform’s services in the most serious of cases. Enforcement action could include action against a parent company that wholly owns or controls a non-compliant entity. Criminal sanctions remain a possibility.

We are concerned about this enforcement and sanctions regime, particularly given the broad scope of content and activity which will be considered “harmful”. While the government has stated that Ofcom will take a proportionate approach to its enforcement activity, powers such as these should only ever be used in exceptional circumstances: where clear and serious criminal activity is taking place, where there is negligence on the part of the online platform, and subject to judicial oversight. We hope that such restrictions and safeguards are included in any legislation.

 

Transparency 

Under the proposed framework, larger companies will be required to publish transparency reports, and the government will be able to extend this requirement to a wider range of companies. The full response provides a list of the types of information that transparency reports may cover, including the enforcement of the company’s own relevant terms and conditions, the processes the company has in place for reporting harmful content, the number of reports received, and the actions taken. We are pleased that the government wants to ensure that the transparency reporting framework is agile and future-proof, but stress that transparency reporting requirements should be narrowly tailored and should not pose a disproportionate burden on companies where there is no clear risk of harm taking place on their services.

We are also pleased that the full response confirms that the regulator will oversee the implementation of clear and transparent user redress mechanisms, and that there will be a statutory appeals mechanism to challenge the regulator’s decisions as a means of ensuring accountability. However, this alone is not sufficient to ensure freedom of expression online is protected. Additional clarity is needed on how redress will be provided, and what will be considered an adequate response or action by companies.

 

Next steps

This full response to the Online Harms White Paper consultation will be followed by the introduction of an Online Safety Bill in 2021. We’ll be following the Bill’s progress closely—sign up to our monthly Digest for regular updates and analysis.