In recent months, the global debate around harmful and illegal online content has noticeably shifted. If 2018 was characterised by a spate of self-regulatory initiatives by platforms, in 2019 there’s a growing focus on the role governments should play on this issue. Nowhere is this turn more evident than in the United Kingdom, where government consultations have provoked a flurry of competing models for the regulation of online content from actors as diverse as parliamentary committees, telecommunications regulators, academic institutions, corporations and civil society organisations.
To date, four proposals have emerged as serious contenders to be included in the government’s upcoming Online Harms White Paper and hold the potential to fundamentally alter the regulation of online content in the UK. They are:
- the introduction of a mandatory code of practice for social media platforms;
- the imposition of a duty of care owed by social media platforms to their users;
- the creation of a regulator with a mandate to oversee social media platforms;
- the introduction of transparency reporting requirements for social media platforms.
In this blog series, we’ll be taking a forensic look at each – examining what’s being proposed, how it differs from the current state of play, and its potential implications for the enjoyment of human rights online. In this third post, we take a look at the idea of a new regulator with a specific mandate to regulate online content.
Who currently regulates online content in the UK?
There is no single body responsible for regulating online content in the UK. Instead, a range of organisations with different legal statuses, mandates and powers collectively regulate different forms of online content. These include:
- The Internet Watch Foundation: a private company and charity which identifies child sexual abuse images and videos online, and shares “hash lists” of this content with industry members, who can then remove it automatically and prevent it from being uploaded;
- The Counter-Terrorism Internet Referral Unit: a unit within the Metropolitan Police which identifies unlawful terrorist content and refers it to platforms for removal;
- Ofcom: a statutory regulator of communications, including broadcasting, which regulates the BBC’s online programmes and written content, the catch-up services of other broadcasters, and, from 2020, news content on video sharing platforms;
- The Advertising Standards Authority: a self-regulatory body which regulates offline and online advertising;
- The Independent Press Standards Organisation and IMPRESS: self-regulatory bodies for the newspaper and magazine industry;
- The Information Commissioner’s Office: a statutory data protection authority;
- The British Board of Film Classification: a self-regulatory body, albeit with statutory duties, which determines the age at which people can watch certain films, including online, and acts as the age verification body for online pornography.
Proposals for a new regulatory body
Within the wider debate around the best methods to regulate social media platforms – from a code of practice to a duty of care – the question of which body would be charged with overall enforcement looms large.
The government has not yet formally proposed the establishment of a regulator for online content. However, towards the end of 2018, news reports and government leaks suggested that it was actively considering this option in response to pressure from a number of child safety groups and other stakeholders, who called for the establishment of a regulator in their responses to the recent public consultation on internet safety (see p.9).
In recent months, a number of other UK organisations, including parliamentary committees, have put forward different models for a new regulatory body. With varying degrees of detail, proposals have ranged from expanding the mandate of existing regulators to establishing entirely new regulatory bodies. Four of the most comprehensive models include:
- The Carnegie UK Trust: In a series of blog posts produced for the Carnegie UK Trust, Professor Lorna Woods and William Perrin proposed legislation to create a statutory duty of care owed by social media service providers to their users. Woods and Perrin also propose the establishment of a regulator to oversee the implementation of this duty of care and work with social media platforms to reduce online harms. Although they do not propose a concrete form for the regulator, under their model any regulator would have the power to impose sanctions on platforms that fail to meet their obligations under the duty of care.
- The London School of Economics Truth, Trust and Technology Commission has proposed the establishment of an Independent Platform Agency that would be structurally independent of government but report to Parliament. The body would initially provide an observatory and policy advice function. Later, it is anticipated that the body would establish a permanent institutional presence and provide guidance to various government and non-government initiatives attempting to address problems of information reliability. The body would have powers to request data from platforms and impose fines for non-compliance with requests.
- Doteveryone: Doteveryone have proposed an independent Office for Responsible Technology (ORT) to act as a steward of technological change in the UK, guiding the government’s response to new technologies. The ORT would have three functions: to empower regulators; to inform the public and policymakers; and to support people to find redress. The body would have no enforcement powers of its own.
- Global Partners Digital: In our 2018 white paper A Rights Respecting Model for Online Content Regulation by Platforms, Global Partners Digital proposed the creation of a multistakeholder oversight body, operating globally. The body would be funded by social media platforms and empowered to assess their compliance with a set of Online Platform Standards, developed by platforms following consultation and engagement with relevant stakeholders, and to publish periodic reports with recommendations where appropriate.
Why should human rights defenders care about a new regulatory body?
There are already a wide range of bodies which regulate different aspects of online content in the UK. Why, then, should the proposal of a new regulatory function cause concern for human rights defenders? There are three key reasons:
1. Impact on freedom of expression
The potential impact of a regulator on freedom of expression depends largely on two factors: the scope and definition of the harms which the body is tasked with preventing; and the nature of its powers of enforcement and sanctioning.
Definition of harms
Although we don’t currently know exactly what legal framework any potential regulator would be tasked with enforcing, it is likely that any set of regulations will set out certain forms of harm or types of harmful content and contain a requirement that social media platforms work to prevent them. At first glance, this mandate may seem relatively innocuous, but if the tenor of the government’s Internet Safety Strategy Green Paper is anything to go by, it seems likely that the spectrum of harms that platforms may be tasked with preventing will be very broad indeed.
This is concerning from a human rights perspective, because the regulator’s working definition of harm and harmful content will exert significant influence on social media platforms’ internal content policies – particularly if the regulator has the power to define certain types of harm in an iterative way, or to provide interpretative guidance on them. If these harms are defined too broadly, or with uncertainty as to their scope, there is a real risk that platforms – fearing contravention of a code of practice or duty of care – would remove content which is protected by the right to freedom of expression and is, in fact, lawful.
Enforcement powers
It is likely that a regulator would be accorded enforcement powers in some form. These could range from simply publicising non-compliance with any code of practice or duty of care, to more punitive sanctions, such as fines.
If excessive, enforcement powers can create incentives for social media platforms to play it safe and take down content which might in fact be lawful and protected by the right to freedom of expression – especially in “grey areas” of content or speech which may be critical, offensive or challenging. Evidence from the implementation of the Network Enforcement Act (NetzDG) in Germany earlier this year suggests that this is a likely eventuality. Since the introduction of the tight timelines and heavy fines for non-compliance included in the NetzDG (24 hours in the case of “manifestly unlawful” content), there have been a number of high-profile examples of Twitter, for example, removing tweets which were controversial, satirical or ironic, but not obviously illegal.
2. Impact on transparent and accountable decision-making
Human rights apply online as well as offline, and so the same principles which underpin permissible restrictions on freedom of expression offline also apply to its restriction online. This means that restrictions, including the removal of content, should only take place following a clear, transparent and rights-respecting process, with appropriate accountability mechanisms and the possibility of an independent appeal process.
A regulatory body with the power to enforce rules governing unlawful online content would shift judicial and quasi-judicial functions to platforms, or their nominees, forcing them to determine whether particular forms of content are legal or not. This would include determinations on whether certain content constituted, among other things, hate speech, defamation, or incitement to violence or terrorism.
Even if the regulator’s mandate were limited to enforcing rules relating to “harmful” but lawful content, such as bullying or insulting other individuals, there are similar concerns over whether platforms are well placed to determine what content is harmful. This risk is magnified by the sheer scale of content, which makes human review of every item unlikely to be feasible. Automated processes, which would serve as a replacement for human review, are notoriously poor at categorising this kind of content.
When these mandates are combined with the threat of sanctions for non-compliance, there is a real risk, as noted above, that platforms will simply remove all “grey area” content, dragging in content which is protected under the right to freedom of expression, with no guaranteed mechanisms for accountability or safeguards to protect individual rights, as there are when decisions are made by public authorities or the judiciary.
And given the absence of an equivalent body regulating all “offline” speech, the creation of a regulator with oversight over social media platforms could create a perverse situation where speech which is lawful, but potentially harmful, is restricted when it is expressed online, but not when it is expressed in person.
3. The risk of proposals being adopted in other jurisdictions
As we’ve noted throughout this blog series, there has been a recent trend of states passing copycat legislation relating to the internet, including legislation regulating online content. As such, any proposals which are put forward in the UK, including a new regulatory body for online content, have the potential to be adopted in other states which could then point to the UK for justification. However, notwithstanding the problems listed above, the UK does at least have some safeguards in place when it comes to freedom of expression which mitigate the risks, such as an independent judiciary, national human rights institutions, and the Human Rights Act 1998. In states where speech which should be protected under international human rights law is criminalised, or where there are no equivalent effective safeguards, the effects of such a regulator could be far more restrictive than they would be in the UK.
With a broad range of vocal stakeholders supporting the establishment of a social media regulator, it is increasingly likely that one will be proposed in the UK government’s upcoming Online Harms White Paper.
There are, however, substantial concerns over the impact that the introduction of a regulator would have on freedom of expression in the UK. If a regulator is to be established, it is critical that the government fully understands the potential impacts of its powers, mandate and operation on the enjoyment of human rights in the UK, and puts appropriate safeguards in place to mitigate these risks. Civil society can help ensure this by engaging in the white paper consultation.