Approaches to Content Regulation – #2: A Duty of Care

15 Feb 2019

By Amy MacKinnon and Richard Wingfield

In recent months, the global debate around harmful and illegal online content has noticeably shifted. If 2018 was characterised by a spate of self-regulatory initiatives by platforms, in 2019 there’s a growing focus on the role governments should play on this issue. Nowhere is this turn more evident than in the United Kingdom, where government consultations have provoked a flurry of competing models for the regulation of online content from actors as diverse as parliamentary committees, telecommunications regulators, academic institutions, corporations and civil society organisations.

To date, four proposals have emerged as serious contenders to be included in the government’s upcoming Online Harms White Paper, each holding the potential to fundamentally alter the regulation of online content in the UK.

In this blog series, we’ll be taking a forensic look at each proposal – what’s being proposed, how it differs from the current state of play, and its potential implications for the enjoyment of human rights online. In this second post, we take a look at the idea of a duty of care.

***

What is a duty of care?

In the UK, a duty of care is a legal obligation owed by one person (or entity) to another to ensure that the latter does not suffer any reasonably foreseeable harm or loss as a result of the former’s act or omission.

Initially developed by the courts, the concept of a duty of care is rooted in the principles set out in a case from the early 20th century, Donoghue v Stevenson (1932), where a claim was brought by a Mrs Donoghue who fell ill from consuming a bottle of ginger beer that she claimed had a decomposing snail in it.

In its decision, the court established a generalised duty of care under a “neighbour principle” holding that the manufacturer of the ginger beer had a duty to take reasonable care to avoid acts or omissions (in this case failing to check the contents of the bottle) that could reasonably be foreseen as likely to injure its neighbour (neighbour meaning “persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question”).

Since Donoghue v Stevenson, a duty of care has become a well-established legal principle under the common law and may be found to exist where:

  1. Harm is suffered by one party as a result of another party’s act or omission;
  2. The harm suffered is reasonably foreseeable;
  3. There is a requisite degree of ‘proximity’ between the two parties; and
  4. There are no public policy considerations that stand in the way of the imposition of a duty of care.

Examples of where the courts have found a duty of care to exist include duties owed by: a professional adviser to his/her client; by the organiser of a boxing match (who had assumed responsibility for the contestants’ care) to provide ringside medical facilities of an appropriate standard; and by an ‘occupier’ of premises to ensure that persons coming lawfully onto those premises were not physically harmed by the occupier’s act or omission (subject to the occupier’s degree of control).

Importantly, courts have largely shied away from recognising a general duty of care to prevent others from suffering harm caused by the acts of third parties, unless a person or organisation has voluntarily assumed responsibility for their safety.

Over the last century, statutory duties of care have also been used by parliament as a mechanism to mediate specific relationships or address matters of public interest. For example:

  • The Trustee Act 2000 imposes a duty of care on trustees, requiring them to exercise “such care and skill as is reasonable in the circumstances” when carrying out their duties.
  • The Environmental Protection Act 1990 imposes a duty of care on those involved in waste management to ensure that the waste is managed safely and not in a way that could cause pollution or harm to human health.
  • The Occupiers’ Liability Act 1957 supplements the common law duty placed on occupiers of premises to ensure that all lawful visitors are reasonably safe in using the premises for the purposes for which they are there.
  • While not strictly a duty of care, the Health and Safety at Work etc. Act 1974 imposes a comparable duty on all employers to ensure the health, safety and welfare at work of their employees.

*

Proposals for a statutory duty of care for platforms

In recent months, the idea of establishing a statutory duty of care owed by social media platforms to their users has emerged as one of the most prominent ideas for regulating online content and reducing online harms. Initially suggested in a series of blog posts produced for the Carnegie UK Trust in early 2018 by Professor Lorna Woods and William Perrin, the idea has been welcomed by, among others, the Home Secretary, Sajid Javid, the Telegraph, the Children’s Commissioner for England, and the National Society for the Prevention of Cruelty to Children (NSPCC). Its proponents often draw an analogy between such platforms and public spaces, reasoning that, just as the law imposes a duty of care on the owners and managers of public spaces, social media platforms should have a duty of care to protect their users (particularly children) from experiencing online harms on their platforms.

Although the idea of imposing a duty of care on social media platforms has been proposed by a number of parties, the most prominent proposal – on which the others have drawn – is that put forward by Perrin and Woods. Under their model, the scheme would be composed of the following principal elements:

  • A defined legal category of “qualifying companies” to be subject to a duty of care: This would be determined by the UK parliament.
  • The creation of a statutory duty of care: The UK parliament would also legislate for a statutory duty of care and would be required to set out a list of online harms that qualifying companies would be under an obligation to prevent.
  • A “risk based” system of regulation: “Qualifying companies” would fall under tiered levels of scrutiny, depending on the level of risk they were deemed to pose. High-risk services, such as those used by young people away from adult supervision, would be subject to closer oversight and tighter rules. Services with specialist audiences or user bases, such as individual subreddits on Reddit, would be subject to less oversight.
  • A regulator: A regulator would be established and charged with ensuring that social media platforms met their obligations under the duty of care. Monitoring and evaluation of social media platforms’ performance would be carried out on an ongoing basis through what Perrin and Woods call a “harm reduction cycle”, with the ultimate aim of incentivising platforms to prevent harm by design.
  • A harm reduction cycle: The harm reduction cycle would begin with a measurement of harm on social media platforms. The regulator would provide platforms with a template against which to measure harm, establishing an initial “baseline of harm”. Platforms would then be required to address these harms and report on the steps they have taken to reduce them.
  • Penalty regime: The scheme would include a penalty regime to encourage compliance. Perrin and Woods suggest penalties ranging from fines to enforcement notices, enforceable undertakings, adverse publicity orders, and restorative justice.

*

Why should human rights defenders care?

In principle, imposing a duty of care on social media providers appears to be a pragmatic solution to address online harms. It is an established legal doctrine that would be relatively straightforward to legislate, and also holds the potential to be flexible enough to cover the wide variety of platforms that host user-generated content. However, a duty of care also poses a number of risks to human rights, particularly freedom of expression, which need to be considered in any discussion of the scheme.

1. The scope of the duty and sanctions for non-compliance

The scope of the definition of harms to be addressed under a statutory duty of care plays a large part in determining the potential impact on freedom of expression and other human rights. The greater the number of harms, and the more broadly they are defined, the greater the scope for content to be removed on the basis that its availability may breach the statutory duty of care, even though the content might be protected by the right to freedom of expression and, in fact, perfectly lawful.

This issue is of particular concern given the tenor of the government’s Internet Safety Strategy Green Paper, where an exceptionally broad spectrum of online behaviour has been identified as being potentially harmful. Perrin and Woods have taken a similarly expansive approach in their submissions to the House of Lords Communications Committee, laying out a broad range of harms as meriting inclusion, including emotional harm, while the Children’s Commissioner has proposed defining harm as anything which has “a detrimental impact on the physical, mental, psychological, educational or emotional health, development or wellbeing of children” – an extremely broad definition.

Such broad definitions could easily capture legitimate (if challenging) content, such as heated debates and disagreements, as well as content which might be upsetting, offensive or shocking, but which is still within the scope of the right (including children’s right) to freedom of expression. Many children’s films, for example, contain scenes which are upsetting. Would these count as having a detrimental impact upon the emotional wellbeing of a child who watches them? Only recently, some parents in the UK have argued that teaching about same-sex relationships in schools is harmful for children – would platforms have to remove such material on the basis that parents think it could be harmful?

And with the risk of a financial or reputational penalty for failing to remove “harmful” content, there would be strong commercial incentives for social media platforms to “play it safe” and take down large swathes of content which is, in fact, protected by the right to freedom of expression.

2. Specific application of the duty to children

In an interview with the Telegraph in November 2018, the Home Secretary Sajid Javid proposed a duty of care for social media platforms, exclusively aimed at protecting children from online harms. Similarly, the NSPCC and the Children’s Commissioner for England have also proposed the introduction of a duty of care specifically aimed at protecting children.

Although these proposals are well intentioned, the online ecosystem does not generally distinguish between adults and children when it comes to its users; indeed, children often use the same social media platforms and websites as adults. Without a system of age or identity verification (which raises its own human rights concerns) to ensure segregation between the internet use of adults and minors, complying with a duty of care owed to minors would potentially require social media platforms to remove any and all content that would breach their duty of care to children using their platforms, effectively making such content unavailable for adults.

3. Impact on transparent and accountable decision-making

As we looked at in our previous post, the shifting of responsibility for determinations regarding what constitutes unlawful or harmful content onto the private sector creates problems for ensuring rights-respecting and transparent processes for the regulation of online content. Whether with a duty of care or some other model that shifts this responsibility, there would, after all, be no guarantee that there would be mechanisms for accountability or safeguards in place when content is removed, as there are when decisions are made by public authorities or the judiciary. A duty of care magnifies these concerns by requiring platforms to take a proactive, rather than reactive, approach to the removal of content, scanning all content on the platform rather than only that which has been flagged in some way. Platforms would almost certainly turn to automated processes, which have a poor record of accurately identifying this kind of content and, as recent examples have shown, may remove perfectly innocent content.

4. The risk of a duty of care being adopted by other jurisdictions

The risk of UK law regulating online content being adopted by other jurisdictions with fewer legal and procedural safeguards in place has been discussed in our previous post on the imposition of a code of practice, and this is equally true with respect to a duty of care. Without the safety nets provided by the human rights framework in the UK, a duty of care could be far more restrictive if adopted in other countries, with even more serious implications for both platforms and users.

*

What next?

It seems that there is broad-based support among a range of stakeholders for a duty of care to be included in the UK government’s Online Harms White Paper, likely to be published in the next month or so.

There are, however, significant concerns over the impact that a duty of care would have, and many questions that remain unanswered. If such a model is to be pursued, then it is critical that it contains safeguards to mitigate risks to users’ right to freedom of expression. To ensure that this message is heard and acted upon, civil society should engage in the white paper consultation and use their submissions to emphasise potential human rights risks, some of which we’ve highlighted in this blog post.