Approaches to Content Regulation – #4: Transparency reporting

26 Feb 2019

By Amy MacKinnon and Richard Wingfield

In recent months, the global debate around harmful and illegal online content has noticeably shifted. If 2018 was characterised by a spate of self-regulatory initiatives by platforms, in 2019 there’s a growing focus on the role governments should play on this issue. Nowhere is this turn more evident than in the United Kingdom, where government consultations have provoked a flurry of competing models for the regulation of online content from actors as diverse as parliamentary committees, telecommunications regulators, academic institutions, corporations and civil society organisations.

To date, four proposals have emerged as serious contenders to be included in the government’s upcoming Online Harms White Paper and hold the potential to fundamentally alter the regulation of online content in the UK. They are:

  • the introduction of a mandatory code of practice for social media platforms;
  • the imposition of a duty of care owed by social media platforms to their users;
  • the creation of a regulator with a mandate to oversee social media platforms;
  • the introduction of transparency reporting requirements for social media platforms.

In this blog series, we’ll be taking a forensic look at each proposal: what’s being proposed, how it differs from the current state of play, and its potential implications for the enjoyment of human rights online. In this fourth and final post, we take a look at transparency reporting requirements.

***

What transparency reporting takes place already?

When it comes to content moderation, there are currently no mandatory transparency reporting requirements on platforms in the UK, meaning that any transparency reports are published by platforms on their own initiative. However, most major platforms do publish some information on the enforcement of their terms of service and on requests from third parties, such as governments, for content to be removed. These include:

  • Facebook’s Transparency Reports, which provide quarterly statistics on the number of pieces of content removed, divided into eight categories based upon its Community Standards. Alongside this, Facebook publishes information on how it calculates those numbers, and a blog series setting out how some of the more difficult content-related decisions are made. The reports also include the number of pieces of content reported and removed each month on the basis of copyright violations. Finally, Facebook publishes biannual statistics on the number of pieces of content removed for breaching national legislation, broken down by country, alongside some case studies with examples.
  • Twitter’s Transparency Reports, which provide statistics twice a year on the number of requests for accounts and tweets to be removed on the basis of national law, as well as the number of removals (broken down by country), with examples. The reports also contain the number of takedown notices and counter notices made under the US Digital Millennium Copyright Act, and the number of user accounts which were reported and ‘actioned’ on the basis of a breach of Twitter’s Rules, divided into six categories of harm.
  • Google’s Transparency Reports, which provide statistics twice a year on the number of pieces of content (such as URLs) removed due to breach of copyright, with examples; the number of requests made by governments to remove content, broken down by country and reason, again with examples and reasoning; various figures relating to the removal of search results on the basis of European Union privacy law; and the number of pieces of content removed on the basis of the German NetzDG.
  • YouTube’s Transparency Reports, which are contained within Google’s, and which provide quarterly statistics on the number of videos and comments reported and removed as in breach of its Community Guidelines, broken down by reason (such as sexual content, spam and abuse). For some policy areas – violent extremism and child safety – specific details are provided on how YouTube makes its decisions, including how it uses algorithms and other technologies, along with examples. As with Google, the reports also contain details on the number of pieces of content removed on the basis of the German NetzDG.
  • Oath’s Transparency Reports, which provide biannual details on the number of requests and items of content reported by governments as in breach of national law (broken down by country), the percentage of which led to removal, and some illustrative examples.

*

Proposals for mandatory transparency reporting

A mandatory “annual internet safety transparency report” was one of the key proposals in the government’s Internet Safety Strategy Green Paper of October 2017. In its response to the Green Paper in May 2018, the government published a template of what the reports would be required to contain:

  • General information about the platform, including the number of UK users and pieces of content/posts published in the UK;
  • A narrative description of the platform’s terms of service which set rules on what content is and is not allowed, as well as the process for reporting content;
  • The number of employees who moderate content;
  • General statistics on the number of pieces of content reported by UK users, the percentage which led to action being taken, the average time taken to respond to reports, and the percentage of those who receive feedback after reporting content;
  • Specific statistics on: the number of content items proactively removed by the platform; the number of pieces of content reported by UK users; the number of UK users who made a report; the percentage of reports made by UK users which led to action being taken; the number of user accounts blocked due to reports; and the average review time for reports, broken down into seven categories of harmful content (including abuse, violent and graphic content, content which might lead to self-harm, sexual content, scams and ‘fake news’, copyright and impersonation);
  • Specific statistics on the number of users, the number of reports of abuse made, and the percentage which led to action, broken down by age group;
  • Specific statistics on the number of reports of abuse made divided up by characteristic to which the abuse related (such as race, gender, disability, sexual orientation, gender identity or religion); and
  • Details of trends or significant information relating to any of the above data; and further information on the platform’s approach to reporting and any plans to improve reporting processes.
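
To make the overall shape of that template easier to see, here is a minimal sketch of it as a data structure, written in TypeScript. It is purely illustrative: the government has only described the template in prose, so every name and type below is hypothetical and simply mirrors the bullet points above, not an official schema.

```typescript
// Illustrative sketch only: the Green Paper response describes the template in
// prose, so all names and types here are hypothetical, not an official schema.

interface HarmCategoryStats {
  category: string;             // e.g. abuse, violent and graphic content, self-harm,
                                // sexual content, scams and 'fake news', copyright, impersonation
  proactivelyRemoved: number;   // items removed by the platform without a user report
  reportedByUkUsers: number;    // pieces of content reported by UK users
  ukUsersWhoReported: number;   // number of UK users who made a report
  percentageActioned: number;   // reports by UK users which led to action being taken
  accountsBlocked: number;      // user accounts blocked due to reports
  averageReviewTimeHours: number;
}

interface InternetSafetyTransparencyReport {
  platformName: string;
  ukUsers: number;                      // general information about the platform
  ukContentPublished: number;
  termsOfServiceSummary: string;        // narrative description of content rules and the reporting process
  contentModerators: number;            // number of employees who moderate content
  overall: {                            // general statistics across all reported content
    reportsFromUkUsers: number;
    percentageActioned: number;
    averageResponseTimeHours: number;
    percentageReceivingFeedback: number;
  };
  byHarmCategory: HarmCategoryStats[];  // the seven categories of harmful content
  abuseByAgeGroup: Record<string, { users: number; abuseReports: number; percentageActioned: number }>;
  abuseByCharacteristic: Record<string, number>;  // race, gender, disability, sexual orientation, gender identity, religion
  trendsAndImprovements: string;        // trends, significant information and planned improvements
}
```

Seen this way, it is notable that most of the template is quantitative; only the narrative fields invite the kind of qualitative explanation discussed below.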

Despite proposing such transparency reports, the government hasn’t yet set out which online platforms would be required to publish reports (save for a general reference to “social media platforms who provide services here”), how any requirement would be enforced, or what sanctions would exist for non-compliance.

*

Why should human rights defenders care about transparency reporting requirements?

“Transparency” is a broad term, and while some forms of mandatory transparency reporting may encourage positive behaviour and strengthen the right to freedom of expression online, other forms may simply incentivise the removal of more content. It is therefore essential that, if mandatory transparency reporting requirements are to be imposed, they are designed appropriately and in a way which supports freedom of expression.

How transparency can adversely impact freedom of expression

Transparency reporting requirements which only ask platforms how much harmful content they’ve taken down risk creating a dangerous incentive. A platform faced with such a requirement might reason that – in order to show ‘progress’ or compliance – the easiest solution is to deliberately increase the volume and rate of content removal.

This is particularly likely where legal requirements explicitly or implicitly indicate that a certain percentage of reported content in each category of harm should be removed, or that the percentage should rise over time. Further, any reporting requirements relating to the time taken for content to be reviewed and removed risk incentivising a faster takedown process. This would encourage either greater use of automated decision-making or quicker decisions by human moderators, risking greater inaccuracy.

How transparency can enhance freedom of expression

In contrast to purely quantitative forms of transparency (which, as noted above, risk simply encouraging more content to be removed, more quickly), qualitative forms of transparency could encourage platforms to show greater respect for freedom of expression. At present, it is not always clear how platforms make decisions about what content to remove, what standards and processes are employed, who is involved in the process, and how the quality of decision-making is ensured. As a result, many have criticised platforms for taking too much content down, for not taking enough down, and for failing to be sufficiently transparent about how they take down content in the first place.

Mandatory transparency reporting requirements could help address these concerns. They would encourage platforms to develop clear terms of service which explain what content is and is not allowed on the platform, and how decisions are made relating to content removal. Good practice could be more easily identified and adopted by other platforms. And qualitative reporting requirements on steps taken to improve processes could encourage platforms to make better and more consistent decisions, rather than simply remove more content more quickly.

*

What next?

The Online Harms White Paper is almost certainly going to include detail on transparency reporting requirements for platforms.

For those who want to ensure that the right to freedom of expression is protected online, the final form that these requirements take is hugely important. This means responding to the upcoming consultation and making the case for transparency reporting which enhances, rather than risks, online freedom of expression. And it means talking to MPs of all parties, and to other stakeholders, to highlight those risks and build support for measures which will encourage platforms to improve how they tackle unlawful and harmful content online, without compromising the rights of users.