03 Jul 2020

The EU’s Digital Services Act—what might it mean for freedom of expression?

On 2 June, the European Commission published its long-awaited consultation paper on the proposed Digital Services Act—which will create a regulatory framework for digital services and online platforms in the European Union, replacing the current e-Commerce Directive, which was adopted 20 years ago.

Under the e-Commerce Directive, online platforms have been largely protected against liability for the content that users generate and share, save for a few narrow exceptions. These strong intermediary liability protections have been widely seen as positive for freedom of expression, as they have enabled platforms operating in the EU to set their own rules for content moderation without fear of legal reprisals or challenge from governments and other actors. (Whether those rules, and their enforcement, are consistent with international human rights standards on freedom of expression is, however, a separate question.)

Judging from the consultation paper, it looks like the Digital Services Act might change this. While no specific proposals are outlined, the questions it asks, and its broad themes, strongly suggest that the Act will impose new legal responsibilities on platforms in relation to different types of content. This is not exactly a surprise—in the last few years, there’s been a broad shift among EU governments in favour of stricter content regulation—but it warrants close scrutiny and attention from human rights defenders. 

The consultation paper is long and somewhat dense—but it offers important insight into the likely direction of the Digital Services Act. Below, we unpack the key takeaways and themes from a freedom of expression perspective.

*

SCOPE OF HARMS TO BE ADDRESSED

It is clear from the consultation paper that the Commission wants to do more than simply tackle unlawful online content. The paper also asks a number of questions relating to “activities online that are not necessarily illegal but could cause harm to users”, including disinformation, content that could be “harmful” to minors (such as grooming, bullying and “inappropriate” content), and “hatred, violence and insults” which don’t amount to unlawful hate speech. It asks both what online platforms are currently doing in relation to these different forms of unlawful and harmful content and what more they could do. (An entire section of the consultation paper is devoted to disinformation and the measures platforms should take to tackle its impacts, suggesting this is an issue of particular concern for the Commission.)

Without knowing precisely what online platforms will be expected to do about these kinds of content, it is hard to determine what risks to freedom of expression might result. If the regulation ultimately just requires online platforms to be clearer in their terms of service about what content is and is not allowed, this would actually be welcome. But if the regulation forces platforms to filter content, or to use technology to detect vaguely defined categories of content (such as “insults” or “inappropriate” content), then there would be very real risks of legitimate speech being censored.

*

NEW RESPONSIBILITIES

As noted above, the consultation paper doesn’t set out any concrete proposals as such. However, one of the questions, which asks for thoughts on various “possible responsibilities” that could be imposed through regulation, gives some idea of the Commission’s current thinking. These possible responsibilities include:

  • Maintaining an effective “notice and action” system for reporting illegal content;
  • Maintaining a system for assessing the risk of exposure to illegal content;
  • Having appropriately trained content moderation teams;
  • Systematically responding to requests from law enforcement authorities, and cooperating with them in accordance with clear procedures;
  • Cooperating with trusted organisations with proven expertise that can report illegal activities for fast analysis (“trusted flaggers”);
  • Detecting illegal content;
  • Cooperating with other online platforms for exchanging best practices, sharing information or tools to tackle illegal activities; and
  • Being transparent about their content policies, measures and their effects.

Many of these actions are already being undertaken by the larger online platforms. However, the prospect of a new responsibility to “detect illegal content” is potentially very concerning, and more details are needed to assess the risks it poses to freedom of expression. If, for example, it turns out to be simply a requirement for online platforms to use hashing databases to identify and remove copies of already identified and unambiguously illegal content (as is already done for child sexual abuse imagery), it might be relatively unproblematic. If, however, the responsibility extended more broadly—ultimately making online platforms play the role of law enforcement and courts by determining what is and isn’t illegal—this would be worrying, and even more so if they used automated processes to do so.
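To make concrete how narrow this kind of hash matching is, here is a minimal sketch in Python. The names, the in-memory set and the use of SHA-256 are illustrative assumptions only, not a description of any actual system; production tools such as Microsoft’s PhotoDNA use perceptual rather than cryptographic hashes, so that re-encoded copies of an image still match.

```python
import hashlib

# Illustrative store of hashes of content already identified as illegal,
# e.g. supplied by a trusted flagger. A real deployment would query a
# large, regularly updated database rather than an in-memory set.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def is_known_illegal_copy(upload: bytes) -> bool:
    """Return True only if the upload is a byte-for-byte copy of content
    already in the hash database; no new judgement of legality is made."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_ILLEGAL_HASHES
```

The narrowness is the point: matching uploads against a fixed list of already-adjudicated material involves no fresh assessment of legality by the platform itself, which is what separates it from the broader detection duties discussed above.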

*

INTERMEDIARY LIABILITY

The consultation paper includes a specific section on intermediary liability, suggesting that the Commission is considering revising the existing regime set out in the e-Commerce Directive. Indeed, the consultation paper itself says that the Commission is seeking “informed views on how the current liability exemption regime is working and the areas where an update might be necessary”.

The questions asked here are open ones, giving little indication of the types of revision the Commission is considering. The consultation asks how important the existing regime is for companies, whether the terms used in the e-Commerce Directive (such as “mere conduits” and “hosting services”) are clear, and whether the existing liability exemptions are sufficiently clear. One concerning question, however, relates to the current prohibition on EU member states imposing on online platforms “general monitoring obligations or obligations to seek facts or circumstances of illegal activities”. The consultation paper asks whether this approach is still appropriate today.

As noted above, the existing intermediary liability regime is critically important in ensuring that online platforms are not incentivised—through fear of sanction—to remove legitimate content. Any reforms to these existing protections would need to be scrutinised closely to avoid creating inadvertent risks to freedom of expression.

*

SAFEGUARDS FOR FREEDOM OF EXPRESSION 

The consultation paper recognises the risks that new regulation may pose to freedom of expression and, in particular, the risk of incentivising the removal of content in breach of an individual’s right to freedom of expression. It asks for thoughts on a number of possible responsibilities that could be included in regulation to help ensure that users’ freedom of expression is protected, all of which would be welcome. These include:

  • Demanding high standards of transparency when it comes to terms of service and removal decisions;
  • Greater diligence when online platforms assess content notified to them for removal or blocking;
  • Maintaining an effective complaint and redress mechanism;
  • Greater diligence in informing users whose content was removed or blocked, or whose accounts are threatened with suspension;
  • Demanding high accuracy and diligent control mechanisms, including human oversight, when automated tools are deployed for detecting, removing or demoting content or suspending users’ accounts; and
  • Enabling third-party insight (e.g. by academics) into main content moderation systems.

The consultation paper also invites comments on other issues relevant to the protection of freedom of expression, including online platforms’ use of algorithms to prioritise or deprioritise certain pieces of content, and what steps should be taken specifically with regard to the largest platforms, such as Google and Facebook, which dominate the online ecosystem.

*

NEXT STEPS

The consultation is open until 8 September. We’ll be responding to the consultation paper in full in our formal submission, which will be published on the GPD website in due course.