Regulating social media companies: une approche alternative?

14 May 2019

By Richard Wingfield

When the UK government launched its Online Harms White Paper in April, it was emphatic in claiming it to be the “world’s first” holistic approach to the regulation of online content. Only a few weeks later, a rival has already emerged – in the shape of “a French framework to make social media platforms more accountable”.

The document (available in French and English) is the interim report of a French inter-ministerial working group, which—since the start of this year—has had a broad mandate to develop proposals to ensure “high standards and quality requirements in moderating the content that [social media platforms] host”. The interim report is addressed to the French Secretary of State for Digital, who will consider its findings ahead of the publication of the final report on 30 June 2019.

Below, we briefly run through the key findings and proposals in the report, and offer some initial thoughts on how it measures up from a human rights perspective.

*

What does the report propose?

The report proposes a general legislative framework for the regulation of “social networks”, with a “social network” defined as “an online service allowing its users to publish content of their choice and thereby make it accessible to all or some of the other users of that service”.

While specific online harms—including terrorist content and child sexual exploitation and abuse—are highlighted in the report, the framing is notably different from the UK’s harm-based approach. Here, the harms are set out as the context for regulation, rather than the focus of it—and the regulation proposed is based on increasing the transparency and accountability of social media companies, values which receive far less attention in the Online Harms White Paper. It is outcome-based rather than prescriptive, and explicitly makes the protection of human rights the core objective of the new regulatory framework, with proportionate actions required from social networks. While the current version of the model is (for now at least) light on detail, the report does set out five pillars upon which it would be based.

  1. The regulatory framework would have the primary objective of defending “the exercise of all rights and freedoms on social media platforms”. This includes individuals’ freedom of expression and communication, individuals’ protection of physical and moral integrity, and social networks’ own entrepreneurial freedom (such as the freedom to define and apply their terms of use, and to innovate).
  2. The regulation would focus on the accountability of social networks and be implemented by an independent administrative authority. It would only apply to services where the number of monthly users was above a fixed percentage of the population (between 10% and 20%). The authority would also be able to apply the regulation to services with fewer users but only “in the event of an identified and persistent breach for services”. (Note: it is not clear from the report what would be considered a “breach”). Under this pillar, social networks would have “transparency obligations” relating to how they order content and their systems for implementing terms of use and moderating content. This would include their methods for interpreting community rules, their procedures, and the human and technological resources they use to ensure compliance with those rules and to tackle illegal content. The regulator, or an independent auditor, would have the power to audit those systems.
  3. There would be “broad, informed political dialogue conducted transparently between the government, the regulator, the actors and civil society”. Although the government would have a key role in setting the threshold for triggering obligations, setting out those obligations, and setting up the regulator, it would also organise a dialogue with social networks, involving the regulator and civil society, to look at particular political issues needing attention. Social networks could, themselves, make voluntary commitments to the government (which would be overseen and enforced by the regulator), such as an action plan to tackle a newly identified type of abuse.
  4. The independent administrative authority would act in partnership with other parts of the state and civil society. The authority’s focus would be on policing the transparency obligations of social networks. Importantly, it would not be able to regulate content, but would have access to information held by the platforms, including an ability to require access to algorithms used. It would also have the power to impose sanctions: the ability to publicly report where social networks failed to comply with their obligations, and financial penalties of up to 4% of their total global turnover. These powers could only be exercised after formal notice.
  5. There should be European cooperation which reinforces the capacity of EU member states to deal with global platforms. The report notes the global nature of social networks and so proposes EU-wide coordination—including EU-level regulation to ensure consistent national-level regulation, cooperation between national authorities and civil society, and an EU mechanism to prevent excessive regulation.

*

First thoughts

While the report is not final, and therefore relatively light on detail, it is nonetheless a breath of fresh air when compared to proposals from the UK and elsewhere, which are far more interventionist and pose serious risks to freedom of expression. Five elements are particularly welcome:

  1. Instead of focusing only on harms, this model looks at the broader range of positive and negative impacts of social networks and online platforms holistically, including impacts on human rights. The model therefore recognises the need to protect freedom of expression alongside, for example, the right to physical and mental security.
  2. There is an explicit recognition of international human rights standards in developing and implementing the model. Indeed, the proposal has as its primary objective the need to defend the exercise of human rights on social networks. To ensure consistency with international human rights standards, the proposal does not focus on pressurising social networks to remove particular types of content, but instead emphasises greater transparency around platforms’ content moderation policies and enforcement. Rather than setting out specific rules for what content moderation policies should be, or how they should be enforced, the regulator’s role would be directed toward demanding transparency from social networks and challenging companies whose policies are inadequate, or whose enforcement is non-transparent or ineffective.
  3. The model is flexible and proportionate—which means that obligations would automatically apply only to large platforms, with smaller platforms brought into scope only where they have demonstrated persistent poor practice in the past. By focusing on desired outcomes rather than prescribing specific actions that should be taken, the model recognises the diversity of the online ecosystem and the differences between online platforms.
  4. The model has been designed in a way which mitigates the risk of over-removal of content protected by the right to freedom of expression. For example, it does not propose removing the immunity from liability that online platforms have with regard to the content they host. And while the regulator would be able to impose sanctions—including large fines—the regime proposed is measured, with an emphasis on public “naming and shaming”, and a formal notice period before the imposition of any sanction. There are no suggestions of criminal liability or the shutting down of platforms, as is being considered in other proposals.
  5. The proposal recognises the need to involve all stakeholders, including government, the regulator, companies and civil society. The political dialogue involving all these stakeholders on issues of key concern, for example, has the potential to ensure that all interested voices are heard when particular challenges need to be addressed.

While there are still questions to be answered—particularly over the specific issues on which platforms will be expected to develop and enforce terms of service, when smaller platforms would be in scope, and how engagement with other stakeholders would work in practice—the proposals are a welcome, serious attempt to deal with some of the challenges that arise from online platforms in a way that is proportionate and consistent with international human rights law and standards.

The final report is scheduled for publication at the end of June. GPD will be following developments closely, and will review the final proposals as soon as they come out. For regular updates and insight on this, as well as other ongoing debates in the online content regulation space, subscribe to our monthly digest here.