Resolution and regulation?: The challenge of unlawful and harmful online content

11 Jan 2018

By Richard Wingfield

On 1 January 2009, Facebook’s CEO Mark Zuckerberg made a public New Year’s resolution: “wear a tie every day”. Subsequent resolutions, in what has become a light-hearted annual tradition, have seen Zuckerberg learning Mandarin, coding daily and writing thank you notes. On 1 January 2018, however, the resolution was noticeably starker – to “fix” Facebook – with the accompanying post pledging to tackle a range of social problems commonly laid at his company’s door, from abuse and hate speech to ‘fake news’ and foreign interference in domestic politics.

Facebook isn’t alone in this, of course. Most (if not all) major social media and search platforms are working out how best to respond to different kinds of unlawful or harmful content being posted, uploaded and shared. Whether it’s sexual abuse on Twitter, videos showing graphic violence on YouTube, or links to copyrighted content on Google, platforms everywhere are trying to balance the need to ensure the internet remains free and open with demands for unlawful and harmful content to be restricted.

As companies grapple with these challenges, however, some governments are growing impatient. In the last few weeks alone, a number of governments have taken steps towards greater regulation of platforms and online content: the EU Home Affairs Commissioner has demanded that platforms take down illegal content within two hours and suggested that the Commission would introduce legislation if platforms failed to do so voluntarily; in the UK, a Home Office Minister has proposed taxing online platforms which fail to take down ‘radical’ or ‘extremist’ content; and in France, President Macron has promised to introduce a new law to prohibit ‘fake news’ during elections. Most notably, in Germany, the Network Enforcement Act (or NetzDG) came into force on 1 January, requiring online platforms with more than two million registered users to remove content which is ‘manifestly unlawful’ within 24 hours, with fines of up to €50 million for failure to do so. In response, Facebook is reported to have hired hundreds of new staff to review content, and Twitter has started deleting posts from a far-right politician who referred to “barbaric, gang-raping hordes of Muslim men”.

But is greater regulation the right approach? Clearly, governments consider the response from online platforms so far to be too timid and slow. And the principle that content which would be prohibited offline shouldn’t be available online is a sound one. But these sorts of interventions have two key consequences which should cause concern to anyone who cares about online freedom of expression.

First, regulation like the NetzDG forces private businesses to make decisions about what is legal or illegal – a role that belongs primarily to courts and other public authorities. Human rights apply online as well as offline, and so the same principles which underpin permissible restrictions on freedom of expression offline also apply to its exercise online. This means that restrictions, including the removal of content, should only take place following a clear, transparent and rights-respecting process, with appropriate accountability and the possibility of an independent appeal. There is currently very little transparency over platforms’ decision-making processes, with broadly worded Terms of Service, inconsistent and seemingly arbitrary results, and little public reporting of the decisions reached and how they were made. Given the scale of content uploaded and posted each day, it is unrealistic to expect courts to make every decision about its lawfulness, but platforms currently have neither the expertise, nor the openness and accountability, that an independent judiciary provides.

Second, tight time limits and high sanctions risk incentivising the removal of content which might in fact be lawful. Where platforms are unsure whether content is unlawful or harmful, many will play it safe and simply delete it rather than risk a fine. Already, we have seen Twitter delete tweets in Germany which were controversial, satirical and ironic, but not obviously illegal. And it is precisely in this grey area that the right to freedom of expression can be most important. As the European Court of Human Rights has said:

Freedom of expression constitutes one of the essential foundations of such a society, one of the basic conditions for its progress and for the development of every man. (…) [I]t is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’.

The internet offers all of us the opportunity to share information, discuss ideas and debate important issues. For those who are isolated or who live in countries with oppressive censorship, the internet may be the only way that they can engage and have their voice heard. But the greater the pressure on platforms to remove content, and to do so more quickly, the greater the risks to freedom of expression for everyone.

So, if platforms aren’t doing enough, and government regulation is too heavy-handed, what, then, is the answer? This is a question that Global Partners Digital has been looking at as part of our broader work on privacy and free expression. In December, we published our response to the UN Special Rapporteur on freedom of expression’s consultation on content regulation in the digital age, in which we set out what platforms and governments should be doing to ensure that any online content regulation complies with international human rights law and standards. For platforms, this means greater transparency, engagement with experts and stakeholders when developing and implementing Terms of Service, and clear appeal and remedial processes for affected users. For governments, it means ensuring there are sufficient limitations and safeguards in place when making companies liable for content on their platforms, with no general liability for content which they merely host, and realistic timeframes for compliance once they are notified of content which may be unlawful or harmful.

None of us wants to see videos of child sexual abuse or incitement to racial hatred, for example, allowed to proliferate freely online, but nor should responses to the problem of unlawful and harmful content result in unjustified or arbitrary restrictions on freedom of expression. The challenges are significant. In the next few weeks, we will be publishing a White Paper on Online Content Regulation, which will take discussions on this issue forward and propose a model for dealing with the legitimate challenges that platforms and governments face, one that is consistent with – and supportive of – human rights.