The terrorist attacks in Christchurch in March of this year shocked the world and – as ever, after events of this kind – forced governments to look again at how they can keep their citizens safe. One element of the Christchurch attacks, in particular, has caught the attention of governments: the killer used Facebook Live, a live-streaming function on Facebook’s platform, to broadcast the attacks. Although fewer than 200 people were watching the actual livestream, and Facebook removed it within minutes of being notified of its existence, over 1.5 million attempts were made to share and upload the video in the following days on Facebook alone.
On 15 May, and led by President Macron of France (the current G7 President) and Prime Minister Ardern of New Zealand, fifteen governments, the European Commission, and eight large tech companies signed up to the Christchurch Call. The Call contains a series of commitments made by governments and companies to tackle the dissemination of terrorist content online.
While the Call is narrowly focused on one particular form of harmful content, its commitments go to the very heart of how the internet is governed, and who makes decisions about what people can and cannot say and do online. These are complex issues, and while the motivation behind the Call is entirely understandable, it is critical to ensure that in the rush to deal with particular threats within societies, fundamental principles such as human rights and a free, open and secure internet are not overlooked. Looking at the Christchurch Call with these principles in mind, there is much to welcome, but also parts of the Call – and particularly the process by which it was formulated – that could have been better.
The vast majority of what’s in the Call can be welcomed, starting with the fact that its preamble recognises the important principles noted above, stating that:
“All action on this issue must be consistent with principles of a free, open and secure internet, without compromising human rights and fundamental freedoms, including freedom of expression. It must also recognise the internet’s ability to act as a force for good, including by promoting innovation and economic development and fostering inclusive societies.”
For the most part, the commitments in the Call don’t raise any particular concerns, and are sensible, proportionate and potentially effective means of dealing with the threats of terrorism and violent extremism. For governments, these include:
- Building the resilience of populations to resist terrorist and violent extremist ideologies through education, media literacy, and fighting inequality;
- Enforcing legislation that prohibits the production or dissemination of terrorist and violent extremist content, consistent with the rule of law and international human rights law, including freedom of expression;
- Encouraging media outlets to avoid amplifying terrorist and violent extremist content when they depict terrorist events online; and
- Supporting industry standards and other frameworks so that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.
Similarly, many of the commitments made by companies are also to be welcomed and may, in fact, enhance the level of protection of freedom of expression online. These include:
- Providing greater transparency in community standards and terms of service;
- Enforcing those community standards or terms of service in a manner consistent with human rights and fundamental freedoms;
- Regular and transparent public reporting; and
- Reviewing the operation of algorithms and other processes that may drive users towards or amplify terrorist and violent extremist content.
There are also a series of important joint commitments, such as to work with civil society to promote community-led efforts to counter violent extremism in all its forms; to support research and academic efforts to better understand, prevent and counter terrorist and violent extremist content online; to work and support smaller platforms to build their capacity; and to protect and respect human rights and avoid adverse impacts through business activities.
Finally, there is also a specific section focused on how the commitments in the Call will be taken forward with the support of civil society, through obtaining expert advice on how to do so in a manner consistent with a free, open and secure internet and with international human rights law, working to increase transparency, and supporting users through company appeals and complaints processes.
What Could Have Been Better
There are five key aspects of the Call, however, that do raise concerns or which were not done as well as they could have been:
- The first is the failure to define the limits of “terrorist” or “violent extremist”. It is well known that governments across the world use the labels of “terrorism” and “extremism” to target dissidents, political opponents and minority groups. Any commitment to remove such content therefore risks being used to justify the inappropriate censorship of individuals and groups. There is no universally agreed definition of “terrorism” or “extremism”, and this would not have been the appropriate forum to develop one, but the Call could nonetheless have highlighted that the terms should not be misinterpreted or misused to restrict legitimate speech.
- The second is the commitment from governments to “consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions, such as (…) [r]egulatory or policy measures consistent with a free, open and secure internet and international human rights law”. While there is the welcome caveat at the end, recent regulatory proposals from Australia and the UK, for example, have raised serious concerns about risks to freedom of expression through their broad scopes, models which incentivise the removal of legitimate speech, and the privatisation of decisions about the legality of speech.
- The third is the commitment from companies to “take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services”. Again, while this commitment does go on to say that this must be done “in a manner consistent with human rights and fundamental freedoms”, there are nonetheless risks involved in trying to “prevent” particular material from being uploaded, which invariably relies on automated processes which are not always accurate or transparent.
- The fourth is that the process was rushed and not open, inclusive or transparent. Many different stakeholders are affected by issues of internet governance and the regulation of online content; however, the Call was drafted behind closed doors between a small number of governments and large companies. Civil society’s input was very much an afterthought, with a meeting between governments and civil society organised at such late notice that few were able to attend or participate meaningfully. Our colleagues at Access Now were able to participate in this meeting, but have highlighted its shortcomings. InternetNZ deserves praise for its efforts, in difficult circumstances, to establish a platform, facilitate conference calls and oversee the drafting of a set of civil society positions. However, the governments leading the effort should have made the process far more open and inclusive at an earlier stage so that all stakeholders could have meaningfully participated.
- Finally, at the same time as the Call was launched, five companies – Amazon, Google, Facebook, Microsoft and Twitter – published a complementary joint statement and set of nine actions that they would take. These are largely similar to the commitments of the companies in the Call itself (such as better terms of service, easier reporting of content, improved technology, and transparency reporting), but failed to recognise that any measures taken should be consistent with international human rights standards. All companies have a responsibility, under the UN Guiding Principles, to respect human rights, and the moderation of online content inevitably poses risks to freedom of expression. The failure to even note this in their joint statement and set of actions, when the Call itself did repeatedly, and when three of the five companies are members of the Global Network Initiative, is worrisome.
The Christchurch Call has no means of follow-up or monitoring, and so it is unclear what will happen now with the commitments made. There is talk of the issue being revisited at the next meeting of the G7 in August, also in France, and potentially of the development of a new “Charter” with a broader scope covering all forms of illegal and harmful online content. If such a process is to be established, it is critical that lessons are learned from the development of the Christchurch Call, particularly around ensuring that all impacted stakeholders are able to participate meaningfully in its development and implementation.