Trends in the encryption debate: from intelligence gathering to tackling online harms

18 Jan 2021

By Richard Wingfield

Policy debates around encryption are nothing new. The 1990s saw what was known as the “crypto wars”, with efforts from governments around the world (and particularly the US) to restrict the availability of strong encryption, whether through demanding “backdoors” for intelligence agencies, or limiting the key lengths of encryption products for export. And our World map of encryption laws and policies contains legislation going back to the mid-90s.

In 2021, where does the debate stand? At first glance, the core premise seems to have changed little. The supporters of strong encryption continue to argue that the privacy and security it provides are critically important for ensuring cybersecurity, keeping people’s data and communications safe and secure online, and enabling human rights, such as freedom of expression and association. On the other side are those who see encryption as a threat to national security—creating spaces that are impossible for security and law enforcement agencies to access.

However, one change in recent years has been how this issue is framed, and consequently the type of actors engaging. Today, the debate’s focus has shifted from being solely about intelligence gathering to include tackling different forms of illegal (or even legal but “harmful”) types of communications, such as the sharing of child sexual abuse material (CSAM), child grooming, cyberbullying and disinformation. This move towards a focus on these different types of online content has brought new actors into the debate, with certain increasingly vocal groups arguing against ubiquitous encryption, and the availability of platforms that use end-to-end encryption in particular.

While the trend can be seen across the world, the particular forms of online content which attract attention vary. In the US and in Europe, CSAM and other harms to children (such as grooming) dominate the debate, not least since Facebook announced that it would roll out end-to-end encryption on its Messenger app, currently the source of many reports of CSAM. This focus is also driving legislative proposals and policymaking more broadly. In the US, for example, the EARN IT Act was under consideration by the previous Congress. In its most recent draft, the Act would establish a governmental commission headed by the Attorney General, which would produce guidance on how platforms should undertake content moderation in order to combat CSAM. While not legally binding, this guidance could still strongly discourage the use of end-to-end encryption. In addition, online platforms would lose the protections they currently enjoy, which prevent criminal or civil claims being brought against them in individual states or territories due to the existence of child sexual abuse material on their platforms. This would be the case even when such material was shared on encrypted channels, risking litigation where a platform uses encryption or fails to actively monitor all content on its platform.

In Europe, similarly, it is through policies on CSAM that encryption is being considered. 2021 will likely see some form of EU-level regulation touching on encryption, although it's unclear at this point whether the EU will take a light-touch approach and continue to look for "technical solutions" or go further and introduce requirements for "backdoors" or other means of making encrypted communications accessible. In the UK, the Home Office has been particularly vocal in its criticism of end-to-end encryption, spearheading a recent Five Eyes statement, joined by India and Japan, on the subject. As well as considering an outright legal prohibition on Facebook rolling out end-to-end encryption on Messenger, it is also pushing for the government's upcoming Online Safety Bill to set out duties on online platforms to prevent illegal and harmful content and behaviour on private and encrypted services, de facto requiring these platforms to introduce backdoors or some other means of monitoring content. Across the EU, the UK and the US, this change in focus has led to many children's safety groups playing an increasingly active role in policy discussions around encryption.

In other parts of the world, the focus is different. In Brazil, for example, disinformation is the main source of concern for many legislators, and particularly its virality on platforms such as WhatsApp. Due to their own experience of seeing disinformation on online platforms, particularly during election periods, many legislators are pushing for legislation which would include traceability requirements, i.e. requiring companies to be able to identify the original sender of content which is shared on platforms. While doing so would not necessarily break encryption, it would be hard to comply with these requirements without undermining encryption in some way.

The situation is similar in India, where a series of murders following false rumours spread on WhatsApp has galvanised efforts within the government to force online platforms to identify and reveal the original creator and sender of messages, even when those messages are protected by end-to-end encryption. As well as joining the most recent Five Eyes statement, the Indian government is considering revising its intermediary liability legislation so that online platforms would become legally liable for the content on those platforms unless, among other things, they introduced traceability requirements.

What does this trend mean for human rights defenders and others who support the availability of strong encryption? There are, perhaps, a few lessons. First, we need to build our relationships and conversations with those who are concerned about encryption, whether child safety groups or others who experience harmful content and behaviour online. There are many steps that could be taken to better protect people online, to build people's resilience, and to tackle threats in a way that doesn't undermine privacy or security. Second, we need to tell the stories of those who rely on encryption for their physical safety and security—journalists, whistle-blowers, minority groups—so that the ramifications and trade-offs involved in encryption policy are front and centre of debates. Third, while the encryption policy debate is not new, many legislators and policymakers are only coming to the issue now. This means we need to build their understanding of encryption technology and of what is feasible and infeasible, and ensure that discussions of alternatives and "technical solutions" are grounded in technical reality, not theory.

With the attention of so many governments on content regulation, encryption is front and centre of policy debates in every part of the world. And the acceleration in the digitalisation of society since the outbreak of COVID-19 has meant that protecting our privacy and security online has never been more important. As the framing of the issue evolves, however, so must our strategies. One way to keep on top of policy developments is through our new Encryption Policy Hub, which offers a range of tools and insights for human rights defenders engaging in the encryption debate.

*

The author would like to thank Emma Llanso, Director of CDT’s Free Expression Project; Owen Bennett, Senior Policy Manager at Mozilla; and Bruna Martins dos Santos, Advocacy Coordinator at Data Privacy Brasil, for their insights and reflections when putting this piece together.