Last week, investor George Soros’ keynote speech at the 2019 Davos World Economic Forum took an unexpected turn:
“I want to call attention to the mortal danger facing open societies from the instruments of control that machine learning and artificial intelligence can put in the hands of repressive regimes.”
His intervention reflects the increasing prominence of AI as a preoccupation for policymakers. As once-speculative technologies like biometrics, virtual assistants and robotics have become integrated into the fabric of ordinary life, the giddy optimism of some AI boosters – who believe “the mysteries of the universe could be unravelled by the unification of man and machine chips” – is increasingly tempered by an appreciation of the potential risks for human rights.
2018 saw a number of important studies into the potential impacts of algorithmic filtering on freedom of expression and privacy – from UN Special Rapporteur David Kaye’s Report to the General Assembly in October 2018 to Article 19’s paper on Privacy and Freedom of Expression in the Age of Artificial Intelligence. And the future of AI governance – or, in other words, what regulatory, legislative and other mechanisms can successfully square AI’s economic and social benefits with its risks – is, for the first time, being seriously discussed in a range of forums – notably UNESCO, which will explore a “humanistic approach” to AI in its upcoming high-level meeting, and the International Telecommunication Union (ITU), which just held a high-profile workshop on the subject.
The role international bodies should (or shouldn’t) play in the regulation of these technologies is a controversial and highly politicised matter which I won’t focus on here (we discuss it briefly in this month’s newsletter). Instead, I want to look more closely at an emerging new instrument for the management of AI at the national level: the National AI Strategy (NAS).
Currently, National AI Strategies serve as vehicles for a range of policy aims: from promoting research and development in the field of AI, to announcing legislative reviews, to discussing social provisions like public education and capacity building to manage the impact of AI on society. At present, 18 have been published. These are predominantly found in the global North, but some global South countries – such as Mexico and India, where conversations to develop strategies are well underway – are in the process of producing their own.
There are several reasons why I think National AI Strategies deserve our attention as human rights defenders. Due to the breadth of what they cover, they have the potential to produce a range of human rights impacts at the national level, both positive and negative. That in itself is reason enough to pay attention to them.
Despite these potential impacts, very few existing NAS processes explicitly refer to human rights. Most opt instead for a discussion of “ethics”, a term which is notably more ambiguous and less concrete. In Mexico’s NAS, it stands in for the “complex social, economic and political issues associated with widespread AI implementation”; in India’s, it is simply undefined. Yet there’s reason to believe that these strategies can be shaped productively by engagement from civil society. Our extensive work on National Cybersecurity Strategies (NCSS), a similar cross-departmental policy instrument, has shown that the presence of human rights defenders can decisively shift a strategy in the direction of greater openness, inclusiveness, and transparency. We also found that – due to transnational patterns of knowledge-sharing among policymakers – once rights-respecting practices were established in one national strategy, they tended to set a precedent for others. There is currently no defined corpus of good practice for NAS. The sooner we can intervene to help build and shape it, the more likely it is that rights-respecting practices will become the norm.
So this brings us to the main question: what should human rights defenders be doing about NAS?
The first necessary step, we think, is exploration and research. There needs to be a better understanding of how NAS work as instruments, and what a “good practices” framework might look like in terms of both development and implementation. At the moment, our work on this question is relatively open-ended. But here are some of the directions and critical questions we’ll be examining in the coming months:
- What can we learn from the development and implementation of National Action Plans and National Cybersecurity Strategies, which are more established and widely adopted mechanisms for dealing with (respectively) business and human rights (BHR) and cyber-related issues at the national level?
- What potential openings does the inclusion of “ethics” offer for human rights defenders? Can we use this framing to ensure human rights considerations are integrated into the deployment of initiatives set out in the NAS?
- What does the growing consensus that AI should be developed in a manner consistent with international human rights law (e.g. from Google) actually mean in practice?
- What would openness, inclusiveness and transparency look like in the context of a National AI Strategy?
- Does the economic emphasis of the NAS model impose limits on what we can achieve with it, from a human rights perspective?
- What role can – and should – international bodies and policymaking processes play in ensuring good practice in the development and implementation of rights-respecting NAS?
- Is advocacy to promote human rights in NAS cost-effective? Or should we be focusing on other instruments at the national level?
A final note: it’s early days for the NAS model, and this post is only a brief sketch of possibilities for further study and consideration. Feedback and suggestions are welcome and encouraged.