As we noted in last month’s Digest, numerous efforts are taking place around the world to shape the way that artificial intelligence (AI) is developed, used and governed.
April proved a busy month in this regard, with major developments at the Council of Europe, UNESCO and the European Union. While the outputs of these three processes will differ in nature and form, all are ultimately trying to achieve the same thing: to shape national-level regulation and governance of AI technologies:
The Council of Europe is doing so via a “legal instrument” which could be endorsed or ratified by states (including states outside of the Council of Europe) and which would provide a set of minimum standards on AI governance, similar to its instruments on data protection and cybercrime. The instrument is still being drafted, and a consultation period has been extended (but hurry: it closes on 9 May). You can see our consultation response here, and our friends at ECNL have developed a great guide on how to answer the consultation survey.
The European Union is developing legislation which would be binding on all EU member states, and potentially a model for others. A draft was published this month which would prohibit certain applications of AI, set out detailed regulatory requirements for “high-risk” AI systems, and impose transparency reporting obligations for certain other uses.
UNESCO has published the latest version of its draft Recommendation, a non-binding but highly detailed instrument setting out recommended policy actions UN member states should take. We just published an initial assessment of the draft Recommendation—read it here.
How do these frameworks shape up from a human rights perspective? One welcome aspect of all three is the recognition of human rights as a fundamental consideration. All explicitly recognise the risks to human rights that stem from AI, and all propose measures to mitigate them. While the level of detail varies, all include (or are likely to include) requirements for greater transparency around AI systems, restrictions on AI systems which carry the greatest degree of risk to humans, and greater responsibility and accountability for companies developing AI technologies.
However, concerns remain:
The EU’s Regulation contains only limited prohibitions and limitations on the most harmful AI systems, subject to a number of exceptions, and only weak transparency requirements (as noted by many organisations, including Access Now, AlgorithmWatch, ARTICLE 19 and EDRi). As a result, a significant proportion of AI systems would be largely unaffected.
The Council of Europe process has yet to produce a draft text, but the recent multistakeholder consultation (which we submitted to) gave respondents only limited scope to recommend what the legal instrument should contain. Most of the questions simply asked respondents to “agree” or “disagree” with broad statements, or to prioritise certain issues or uses of AI that raise concerns.
Finally, UNESCO’s draft Recommendation takes an “ethical” approach, with a lack of clarity on the relationship between “ethics” and “human rights” in the text, the inclusion of unclear and undefined language, and a limited approach towards the need for human oversight of AI. For further analysis, see our blog post on the draft Recommendation.
Regardless of the merits of these processes, a further concern is the risk of fragmentation in human rights protections. While the existence of multiple instruments and frameworks on the same issue is not a concern per se, widely diverging standards could lead to a “race to the bottom” rather than a universally high level of human rights protection.
To keep track of developments, we’ve upgraded our AI Policy Hub to include an interactive calendar of key events and meetings at these (and other) forums, alongside a comprehensive set of resources and tools to help navigate and understand the issues involved.