CAHAI’s proposed elements of a legal framework on AI: our thoughts

22 Mar 2022

By Richard Wingfield

At the end of 2019, the Council of Europe started to explore the possibility of developing a legal instrument, such as a treaty, on artificial intelligence, and established an Ad Hoc Committee on Artificial Intelligence (CAHAI) to put together proposals for one. After two years of work, the CAHAI has now published its proposals for “possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law”. A successor committee (the Committee on Artificial Intelligence, or CAI) will now take this work forward and develop such a legal instrument over 2022 and 2023, with its first meeting starting on 4 April.

If developed, this instrument would likely become the first international treaty on the governance and regulation of AI, complementing other international initiatives such as UNESCO’s Recommendation on the Ethics of AI (adopted in November 2021) and the European Union’s development of an Artificial Intelligence Act. As with other Council of Europe instruments, like those on data protection (Convention 108) and cybercrime (the Budapest Convention), it may well be open to states from outside the Council of Europe, making it a global, rather than merely regional, standard.

CAHAI’s proposals were ultimately determined by the member states involved in the process. However, alongside a number of other civil society organisations, GPD has been closely involved in the CAHAI’s work as an Observer. As the work of the CAHAI concludes, and its successor, the CAI, prepares to begin, we take a look at the proposals and what they might mean for any legal instrument.

 

WELCOME ELEMENTS

The mandate of the CAHAI explicitly required the Committee to base proposals for any instrument on the Council of Europe’s standards on human rights, democracy and the rule of law. Given this, it is unsurprising that many of those proposals hold up well from a human rights perspective.

We welcome the proposal that the instrument apply to “all development, design, and application of AI systems, irrespective of whether the actor is public or private”, as well as the recognition that the new instrument need not create new rights and must not undermine the existing international human rights framework. The proposals state that this means “further tailoring rights and obligations relating to human rights, democracy and the rule of law for the purpose of this instrument only where and when, after careful examination, the conclusion is reached that existing standards in their current form cannot provide sufficient protection of the rights of individuals in the specific context of the development, design and application of AI systems”.

The proposals for a risk-based approach to assessing the impacts of applications of AI systems are sensible, with greater safeguards required for riskier applications. That said, the proposals do not go into much detail as to what the different risk classifications would be (save that they may include “low risk” and “high risk”). They do, however, indicate that there should be an initial review of all AI systems for potential risks to human rights, with a full human rights impact assessment required where such risks are identified. This two-stage approach, with all AI systems undergoing a form of triage, is important, helping to ensure that no AI systems fall through the cracks.

The same part of the proposals also includes a welcome acknowledgment that some applications of AI may need to be subject to a full or partial moratorium or ban where they “present an unacceptable risk of interfering with the enjoyment of human rights, the functioning of democracy, and the observance of the rule of law”. Examples given include AI systems which use biometrics to identify, categorise or infer characteristics or emotions of individuals, in particular if they lead to mass surveillance, and AI systems used for social scoring to determine access to essential services. That being said, it is likely that the final instrument will leave it to states to determine what constitutes an “unacceptable risk” at the national level, which could result in fragmented levels of protection.

The proposals also set out some positive ideas as to the minimum safeguards and expectations that should apply during the design of AI systems in order to identify risks to human rights. While little substantive detail is provided, the proposals do set out that safeguards should include provisions on transparency around the use of AI systems, on ensuring equal treatment and non-discrimination, on proper data governance, and on “robustness, safety and cybersecurity, transparency, explainability, auditability and accountability” and “ensuring the necessary level of human oversight over AI systems and their effects” throughout the AI lifecycle. Fleshing these out will be a key part of the CAI’s work.

Finally, the proposals also provide that enforcement of the instrument will need to be supervised by a national authority, with national law ensuring “their expertise, their independence and impartiality in performing their functions, and the allocation of sufficient resources and staff”.

 

TROUBLING TEXT

Despite the many positive elements of the proposals, several significant parts of the text should give rise to concern.

A key area of contention in discussions was whether the instrument should apply to the use of AI for national defence and national security purposes, as well as for “dual use” technologies. Given that many of the greatest risks to human rights come from the use of AI for these purposes, it is critical that they be within scope, and the reasons that member states provided in support of their exclusion are unconvincing.

It is true, as noted in the report’s proposals, that the Statute of the Council of Europe provides that “matters relating to national defence do not fall within the scope of the Council of Europe”. However, the most significant Council of Europe instrument, the European Convention on Human Rights, does apply to matters relating to national defence, and other instruments developed by the Council of Europe have not contained such explicit exclusions.

In relation to “national security”, many European governments are pushing for the EU’s Artificial Intelligence Act to exclude AI systems which are used for national security purposes. But, whereas the EU’s mandate explicitly excludes “national security” via its treaties, this is not true for the Council of Europe, and a similar exclusion here would set a dangerous precedent. Indeed, excluding these AI systems from the instrument would also run counter to CAHAI’s own consultation, which found that the use of AI for national security purposes creates some of the greatest risks to human rights. The final compromise text does not take a firm position, stating simply that “the CAHAI is of the opinion that the issue of whether that scope could cover ‘dual use’ and national security should be further considered in the context of developing a Council of Europe legal framework on AI, taking into account possible difficulties in this respect”. However, the fact that these areas are even suggested to be outside of scope is concerning.

Finally, the section on the rights of those affected by AI in ways that breach their human rights is also disappointingly weak. The proposals say there should be a series of safeguards for individuals when AI is used to make or inform a decision which impacts their legal rights or some other significant interest. These include the right to an effective remedy before a national authority (including judicial authorities) against such decisions, the right to be informed about the application of an AI system in the decision-making process, the right to choose interaction with a human in addition to or instead of an AI system, and the right to know that one is interacting with an AI system rather than with a human.

However, this positive text is then largely undermined by a provision that it should be up to national governments to determine how these rights can be exercised, and that these rights can be restricted, provided that the restrictions are provided for by law and are necessary and proportionate in a democratic society. Similarly, in the section which looks at the rights of individuals affected by the use of AI by governments and the public sector, the proposals provide that those rights need not apply where “there are competing legitimate overriding grounds”. These are significant loopholes and caveats which could lead to national law gutting individuals’ ability to enforce these rights, or creating broad exceptions. They could also leave the instrument weaker than the existing international human rights framework, which, for example, does not permit exceptions or restrictions on the right to an effective remedy when an individual’s human rights have been violated.

 

NEXT STEPS

As noted above, the CAHAI will be succeeded by a new Committee on Artificial Intelligence (CAI), which will work until 31 December 2024, although the deadline set for it to develop the legal framework is 15 November 2023. As with the CAHAI, the CAI is open to member states (who will be able to vote) as well as to observers and other participants (who will not have voting rights). Russia, which has notified the Council of Europe of its intention to withdraw from the body, will not participate.

While the proposals developed by CAHAI are likely to strongly influence the new CAI’s work, they are only proposals, meaning there is still scope for the final text to provide stronger (or, of course, weaker) protections for human rights. We’ll continue to provide updates on its work, so take a look at our AI Policy Hub and Forums Guide, and sign up to our monthly Digest for regular updates and analysis.