20 Apr 2023

First thoughts on the revised zero draft of the Council of Europe’s AI Treaty

Note: GPD is an observer to the Council of Europe’s Committee on Artificial Intelligence. All comments are based solely on public documents.


The Council of Europe’s Committee on Artificial Intelligence (CAI) is currently developing the world’s first treaty on AI, working towards an ambitious completion date of the end of 2023.

The significance of this convention cannot be overstated, with some experts predicting that it “will prove indispensable”. As the first legally binding framework for the regulation of AI, it could significantly strengthen human rights protections by providing a common framework for the design, development and application of AI systems throughout their lifecycle, regardless of whether these activities are undertaken by public or private actors. The convention has the potential to become not just a regional framework but a global standard, as countries outside of the Council of Europe such as the USA, Canada, Israel and Japan could sign on.

The body developing the treaty, CAI, comprises the 46 member states of the Council of Europe, as well as observer states, representatives of Council of Europe bodies and sectors, other international and regional organisations, the private sector, and representatives from civil society. Efforts to regulate AI and algorithmic decision making at the Council of Europe go back several years, and GPD has been closely engaged in CAI as an observer since 2021. A key priority for us in the process has been ensuring that human rights considerations are front and centre.

The current draft under consideration—which was prepared by the Chair of CAI with the support of the Secretariat—is the “Revised Zero Draft [Framework] Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law”. It was published at the beginning of 2023. While this document doesn’t reflect the final outcome of negotiations, it does provide a strong indication of efforts to elaborate a legally binding instrument on AI and the general direction of travel. Below, we take a deep dive into the text: highlighting its strengths, as well as a few specific areas for clarification and further work.


A step in the right direction

The revised zero draft gets a number of things right by ensuring that it builds on established frameworks for human rights and emerging norms for the governance of AI. States that ratify the convention will have to translate and implement its provisions into their domestic frameworks, including requirements for impact assessments, transparency and accountability, as well as measures ensuring the availability of redress, amongst others. Many of these elements are drawn from the final report produced by the CAHAI, the predecessor to CAI.

Ensuring that the convention reinforces international human rights law and standards

The proposed convention, in accordance with the CAI’s terms of reference, is based on the Council of Europe’s standards on human rights, democracy and the rule of law. We welcome that the need to protect human rights is featured prominently throughout the revised zero draft. This is immediately evident from the preamble, which sets out “the need to ensure respect for human rights as enshrined in the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms and its protocols, the 1966 United Nations International Covenant on Civil and Political Rights and other applicable international human rights treaties”. The 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data and its amending protocols are also referenced. These provisions ground the proposed convention in existing international frameworks and ensure that it will build on previously established obligations for the protection of human rights, as opposed to fragmenting or undermining them.

Beyond the preamble, the revised zero draft sets out various obligations and principles for the protection of human rights. For example, Article 9 would require state parties to take measures “to preserve individual freedom, human dignity and autonomy, in particular the ability to reach informed decisions free from undue influence, manipulation or detrimental effects which may adversely affect the right to freedom of expression and assembly, democratic participation and the exercise of other relevant human rights”. Chapter III contains various principles, including Article 12 on the “principle of equality and anti-discrimination” and Article 13 on the “principle of privacy and personal data protection”. Despite being framed as principles, these provisions reaffirm obligations with respect to human rights, including those on equality and non-discrimination, privacy and data protection.

It builds on existing frameworks for human rights and emerging norms for governance of AI

We welcome that many of the obligations and principles set out in the revised zero draft build on existing frameworks and emerging norms for the governance of AI. This is clearly exemplified by the “principle of accountability, responsibility and legal liability” set out in Article 14 and the “principle of transparency and oversight” in Article 15, which align with the OECD AI Principles (2019) and UNESCO Recommendation on AI Ethics (2021). This suggests that the draft has taken account of relevant initiatives from other international forums, and is seeking to advance broader coherence on AI governance.

We are pleased that the revised zero draft sets out a number of obligations and general principles aimed at ensuring that the design, development and application of AI systems is fully consistent with respect for human rights. It includes principles relating to equality, privacy, accountability, transparency, safety and safe innovation. Many of these not only reaffirm existing obligations of states, but further enable the realisation of existing and universally recognised human rights, even when not explicitly referenced.

The right to an effective remedy is guaranteed under Article 2(3) of the International Covenant on Civil and Political Rights and Article 13 of the European Convention on Human Rights, but is not mentioned within the revised zero draft. However, the principles of transparency and accountability, alongside the mechanisms within Chapter V “on measures and safeguards ensuring accountability and redress”, directly support the realisation of this right with respect to AI. These principles and safeguards aim to resolve issues relating to the opacity of AI systems and to provide individuals with the means and mechanisms to effectively challenge AI-informed decisions. We are particularly pleased that Article 20 would require state parties to ensure that any person has the right to know they are interacting with an AI system.

That doesn’t mean that these obligations and principles wouldn’t benefit from modifications or refinement. We would like to see the obligations in Chapter V more clearly grounded in international standards on access to information. Still, we believe the revised zero draft signals a step in the right direction and hope that negotiations will only improve it from a human rights perspective.


Areas for clarification and further work

Alongside these commendable aspects of the revised zero draft, issues remain. We hope that further negotiations will result in clarification and revisions that result in more comprehensive protections for human rights. As always, the devil is in the detail.


Article 2 of the revised zero draft sets out the definitions within the proposed convention. Some of these definitions (notably “artificial intelligence system”, “artificial intelligence provider” and “artificial intelligence user”) diverge, in varying respects, from those in the EU’s proposed Artificial Intelligence Act and those used by other bodies such as the OECD. The final CAHAI report recommended that all definitions should, as far as possible, be compatible with similar definitions used in other relevant instruments on AI. That doesn’t appear to be the case at the moment. CAI should therefore work to ensure that these definitions are precise, aligned with those in comparable instruments, and futureproofed so as not to be rendered obsolete by future technological developments.


Article 4(3) of the revised zero draft provides that “the Convention shall not apply to design, development and application of artificial intelligence systems used for purposes related to national defence.” This provision would seem to align the scope of the convention with Article 1(d) of the Statute of the Council of Europe. But the scope of the proposed convention and its treatment of AI systems used in the context of national security, or ‘dual use’ systems, has been a thorny issue for several years now. Civil society has consistently advocated for the scope of the convention to include AI systems concerning national security or dual use, and some groups have provided detailed views on this provision. GPD would like to see this sub-article either removed entirely or reformulated in a more generic manner that does not explicitly mention “national defence”, leaving interpretation to the appropriate bodies.

Assessment and red lines 

Chapter VI of the revised zero draft sets out an obligation for state parties to provide AI providers and users with effective guidance on how to identify, assess, prevent and mitigate risks and adverse impacts on human rights, democracy and the rule of law. It notes that the preventative and risk mitigation measures should be proportionate and target the specific context of application of AI systems. This chapter does not, however, provide for explicit prohibitions on particular AI systems or those used in specific contexts, as in the proposed EU AI Act, nor does it outline specific categories of high-risk AI systems. It only requires that states provide for the possibility of imposing bans or moratoriums on certain applications of AI systems within their domestic legal framework.

This approach appears to diverge from the recommendations contained in the CAHAI final report, which called for risk classification to include a number of categories (e.g., “low risk”, “high risk”, “unacceptable risk”), and which drew attention to particular examples of AI systems that could qualify as unacceptable risk, such as AI systems “using biometrics to identify, categorise or infer characteristics or emotions of individuals, in particular if they lead to mass surveillance, and AI systems used for social scoring to determine access to essential services”. There have been recent calls by civil society for the convention “to prohibit AI systems that pose an unacceptable or unmitigable risk to human rights, such as inherently discriminatory uses of biometrics or systems/uses leading to mass surveillance in the context of law enforcement or migration”. The EDPS has similarly expressed support for red lines and the prohibition of particular AI systems or uses.

It is therefore uncertain whether this absence of red lines and of a more defined classification system will persist through negotiations. Article 29 provides that national supervisory authorities will oversee and supervise compliance with the requirements of the risk and impact assessments. This demonstrates the degree of discretion states will have when addressing the risks posed by AI systems. While there are certainly advantages to a flexible and principle-based framework, there may be downsides to simply delegating oversight to national authorities in the proposed manner, resulting in divergent approaches by states with respect to classification, prohibitions and corresponding obligations.

These uncertainties in the current structure require clarification. Independent oversight is also needed, at both the national level and at the Council of Europe, along with clear cooperation with national human rights structures and civil society: something that is currently missing from the revised zero draft and has been stressed by particular groups. Finally, there is a need for an effective methodology for undertaking such assessments, which CAI discussed at a previous plenary and will decide at a later date.


We welcome that Article 36 of the revised zero draft provides that “no reservation may be made in respect of any provision of this Convention”. Reservations are made by states when signing or ratifying a treaty to enable them to exclude or modify the legal effect of certain provisions of the treaty. This statement on reservations is a strong starting point, as broad or unrestricted reservations may enable states to avoid the obligations and principles within the convention that are consequential for the protection of human rights. But, based on our experiences in other treaty negotiations, particularly the UN’s cybercrime convention, we’re aware that further negotiations may result in changes to this provision. The consolidated negotiating document of the UN’s Ad Hoc Committee on Cybercrime currently provides that the need for reservations will be assessed once discussions on the substantive provisions have reached a more advanced stage (A/AC.291/19).

If reservations were considered necessary to facilitate wider ratification of the treaty, we would prefer that it only allow reservations for a limited number of provisions which are clearly set out within the substantive text. This approach has precedent in other Council of Europe instruments, such as the Budapest Convention.



We commend CAI for its efforts to create a binding legal instrument that provides much-needed protections for human rights, democracy and the rule of law in the context of AI. Plenary meetings are planned for this upcoming June and September, where GPD will continue to advocate for a robust and rights-respecting framework, and to support the work of CAI throughout its mandate.