19 Sep 2023

What would a human rights-based approach to AI governance look like?

Over the past year, discussions around artificial intelligence (AI) have saturated media and policy environments. Perspectives on it vary widely, ranging from boosterist narratives that posit the limitless potential of AI-powered technologies to help overcome social inequalities and accelerate industrial development, to apocalyptic framings that suggest a (speculative) ‘artificial general intelligence’ could make humans extinct.

In the middle, civil society groups—including GPD—have been emphasising the critical, real-life opportunities and challenges that AI presents for individuals and their human rights in the here and now. We posit that the trajectory of AI is unlikely to lead to either utopia or apocalypse; rather, the technologies it comprises have both rights-supporting and oppressive potential. To take one example, generative AI models like ChatGPT could, with effective governance, release us from routine industrial tasks and unlock time for human creativity and free expression. They also have the potential to drive disinformation, negatively disrupt education, and weaken cybersecurity.

How, then, can we harness the potential benefits of AI while avoiding its risks to human rights? Governments are currently racing to enact regulation to address precisely this question. At the time of writing, no AI-specific framework is yet operational at the national, regional or global level. So far, the general frameworks that have been proposed are anchored in ethics and have a mixed record in considering human rights. At this juncture, when so many frameworks are under development—and so little precedent exists—civil society has a small but critical window to try to shape them in a more rights-respecting and equitable direction.

As our contribution to this collective effort, GPD has developed a set of principles and guidelines to help inform human rights-respecting regulation around AI. Designed to be used both directly by policymakers and in advocacy by other actors, they set out clear and actionable considerations which can be applied to any regulatory process around AI. In this blog—the first in a new series—we set out this guidance framework, as well as providing an introductory explainer to AI, its key technologies and intersections with human rights. Later entries in the blog series will further explore and unpack the evolving state of play in AI governance.


What is artificial intelligence?

AI refers to an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. In the early days of the technology’s development, algorithms could carry out a limited range of tasks that ordinarily require human intelligence, such as problem-solving, pattern recognition and judgement. Machine learning (ML), a breakthrough for AI that rose to prominence in the 1990s, enabled algorithms and statistical methods to infer patterns or predictions from data, gradually improving in accuracy over repeated iterations.
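
To make the idea of "improving over repeated iterations" more concrete, below is a deliberately simplified sketch of the machine-learning loop described above. The example data, learning rate and single-parameter model are illustrative assumptions only; real systems use vastly larger models and datasets, but the underlying principle is the same.

```python
# A toy illustration of machine learning: a model starts with an arbitrary
# parameter and repeatedly adjusts it to reduce its prediction error on
# example data, gradually improving in accuracy.

# Hypothetical example data: inputs paired with the outputs we want to predict.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

weight = 0.0          # the single "learned" parameter of this toy model
learning_rate = 0.01  # how strongly each error nudges the parameter

for iteration in range(1000):
    for x, target in examples:
        prediction = weight * x              # the model's current guess
        error = prediction - target          # how far off the guess is
        weight -= learning_rate * error * x  # adjust to reduce the error

print(f"learned weight: {weight:.2f}")  # settles near 2.0 for this data
```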

In recent years, Large Language Models (LLMs), a type of ML model trained on text data, have improved their ability to perform a variety of natural language processing tasks, including generating and classifying text, answering questions in a conversational manner and translating text from one language to another. Today, we are seeing a shift toward interactions that mimic human behaviour, with text or voice assistants that can understand natural language and generate human-like responses to a wide range of queries and prompts. As we explore below, this new generation of AI technologies carries particular risks for human rights, as it can often be difficult for users to understand and appreciate that a generative AI model is simply fabricating responses based on the likelihood of word sequences learnt from its training data.
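
The sketch below illustrates, in heavily simplified form, the principle behind that last point: a generative model learns which words tend to follow which in its training text, then produces output by sampling likely continuations, with no notion of truth. The tiny "corpus" is a made-up example; real LLMs operate on tokens with billions of parameters.

```python
import random
from collections import defaultdict

# Toy next-word generator: learn which words follow which in the training
# text, then generate by sampling likely continuations. Likelihood, not truth.
corpus = "the model predicts the next word the model samples the next word".split()

# Count, for each word, the words observed to follow it.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# Generate a short sequence by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # duplicates make this frequency-weighted
    output.append(word)

print(" ".join(output))
```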


Why is artificial intelligence relevant to human rights?

AI tools which use satellite data to predict extreme weather patterns, or interpret medical data to facilitate early diagnosis of a life-threatening disease, can help protect an individual’s right to life. Conversely, someone’s right to life might be threatened by being denied safe asylum at a border due to a decision made by an AI system. Facial recognition tools might protect an individual’s right to liberty and security by helping law enforcement to quickly identify or find a missing person. But inaccurate facial recognition tools used by law enforcement could also result in wrongful arrest, violating this right. 

Similarly, the implementation of LLMs could support a person’s exercise of freedom of expression by providing information in an accessible and engaging format, yet may also provide false or misleading information, which could negatively impact an individual’s agency—for example, by undermining their right to participate in free and fair elections.

As we can see from these examples, AI has the potential to both support and negatively impact the enjoyment of the full range of human rights. We need to remember that all human rights are indivisible and interdependent. This means that the violation of a civil and political right (such as privacy or non-discrimination) might also result in the violation of an economic, social, cultural or environmental right (such as the right to health, work or education).

Under international human rights law, states are obliged to protect, respect and promote human rights; and, under the UN Guiding Principles on Business and Human Rights (UNGPs), companies have a responsibility to respect them. AI technologies can have both positive and negative impacts on a range of human rights, including civil and political rights, as well as cultural, economic, social and environmental rights. AI systems may also have unique impacts on groups whose rights are protected in specific international legal instruments, including women, ethnic minorities, children, people with disabilities, refugees and migrants.

One of the major risks that AI tools pose to human rights is facilitating or entrenching discriminatory outcomes which negatively impact vulnerable and traditionally marginalised groups, violating the civil, political, economic, social and cultural rights protected in these instruments. AI tools replicate the patterns they identify in their training data. They can therefore amplify existing biases resulting from particular worldviews, and from deficiencies in the representation of particular groups due to entrenched patterns of discrimination and marginalisation. This means that the rights to equality and non-discrimination must be considered with particular care in the development and deployment of an AI tool.


What would a rights-respecting approach to AI governance look like?

Drawing on legal research, close study of existing good practice in the governance of emerging technologies, and our specific experiences engaging in global discussions around AI—from the Council of Europe’s treaty negotiations to the Global Digital Compact—we’ve developed a rights-based policy approach to AI governance based on five core principles:

  1. Build policy approaches grounded in International Human Rights Law;
  2. Develop a risk-based approach to the design, development and deployment of AI;
  3. Promote open and inclusive design, deployment and use of AI;
  4. Ensure transparency in the design, development and deployment of AI;
  5. Hold designers and deployers of AI accountable for risks and harms.


1. Build policy approaches grounded in International Human Rights Law

Policy approaches to the design, development and deployment of AI systems should be firmly rooted within the existing international human rights framework and should not undermine or seek to replace existing human rights standards. This is because the international human rights framework and the specific rights guaranteed under it—including the rights to life, privacy, freedom of expression, association, peaceful assembly, freedom of movement, non-discrimination, and effective remedy—are already applicable in the context of AI systems. 

There are challenges in the application of the international human rights law framework to AI systems. This is due both to the complexity and opacity of AI systems, as well as the fact that international human rights protections and obligations are often broadly worded, making them difficult to interpret and apply in the context of new technologies. Such challenges have spurred some entities to propose alternative approaches to AI governance, including those grounded purely in ethics (such as the Recommendation on the Ethics of AI adopted by UNESCO). An ethical approach can be useful as a complement but detrimental if it is regarded as a substitute for a human rights-based approach. This is because there is a risk of undermining the applicability of the existing international human rights law framework, which has a level of normative value, geopolitical recognition and status that any alternative approach would be unlikely to match. There is also a risk that ethical approaches to AI policy may suggest that the international human rights framework is inappropriate or insufficient, which could encourage the development of standards that lack consensus, or are even inconsistent with the existing human rights framework. 

Policy approaches must therefore reaffirm the existing international human rights framework and seek to enable the full realisation of such rights in order to address the unique challenges posed by AI systems. This can be accomplished through clarification of the scope and applicability of particular rights, as well as the imposition of tailored requirements or obligations to enable practical mechanisms for protecting human rights. However, this should only take place where it is determined that existing frameworks and standards cannot provide sufficient and comprehensive protection for human rights in the context of AI development and deployment. 


2. Develop a risk-based approach to the design, development and deployment of AI 

Not all AI systems or particular uses of AI pose the same type or level of risk to individuals’ human rights. Risk-based approaches require, at a minimum, some form of assessment to determine and classify risk levels, so that any new obligations are proportionate to the identified risk. But while there are some sectors—including law enforcement, healthcare, military use and migration control—that demand particular attention or concern, the design, development and deployment of AI is often not limited to a particular sector. This is what we are seeing with foundation models, which capture general learning patterns to be used later as the basis for more specific AI systems, and can be deployed across products and services in different fields. 

A risk-based approach must therefore both recognise the general applicability of AI technologies and sensitively assess their impacts in different use cases. By doing this, we can accurately identify risks to human rights and mitigate them through the imposition of appropriate obligations—whether they relate to transparency, accountability or otherwise—across the whole AI lifecycle. This risk-based approach should be embedded across the public and private sectors.

A risk-based approach should also include the ability to impose prohibitions or moratoriums on AI systems when it is determined that they present an unacceptable threat to human rights. This could include, for example, AI systems using biometrics to identify, categorise or infer the personality or emotions of individuals, leading to mass surveillance, or AI systems used for ‘social scoring’. 
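
To illustrate how such a risk-based approach might be operationalised, the sketch below maps assessed risk tiers to proportionate obligations, including prohibition for unacceptable risks. The tier names and obligations are illustrative assumptions only; they do not reproduce the categories of any particular law or proposal.

```python
from enum import Enum

# Hypothetical risk tiers: each AI system is assessed, assigned a tier, and
# the tier determines which obligations apply (proportionality in practice).
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"  # e.g. biometric mass surveillance, social scoring

OBLIGATIONS_BY_TIER = {
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
    RiskTier.LIMITED: ["transparency notices to affected users"],
    RiskTier.HIGH: [
        "human rights impact assessment across the AI lifecycle",
        "transparency and documentation requirements",
        "human oversight and accountability mechanisms",
    ],
    RiskTier.UNACCEPTABLE: ["prohibition or moratorium on deployment"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations proportionate to an assessed risk tier."""
    return OBLIGATIONS_BY_TIER[tier]

print(obligations_for(RiskTier.HIGH))
```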

The task of assessing risk to human rights should fall on those actors best placed to identify it throughout the AI lifecycle, including designers, developers and deployers. They should be responsible for conducting ongoing evaluation and for communicating—to each other, impacted communities, oversight authorities, and the general public—the results of any assessment exercise.


3. Promote open and inclusive design, deployment and use of AI

Data representativeness (the diversity of people represented in data) and quality (the accuracy and relevance of data) are necessary prerequisites for AI systems to be designed in an open and inclusive manner. Those in charge of the design and deployment of AI systems should be able to provide transparent information about the provenance of the training data, the values applied to data selection, the quality assurance process the data went through, and the link between the input data and the populations or contexts within which the AI system will be deployed. A broad range of perspectives and interests should also be taken into account, reflecting differences in culture, language, expertise, and socio-economic conditions.
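
As one illustration of what checking representativeness could look like in practice, the sketch below compares how groups are represented in a training dataset with their share of the population the system will serve. The group labels, figures and threshold are hypothetical assumptions, not a prescribed methodology.

```python
# Hypothetical shares: how each group appears in the training data versus in
# the population the deployed system is intended to serve.
training_data_share = {"group_a": 0.70, "group_b": 0.25, "group_c": 0.05}
deployment_population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

THRESHOLD = 0.5  # flag groups represented at less than half their population share

for group, population_share in deployment_population_share.items():
    data_share = training_data_share.get(group, 0.0)
    ratio = data_share / population_share
    if ratio < THRESHOLD:
        print(f"{group}: under-represented (ratio {ratio:.2f}); review data sourcing")
    else:
        print(f"{group}: representation ratio {ratio:.2f}")
```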

Designers and deployers of AI systems should devote specific resources and attention to monitoring and mitigating disproportionate impacts on particular groups caused by bias and discrimination in the AI system. Mechanisms for redress should be made available in those cases. To that end, existing legal frameworks dealing with non-discrimination in different fields (including employment access, consumer affairs, or healthcare) should be strengthened and used to guide the deployment of AI systems. In jurisdictions where such frameworks do not exist, they should be created.


4. Ensure transparency in the design, development and deployment of AI

Those deploying AI systems to perform particular tasks—whether public or private bodies—must clearly inform affected individuals when a decision that affects them has been made by or with an AI system. This includes “hybrid” decisions, where an AI system was used to predict, augment or flag decisions for human review, or where a human has reviewed a suggestion made by an AI system. This disclosure is important to ensure that individuals can appeal any decisions which they believe have been made in error.

It is also essential to be transparent with users and regulators about how the AI system works. For example, many current LLMs have been released without sufficient information for consumers on their actual capabilities and limitations, or on their data provenance. As a result, users have been misled by erroneous generative AI outputs, such as nonexistent academic citations and legal precedents, fake profiles and false claims presented as fact.

Because of the opacity of the models themselves, AI developers must be transparent about exactly how their system or model was trained, developed and tested, so that accountability and remedy can be effectively exercised in relation to system outputs and impacts. This disclosure should include, at a minimum, the following (a sketch of how these elements might be recorded follows the list):

  • information about how the training dataset was acquired or built, and by whom;
  • assumptions that underpinned its labelling or coding for use in machine training;
  • the nature of quality assurance performed to check data quality or weed out degraded or ‘noisy’ samples;
  • information about the AI model itself, including the type of learning algorithm and reward mechanisms used, and the number of parameters and training/testing iterations;
  • information about how the model was fine-tuned to complete relevant tasks through specific data inputs and reinforcement learning; and
  • details on how the model’s robustness or accuracy was assessed through testing and controls before determining it was safe to launch (for example, through red teaming).
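
One way the disclosure elements listed above could be captured is in a structured, machine-readable record, similar in spirit to published “model cards” or “datasheets”. The field names and sample values in the sketch below are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

# Hypothetical structured disclosure record mirroring the elements above.
@dataclass
class ModelDisclosure:
    dataset_provenance: str     # how the training dataset was acquired or built, and by whom
    labelling_assumptions: str  # assumptions behind labelling or coding for machine training
    quality_assurance: str      # checks used to weed out degraded or 'noisy' samples
    model_details: str          # learning algorithm, reward mechanisms, parameter count, iterations
    fine_tuning: str            # how the model was adapted to its tasks (e.g. reinforcement learning)
    safety_evaluation: str      # robustness/accuracy testing before launch (e.g. red teaming)
    known_limitations: list[str] = field(default_factory=list)

disclosure = ModelDisclosure(
    dataset_provenance="Publicly available web text collected by the developer",
    labelling_assumptions="Labels applied by contracted annotators under written guidelines",
    quality_assurance="Deduplication and removal of low-quality samples",
    model_details="Transformer-based language model; parameter count published",
    fine_tuning="Instruction tuning followed by reinforcement learning from human feedback",
    safety_evaluation="Internal and external red teaming before release",
    known_limitations=["May produce plausible but false statements"],
)
print(disclosure.dataset_provenance)
```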

Transparency requirements can create costs for AI developers and implementers. However, these costs should be considered as an integral part of the AI system’s development and deployment. To help ease this burden, regulation should provide guidance on how companies of all sizes—including small businesses—can implement transparency requirements in a proportionate manner.


5. Hold designers and deployers of AI accountable for risks and harms

Any decisions around AI design and deployment should consider potential human and environmental harms, as well as how to remedy and mitigate them in a timely manner. In addition to risk assessment and mitigation, appropriate mechanisms should be available for handling grievances and providing effective remedy for individuals and groups adversely affected by the performance of AI systems. Accountability mechanisms should avoid diluting responsibility among different actors and entities within the AI lifecycle. Liability should be clearly and proportionately assigned to the different entities which are best positioned to prevent or mitigate harm in the AI system’s performance.

Accountability mechanisms for AI systems in their research and testing phases—or for their implementation in downstream or third party products—might differ from those appropriate for mass market products. For mass market products, accountability mechanisms should be able to both provide necessary quality and safety assurance to prevent consumer harm once in the market, and offer mechanisms of remedy for impacted consumers in instances where those measures have not been taken.


What next for AI governance?

AI regulation is spreading rapidly at the national, regional and global levels, with most regulation currently emerging in global North jurisdictions. The most advanced current efforts are led by Europe: one at the regional level, with the EU’s proposed risk-based AI Act, likely to be adopted at the end of 2023; the other at the global level, through the work of the Council of Europe’s Committee on Artificial Intelligence (CAI), which is currently developing the world’s first treaty on AI. Though emerging from a European body, this instrument, anticipated in 2024, has the potential to become a global standard that can be adopted by countries outside of the Council of Europe.

In the UN system, the ongoing Global Digital Compact (GDC) process, led by the UN Secretary General, has proposed a High-Level Advisory Body for AI (the Body), which would bring together state experts, relevant UN entities, industry, academia and civil society groups to advance recommendations for the international governance of AI. This proposal also includes a digital human rights advisory mechanism facilitated by the Office of the High Commissioner for Human Rights (OHCHR), which would provide practical guidance on human rights and technology issues. 

Through these governance efforts, the UN is taking a leading role in responding to the intensifying public debate around the appropriate modes and forums for global AI oversight. However, we were disappointed that the recent call for nominations to constitute the Body fell short both in its timeframe and in the information provided about the criteria and selection process. Both elements are key to ensuring that the Body’s work is human rights-based, open, inclusive and transparent.

Industry standards also continue to be relevant in establishing good practice for governance. Given the central role played by industry in developing and implementing AI, industry associations and multistakeholder working groups can help inform more nuanced and effective governance approaches by sharing key learnings.

Whatever the eventual form of international governance taken forward, it is imperative that it is not shaped solely by global North leadership, but rather has the active engagement of a range of global South actors—including governments, companies and civil society.