Global Partners Digital and Stanford’s Global Digital Policy Incubator have published a report examining governments’ National AI Strategies from a human rights perspective.
The report looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future.
As AI applications are embedded into different areas of life—from healthcare to labour, criminal justice to retail—governments are increasingly conscious of the potential impacts of AI, while also seeking to capture the economic opportunities offered by this rapidly developing sector. One way that countries are addressing AI policy is by developing comprehensive, cross-governmental strategies—National AI Strategies—outlining the actions the government will take. Since 2017, when Canada became the first country to adopt such a strategy, almost 30 more have been developed.
As countries begin to develop strategies around AI, it is critical that they also consider the potential harms and opportunities as they relate to human rights, and prepare themselves to protect and promote human rights in the context of this new technology.
“The key take-away message of this report is that human rights principles should be embedded into National AI Strategies at the outset. A human rights-based approach to AI is the best way for governments to protect citizens from potential harms as they arise, but also to capture the benefits for society,” says Eileen Donahoe, Executive Director of the Global Digital Policy Incubator at Stanford University, one of the partners on the report.
Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or a concrete assessment of how specific AI applications impact human rights. In all but a few cases, they also lack depth or specificity on how human rights should be protected in the context of AI, in contrast to the level of specificity they devote to other issues such as economic competitiveness or innovation.
The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:
- Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the associated risks, should be core to a national strategy. Each section should consider the risks and opportunities AI presents for human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
- Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
- Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regard to the protection of human rights are fulfilled.
- Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether these processes need revision in light of the particular nature of AI as a technology, or whether those responsible for handling complaints need capacity-building to receive complaints concerning AI.
- Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and set out how the government will engage proactively in them to promote human rights-respecting approaches and outcomes.
- Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process.
“We hope that our report will help governments in better addressing human rights questions in relation to AI in their national strategies, and provide others with guidance on how to evaluate these strategies as they are released,” adds Dr. Megan Metzger, Associate Director of Research for the Global Digital Policy Incubator, and co-author of the report.
Charles Bradley, Executive Director at Global Partners Digital, observes: “This report comes at a critical moment globally, with most governments yet to develop a National AI Strategy. We hope the early adopters will follow our recommendations and ensure their strategies are in line with international human rights law, setting a positive precedent for years to come.”
About the Global Digital Policy Incubator
The Global Digital Policy Incubator (GDPi) is committed to advancing policy and governance innovations that reinforce democratic values, universal human rights, and the rule of law in the digital realm. Situated within the Cyber Policy Center at Stanford University, GDPi serves as a multistakeholder collaboration hub for the development of norms, guidelines, and laws that enhance freedom, security, and trust in the global digital ecosystem.
About Global Partners Digital
Global Partners Digital (GPD) is a social purpose company dedicated to fostering a digital environment underpinned by human rights and democratic values. We do this by making policy spaces and processes more open, inclusive and transparent, and by facilitating strategic, informed and coordinated engagement in these processes by public interest actors.