31 Oct 2023

Navigating the Global AI Governance Landscape 

The recent Internet Governance Forum (IGF) in Kyoto reaffirmed AI as an urgent cross-cutting priority for a diverse set of stakeholders. At the same time, it highlighted the rapidly expanding scale and complexity of the AI governance landscape. 

This complexity presents a challenge for stakeholders, particularly those working at the intersection of AI and human rights, who must make informed, strategic decisions about where to engage and invest their limited time and capacity.

To support these actors, GPD has developed a mapping of the landscape, anchored in a data-driven approach to human rights assessment. The mapping examines both how initiatives align with human rights and whether they provide opportunities for open, inclusive and transparent stakeholder engagement. Below, we share some key insights and highlights from the mapping and research, as well as the framework underpinning our approach to AI governance.


Snapshot of the landscape

GPD’s research identified more than 50 active international AI governance initiatives, driven by nearly as many forums, bodies and actors. Most are anchored within the established international cooperation system, with nearly a quarter originating within the UN system itself. These include the recently announced UN Secretary-General’s High-level Advisory Body on AI (HLAB-AI), negotiations on the Global Digital Compact (GDC), and discussions under the auspices of UNESCO, the Human Rights Council (HRC), the International Telecommunication Union (ITU), and the IGF. Among these, only one initiative (HLAB-AI) explicitly seeks to establish a new UN body.

The UN-driven initiatives are complemented by various multilateral or state-led initiatives, including the Council of Europe’s AI Treaty (CAI), the EU AI Act, the G7 Hiroshima Process, and the UK AI Safety Summit. Among these, only two are seeking to develop binding cross-border regulatory frameworks (CAI and the EU AI Act). 

The rest represent a mix of multistakeholder, industry-led, and public-private initiatives primarily focused on developing voluntary guidance or high-level principles, developing reports and other resources, or hosting AI-related events. There are also several notable observatory-type initiatives, such as the OECD’s AI Observatory, which has provided multidisciplinary, evidence-based policy analysis and data on AI’s areas of impact since its launch in 2020. 


Different approaches to AI

Initiatives vary in how they frame the benefits and risks associated with AI, as well as in their approach to developing relevant outputs. Most take a cross-cutting or multidisciplinary view of AI, with some overlap with sectoral approaches (e.g. through the lens of climate, health or military applications of AI) or approaches focused on ‘safety’.

It is worth noting that the very use of ‘safety’ as a frame of reference varies across initiatives. Some use safety to capture concern about the existential risks of advanced or “frontier” AI systems escaping human supervision and control (e.g. the UK AI Safety Summit). For others (e.g. the Partnership on AI), safety relates to identifying and mitigating concrete harms already occurring in the deployment of AI, such as discrimination, surveillance, or damage to information integrity. The latter more closely reflects how the term has been used in connected fields of technology governance, such as platform regulation.

While a small number of initiatives take an explicit human rights-based approach (e.g. CAI, the EU AI Act, the Freedom Online Coalition), others are proposing alternative ‘principles’ for AI governance—including those grounded in ethics (UNESCO). These approaches could be useful if viewed strictly as a complement to the existing international human rights framework. However, if seen as substitutes for this framework, they could erode its effectiveness and legitimacy, and lead to the development of weaker human rights standards with a lower level of consensus. 


Role of the Global Majority

Currently, the majority of key initiatives, whether multilateral, state-led or industry-driven, are spearheaded by Global North actors, mainly from Europe or the US, as noted in a recent joint blog by civil society actors engaged in CAI.

There are signs that this is beginning to shift. Efforts to advance the AI governance agenda have recently been announced by the G20, through the 2023 New Delhi Leaders’ Declaration, and by the BRICS Institute of Future Networks, through its AI Study Group. And at the recent 2023 Belt and Road Forum, China announced the launch of a new Global AI Governance Initiative aimed at developing a framework to “promote equal rights and opportunities for all nations” in the development and governance of AI. The timing of the announcement, a day after the latest US export controls on advanced chips and chipmaking equipment, highlights the geopolitical dynamics shaping the development of the AI governance ecosystem.

The growing appetite for a broader and more inclusive discussion on AI governance has not been lost on the UN Secretariat, which has been pitching its own initiative (HLAB-AI) as a means of facilitating a global conversation. As the various UN negotiations heat up in 2024 (the GDC, the Summit of the Future) and 2025 (WSIS+20), the position of the G77 and other developing countries will inevitably play an important role in shaping the emerging AI governance ecosystem.


Connections and overlaps

A number of initiatives are interlinked and overlapping. In some cases the overlap is institutional (e.g. the OECD hosts the Secretariat for the Global Partnership on AI), while in others the link is more informal. A notable example is the G7 Hiroshima AI Process, which has published a Code of Conduct anchored in its Guiding Principles for Organizations Developing Advanced AI Systems, and which involves the G7, the EU-US Trade and Technology Council (TTC) and the OECD. Earlier this year, G7 Digital & Tech Ministers endorsed an OECD report summarising the priority risks, challenges and opportunities of generative AI, and explicitly reaffirmed their commitment to promote human-centric and trustworthy AI based on the OECD’s Recommendation on AI. In turn, the TTC’s Joint Roadmap for Trustworthy AI and Risk Management intends to build on the European experience with the passage of the AI Act; the Roadmap also served as a baseline for the G7 outputs outlined above. Some also suggest that the G7 framework will inform the work of HLAB-AI.


Stakeholder engagement

Finally, initial research reveals limited avenues for meaningful stakeholder engagement across most of the initiatives identified. One positive exception has been the Partnership on AI which, despite being an industry-led initiative, recently announced its expansion to include five new global partners drawn from a range of stakeholder groups, including civil society organisations.

The broader picture is, however, gloomy in this regard. The organisers of the upcoming UK AI Safety Summit have invited only a handful of civil society organisations, predominantly non-governmental organisations and research institutes, with only a limited number of human rights organisations among them. Other organisations have been offered the opportunity to organise events on the sidelines, but without any support or resourcing to do so. Confirmed participants are overwhelmingly government officials, mainly representing Global North countries (and, controversially, China), alongside the UNSG’s Envoy on Technology and industry leaders. Civil society groups have since published an open letter criticising the summit for being a “closed door event” and for privileging the interests of the private sector.

Even where stakeholder input is notionally encouraged, the modalities often hinder rather than facilitate engagement. Recently, two members of the G7 Hiroshima process, the EU and the US, attracted criticism for the narrow window of time and restrictive format provided for stakeholder input on the proposed Guiding Principles. Such factors make it challenging even for well-resourced actors to engage, let alone a diverse range of stakeholders with more limited capacities. The fact that the Guiding Principles were published a mere three working days after the final deadline for stakeholder comments raises further questions: such a short period seems insufficient to properly review stakeholder input and reflect it in the final draft.


Key takeaways

This research brings to the fore several questions and areas requiring further examination. Firstly, it illustrates a need to better understand the concerns, motivations and approaches to AI governance of non-Global North stakeholders, countries and groupings. Particular attention should be paid to how this picture relates to broader geopolitical dynamics playing out at the UN, including efforts to challenge the existing international cooperation system and tensions between major global powers.

Similarly, there is a need for additional research into the various existing and emerging multistakeholder initiatives, particularly those with a strong presence of key industry stakeholders. Although not binding, the voluntary codes and practices these initiatives develop should not be underestimated: the natural lag between the passage of regulation and its enforcement leaves space for other actors, including the private sector, to fill the gap, and such codes and practices offer practical guidance in the absence of binding legislation. There is also value in leveraging good practices and learning from past shortcomings in a context of rapid change.

There is also a clear need to further interrogate the overlaps and connections between initiatives, to help understand how the landscape might evolve. It is likely that we will see some degree of consolidation among initiatives as bandwidth and resources start to run low and efforts to streamline intensify.

Whatever form of international governance for AI emerges, the key takeaway from this research is the urgent need for it to be shaped in a more open, inclusive and transparent manner. AI is often promoted to Global Majority countries as a tool to help overcome structural inequalities and drive economic growth. But a one-size-fits-all approach, which ignores specific local factors and risks, cannot deliver this. Only a diverse range of perspectives and stakeholders from these countries can ensure that the benefits of AI are equitably harnessed, and that the implementation of AI technologies does not reproduce existing inequalities and power imbalances.

The decision on which initiatives to prioritise will invariably depend on one’s own assessment of potential impact and relevance. From a human rights perspective, there is a clear rationale to follow and continue to engage in initiatives seeking to develop binding solutions that are likely to have global reach and local impact. This will inevitably include efforts to track how international initiatives are shaping national legal and regulatory frameworks (consider for instance how the EU AI Act catalysed responses in other jurisdictions, including Brazil and, most recently, the US). Ensuring that the outcomes and implementation efforts of these key initiatives are rights-respecting continues to be paramount. But for this advocacy to be effective in shaping the landscape as it continues to evolve, such efforts must be complemented by a proactive agenda for meaningful stakeholder engagement.