The AI Governance Void: How Do We Escape the Unilateral Power Trap?
As the AI policy community returned to business as usual following the India AI Impact Summit two weeks ago, GPD, like many civil society groups, left with a sense of unease. Caught between the tech industry’s growing embrace of the language of “responsible, transparent, and accountable AI” and the host country’s visible cosiness with those same companies in pursuit of investment and competitiveness, we left India wondering where meaningful progress on safe and accountable AI governance can realistically take place next.
The structure of the Summit did little to dispel that uncertainty. Its sheer breadth meant that virtually every AI-related topic appeared somewhere in the official programme or among the numerous side events. While these spaces enabled frank and valuable conversations, the highly fragmented nature of the discussions made it difficult to build the kind of broad coalition needed to advance an AI governance agenda that is both actionable and politically feasible in the current geopolitical climate.
Since Minister Carney’s speech in Davos last January, much attention has been given to the idea of a coalition of “middle powers” and to the notion of “principled pragmatism”. But what do those concepts mean in practice for AI governance? Could a coalition of states committed to steady and pragmatic progress help move the agenda forward at a moment of geopolitical fragmentation?
For such an approach to gain momentum, civil society also has an important role to play in shaping a more focused and actionable agenda for AI governance. Rather than attempting to address every issue simultaneously, we should concentrate on identifying the areas where tangible progress can realistically be achieved over the next two years, whether in anticipation of more favourable geopolitical conditions or in their absence.
Many of the most insightful conversations on the sidelines of the Summit hinted at what such an agenda might look like. Among the ideas raised were: clarifying red lines for certain AI capacities and uses; developing transparency guidelines for AI capacity and guardrails implementation; and advancing the standardisation of incident reporting and information-sharing mechanisms.
The urgency of this agenda became apparent only a week after the Summit. In the absence of universally agreed red lines, the AI governance void we currently face can quickly translate into real-world consequences.
The US government announced it had cut ties with Anthropic AI products and designated the company a national security risk. This followed the company’s refusal to remove internal safeguards that restrict the use of its systems for mass domestic surveillance and fully autonomous weapons. The dispute reportedly emerged after the US government required the company to allow “every lawful purpose in defence” in the deployment of its technology. This standard created concern within the company, particularly given the absence of comprehensive federal legislation governing the use of AI. In practice, removing such safeguards in this context could amount to granting companies broad discretion to deploy their technologies with minimal constraint or oversight.
This episode highlights the fundamental purpose of governance: to provide frameworks for managing unilateral exercises of power such as the one we have witnessed. The issue at stake is not whether governments should be able to adopt AI systems for defence purposes. Rather, it is whether companies can take principled positions on “what today’s technology can safely and reliably do”.
No governance framework will entirely prevent the harmful use of technology. But it can create conditions that allow non-state actors to refuse, withdraw from, or place conditions on the deployment of their systems in unsafe contexts, and to establish clear responsibilities and consequences when violations occur.
At the global level, the question becomes: where could a focused agenda around AI “red lines” realistically gain traction? During the Summit, several forums were frequently mentioned as potential avenues, including technical standardisation processes, the OECD/GPAI ecosystem, the G7, the UN Global Dialogue on AI, and the upcoming AI Summit to be hosted by the Swiss government.
Yet beyond institutional venues, another theme also emerged: the need to reshape the broader narrative around AI governance.
As the Anthropic episode illustrates, and as reflected in the mobilisation of employees across other technology companies (https://notdivided.org/), people are capable of challenging the assumption that the rapid deployment of AI systems is inevitable. In doing so, they can reclaim a measure of agency in shaping how these technologies are developed and adopted.
This points to the need for a diversified strategy: combining movement-building efforts with a civil society community equipped with clear and targeted priorities, alongside more traditional multilateral diplomacy anchored in the leadership of “middle powers”.
The risks created by the absence of governance are no less serious than those posed by the frontier capabilities currently being developed in advanced AI systems. Ultimately, it is the human in the loop embedded within the governance structure that will determine whether AI is steered towards humanity’s benefit, or towards the unchecked exercise of unilateral power and inequality.