Year XXXVIII, Number 3, November 2025
The Global AI Governance Alliance
Robert Whitfield
Convener of the Global AI Governance Alliance and chair of the One World Trust and the Transnational Working Group on the Governance of AI of the WFM/IGP.
The issues
Artificial Intelligence (AI) has immense benefits to offer humanity, but it comes with a range of risks, some ethical, some safety-related, some current, some imminent, and some very grave. If we race to develop AI without precautions, we put its benefits in jeopardy and we threaten the future of humanity. We need trustworthy AI, and we need it now.
Some of the problems associated with AI, such as privacy, may be best decided nationally, reflecting local culture. Many issues, however, need to be addressed on an international, indeed global, scale. It would not be possible, for instance, to radically reduce or eliminate misinformation without including every state in the world. AI is a cross-border technology, and its governance needs to reflect that fact.
AI ethics embraces such issues as bias, transparency, accountability and privacy. Human rights, democracy and the rule of law are addressed by the Council of Europe's Framework Convention on Artificial Intelligence. That Convention is an important step in the right direction, being the world's first binding international agreement in this field, but it is not (yet) as strong as some would like, and its signatories, whilst substantial in number, are not global. The ethics community has long been waiting for a global agreement embracing AI ethics.
Equity is a key issue, often seen as a subset of ethics, but reflecting in this context the whole question of how access to AI, and the wealth it generates, is to be handled in the future. Power concentration is increasing and is set to increase much further. Is this the basis of the kind of world we want? We need to think carefully about the different possible scenarios and make sure that society is moving down the path it wants to take, not simply the path being laid out by the tech barons. This needs discussion and agreement at the global level.
International business could make good use of AI for the benefit of its customers around the world. But with nations steadily creating a global patchwork quilt of regulation, the goal of interoperability between national AI governance systems is hard to achieve. A global AI treaty could provide the framework within which this work could be pursued.
The taxonomy of AI safety is multifaceted, embracing both the issues related to the machine itself (and in particular the loss of control of advanced AI) and issues related to humans, particularly bad actors, whether they be terrorists, failed states or others intent on seizing power. But it also embraces risks associated with military AI and risks triggered by a race mentality, pushing safety to the bottom of the list of priorities. Risks associated with advanced AI can be catastrophic or even existential.
The development of AI is associated primarily with two states, the US and China, and with only a few companies within those two states. There is a strong argument for an agreement between these two states, as soon as possible, to mitigate the safety risks and to address the headlong charge towards ever more advanced AI.
The machine-related risk reflects the widely held concern that humanity is unlikely to remain in control of something significantly more intelligent than itself. There are already examples of AI systems exhibiting very disturbing behaviour, including attempts to avoid being switched off (rewriting code in one instance and attempting blackmail in another) and attempts to deceive for other purposes.
Actions by bad actors embrace the risks from enhanced cyberattacks through AI-powered malware, sophisticated social engineering, and deepfakes, as well as the potential for weapons development, from novel bioweapons to information-warfare campaigns. The more powerful the AI, the greater the danger.
Machine intelligence capabilities grew by roughly 30% per annum from 1952 to 2018 and by some 300% per annum since 2018 [1], but there are suggestions that this rate may now be slowing [2]. That would be welcome news, in that it would give governance a chance to catch up. We need every chance we can get, because the rate of progress in AI capability is currently dwarfing the rate of progress in AI governance.
AI Global Governance
The UN has recently announced two new initiatives: the establishment of the UN Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. These are useful steps forward, but they were first agreed in principle at the Summit of the Future in September 2024, and the first dialogue will take place 21 months later: AI is developing much faster than our wisdom.
A recent survey by the Seismic Foundation in Europe and the US, On the Razor's Edge [3], shows that people are concerned that AI will worsen almost everything about their daily lives. People feel AI is developing too fast, and there is broad support for regulation of the industry: people do not trust the AI labs to have our best interests at heart.
Whilst Western governments hold a complex and evolving attitude toward the UN, characterised by fundamental support for its core principles alongside criticisms of its inefficiency and a perceived lack of democratic representation, the support of the Global South is more overt. A recent BRICS leaders' statement on AI global governance [4] proclaims a strong desire for AI to "operate under national regulatory frameworks and the UN Charter" and for the need to "strengthen AI international governance through the United Nations system as a fully inclusive and representative international framework".
The way forward
There is clearly a very unhealthy mismatch between the need for effective global governance of AI and the actual negotiation and delivery of such governance. A multitude of actions need to be taken around the world to rectify this situation. One such action is to seek greater coherence amongst those calling for AI global governance. The different groups of concerns are often advocated by different groups of people: human rights activists are vocal about ethical concerns, the Global South about equity concerns, global business about interoperability concerns, and (typically newly formed) organisations about safety and security concerns. Whilst their priorities may differ, these diverse voices share a desire for global AI governance.
The Global AI Governance Alliance (www.gaiganow.org) seeks to bring these voices together to help turn that desire into a reality. We are committed to:
1. Building an alliance in support of trustworthy AI global governance
2. Working with Governments to achieve such governance soon
3. Addressing the most urgent issues with pragmatism.
1. Rönn, K. (2024) The Darwinian Trap, Crown Publishing
2. Newport, C. (2025) The New Yorker, August 12th 2025
3. Seismic Foundation (2025) On the Razor's Edge, https://report2025.seismic.org/media/documents/On_the_Razors_Edge_pdf (accessed August 30th 2025)
4. BRICS Leaders' Statement on the Global Governance of Artificial Intelligence, http://www.brics.utoronto.ca/docs/250706-ai.html (accessed August 30th 2025)

