
Why The Pursuit of Sovereign AI is Not the Right Call for the UK

by Ayesha Bhatti

Earlier this month, UK Prime Minister Keir Starmer announced his full support for the 50 recommendations outlined in the AI Opportunities Action Plan, a proposal aimed at boosting the UK’s AI economy and using AI to improve living standards. While the Action Plan offers a thoughtful roadmap to keep the UK competitive and at the forefront of AI adoption, one pillar of the plan—the proposal to establish a new UK government unit focused on sovereign AI—reveals an oversimplified and unrealistic view of why the government is pursuing domestic frontier AI.

There is no doubt that the UK should aim to be competitive in AI. While it may not be at the same tier as the United States and China, it has a vibrant AI innovation ecosystem and developing domestic frontier AI capabilities could still enhance its competitiveness. However, this is not the rationale presented in the Action Plan. Instead, the plan advocates for the UK to pursue sovereign frontier AI based on the mistaken belief that the UK “must be an AI maker, not just an AI taker” so that the UK can shape the technology’s future. This reasoning exposes two flawed assumptions: that developing domestic frontier AI models automatically grants significant influence over global AI governance, and that creating frontier AI is a prerequisite for participating in AI governance discussions.

First, technological capability does not equate to regulatory influence. While it is true that tech companies play a significant role in shaping technological advancements, they do not operate in a vacuum or hold unilateral power over the rules that govern their innovations. Technology governance is shaped by a diverse ecosystem of actors, such as government bodies, standards organisations, professional associations, international institutions, civil society groups, academic institutions, and think tanks. These entities collectively contribute to the development, interpretation, and enforcement of regulations, ensuring that technology aligns with broader societal values and priorities.

Second, a country’s influence over global governance often stems from its economic might and soft power rather than its role in developing cutting-edge technology. The EU, despite lagging in digital economy capabilities, has wielded significant—though harmful—influence on global data governance by promoting the General Data Protection Regulation (GDPR) and requiring other countries to align with GDPR standards to access the EU digital market. Now, the EU is attempting to do the same with AI. The EU has only two globally competitive AI companies, yet it passed sweeping AI regulation that, in true Brussels fashion, affects everyone. These rules, part of a broader package of EU digital regulations, have had significant extraterritorial effects, compelling compliance from companies far beyond Europe’s borders. The fact that U.S. tech giants have actively lobbied the new Trump administration to counterbalance the EU’s regulatory influence underscores this point—dominance in domestic AI development does not automatically grant a country influence in the shaping of global AI norms.

Third, the assumption that creating frontier AI capabilities automatically justifies setting rules for others is both exclusionary and dismissive. This perspective marginalises the majority of countries that lack the resources or infrastructure to develop globally competitive sovereign AI models, despite having valuable insights and priorities to contribute to global governance discussions. It perpetuates a narrative that only the most technologically advanced nations are qualified to shape the future of AI. This assumption risks creating a governance framework that prioritises the interests of a few powerful nations while sidelining the needs and rights of others. Effective global AI governance should be built on collaboration, recognising that valuable contributions come not only from those at the technological frontier, but also from those who can provide critical perspectives on the societal, ethical, and cultural dimensions of AI.

Indeed, the UK should take lessons from its own history. Its leadership in areas like financial regulation and climate change policy demonstrates that influence is built through collaboration, credibility, and thought leadership, not exclusively through technological dominance. The UK’s role in shaping the AI safety institute network, for example, demonstrates its capacity to convene stakeholders and drive consensus on complex global challenges, despite its comparatively modest domestic AI output.

Pursuing frontier AI capabilities for the sake of sovereignty risks diverting UK leadership away from other areas of AI innovation. The UK will have more success influencing global AI governance by becoming an early adopter of AI, so that it has practical experience and evidence from managing the technology.

Rather than chase its own tail, the UK should remain focused on the first two pillars of the Action Plan, which set clear, achievable, and necessary goals for infrastructure development and AI adoption. Success in these areas will establish the UK as a credible voice in AI governance worth listening to. And if the UK decides to pursue frontier AI, it should do so with a clear understanding of its purpose and limitations.
