Middle Powers Can Gain AI Influence Without Building the Next ChatGPT
Countries like Saudi Arabia, Singapore, and Germany are shaping AI governance through infrastructure investment, standard-setting, and partnerships with like-minded nations.
Summary
What’s happening: While the US and China compete on AI capabilities, middle powers like Saudi Arabia, Singapore, and Germany are finding alternative ways to influence global AI governance.
The opportunity: Key decisions about AI infrastructure and regulatory frameworks could be made this decade, creating openings for countries that act early.
Building the infrastructure: Middle powers can build physical AI infrastructure that great powers may depend upon to develop frontier models.
And setting the rules: They can also gain influence by setting standards in specialty areas or building focused partnerships on key issues.
While the US and China race to build the most powerful AI models, Saudi Arabia is playing a different game. The Kingdom has launched a $100 billion Digital Investment Fund, created a sovereign AI regulator, and is planning to build 15 GW of solar-powered data centers. Instead of building AI models, Saudi Arabia is laying the infrastructure AI systems depend on.
This reflects a broader opportunity for middle powers – countries such as Singapore, Türkiye, Germany, and Japan. In the emerging AI order, power doesn’t only come from developing the most advanced models. It also comes from controlling the value chains that make creating and running those models possible: computing, data, regulation, and talent.
A closing window of influence
The second half of this decade could be pivotal for AI governance. As companies and governments figure out how to adopt and regulate rapidly advancing AI systems, analysis from Goldman Sachs suggests that key decisions about AI infrastructure, safety norms, and regulatory frameworks will harden quickly.
Others point to predictions that artificial general intelligence – AI systems that can perform most cognitive tasks at a human level – could arrive within the next few years. If so, the governance structures established now may determine who shapes transformative AI capabilities. If middle powers remain passive, they may have less influence over systems that could significantly affect their economies and societies.
At the same time, pressure points in global AI supply chains are creating new strategic opportunities. Global electricity demand from AI data centres is projected to quadruple by 2030, potentially shifting leverage toward countries investing in sustainable infrastructure. Chip export controls are forcing countries to diversify their supply chains. Meanwhile, countries and regions are competing to set regulatory precedents that others might adopt.
So what are the levers available to middle powers?
Controlling AI supply chains
AI governance researcher Anton Leicht has argued that middle powers should focus on becoming essential in physical bottlenecks between AI capabilities and real-world impact. He suggests middle powers should leverage sectors like compute supply chains, novel data sources, robotics, and industrial capacity to remain valuable to great powers who control frontier AI development.
Saudi Arabia’s investment in solar-powered data centers exemplifies this approach, positioning the Kingdom as an emerging major provider of computing infrastructure for AI development.
Meanwhile, Japan is leveraging its strengths in robotics and energy-efficient computing in a bid to become indispensable to frontier AI infrastructure. The country is investing $65 billion through 2030 in AI and semiconductor development, including the government-backed Rapidus foundry project, which aims to produce cutting-edge, energy-efficient chips to rival the world’s most advanced technology by 2027.
Middle powers can also wield influence beyond physical infrastructure. The governance and regulatory spheres offer promising opportunities for countries that can set standards, build coalitions, and shape the rules of AI deployment.
Setting AI standards in a specialty area
Middle powers can shape global standards by establishing clear, practical rules in specific areas where they have expertise. Several examples are already available:
Singapore’s Model AI Governance Framework, released in 2019, provides detailed guidance for the private sector on ethical AI deployment and has been adopted by major companies including HSBC, Mastercard, and Visa. Building on this foundation, in 2023 Singapore established the AI Verify Foundation with Google, IBM, Microsoft, and Salesforce to develop testing and assurance tools for AI governance. By securing buy-in from major technology and financial firms, Singapore is positioning its frameworks as a practical model for other countries.
Germany is establishing technical specifications for industrial AI through its AI Standardization Roadmap. These standards have been developed in consultation with experts across industry, academia, and government, and will determine how AI systems communicate in manufacturing environments. Companies integrating AI solutions with German manufacturing (the world’s fourth-largest by output) typically need to comply with these specifications, influencing how industrial AI develops in major manufacturing economies.
France and Germany’s Gaia‑X Initiative is developing data‑sovereignty standards for cloud infrastructure to help European companies remain competitive while giving users control over their data. With several hundred members from Europe and beyond – including large cloud and technology firms – Gaia-X is working to establish data sovereignty principles as standard practice for companies operating in European markets.
Working in targeted groups
Beyond standard-setting, there are opportunities for middle powers to form partnerships on specific issues where they can create leverage.
The international government forum Global Partnership on AI (GPAI), now an integrated partnership with the OECD, allows middle powers to punch above their weight by driving agenda items in specific working groups. Canada’s Montréal Centre of Expertise supports the GPAI’s Responsible AI and Data Governance working groups. France’s Paris Centre supports the Future of Work and Innovation & Commercialization working groups, while Japan established a third centre in Tokyo in 2024, focusing on generative AI governance. Through GPAI’s governance structures, middle powers are shaping AI norms and standards more effectively than they could alone.
Middle powers could also leverage their capabilities to influence AI standards within alliance structures. As an example of future opportunities, Türkiye’s proven expertise in unmanned aerial vehicles (UAVs) has made it a significant drone exporter to NATO members. As NATO develops frameworks for interoperability of autonomous systems, Türkiye’s practical experience and market position could give it a voice in shaping technical discussions.
Policy recommendations
Middle powers need a strategic entry point where they can influence norms or the infrastructure others depend on. Here are three suggestions:
Audit national strengths and pick a global leadership area. Rather than trying to cover multiple AI domains, middle powers should align their funding, diplomacy, and regulatory efforts around the area where they have the greatest comparative advantage. For some, this might mean AI data centers or green computing; for others, UAVs, healthcare AI, or sovereign data.
Create AI governance frameworks others want to adopt. Middle powers could develop certification systems that verify AI systems meet specific safety, ethics, or performance standards – tailored to areas where they have domestic strengths. For example, countries that excel in financial technology could create standards for AI in banking that other governments and companies will want to follow.
Form bilateral and minilateral partnerships for specific AI governance pilot projects. Build agreements among two or three like-minded countries to test shared standards in narrow areas like AI interpretability, medical AI safety, or green computing. Co-lead technical subgroups in international forums like the GPAI on specific issues.
By acting as bridges across different approaches to AI governance, as norm-builders in specific domains, and as leaders of practical pilot programs, middle powers can wield significant influence in shaping global outcomes.
However, strategy must precede capacity, and timing matters. Those who act early can shape the governance environment, while those who wait will be forced to adapt to it.