AI Access is Not Enough: Middle Powers Need Strategic Reserves
Most governments have no plan for when access to frontier AI is cut off.
Summary
Rented, not owned: Middle powers are increasingly reliant on frontier AI systems they neither own nor control.
Blind spot: The push for ‘AI sovereignty’ focuses on where the computing hardware sits while neglecting who controls access to the AI running on it.
Strategic reserves: Middle powers should develop ways to sustain critical functions in the event that access to frontier AI models is disrupted – whether by outage, dispute, or geopolitical compulsion.
Recommendations: Governments should audit their frontier AI dependencies, build ‘break-glass’ AI capabilities for essential functions, and pool reserve capacity with allied middle powers.
Access to a strategic resource does not equal control.
Europe was reminded of this in 2022, when Russia throttled gas flows in response to Western support for Ukraine. Europe weathered the crisis, but recovery took years and required €650 billion in public spending as well as physical alternatives: LNG terminals, renewable capacity, and coal plants brought back into service.
Now imagine a similar disruption affecting frontier AI. If a government loses access to the models underpinning its public infrastructure, there is no reliable fallback.
Policymakers often think that ‘AI sovereignty’ means hosting compute within their national territory. Geography, though, is an incomplete proxy for control. Both hardware and software can be remotely degraded through vendor licensing and ‘control planes’ – the remote management layers through which providers can update, restrict, or disable systems.
Even where systems are hosted domestically, control may still sit externally. The US CLOUD Act, for instance, gives Washington legal reach over American AI providers regardless of where their servers are located.
Since much of today’s AI is permissioned rather than owned, an AI strategic reserve may be part of the answer. Most middle powers lack one.
The real problem: dependence without continuity
The problem is sharpest for the ‘AI bridge powers’: countries with significant AI capabilities but whose compute resources are orders of magnitude too small to independently develop frontier AI models. Such countries arguably include the UK, France, Germany, Canada, Japan, South Korea, Spain, and Singapore.
The UK set out its ambition to be “an AI maker, not just an AI taker,” but it still partnered with US frontier labs to use their models in the UK public sector. Canada has committed over C$2 billion to strengthen its sovereign AI compute capacity, while remaining reliant on US cloud providers for frontier infrastructure.
Access to frontier AI is already shaped by political conditions rather than purely commercial terms. Less than 5% of global AI compute capacity is controlled by European entities. The US plan to export its AI tech stack while exerting diplomatic pressure against European digital sovereignty is creating a framework of tech dependency – where availability, pricing, and terms of use are subject to geopolitical shifts.
Government and corporate actors are increasingly integrating frontier AI models into critical functions. This creates vulnerability: access to essential national capabilities will increasingly depend on legal and policy decisions made by foreign actors.
What an AI strategic reserve looks like
What is often described as ‘AI sovereignty’ tends to fall apart when systems are put under real pressure. The November 2025 Summit on European Digital Sovereignty, co-hosted by France and Germany, focused largely on joint ownership and investment. Both issues matter, but neither addresses operational sovereignty: whether a country can keep essential AI-enabled functions running when access is constrained.
Countries need to think in terms of AI strategic reserves: pre-positioned assets and arrangements that keep critical functions running if provider access drops out. In practice, this spans different measures: reserved compute capacity for priority use in emergencies; pre-negotiated contingency access arrangements with frontier AI companies; and locally held fallback models – typically fine-tunes of open-weight bases – ready to take over the most critical functions.
AI strategic reserves are not a cure for dependence. To reduce their overall dependence on external frontier AI, middle powers have a range of strategies – from trying to build frontier capability in coalition, to negotiating infrastructure-for-access arrangements with US providers, to leveraging hardware chokepoints as bargaining tools. None will reliably remove exposure to external control in the near term.
Strategic reserves address a different problem: ensuring critical systems continue to function when that dependency is tested.
Three recommendations for middle powers
1. Perform a full dependency audit. Identify where frontier AI is actually being used across critical functions in both government and the private sector; understand where systems still rely on cloud-managed control layers; and assess which functions would start to fail if access to external models were cut. This audit should be classified where necessary, distinguishing between use cases requiring frontier performance and those that can be sustained with narrower fallback systems.
2. Build ‘break-glass’ capabilities – pre-arranged emergency measures that allow critical systems to keep operating when normal access fails. Identify a limited set of functions that genuinely require continuity – then pre-position the infrastructure to sustain them. In some cases, preserving continuity may mean pre-configuring existing sovereign compute (such as Germany’s JUPITER supercomputer or the EU’s AI Factories) for emergency inference, to be activated when needed.
Different measures will address different disruption scenarios. Contingency access arrangements with frontier developers can cover commercial and operational disruption (such as outages, capacity constraints, or prioritization of domestic demand). A break-glass capability here could include weight-escrow agreements, which release model weights into sovereign custody under defined disruption scenarios.
But such arrangements cannot be relied on against adversarial disruption, where the provider’s home government compels a cutoff. To address this harder scenario, governments should be prepared to use fallback models – whether distilled from frontier models or built on open-weight models. These fallbacks would be narrower and less capable than frontier AI, but locally operable and sufficient to keep essential government services running until normal access resumes.
3. Pool reserves across countries. Like-minded countries should make arrangements to pool the compute capacity needed to run fallback models in a crisis. Such arrangements should involve: first, agreed rules on who may access the pooled compute, under what circumstances, using which fallback models; and second, common technical standards allowing countries to plug in securely.
In practice, this could begin with a small coalition agreeing to pool a limited share of national compute capacity and test joint access arrangements through predefined emergency scenarios.
Before the next crisis
While initiatives such as the European Frontier AI Initiative focus on building long-term capability and reducing structural dependency, reserves are about ensuring continuity of critical functions. For middle powers, this is a more immediate and neglected approach to AI sovereignty.
Europe built its energy reserve infrastructure only after the crisis hit. The question is whether bridge powers will build reserves before the next turn of the screw – or after.