Locating AI Growth Zones: the distinction between where compute lives and where innovation happens
A guest post by Professor Mark Parsons, Director of EPCC
In the race to establish AI leadership, the Government has committed to an ambitious 20-fold increase in sovereign compute capacity. But beneath this headline figure lies a complex challenge: how to design AI infrastructure that maximises technological capacity and economic benefit across both the public and private sectors. Creating the enabling conditions where that compute can drive innovation across the UK economy is the real prize of a successful compute strategy.
Increasing sovereign compute is essential to this vision. A properly executed strategy would allow the UK to direct sufficient compute resources to national priorities and to support academia and businesses, without depending on the US hyperscalers.
The Government’s AI Opportunities Action Plan also proposes AI Growth Zones. These are intended to be a way of cutting through the planning red tape and energy connection bottlenecks that often constrain the buildout of infrastructure. But implementing these effectively requires nuanced thinking about location, energy, infrastructure supply and demand, and support systems.
Failure risks the Government building expensive monuments to political ambition rather than functional engines of technological sovereignty and economic growth.
AI Growth Zones can allow the UK to achieve two goals: efficient infrastructure placement and broader regional development beyond the South East of England.
Very large-scale AI compute facilities demand enormous amounts of power and cooling, often making remote or coastal locations with access to renewable energy sources attractive. But these areas frequently lack the skilled IT workforce and innovation ecosystems necessary to drive AI development. The upside is that they are often situated in de-industrialised areas with workforces that are highly skilled in infrastructure installation, operation and maintenance.
These two elements don’t need to be co-located. Low-latency, high-bandwidth networks allow for a more distributed approach, where data centres can be situated near power sources while innovation can flourish in AI talent-rich areas, building on existing centres of excellence.
Energy
The successful implementation of AI Growth Zones hinges on overcoming a key obstacle: the UK's electricity costs. Large-scale AI infrastructure demands substantial power, but the UK has some of the highest electricity costs in Europe.
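The scale of this cost disadvantage is easy to underestimate. As a rough, illustrative sketch, the arithmetic below compares the annual electricity bill of a large AI facility under two wholesale prices. The 100MW IT load, the PUE of 1.2 and both prices are assumed figures chosen purely to show the shape of the calculation, not official numbers.

```python
# Illustrative (not official) comparison of annual electricity spend for a
# large AI facility under different wholesale prices. All figures below are
# assumptions made for the sake of the arithmetic.

HOURS_PER_YEAR = 8760

def annual_cost_gbp(it_load_mw: float, pue: float, price_gbp_per_mwh: float) -> float:
    """Total facility draw = IT load x PUE; cost = energy x price."""
    facility_mw = it_load_mw * pue
    return facility_mw * HOURS_PER_YEAR * price_gbp_per_mwh

# Assumed: 100 MW IT load, PUE of 1.2, and two illustrative wholesale prices.
uk = annual_cost_gbp(100, 1.2, 90)   # ~GBP 90/MWh (assumed UK-level price)
eu = annual_cost_gbp(100, 1.2, 60)   # ~GBP 60/MWh (assumed cheaper market)

print(f"UK-level price: GBP {uk/1e6:.0f}m per year")
print(f"Cheaper market: GBP {eu/1e6:.0f}m per year")
print(f"Difference:     GBP {(uk-eu)/1e6:.0f}m per year")
```

Even under these toy assumptions, the price gap compounds into tens of millions of pounds a year for a single site, which is why the sourcing options below matter so much.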
To tackle this, several options can be explored:
Behind-the-meter arrangements which involve directly sourcing power from a dedicated generation facility, bypassing traditional grid infrastructure and associated costs. This could involve:
Onsite generation – building power generation facilities directly adjacent to data centres, allowing for greater control over costs and supply.
Direct corporate Power Purchase Agreements to contract with renewable energy generators for long-term fixed-price power.
Collaborating with energy providers to develop solutions tailored to the specific needs of AI infrastructure. This could involve:
Securing preferential electricity rates for AI facilities in exchange for long-term commitments.
Jointly investing in grid upgrades to ensure reliable power delivery to strategic locations.
Engaging with Ofgem to ensure that its forthcoming package of connections reforms is fit for purpose for supplying connections to AI data centre sites.
Small Modular Reactors (SMRs) hold potential, but they are not a proven solution yet. Regulatory and implementation challenges remain – current planning processes could take years, hindering rapid deployment. And building SMRs next to data centres is not legally possible right now.
International example: NuScale, a leading SMR provider, partnered with the US Department of Energy's national laboratory in Idaho to build a 462MW capability on its site from six of NuScale’s 77MW SMRs. This bypassed regulatory hurdles and expedited approvals because the plant would be constructed on a government-owned science site, i.e. as a research reactor. Unfortunately, in 2023 it was announced that the project would not go ahead, owing to customer concerns about being the “first mover” and a lack of subscribers.
Data centre infrastructure
The challenges of power provision often focus on the delivery of significant amounts of power to data centre sites. In the UK, the transmission of electricity is managed by Transmission Network Operators, who supply power at high voltages to Distribution Network Operators (DNOs). It is the role of the DNOs to provide power to the data centre infrastructure operators. Power starts at as much as 400,000 volts (enough to run a small city) and passes through a series of steps that transform it to 33,000 volts and then to 11,000 volts as it arrives close to or at the data centre. Each of these steps involves specialised high-voltage transformers – sophisticated devices currently in short supply worldwide. A final set of transformers reduces the voltage to three-phase 400V supplies that computers can actually use. But ordering these critical components now means waiting one to two years for delivery.
Today, few UK data centres operate dense, liquid-cooled infrastructure. As GPUs get more powerful and generate intense heat, traditional air cooling with fans is no longer sufficient. This is why all vendors are moving to direct liquid cooling of their systems, where specialised fluids carry heat away much more efficiently than air. While this has been commonplace in traditional supercomputing for the past 20 years, there is a lack of experience and skills in the operation of very dense, liquid-cooled systems in the private sector. This is another key challenge for the UK as we seek to expand our capability quickly.
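The physics behind the shift to liquid is straightforward: removing heat Q at a coolant temperature rise ΔT needs a mass flow of Q / (cp × ΔT), and water carries roughly four times more heat per kilogram than air while being about 800 times denser. A back-of-envelope sketch, where the 100kW rack and 10K temperature rise are assumed figures for illustration:

```python
# Back-of-envelope comparison of the coolant volume flow needed to remove the
# heat of a dense AI rack, using Q = m_dot * cp * dT. The 100 kW rack power
# and 10 K coolant temperature rise are assumed figures for illustration.
def volumetric_flow_m3s(heat_w: float, delta_t_k: float,
                        cp_j_per_kg_k: float, density_kg_m3: float) -> float:
    mass_flow = heat_w / (cp_j_per_kg_k * delta_t_k)  # kg/s of coolant
    return mass_flow / density_kg_m3                   # m^3/s of coolant

RACK_W, DT = 100_000, 10.0
air   = volumetric_flow_m3s(RACK_W, DT, 1005, 1.2)   # typical air properties
water = volumetric_flow_m3s(RACK_W, DT, 4186, 997)   # typical water properties

print(f"Air:   {air:.1f} m^3/s")
print(f"Water: {water*1000:.1f} L/s")
print(f"Air needs roughly {air/water:,.0f}x the volume flow of water")
```

Moving thousands of times more coolant volume through a rack with fans quickly becomes impractical at these densities, which is what drives the vendor move to direct liquid cooling.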
Compute allocation
As the Action Plan stresses, “allocation is an essential part of any compute strategy”. Demand for access to national supercomputing resources has always outstripped supply, particularly in the UK over the past decade, during which the country has fallen well behind its main competitors.
Traditionally, access to supercomputer resources in the UK has been free to academia but paid for by non-academic organisations only when they are using the resource for “production use”. If a similar model is adopted for AI, it will be necessary to consider what constitutes pre-competitive research as opposed to production use. Alternatively, it could be argued that all access – from any organisation – should be free at the point of use, the resources being shared out based on quality of scientific proposal or business case.
The Action Plan envisions Programme Directors playing a central role in allocating compute resource. Recruitment of genuinely exceptional Programme Directors with both technical vision and practical implementation skills will determine whether the UK's compute investment delivers transformative results or merely incremental improvements.
Some of the ways to make this approach a success include:
Offering competitive compensation and streamlined recruitment that avoids lengthy bureaucratic selection processes.
Ensuring selection boards include genuine technical experts who understand the state of the art in AI research and can evaluate candidates effectively.
Giving Programme Directors sufficient autonomy to make large allocation decisions quickly, bypassing the current six-month waits for calls for access (instant small allocations have always been available).
Considering team-based leadership that brings together diverse expertise across research, industry and the public sector, rather than relying solely on individual Programme Directors.
Supporting multiple technical approaches for each mission area rather than placing all resources behind a single solution, mirroring the successful approach used in the vaccine development programme.
Compute access alone is not enough. Innovation may be blocked not by compute availability but by the surrounding technical support ecosystem. Access to the AI Research Resource (AIRR) should take into account the following:
Technical expertise and guidance for organisations unfamiliar with AI supercomputing environments, including training resources from beginner to expert
Machine learning engineers who can act as interfaces between AI researchers and the complex software and computing system frameworks used today
Support for developing workflows that suit different R&D needs
Connections to secure data environments (particularly for sensitive applications like healthcare) with access to appropriate, responsive information governance capability
“Communities of practice” that help translate raw compute capability into practical innovation
Conclusion
The AI Opportunities Action Plan has outlined an ambitious set of activities to ensure the UK can realise the benefits that AI will bring to its public and private sectors. But implementing this ambition by developing a large-scale sovereign AI capability, provided by both the public and private sectors, is a complex, difficult task. Only by tackling the challenges of location, power, service provision, access models, skills and training at pace will the country see the benefits of the large planned investments.
Professor Mark Parsons is the Director of EPCC, the supercomputing centre at the University of Edinburgh, and Dean of Research Computing. He started his career at EPCC in 1994 as a software developer following a PhD in experimental particle physics at CERN. He became EPCC Director in 2016 following many years as the centre’s Commercial Director. From 2020 to 2024 he was on part-time secondment to the Engineering & Physical Sciences Research Council, where he was the Director of Research Computing. He has led the development of the UK Exascale Supercomputer Project for the past decade.