United Kingdom: Balancing Safety, Security, and Growth
A Beyond the US-China Binary country profile
The UK’s AI Strategy
Prime Minister Keir Starmer frames AI as “a chance to turbo‑charge growth and radically improve public services,” pledging to make Britain “an AI maker, not just an AI taker.” The AI Opportunities Action Plan (AOAP, Jan 2025) set out that intent in three pillars.
The first pillar – AI Foundations – aims to secure the “foundations” of world-class computing, data, talent, and regulation. Specific goals include a twenty-fold expansion of publicly controlled AI compute by 2030 and the deployment of more than 5,000 advanced NVIDIA chips in the new supercomputer “Isambard-AI”. The second pillar – AI Adoption – intends to spur rapid AI diffusion throughout the UK economy, with examples ranging from revamped National Health Service (NHS) diagnostic workflows to a Cabinet Office productivity suite. The third pillar – the UK as an AI Maker – argues that the UK government must take a more activist approach to cultivating its own national champions: companies with frontier capabilities across critical layers of the AI stack. To this end, the plan led to the creation of a Sovereign AI Unit (SAIU) with £500 million in potential funding, though outcomes for the independence of the UK’s AI capability development remain to be seen.
Despite shifts in rhetoric, the UK’s current AI development and international influence remain significantly shaped by the safety legacy established by the previous Conservative Prime Minister, Rishi Sunak. The Sunak government positioned the UK as a global safety leader by hosting the first AI Safety Summit at Bletchley Park in 2023 and launching the first AI Safety Institute to evaluate frontier models. This legacy represents an aspiration to balance pro-innovation regulation with extreme-risk mitigation, creating the strategic foundation on which the current national framework is built.
What the UK Offers
The UK’s strategy begins with proactive diplomatic initiatives that leverage a middle-ground regulatory posture to brand the country as the world’s convener on AI safety and security. In 2023, it hosted the first AI Safety Summit and brokered the multi-stakeholder “Bletchley” commitments. Britain also built early credibility as a leader in AI governance by pioneering the AI Safety Institute (AISI) (since renamed the AI Security Institute) at the summit, which offers an institutional platform for evaluating advanced AI models. Through the AISI’s collaborations with major frontier AI firms, including OpenAI, Google DeepMind, and Anthropic, the UK government aspires to play a central, practical role in developing and setting global AI security standards, especially around pre-deployment testing and evaluation. The AISI model has proven influential, spurring the creation of similar national bodies in the US, Japan, South Korea, Singapore, and China, and the establishment of the International Network of AI Safety Institutes to drive global technical alignment. The AISI has also undertaken notable work on online safety, specifically tackling child sexual abuse.
Another core strength of the UK’s soft power strategy lies in its development of AI assurance and auditing frameworks. This focus leverages the UK’s deep-rooted historical strength in professional services, financial auditing, and corporate governance, providing a trusted foundation for validating AI systems. Although often overlooked in discussions focused solely on frontier risk, the UK government has prioritized trusted third-party AI assurance for over five years, positioning itself as a global leader in AI assurance services and disseminating the related expertise internationally. This approach culminated in the recent Trusted Third-Party AI Assurance Roadmap (2025).
Domestically, Britain’s research pipeline offers structural advantages. Universities including Oxford and Cambridge continue to produce world‑class AI scientists and attract investment for AI research and innovation. London remains a magnet for top international talent, as well as the preferred European base for many global technology companies. These researchers feed a steady stream of spinouts that tap the City of London’s deep pools of venture and growth capital, underpinned by a sophisticated legal framework that makes contracts and intellectual property protection relatively stable.
What the UK Wants from the World
The UK emphasizes the need for energy and compute resources to realize its AI sovereignty and leadership ambitions, making the scaling of national infrastructure a priority for the current government. Downing Street aims to scale up the nation’s compute power “by a factor of 20” by expanding energy capacity, streamlining permits, and diversifying the AI hardware supply. Successfully achieving this scale is viewed as key to reaping the potential economic rewards of AI. However, whether investment in compute will materialize into actual AI sovereignty and economic benefit remains an open debate.
While the UK boasts a strong research pipeline that has successfully incubated high-value firms, these companies have often been bought by foreign entities before achieving domestic scale, including DeepMind (acquired by Google), Darktrace (acquired by Thoma Bravo), and semiconductor intellectual property (IP) giant Arm (acquired by SoftBank). This pattern results in the export of UK-generated research and IP: the UK retains its research base while ceding control of, and profits from, the downstream products. Leaders in the UK AI startup scene view the lack of government-backed funding and the high cost of talent as the main barriers to company retention. To address these concerns, the SAIU is tasked with using public procurement, access to datasets, and equity stakes to help domestic firms clear funding hurdles. The government has also recently launched several initiatives to attract and sustain global AI talent. Concurrently, the government has sharpened its defensive regulatory strategy, utilizing the National Security and Investment Act (NSIA) to block foreign takeovers of sensitive technology, such as the forced divestment of the Chinese-owned Nexperia from Newport Wafer Fab. However, this regulatory effort focuses heavily on security concerns rather than broad market incubation.
Although the UK is well positioned as a global convener on AI safety and security, its soft power ultimately relies on international momentum to drive substantive progress. Currently, the international AISI network is still in its initial phases, with uncertainty around actionable outputs and domestic support from member countries. Although the UK was an early mover in shifting the stated focus from safety to security, the securitization of AI safety may yet challenge the international cooperation efforts the UK supports, and with them its soft power in the AI governance arena.
US-China Alignment
The United Kingdom pursues an asymmetric approach to the US-China AI competition, maintaining deep strategic and commercial alignment with the US while preserving selective channels for dialogue with China. This positioning reflects both the UK’s historical “special relationship” with Washington and its pragmatic recognition that meaningful AI development requires access to American technology, capital, and security partnerships. Senior British officials have acknowledged this reality, with one recently stating that “Only America has the capability, wealth and determination to compete with China technologically,” reflecting the government’s assessment that the UK’s AI ambitions depend fundamentally on US partnership.
The September 2025 Technology Prosperity Deal elevated AI to a central pillar of the UK-US relationship, formalizing cooperation across research, commerce, and security domains. This partnership manifests concretely in both government and private sector collaboration: American firms have committed hundreds of billions in investments to UK AI infrastructure, including Palantir’s £480 million NHS contract, extensive Microsoft and OpenAI strategic partnerships, and pervasive involvement throughout the UK’s AI stack from data centers to cloud services. The UK’s AISI works closely with US counterparts and major American AI companies including OpenAI, Google DeepMind, and Anthropic on model testing and safety research.
The UK’s AI engagement with China is more limited than with the US, reflecting broader geopolitical tensions and national security considerations. Notably absent are the deep commercial partnerships, technology transfers, and infrastructure investments that characterize UK-US AI cooperation, with the NSIA acting as a major potential barrier. Compared to the US, however, the UK takes a more open position towards China, including inviting China to the Bletchley Summit and stressing the importance of Chinese participation in the AI safety and security conversation. The China-UK Artificial Intelligence Dialogue, launched in May 2025, represents the primary formal channel for bilateral AI engagement, with its inaugural meeting focusing on AI governance and safety cooperation. Anchored in the US and European AI ecosystems, the UK continues to carefully engage with China on select topics and platforms, leveraging its convening power where it might add economic or global security value.
Note: This living set of country profiles is intended to be an accessible resource for policymakers, academics, and industry professionals, and all others seeking to understand the international relations of the technology transforming our virtual feeds and physical environments. It reflects the state of affairs at the time of writing (December 2025).
Authors: Sydney Reis*, Zilan Qian*, Karuna Nandkumar*, Kayla Blomquist+, Sam Hogg, Sumaya Nur Adan, Julia Pamilih, Jonas Balkus, Renan Araujo, Songruowen Ma, Tiffany Chan
*Denotes primary authors who contributed most significantly to the content of the paper. +Denotes authors who contributed most significantly to the framing and direction of the paper. We are grateful to Caroline Jeanmaire, Clint Yoo, Huw Roberts, Joël Christoph, Luis Enrique Urtubey De Cesaris, Nikhil Mulani, Saad Siddiqui, Sharinee Jagtiani, and Zar Motik Adisuryo for their valuable feedback on this project. Authorship of this project indicates contribution but does not imply full agreement with every claim.

