Where Does an “AI Race” Lead? Talent, & More…
The Oxford China Policy Lab Newsletter
Dear friends and colleagues,
So far this summer, OCPL has been hard at work conducting research and bringing scholarly depth and expertise to the table—whether in convening global stakeholders or helping policy decision-makers understand the rapidly evolving geopolitical and technological landscape around us. Recently, I had the opportunity to brief members of US Congress on the state of international AI competition, America's position, and how to skilfully navigate the present strategic competition with China.
My core message centred on a fundamental challenge: the US needs—and currently lacks—a positive vision for AI development and governance. In its drive to “win the AI race,” it must not lose sight of its own democratic foundations by fixating on a narrow and ill-defined goal of victory without sufficient reflection on where that race ultimately leads. Fundamentally, the US must ensure it is running a race worth winning.
Critically, US actors must ensure the building blocks of American society—rule of law, checks and balances on power, freedom of speech and thought, constitutionally ensured mechanisms for listening to citizens' voices and needs, and genuine opportunity for all—remain intact as we enter more fully into the age of AI. These foundations have long served as the backbone of American society, but they are far from guaranteed in an AI-heavy future. The pathways for using advanced technologies for control and conflict are already well trodden; preserving these cornerstones of a democratic and healthy society will require intentionality, creativity, and sustained effort. In short, there is a steep uphill journey ahead, and for the US, much of it is at home.
As I emphasised to Congressional representatives, they have a central role to play in listening to the American public as AI advances and diffuses throughout society. The whole path forward may not yet be clear, but responsive leadership and genuine civic dialogue are crucial stepping stones.
Additionally, the diffusion of AI technologies globally matters enormously for the unfolding global AI landscape. Smaller, more resource-efficient, and localisable models are gaining significant traction. Some Chinese models like DeepSeek are achieving high international adoption precisely because of, rather than despite, their relatively modest computational requirements. This points to what I call the Quality, Reach, and Adaptability (QRA) framework—the idea that lasting AI leadership depends not just on cutting-edge and ever-more advanced capabilities, but on building AI tools that work reliably and safely, reach users around the world, and adapt effectively across different cultural, linguistic, and business contexts. Forthcoming US strategies around promoting American AI products globally must take these factors into account.
Finally, people matter. Talent drives the cutting edge, and the US needs to renew its efforts to cultivate both international and homegrown talent through education and research investments. Current cuts to research funding, the turning away of foreign talent through visa restrictions and intimidation, attacks on university independence, and hostile rhetoric all severely damage the US’s long-term prospects for a lasting technological edge. US policymakers seeking to preserve and further advance the US’s research and tech competitiveness must work to swiftly reverse these actions.
You can read more insights from the week’s conversations in the Aspen Institute's report.
Speaking of Talent…
OCPL is fundamentally grounded in the belief that exceptional talent is necessary not just to build powerful technology but also to govern it effectively and to engage with challenging diplomatic and geopolitical questions. We recently wrapped up our 2025 OCPL Fellowship, which featured 19 PhD- and Master's-level researchers who joined us from seven departments across the University and hailed from nine different countries. The fellows formed three working groups, with focuses spanning AI development and governance in China; digital and critical infrastructures and supply chains; and the global impacts of US-China tech competition.
From these three core research themes, several pieces have already emerged. Leia Wang and Zilan Qian’s analysis of China's robotics push, "China's Humanoid Marathon Signals a New Kind of AI Race," examines how Beijing's strategic focus on rapid deployment and manufacturing integration could position it to lead in embodied intelligence. Songruowen Ma and former OCPL Fellow Haitong Du co-hosted an expert panel with the Oxford International Relations Society on making British China policy in an era of great power competition. We have also launched a monthly Fellows Feature series, with recent articles including "How Hacktivists in China are Using Data Leaks for Dissent" and "How Much Revenue Have Tariffs on China Made for America?"
Looking ahead, we have exciting forthcoming work exploring the extent to which the PRC is conducting AI capacity building in developing countries, the impact of Made in China 2025 on dual-use technologies, and the roles that various countries play in shaping US-PRC AI competition dynamics. This research advances OCPL’s mission to provide rigorous, policy-relevant insights into the global dynamics of US-China technological competition.
Applications for the 2026 fellowship open this fall—stay tuned for details.
Kayla Blomquist
OCPL Director
News and Views
The research and analysis of OCPL experts are their own and do not reflect organisational views
OCPL’s evidence on economic security was published by Parliament’s Business and Trade Sub-Committee on Economic Security, Arms and Export Controls. Authored by Kayla Blomquist, Sam Hogg, and 2025 Fellows Yi-Ting Chang and Jeffrey Love, the submission outlines risks from AI and weaponised interdependence and proposes principles for a resilient UK economic security framework. It will inform Parliament’s approach as it helps the Government shape its economic security strategies.
OCPL Fellow Zilan Qian was featured in the ChinaTalk newsletter, analysing how Chinese netizens access blocked U.S. AI models. She noted that “China’s ban is selectively enforced and mainly focuses on ChatGPT,” highlighting a grey market shaped by uneven censorship, platform loopholes, and persistent demand.
OCPL Fellow Renan Araujo contributed to a UN-commissioned report on AGI governance, as part of a High-Level Independent Panel to the UN General Assembly. The report urges urgent action on catastrophic AI risks and recommends a global observatory, international certification system, and a UN framework convention on AGI.
Scott Singer was quoted in The Guardian discussing the UK’s strategic AI posture. He noted Britain is “positioning itself between the US and EU,” seeking to balance innovation with consumer protection as ministers delay regulation to pursue a more comprehensive AI bill.
In his independent scholarly capacity, Scott Singer served as lead writer for the California Report on Frontier AI Policy. The report leverages broad evidence—including empirical research, historical analysis, and modeling and simulations—to provide a framework for policymaking on the frontier of AI development, outlining an approach rooted in an ethos of ‘trust but verify’. It was featured in Financial Times, TIME, Bloomberg, and more.
What We’re Reading
“How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute” by Scott Singer, Karson Elmgren, and Oliver Guest. This Carnegie Endowment paper analyses the launch of China’s AI Safety and Development Association (CnAISDA), highlighting its role as a hub for China’s international AI safety efforts amidst a broader focus in China on AI as an engine for economic growth.
“The Trade-Offs of Innovating in China in Times of Global Technology Rivalry” by Jeroen Groenewegen-Lau and Jacob Gunter. This MERICS report examines how European firms navigating China’s innovation ecosystem face mounting risks of technology transfer, market exclusion, and geopolitical tension, prompting a shift toward localization, decoupling, and tighter alignment with national innovation interests.
“The Global A.I. Divide” by Adam Satariano and Paul Mozur. This New York Times article draws on Oxford researchers’ work to examine the concentration of AI data centres in the United States, China, and Europe, highlighting how limited access to computing power in many regions exacerbates global inequalities in AI development and technological sovereignty.
“Southeast Asia Is Starting to Choose: Why the Region Is Leaning Toward China” by Yuen Foong Khong and Joseph Chinyong Liow. This Foreign Affairs article argues that Southeast Asian countries, long hedging between Washington and Beijing, have gradually shifted toward China over the past 30 years due to economic ties, domestic politics, and geography, challenging U.S. influence in the region’s strategic rivalry.