Hello friends,
“The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.”
Marcel Proust, the French novelist
Every year, the Oxford China Policy Lab team organises a day trip to London. It’s a chance for our Oxford-based fellows to meet Westminster’s think tankers, politicos, journalists and wider policy brains, many of whom are shaping the world our fellows are graduating into.
This year’s trip included a morning of discussions at Chatham House, held under its famous rule, on AI governance and emerging geopolitical trends, followed by an afternoon of seminars from UK civil servants, a major British media company, and a leading think tank. As the sun began to dip over London, we closed the day with a well-deserved drink with OCPL’s wider network of diplomats, analysts and China experts.
While building a network is critical to one's professional success, there’s a deeper driver at play here. One of OCPL’s foundational views is that too often expertise is siloed. This presents a fundamental problem when you’re trying to think about geopolitics, AI and emerging technologies, and their impact on society, politics and industry. We’ve come across AI experts with minimal China knowledge, China analysts who’ve never used ChatGPT, and private sector investors who haven’t heard of AI safety and security.
I think that’s because this space is so vast, and the incentives and timelines that govern those in it - even tangentially - are often so different. Here are some themes I’ve observed:
Academics spend months pouring their efforts into original research, which infrequently ends up on the right policymaker’s desk at the right time. That lack of timely input impacts…
Politicians and journalists, who work on schedules tyrannised by the present, with an urgency which can make their insights naturally reactive rather than reflective, and often dependent on the first three Google links that appear on their search, or the first person to respond to a REQUEST FOR COMMENT email. That lack of horizon scanning can impact…
Private sector professionals, who work in quarters, or to an annual results plan agreed in a boardroom some time earlier, and who scour the pages of papers or Parliamentary proceedings looking for potential signals. Compared to those who think about AI and/or China all day every day, this group can be poor at internalising the latest frontier research and views.
Placed in the middle of these three worlds? OCPL. In many ways, we are professional ‘separators of wood from trees’. We spend our lives thinking, writing, analysing, researching, and talking about China, the US, AI, geopolitics, and emerging tech. And we do our best to see who’s sat where, discussing what, and if there’s duplication or disparity. That’s what makes the human element of these days so important. Yes, the networking matters - but if we want to begin to tackle some of these challenges, and jump on some of the emerging opportunities, we need to build trust, get new minds working, and create connections across divides.
Sam Hogg,
Head of Policy Engagement
News and Views
The research and analysis of OCPL experts are their own and do not reflect organisational views
Kayla Blomquist authored an article for Just Security analysing how US and Chinese strategies for global AI leadership diverge, with the US prioritising control and frontier capabilities and China focusing on accessibility and diffusion. The article argues that for the US to strengthen global adoption of its AI systems, it must prioritise quality, reach, and adaptability.
OCPL Fellows Leia Wang and Zilan Qian wrote an article for The Diplomat examining how China’s recent humanoid robot half marathon signals a new phase in global AI competition. They argue that China’s strategic focus on rapid deployment, manufacturing integration, and international standard-setting could position it to lead in embodied intelligence and humanoid robotics.
OCPL Non-Resident Expert Saad Siddiqui, Scott Singer, and a group of AI experts co-authored a paper exploring areas where geopolitical rivals, including the US and China, could cooperate on technical AI safety. The paper highlights AI verification mechanisms and shared protocols as potential collaboration points, aiming to balance global benefits with national security risks.
2023 OCPL Fellow Julia Carver published a paper in Contemporary Security Policy examining the strategic role of capacity building (CB) assistance in digital development, exploring how the US, EU, and China use it to gain geopolitical advantage in Africa.
Kayla Blomquist provided training to 16 UK MPs on China’s AI development and governance ecosystem as part of the Demos AI Parliamentary Scheme at Ditchley Park.
Kayla Blomquist was quoted in Rest of World, highlighting how startups building on models like DeepSeek could reduce investment needs and democratise AI development.
OCPL Fellow Huw Roberts was quoted in an article for The Wire China discussing the implications of the US blacklisting of the Chinese research institute BAAI for global cooperation on AI safety.
What We’re Reading
"Entangled Narratives: Insights from Social and Computer Sciences on National Artificial Intelligence Infrastructures" by J.P. Singh, Amarda Shehu, Manpriya Dua, and Caroline Wesson. This article examines how nations articulate their values and priorities within AI infrastructure policies. It employs machine learning and natural language processing to analyse AI infrastructural plans, highlighting the complexities of how these plans diverge and cluster across regions. The paper challenges existing theories on technological diffusion and geopolitical competition, offering new insights into how AI policies evolve globally.
“Understanding U.S. Allies’ Current Legal Authority to Implement AI and Semiconductor Export Controls” by Gregory C. Allen & Isaac Goldston. This paper explores the disparity between U.S. export control mechanisms and those of its allies, particularly in semiconductors, AI, and military technology. It highlights the strengths and weaknesses of allied regulations, China’s expanding control system, and how enforcement delays and policy gaps allow China to circumvent restrictions, with implications for global technology competition.
“Dual Use Deception: How Technology Shapes Cooperation in International Relations” by Jane Vaynman & Tristan A. Volpe. This article explores the challenges dual-use technologies pose to international cooperation. It analyses how the distinguishability of military from civilian uses and the integration of technology within both sectors impact arms control efforts. The authors argue these factors create tensions between detection and disclosure, offering new insights on how these dynamics shape arms control agreements and international cooperation.
“AI as Normal Technology” by Arvind Narayanan & Sayash Kapoor. This essay challenges both utopian and dystopian views of AI, proposing instead that AI is a “normal” technology, like electricity or the internet, that humans can control. The authors argue that a gradual, transformative societal impact of AI over decades is most likely, and advocate for policy focused on reducing uncertainty and ensuring resilience. They stress that AI’s risks and implications, including inequality, should be managed through continuous oversight rather than drastic interventions.