Dear friends and colleagues,
This January, we’d like to welcome the new kids on the block: 20 talented OCPL Fellows, one exceptional non-resident expert, and, last but not least, DeepSeek’s market-shattering R1 model. While the Chinese AI company DeepSeek has been on the radar of the AI-and-China watching community for months, it seems the rest of the world was not ready for R1. But the domestic competition in China was ready: Alibaba released Qwen 2.5-Max, a model it claims surpasses the latest from both DeepSeek and ChatGPT, during the Chinese New Year holiday (watch this space).
I joined OCPL from the broader China and foreign policy world. So as I try to catch up with the accelerating global AI race, I’ve relied on the OCPL community to help me parse how a highly capable open-source model might impact the broader AI ecosystem, why this is far more complicated than “export controls don’t work”, and why the ongoing global freak-out might be counterproductive.
Exploring this complexity from an academic and technical perspective and translating it for policy audiences is core to our mission at OCPL. That’s why we’re excited to have 20 outstanding scholars from diverse disciplines join our research community in the third OCPL Fellows cohort. The OCPL Fellowship Programme cultivates expertise at the intersection of US-China relations and emerging tech, providing opportunities to produce high-impact research, collaborate in themed working groups, and engage with leading policy professionals. Fellows will dig into pressing issues from AI and strategic stability to critical mineral investment, and we can’t wait to see what they find.
Looking forward to navigating the next month in China and emerging tech with you,
Karuna Nandkumar
Head of Policy Programmes
News and Views
The research and analysis of OCPL experts are their own and do not reflect organisational views
Kayla Blomquist provided expert commentary on DeepSeek and US-China AI dynamics across multiple media outlets, including BBC, BBC 5 Live Radio, Times Radio, Reuters, AP, and Australia’s national broadcaster, ABC Radio Australia.
Scott Singer wrote for the Carnegie Endowment on DeepSeek and Chinese-Western frontier companies converging on security measures, and for Just Security on a second Trump administration’s AI policy toward China. He was also quoted in a Time article on the necessity of engaging with China on AI security.
Elisabeth Siegel gave a live radio interview on Dubai Eye 103.8 Radio discussing DeepSeek and China’s AI sector. She also co-authored an article in the Oxford Political Review on Big Tech’s influence over AI policy and the sidelining of civil society and academic input.
Sam Hogg was quoted in a Financial Times article on Rachel Reeves’ trip to China, offering commentary on the UK-China audit.
2024 OCPL fellow Tristan Yip published an article in the RUSI Journal urging the UK to implement its Foreign Influence Registration Scheme to counter covert foreign influence, particularly from China.
OCPL Non-Resident Expert Saad Siddiqui co-authored a policy paper titled “Promising Topics for US–China Dialogues on AI Safety and Governance,” examining potential areas of common ground between the US and China for productive AI safety dialogues.
What We’re Reading
“The European Union’s Place in United States–China Strategic Competition” by Sebastian Biba. This article examines the EU’s increasing alignment with the US in response to the US-China rivalry. Drawing on role theory, it argues that the EU’s evolving self-conception and the positive shift in US-EU relations under Biden have led to greater cooperation between the two sides on China, although full alignment remains elusive.
“AI Has Been Surprising for Years” by Holden Karnofsky. This article explores the rapid progress of AI capabilities, highlighting the challenge of regulating risks that have not yet fully materialised. Karnofsky discusses how AI has surpassed humans in tasks once thought too complex, such as image and speech recognition, and the difficulty policymakers face in managing the potential risks of AI’s evolving abilities.
“Infrastructure for AI Agents” by Alan Chan et al. This paper introduces the concept of agent infrastructure—external technical systems designed to manage AI agents' interactions in open-ended environments. It argues that existing AI safety tools fail to ensure accountability and control and emphasizes that robust infrastructure is crucial for integrating AI agents into society while minimizing risks.
"Development of New Generation of Artificial Intelligence in China" by Shaleen Khanal, Hongzhou Zhang, and Araz Taeihagh. This article examines the role of China’s provincial governments in the development of AI, focusing on the diffusion of the 2017 New Generation AI Development Plan (NGAIDP). It argues that local priorities, focused on economic growth, often diverge from the central government’s emphasis on national security, influencing the pace and direction of AI policy across the country.
Meet Our New Non-Resident Expert
Ruby Osman leads the Tony Blair Institute for Global Change’s China work, where she supports political leaders in developing effective China strategies. Her research focuses on Chinese foreign policy and elite politics, and she provides regular media commentary, including for Bloomberg, Reuters, the BBC, and SCMP. She speaks Mandarin and holds a BA in China Studies from the University of Oxford, where she received the Gibbs Prize.