Dear friends and colleagues,
Coming to you from RightsCon in Taipei, I’d like to share a few reflections on the Paris Summit.
The Paris Artificial Intelligence Action Summit was a paradox. How, in a broad political-economic field rife with theoretical and practical disagreements, did we end up with panel after panel of policymaking luminaries, public intellectuals, and business leaders, flown in from around the world, who only seemed to agree with one another, dodging every opportunity for meaningful and rigorous engagement?
Major governments have now responded to the overwhelming concentration of power in the hands of a few global tech firms by courting those firms' favour more aggressively. And despite the summit series' origins in safety and ethics, at the latest Paris instalment these same concepts took on negative connotations, recast as impediments to innovation, economic competitiveness, and the 'national interest.' All of this despite what we know about the intertwined nature of AI risks, from the near-term and hyper-local to the national and international levels of analysis.
Most paradoxical of all? The simultaneous opportunities on the sidelines for critical discussion and knowledge-sharing in support of global stability, sorely needed given the flywheel effects of both fast-emerging AI capabilities and escalating geopolitical tensions. The OCPL team spent several active days preparing for and playing core roles in facilitating a civil society event co-hosted by the Oxford Martin School AI Governance Initiative, Concordia AI, the Carnegie Endowment for International Peace, and Tsinghua CISS / I-AIIG, which contributed to high-calibre breakout group discussions on ways to collectively mitigate the global security risks exacerbated by frontier AI. At the opening of the event, Kayla and I presented our co-authored paper, published jointly by AIGI and Concordia AI, “Examining Global Public Goods in an Age of Advanced AI: Implications, Challenges, and Research Priorities,” which explored the economic and socio-political utility of applying an academic ‘global public good’ framework to AI safety (or, potentially, other aspects of the AI realm) and concluded with a research agenda framing further work on treating AI security, safety, and ethical concerns as collective challenges.
Looking ahead can be difficult when each day of headlines feels like a week's or a month's worth of news. We're looking forward to wrapping up in-person fellowship programming for Oxford's Hilary Term this March, as well as spinning up more opportunities for the fellows and experts in our broader community to share policy-relevant takeaways at the intersection of emerging technology and politics.
Signing off,
Elisabeth Siegel
Director of Events & Dialogues
News and Views
The research and analysis of OCPL experts are their own and do not reflect organisational views.
Scott Singer co-wrote a Foreign Policy article with Matt Sheehan on DeepSeek’s latest AI breakthrough, its implications for U.S.-China competition, and the urgency of maintaining American leadership in frontier AI. The article examined the need for tighter U.S. export controls on advanced chips and strategic engagement with China on AI safety to address emerging risks.
Sam Hogg contributed to an Institut Montaigne report on China's future in 2035, exploring Chinese leaders' tendency toward retrospective policymaking.
Scott Singer was quoted in The Wire China on leading Chinese AI firm Zhipu’s AI safety commitments, highlighting its role in opening the door for greater international engagement.
OCPL Fellow Huw Roberts analysed the release of DeepSeek's latest model in a commentary for the Royal United Services Institute, exploring what a true "Sputnik moment" in U.S.-China AI competition would entail.
OCPL Fellow Shannon Hong wrote an article for The Diplomat examining how U.S. tech companies and government officials are framing AI development as a strategic competition with China, and how this narrative is driving investment, shaping policy, and influencing the AI landscape.
What We’re Reading
“What do we know about China’s new AI safety institute?” by Caroline Meinhardt & Graham Webster. This article examines China’s launch of the China AI Safety and Development Association (CNAISDA) as its counterpart to Western AI Safety Institutes. It argues that while CNAISDA aims to secure China’s role in global AI safety discussions, its structure as a consortium of research institutions — rather than a formal government body — raises questions about its influence and alignment with international efforts.
“Geopolitical Game-Changer” by Alvaro Mendez & Gaspard Estrada. This chapter examines China’s expanding role in Latin America’s defence, aerospace, and telecommunications sectors and the challenge this presents to U.S. influence in the region. It highlights China’s arms trade, space initiatives, and telecom advancements, with a focus on Venezuela and Brazil, and offers policy recommendations for Latin America and the U.S.
“The Quantum Panic” by Rachel Cheung. This article examines fears surrounding China’s advances in quantum computing, sparked by a controversial research paper claiming encryption vulnerabilities. It explores the broader geopolitical stakes, assessing whether China’s quantum capabilities pose an imminent threat or if concerns are overblown. The piece also highlights the role of U.S. policy, export controls, and global competition in shaping the future of quantum technology.