Podcasts

Episode 1. Not so peaceful technology? What risks may civilian AI pose to peace and security?

In this first episode, project leads Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) kick off the series by looking at some key concepts, exploring the ways misuse of civilian AI can present risks to international peace and security, and introducing the EU-funded Promoting Responsible Innovation in Artificial Intelligence for Peace and Security programme.

Episode 2. Emily Bender: The risks of large language models and generative AI

In this episode, Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) are joined by Professor Emily Bender, an American linguist who co-authored one of the most cited articles on the risks posed by large language models. They unpack the relationship between large language models (LLMs) and the current hype around generative AI. They also talk about what LLMs can and cannot do and how she sees the technology evolving. More importantly, they discuss the risks she associates with our increasing reliance on LLM-based AI tools and what the AI community – and regulators – could do about those risks.

Episode 3. Eleonore Pauwels: The dark side of AI for medicine, pharmacology and bioengineering

In this episode, our guest is Eleonore Pauwels, a Senior Fellow with the Global Center on Cooperative Security, who works on the security and governance implications of the convergence of AI with other dual-use technologies, notably the biotechnologies used in medicine, pharmacology and bioengineering, and in our scientific understanding of biological processes.

We wanted to explore with Eleonore why the convergence of AI and biotechnology has recently been generating so much excitement, as well as concern, in the scientific and policy communities.

We talk first about the promise AI holds in areas such as medicine, pharmacology and bioengineering. Then we dive into the question of whether, and if so how, these advances could be misused for bioterrorism, bioweapons and biocrime. Finally, we talk about what is, or should be, done about these risks.

Resources

Brockmann, K., Bauer, S. and Boulanin, V., Bio Plus X: Arms Control and the Convergence of Biology and Emerging Technologies (SIPRI: 2019)

Carter, S. et al., The Convergence of AI and the Life Sciences (NTI: 2023)

Sandbrink, J., Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools (arXiv: 2023)

Urbina, F., Lentzos, F., Invernizzi, C. et al., Dual use of artificial intelligence-powered drug discovery, Nature Machine Intelligence 4, 189–191 (2022)

Mouton, C., Lucas, C. and Guest, E., The Operational Risks of AI in Large-Scale Biological Attacks (RAND: 2023)

Soice, E. et al., Can large language models democratize access to dual-use biotechnology? (arXiv: 2023)

 

Episode 4. Boston Dynamics: How to deal with possible misuse of general-purpose robots?

Episode Overview

In this episode, our guest is Brendan Schulman, Vice President of Policy and Governmental Relations at Boston Dynamics, one of the world’s most famous robotics companies. Boston Dynamics is notably known for its “agile legged robots”, like Spot and Atlas, which can autonomously navigate uneven terrain, climb stairs or open doors. The company is also known for having taken a position in the debate on the weaponization of general-purpose robots: in 2022, it signed, together with five other robotics companies, a pledge “not to weaponize their general purpose robots” or the software that makes them function.

We wanted to explore with Brendan how Boston Dynamics approaches the risk that its robots could be misused by malicious actors for harmful purposes, and possibly be turned into autonomous weapon systems. We discuss what companies like Boston Dynamics can do at their level to prevent or mitigate that risk; whether there are lessons that other organizations could learn from what Boston Dynamics has done or is doing; and whether Boston Dynamics sees a need for governmental regulation and, if so, what type of measures companies would deem useful at the national and international levels.

Resources

Pledge: https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized

Episode 5. Preventing misuse of civilian AI: what can the international community do?

In this episode, we explore the role of the United Nations in coordinating an international response to the risks that advances in AI can pose to peace and security. We talk about the work of the AI Advisory Body (AIAB) that the UN Secretary-General created last year. We also invite our guest to weigh in on the debate on AI risks and the potential need for new global AI governance institutions or mechanisms.

About the guest

Dr Amandeep Singh Gill became the UN Secretary-General’s Envoy on Technology in 2022. He leads the implementation of the UN Secretary-General’s roadmap on digital cooperation, and serves as advocate and focal point for international dialogue and cooperation on responsible and inclusive digital transformation. Before taking up the position as Envoy, he was the Chief Executive Officer of the International Digital Health and Artificial Intelligence Research Collaborative project, based at the Graduate Institute of International and Development Studies, Geneva. Previously, he was the Executive Director and Co-Lead of the United Nations Secretary-General’s High-Level Panel on Digital Cooperation (2018–2019). Mr Gill also served as India’s Ambassador and Permanent Representative to the Conference on Disarmament in Geneva (2016–2018).

Website: Secretary-General’s Envoy on Technology | United Nations Secretary-General

Episode 6. How to prevent AI from falling into the wrong hands and avert misuse of AI technology: what states can and cannot do

In this episode, we explore what States have done, or are considering doing, to ensure responsible innovation, prevent the misuse of civilian AI, and mitigate associated harm. We take stock of the initiatives that States have undertaken or contributed to over the past year and discuss what they have in common.

Episode 7. Partnership on AI: How to assess and mitigate the misuse risks associated with open research and innovation?

Episode Overview

In this episode, our guest is Madhulika Srikumar, head of Safety Governance at the Partnership on AI, an industry organization that recently published guidance on responsible publication of AI research and responsible deployment of AI innovation. We talk about the misuse risks associated with openness in AI research and how AI practitioners can and do determine whether their work, or aspects thereof (e.g., training data or methods), should or should not be openly disclosed.

About the guest

Our guest is Madhulika Srikumar, head of the AI Safety program at the Partnership on AI. With nearly a decade of experience in technology governance, Madhu leads applied policy research initiatives, collaborating with tech companies, civil society, and governments to address the risks of emerging technologies. Her work focuses on turning ethical ideals into concrete actions and guidelines, including the launch of PAI’s Guidance for Safe Foundation Model Deployment. Trained as a lawyer, Madhu holds an LL.M. from Harvard Law School.

Episode 8. Google DeepMind: On the use of responsible scaling policies and safety frameworks to guide the development and deployment of advanced AI systems

Episode Overview

In this episode, our guest is Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind. We talk about Google DeepMind (GDM) as an example of how a major private sector actor is analysing and mitigating future risks posed by advanced AI models. We ask him about the Frontier Safety Framework that GDM adopted to proactively identify, detect and manage AI capabilities that could cause severe harm, and we place that framework in the context of AI and peace and security.

About the guest

Allan Dafoe is the Director of Frontier Safety and Governance at Google DeepMind. He is also the founding director and a board member of the Centre for the Governance of AI (GovAI), which was founded at the University of Oxford and is now an independent non-profit, and the founder and a trustee of the Cooperative AI Foundation. Before working on the governance of AI, he was a professor at Yale University, where he worked on great power peace.

Publications / Resources

Signed Lethal Autonomous Weapons Pledge

Credit

This episode was produced by Charles Ovink and Vincent Boulanin, researched by Alexander Blanchard, edited and mixed by Gaston Collazo, and was made possible thanks to the generous support of the European Union.

Disclaimer

The views expressed are those of the hosts or guests and do not necessarily reflect those of UNODA, SIPRI or the EU.

Funding Statement

This programme was made possible by the generous support of the European Union.