The next generation of AI practitioners from around the world meets in Estonia to learn about responsible innovation of AI for peace and security

March 7th, 2024

On 14 and 15 February a select group of young Artificial Intelligence (AI) practitioners from around the world met in Tallinn for a two-day capacity-building workshop on Responsible AI for Peace and Security, organized by the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI), in cooperation with the Tallinn University of Technology.

The workshop—the second in a series of four—aimed to provide up-and-coming AI practitioners with the opportunity to learn how to address the risks that civilian AI research and innovation may pose to international peace and security. Over two days, the young participants from diverse backgrounds engaged in interactive activities aimed at increasing their understanding of (a) how civilian AI research and innovation may pose risks to international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could contribute to the promotion of responsible AI for peace and security.

Young AI practitioners from around the world celebrate their development of responsible AI skills.

Through interactive seminars, live polls, and scenario-based red-teaming and blue-teaming exercises, the workshop gave participants a grounding in the field of responsible AI, the particular peace and security dimensions of the misuse of civilian AI, the international governance environment, and the chance to work through their own risk assessments, challenge ideas, and engage creatively.

The event gathered 18 participants from 13 countries: China, Egypt, Estonia, Greece, India, Indonesia, Iran, Italy, Lithuania, the Philippines, Romania, Singapore, and Türkiye.

The workshop is part of an EU-funded initiative on Promoting Responsible Innovation in AI for Peace and Security. It falls under an ongoing pillar of the initiative focused on engaging diverse sets of Science, Technology, Engineering and Mathematics (STEM) students and providing in-person capacity-building. The aim is to introduce them to how the peace and security risks posed by the diversion and misuse of civilian AI development by irresponsible actors can be identified, prevented, or mitigated in the research and innovation process or through other governance processes.

Young AI practitioners discuss how to manage risk during a scenario-based exercise.

The workshop series will continue throughout 2024, alongside pillars focused on educators and curriculum developers, as well as the AI industry and other stakeholders.

For further information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr. Charles Ovink, Political Affairs Officer, at