Updates


Call for students: educational workshop on Responsible AI for peace and security

The United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a select group of technical students the opportunity to join a two-day educational workshop on Responsible AI for peace and security.

When and where: 20-21 November 2024, VUB AI Experience Centre, Brussels
Who can apply: PhD and Master's students in AI, robotics, and related disciplines in science, technology, engineering, and mathematics (e.g. computer science, data science, mechanical engineering) affiliated with universities in Europe, Africa, Central and South America, the Middle East, Oceania, and Asia.
Funding: Travel expenses and accommodation for non-local students will be covered by the Promoting Responsible Innovation in AI for Peace and Security initiative.

Artificial intelligence (AI) presents wide-ranging and significant opportunities and challenges for international peace and security, just as it does for people's day-to-day lives. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector too often remain unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure of the role they can play in addressing them.

Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security.

As part of that initiative, SIPRI and UNODA are organizing a series of capacity-building workshops for STEM students (at the PhD and Master's levels). These workshops aim to give up-and-coming AI practitioners the opportunity to work together and with experts to learn about (a) how peaceful AI research and innovation may generate risks for international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could support the promotion of responsible AI for peace and security.

The fourth workshop in the series will be held in Brussels, Belgium, on 20-21 November 2024 in collaboration with the VUB AI Experience Centre.

The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.

Interested students are invited to submit their interest via this online form:
https://forms.office.com/e/Wb1t8bEE6F

SIPRI and UNODA will select participants based on motivation as well as considerations of diversity (including geographical representation, gender balance, disciplinary focus, and career aspirations). Note that prior knowledge of AI ethics or international peace and security is not a requirement, and no work will be assigned outside of the workshop. The initiative will cover travel expenses and provide accommodation for students who need to travel to Brussels.

All participants who successfully complete the workshop will receive a certificate of participation from the United Nations.

The workshop is organized in collaboration with the VUB AI Experience Centre and is funded by the European Union.


Roundtable at Instituto Superior Técnico on responsible innovation for peace and security and the education of future AI practitioners

August 16th, 2024

The United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI), in collaboration with Instituto Superior Técnico at the University of Lisbon, organized a roundtable discussion after the conclusion of the 4th Foundation of Trustworthy AI: Integrating Learning, Optimisation and Reasoning (TAILOR) Conference. The roundtable took place under the Chatham House Rule on 5 June 2024, at Instituto Superior Técnico, University of Lisbon in Portugal.  The gathering brought together a diverse group including AI experts, educators specializing in AI curricula development, and governance representatives.

Roundtable participants discussing the integration of peace and security considerations into AI training at Instituto Superior Técnico, University of Lisbon.

Participants discussed the integration of peace and security considerations into AI training with the project staff. Discussions focused on mainstreaming considerations of peace and security risks into the training of future AI practitioners. Participants also explored strategies for integrating AI ethics and responsible innovation principles into educational frameworks, while addressing governance challenges related to preventing the misuse of civilian AI technologies.

The roundtable underscored the critical role of collaboration and dialogue in shaping responsible AI practices, highlighting the commitment of stakeholders to advancing ethical standards in AI development.

Stay tuned for more updates on initiatives driving the future of AI education and governance. For more information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr. Charles Ovink, Political Affairs Officer, at charles.ovink@un.org.


Empowering young AI practitioners at peace and security workshop in Lisbon

August 16th, 2024

As part of international efforts towards fostering responsible artificial intelligence (AI) development, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) organized a capacity-building workshop on Responsible AI for Peace and Security, from 6 to 7 June 2024, in Lisbon, Portugal. This event marked a milestone in an ongoing project aimed at addressing the risks posed by civilian AI research and innovation to international peace and security.

The workshop, conducted with the assistance of the Instituto Superior Técnico at the University of Lisbon, gathered a select group of young AI practitioners from around the world for two days of capacity building.

Young AI practitioners from around the globe come together to celebrate their progress in developing responsible AI skills.

The workshop, the third in a planned series of four organized by UNODA and SIPRI, aimed to equip young AI practitioners with the knowledge and skills necessary to address the risks that civilian AI research and innovation may pose to international peace and security. Over two days, participants engaged in interactive sessions designed to deepen their understanding of those risks. Methods to prevent or mitigate them through responsible research and innovation, and ways to contribute to the promotion of responsible AI for peace and security, were important topics of discussion.

The workshop gave participants greater exposure to responsible AI practices through interactive discussions and sessions, including a red- and blue-teaming exercise. In addition to conducting their own risk assessments, challenging ideas, and engaging creatively, participants explored the peace and security implications of the potential misuse of civilian AI and the international governance landscape.

The event brought together 18 participants from 11 countries, including Brazil, China, France, Germany, India, Italy, Mexico, Portugal, the Republic of Korea, Syria, and Viet Nam. The group, comprising eight women and ten men, represented a diverse and inclusive cohort.

This workshop is part of an EU-funded initiative aimed at Promoting Responsible Innovation in AI for Peace and Security. This initiative is part of a broader effort to engage Science, Technology, Engineering, and Mathematics (STEM) students and provide them with in-person capacity-building opportunities. The goal is to introduce them to the peace and security risks posed by civilian AI development and how these risks can be identified, prevented, or mitigated through responsible research and governance processes.

The next and final workshop will feature sessions focusing on educators, curriculum developers, the AI industry, and other relevant stakeholders.

For more information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr. Charles Ovink, Political Affairs Officer, at charles.ovink@un.org.


Public Multi-stakeholder Dialogue: Addressing the risk of misuse: what can the AI community learn from the biotech industry?

May 29th, 2024

Our recent public event delved into the fascinating overlap between Artificial Intelligence and Biotechnology. We gathered prominent experts, including Eleonore Pauwels, Rik Bleijs, and Clarissa Rios, to dissect the convergence of these fields and the risks involved. Through engaging discussions, we explored strategies for responsible innovation and risk mitigation.

The recording is available here.


The next generation of AI practitioners from around the world meet in Estonia to learn about responsible innovation of AI for peace and security

March 2024

On 14 and 15 February a select group of young Artificial Intelligence (AI) practitioners from around the world met in Tallinn for a two-day capacity-building workshop on Responsible AI for Peace and Security, organized by the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI), in cooperation with the Tallinn University of Technology.

The workshop—the second in a series of four—aimed to provide up-and-coming AI practitioners with the opportunity to learn how to address the risks that civilian AI research and innovation may pose to international peace and security. Over two days, the young participants from diverse backgrounds engaged in interactive activities aimed at increasing their understanding of (a) how civilian AI research and innovation may pose risks to international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could contribute to the promotion of responsible AI for peace and security.

Young AI practitioners from around the world celebrate their development of responsible AI skills.

Through interactive seminars, live polls, and scenario-based red-teaming and blue-teaming exercises, the workshop gave participants a grounding in the field of responsible AI, the particular peace and security dimensions of the misuse of civilian AI, and the international governance environment, as well as the chance to work through their own risk assessments, challenge ideas, and engage creatively.

The event gathered 18 participants from 13 countries, namely China, Egypt, Estonia, Greece, India, Indonesia, Iran, Italy, Lithuania, the Philippines, Romania, Singapore, and Türkiye.

The workshop is part of an EU-funded initiative on Promoting Responsible Innovation in AI for Peace and Security. It falls under an ongoing pillar focused on engaging diverse groups of Science, Technology, Engineering and Mathematics (STEM) students and providing in-person capacity building that introduces them to how the peace and security risks posed by the diversion and misuse of civilian AI development by irresponsible actors may be identified, prevented, or mitigated in the research and innovation process or through other governance processes.

Young AI practitioners discuss how to manage risk during a scenario-based exercise.

The successful workshop series will continue throughout 2024, along with pillars focused on educators and curriculum developers, as well as the AI industry and others.

For further information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr. Charles Ovink, Political Affairs Officer, at charles.ovink@un.org.


AI experts from around the world connect for a multi-stakeholder dialogue on responsible AI for peace and security

September 25th, 2023

On 13 and 14 September, the United Nations Office for Disarmament Affairs and its partners at the Stockholm International Peace Research Institute (SIPRI) brought together 15 artificial intelligence (AI) experts from industry, academia, civil society, and governance for two days of discussion. Over the course of the multi-stakeholder dialogue, participants explored trends in AI research and innovation that may generate risks for peace and security and looked at how such risks may be mitigated, including through the promotion of responsible innovation practices. Read more…


SIPRI and UNODA’s engagement with next generation AI practitioners

December 4th, 2023

From 16 to 17 November, the Stockholm International Peace Research Institute (SIPRI) and the United Nations Office for Disarmament Affairs (UNODA) organized a two-day capacity-building workshop on ‘Responsible AI for Peace and Security’ for a select group of STEM students.

The workshop—the first in a series of four—aimed to provide up-and-coming Artificial Intelligence (AI) practitioners with the opportunity to learn how to address the risks that civilian AI research and innovation may pose to international peace and security.

The event, held in Malmö in collaboration with Malmö University and Umeå University, involved 24 participants from 17 countries, including Australia, Bangladesh, China, Ecuador, France, Finland, Germany, Greece, India, Mexico, the Netherlands, Singapore, Sweden, the United Kingdom and the United States. 

Over two days, the participants engaged in interactive activities aimed at increasing their understanding of (a) how peaceful AI research and innovation may pose risks to international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could contribute to the promotion of responsible AI for peace and security. 

The event featured the participation of professors from Umeå University and Malmö University.  

The workshop series, which will continue into 2024, is part of a European Union-funded initiative on ‘Responsible Innovation in AI for Peace and Security,’ conducted jointly by SIPRI and UNODA.

The next iteration of the workshop will be held from 14 to 15 February 2024 in Tallinn, Estonia, in partnership with Tallinn University of Technology.

Funding Statement

This programme was made possible by the generous support of the European Union.