Promoting Responsible Innovation in Artificial Intelligence for Peace and Security

The United Nations Office for Disarmament Affairs (ODA) and the Stockholm International Peace Research Institute (SIPRI) have partnered for a three-year initiative on responsible innovation in artificial intelligence (AI) for peace and security. This project, which is funded by a decision of the Council of the European Union (Council Decision (CFSP) 2022/2269 of 18 November 2022), aims to support greater engagement from the civilian AI community in mitigating the risks that the misuse of civilian AI technology can pose to international peace and security.

Advances in AI present both opportunities and risks for international peace and security. Peaceful applications of AI can help achieve the Sustainable Development Goals and even support UN peacekeeping efforts, including through the use of drones for medical deliveries, monitoring and surveillance. However, civilian AI can also be misused for political disinformation, cyberattacks, terrorism or military operations. Prior work by ODA and SIPRI has established that those working on AI in the civilian sector too often remain unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.

ODA and SIPRI therefore decided to launch a joint initiative to help the civilian AI community better understand and mitigate the peace and security risks associated with the misuse of civilian AI. Combining awareness-raising and capacity-building activities, it seeks to provide the civilian AI community – especially the next generation of AI practitioners – with the necessary knowledge and means to engage in responsible innovation and help ensure the peaceful application of civilian AI technology.

This website is intended to serve as a one-stop resource for actors from the civilian AI community – from educators and AI-focused STEM students to those in relevant industry, professional associations, and civil society organizations.

The website reports on relevant news and activities and makes available the educational and awareness-raising materials that ODA and SIPRI are producing or will produce in the context of the initiative, including a podcast and a blog post series.

For more information on the initiative, and on responsible innovation in AI for peace and security, please explore the resources on this site and consider engaging with us and the project activities directly.

Project Personnel

Charles Ovink

Political Affairs Officer, United Nations Office for Disarmament Affairs

Working with the regional disarmament and science and technology branches of the United Nations Office for Disarmament Affairs (UNODA), Charles Ovink specializes in responsible innovation, the impact of emerging technologies on disarmament and non-proliferation, military uses of AI, and disarmament education and outreach. He has previously served as Associate Political Affairs Officer at the United Nations Regional Centre for Peace and Disarmament in Asia and the Pacific (UNRCPD), Programme Manager for the United Nations University World Institute for Development Economics Research (UNU-WIDER), and a consultant for the United Nations University and Creative Environmental Networks. He has led responsible innovation work with New York University, Umeå University, Sorbonne University Pierre and Marie Curie, the University of Tokyo, Nanyang Technological University, National University of Singapore, Singapore University of Technology and Design, and the ASEAN Foundation, among others. He has been a frequent speaker on responsible AI in the military domain, including for the Stockholm Security Conference and the Nikkei AI Summit, and as an expert witness for the UK House of Lords. He has written on AI both in his UN capacity and for IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers. He is a member of the IEEE Standards Association Research Group on Issues of Autonomy and AI for Defense Systems. He received his master's degree from Waseda University in Tokyo, focusing on political security and power transition. He can be reached at charles.ovink@un.org.

Vincent Boulanin

Director of the Governance of Artificial Intelligence Programme at SIPRI

Dr Vincent Boulanin is Senior Researcher and Director of the Governance of Artificial Intelligence Programme at SIPRI. He leads SIPRI’s research on issues related to the development, use and control of autonomy in weapon systems and military applications of artificial intelligence. His current work focuses on risks associated with the misuse of civilian AI research and innovation and on responsible innovation as a form of upstream technology governance. 

He regularly presents his work to and engages with governments, United Nations bodies, international organizations, and the media. He has briefed the UN Security Council on the impact of emerging technologies on international peace and security and presented before the UN Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. Before joining SIPRI in 2014, he completed a doctorate in Political Science at the École des Hautes Études en Sciences Sociales in Paris. His recent publications include: 'Responsible Reliance Concerning the Development and Use of AI in the Military Domain', Ethics and Information Technology, 25, 8 (Feb 2023, co-author); Autonomous Weapon Systems and International Law: Identifying Limits and the Required Type and Degree of Human-Machine Interaction, SIPRI Report (2021, lead author); Artificial Intelligence, Strategic Stability and Nuclear Risk, SIPRI Report (2020, lead author); Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control, SIPRI/ICRC Report (2020, lead author); and Responsible Research and Innovation in AI for Peace and Security (2020, lead author).

Advisory Board

The Advisory Board is a group of AI experts from around the world who have generously agreed to engage with this project on a regular basis to enhance its impact and ongoing substantive relevance. The Board’s main function is to provide expert guidance. Members help to guide the work of the project, including by informing strategic and operational decisions. The Advisory Board is a purely consultative body; it does not make final decisions over the project, nor can members be held responsible for the decisions taken in the context of the project. Members include:

Raja Chatila is Professor Emeritus at Sorbonne Université. He is former Director of the Institute of Intelligent Systems and Robotics (ISIR) and of the "SMART" Laboratory of Excellence on Human-Machine Interactions. He was Director of the Laboratory for Analysis and Architecture of Systems of the French National Centre for Scientific Research (LAAS-CNRS), Toulouse, France, from 2007 to 2010. His research covers several aspects of robotics, including robot navigation and simultaneous localization and mapping (SLAM), motion planning and control, cognitive and control architectures, human-robot interaction, machine learning, and ethics. He works on robotics projects in the areas of service, field, aerial and space robotics. He is the author of over 170 international publications on these topics. His current and recent projects include: HumanE-AI-Net, a network of AI centres of excellence in Europe; AI4EU, promoting AI in Europe; AVEthics, addressing the ethics of automated vehicle decisions; Roboergosum, on robot self-awareness; and SPENCER, on human-robot interaction in populated environments. He was President of the IEEE Robotics and Automation Society for the term 2014–2015. He is Co-Chair of the Responsible AI Working Group of the Global Partnership on AI (GPAI) and a member of the French National Pilot Committee for Digital Ethics (CNPEN). He is Chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and was a member of the European Commission's High-Level Expert Group on AI (HLEG-AI). His honours include IEEE Fellowship, the IEEE Pioneer Award in Robotics and Automation, and an honorary doctorate from Örebro University (Sweden).

Frank Dignum received his PhD from the Vrije Universiteit Amsterdam in 1989. Since 2019, he has been Wallenberg Chair in Socially Aware AI at Umeå University in Sweden. He also has an affiliation with Utrecht University and is an honorary professor of the University of Melbourne. Since 2014 he has been a EurAI Fellow. He is well known for his work on norms, and his theory of social agents is employed in social simulations to support policy making and e-coaching. He has given invited lectures and seminars all over the world, and has published 22 books and more than 300 papers.

Abhishek Gupta is the Senior Responsible AI Leader & Expert with the Boston Consulting Group (BCG), where he works with BCG’s Chief AI Ethics Officer to advise clients and build end-to-end Responsible AI programs. He is also the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. Through his work as the Chair of the Standards Working Group at the Green Software Foundation, he is leading the development of a Software Carbon Intensity standard towards the comparable and interoperable measurement of the environmental impacts of AI systems.

His work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations, specializing in the operationalization of Responsible AI and its deployment in organizations, as well as assessing and mitigating the environmental impact of these systems. He has advised national governments, multilateral organizations, academic institutions, and corporations across the globe. His work on community building has been recognized by governments across North America, Europe, Asia, and Oceania. He is a highly sought-after speaker, having given talks at the United Nations, the European Parliament, the G7 AI Summit, TEDx, Harvard Business School and the Kellogg School of Management, amongst others. His writing on Responsible AI has been featured by the Wall Street Journal, Forbes, MIT Technology Review, Protocol, Fortune and VentureBeat, amongst others.

He is an alumnus of the U.S. State Department International Visitors Leadership Program, representing Canada, and received The Gradient Writing Prize 2021 for his work on The Imperative for Sustainable AI Systems. His research has been published in leading AI journals and he has presented at top-tier machine learning conferences, including the Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML) and the International Joint Conference on AI (IJCAI). He is the author of the widely read State of AI Ethics Report and The AI Ethics Brief. He formerly worked as a Machine Learning Engineer in Commercial Software Engineering (CSE) at Microsoft, where his team helped to solve the toughest technical challenges faced by the company's biggest customers, and he served on the CSE Responsible AI Board at Microsoft.

Edson Prestes is Full Professor at the Institute of Informatics of the Federal University of Rio Grande do Sul, Brazil. He is leader of the Phi Robotics Research Group and a CNPq Research Fellow. He received his BSc in Computer Science from the Federal University of Pará (1996), Amazon, Brazil, and his MSc (1999) and PhD (2003) in Computer Science from the Federal University of Rio Grande do Sul, Brazil. Edson is a Senior Member of the IEEE Robotics and Automation Society (IEEE RAS) and the IEEE Standards Association (IEEE SA). In recent years, he has been involved in several initiatives related to standardisation, artificial intelligence, robotics and ethics. For instance, Edson is a member of the United Nations Secretary-General's High-level Panel on Digital Cooperation; a member of the UNESCO Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence; a member of the Global Future Council on the Future of Artificial Intelligence at the World Economic Forum; South America Ambassador at IEEE TechEthics; Chair of the IEEE RAS/SA 7007 – Ontologies for Ethically Driven Robotics and Automation Systems Working Group (IEEE 7007 WG); and Vice-Chair of the IEEE RAS/SA Ontologies for Robotics and Automation Working Group (ORA WG). Additional information can be found at www.inf.ufrgs.br/~prestes/ or https://www.linkedin.com/in/edson-prestes/

Ludovic Righetti is an Associate Professor jointly appointed in the Electrical and Computer Engineering and the Mechanical and Aerospace Engineering Departments at the Tandon School of Engineering of New York University. He holds an engineering diploma in Computer Science and a Doctorate in Science from the École Polytechnique Fédérale de Lausanne, Switzerland. Prior to joining NYU, he was a postdoctoral fellow at the University of Southern California and a research group leader at the Max Planck Institute for Intelligent Systems. He has received several awards, including the 2010 Georges Giralt PhD Award for the best robotics thesis in Europe, the 2011 IROS Best Paper Award, the 2016 IEEE RAS Early Career Award and the 2016 Heinz Maier-Leibnitz Prize from the German Research Foundation. His research focuses on the planning, control and learning of movements for autonomous robots, with a particular emphasis on legged locomotion and manipulation.

Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, Director of the Center for Responsible AI, and a member of the Visualization and Data Analytics Research Center at New York University. Her goal is to make "Responsible AI" synonymous with "AI". She works towards this goal by engaging in academic research, education and technology regulation, and by speaking about the benefits and harms of AI to practitioners and members of the public. Her research interests include AI ethics and legal compliance, data management and AI systems, and computational social choice. In addition to academic publications, she has written for the New York Times, the Wall Street Journal, and Le Monde. She has been teaching courses on responsible data science and AI to students, practitioners and the general public. She is a co-author of "Data, Responsibly", an award-winning comic book series for data science practitioners and enthusiasts, and "We are AI", a comic book series for general audiences. She is engaged in technology regulation in the United States and internationally, having served by mayoral appointment on the New York City Automated Decision Systems Task Force, among other roles. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. Her work has been generously supported by the U.S. National Science Foundation, Pivotal Ventures and Meta Responsible AI, among others. She is a recipient of the NSF CAREER Award and a Senior Member of the Association for Computing Machinery (ACM).

Funding Statement

This programme was made possible by the generous support of the European Union.