On 13 and 14 September, the United Nations Office for Disarmament Affairs (UNODA) and its partners at the Stockholm International Peace Research Institute (SIPRI) brought together 15 artificial intelligence (AI) experts from industry, academia, civil society and government for two days of discussion. Over the course of the multi-stakeholder dialogue, participants explored trends in AI research and innovation that may generate risks for peace and security and considered how such risks may be mitigated, including through the promotion of responsible innovation practices.
Meeting under the Chatham House Rule, the experts, who represented a diverse range of backgrounds, examined how peaceful civilian AI research and innovation may present risks for peace and security, mapped potential misuse scenarios, assessed how aware the civilian AI community is of these risks and how engaged it is in addressing them, and identified technologies and research areas of particular concern.
Over the two days, the group flagged a number of key issues, including the structural risks inherent in how AI is developed, the convergence of physical, digital and political risks arising from the misuse of civilian AI, and the need for broad education and capacity-building among AI practitioners on the risks to international peace and security.
The dialogue series, which will continue into 2024, was held as part of the Responsible Innovation in AI for Peace and Security programme. The programme, funded by the European Union and conducted by UNODA and its partners at SIPRI, aims to deepen the civilian AI community's engagement in mitigating the risks that the misuse of civilian AI technology can pose to international peace and security. Each dialogue will inform and guide the work of the programme, including its capacity-building work with AI practitioners, its engagement with industry and multi-stakeholder groups, and the development of educational materials. The programme will also produce a final report with policy recommendations, informed in part by the dialogue series.
For further information on UNODA's Responsible Innovation in AI activities, please contact Mr. Charles Ovink, Political Affairs Officer, at email@example.com.