This month the United Nations Office for Disarmament Affairs (ODA) and its partners at the Stockholm International Peace Research Institute (SIPRI) launched their three-year initiative on responsible innovation in artificial intelligence (AI) for peace and security. The initiative, which is funded by a decision of the Council of the European Union (Council Decision (CFSP) 2022/2269 of 18 November 2022), aims to support greater engagement from the civilian AI community in mitigating the risks that the misuse of civilian AI technology can pose to international peace and security.
ODA and SIPRI recognize that advances in AI present both opportunities and risks for international peace and security. Peaceful applications of AI can help achieve the Sustainable Development Goals, or even support UN peacekeeping efforts, including through the use of drones for medical deliveries, monitoring and surveillance. However, civilian AI can also be misused for political disinformation, cyberattacks, terrorism or military operations. Prior work conducted by the two organizations has established that those working on AI in the civilian sector too often remain unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.
‘This joint SIPRI–ODA initiative is all about raising awareness and building capacity. It is about providing the civilian AI community with the knowledge and means to take ownership of the problem,’ says Dr Vincent Boulanin, Programme Director responsible for the project at SIPRI. ‘The misuse of civilian technology is not a problem that states can address on their own. AI developers, because they are best placed to understand the technology, can serve as a first line of defence,’ he adds.
‘To make as broad and sustainable an impact as possible, our priority is to build the capacity of the next generation of AI practitioners to practise responsible AI. To that end, the initiative aims to work closely with educators and AI-focused STEM students, but also with relevant industry, professional associations and civil society stakeholders from around the globe. In doing so, it will produce a wide range of educational and awareness-raising material, including podcasts, blog posts, a handbook, a report, and practical processes and tools for responsible AI education for peace and security goals. The initiative is committed to building a strong community of interested stakeholders, including through a series of workshops, dialogues and roundtables,’ explains Charles Ovink, Political Affairs Officer responsible for the project at ODA.
For further information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr Charles Ovink, Political Affairs Officer, at email@example.com.