On the margins of the First Committee of the General Assembly, the United Nations Institute for Disarmament Research (UNIDIR) and the United Nations Office for Disarmament Affairs (UNODA) co-hosted a side-event titled "Introduction to Responsible AI for Peace and Security".
While the global AI policy landscape is still at a nascent stage, many stakeholders share the goal that AI be designed, developed, deployed, and used in a responsible manner, in accordance with legal and ethical values and for the benefit of society. To this end, governments, international organizations, private sector entities and civil society members are developing tools such as principles, standards, codes of conduct and other normative instruments to guide AI design, development, and use. This approach to AI governance is broadly known as Responsible AI, or ethical or trustworthy AI. However, Responsible AI remains a young and evolving field of research and practice. While it is widely debated as a fitting approach to AI governance, more work is needed to understand how it can be put into practice across critical sectors, how it relates to disarmament, peace and security challenges, and how the many different approaches can be coordinated.
Jerome Larosch, Deputy Head of the Non-proliferation, Weapons Control and Disarmament Unit of the Ministry of Foreign Affairs of the Kingdom of the Netherlands, welcomed delegates and highlighted the importance of engaging with and understanding approaches to responsible AI.
The side-event was moderated by Giacomo Persi Paoli, Programme Head of the Security and Technology Programme at UNIDIR, and featured presentations by Abhishek Gupta, Founder and Principal Researcher at the Montreal AI Ethics Institute and Senior Responsible AI Leader & Expert at the Boston Consulting Group; Charles Ovink, Political Affairs Officer with UNODA; and Alisha Anand, Associate Researcher with the Security and Technology Programme at UNIDIR. Together, they introduced the audience to Responsible AI and its relevance to peace and security.
Abhishek Gupta provided industry and civil society perspectives on AI itself and on responsible practices around its development and use. Charles Ovink followed by outlining how responsible AI relates to the military domain, what it means for international peace and security, and how it can respond to disarmament, peace and security challenges. He also reflected on UNODA's work in this regard through the Responsible Innovation in AI for Disarmament, Arms Control and Non-Proliferation project, made possible by a generous contribution from the Republic of Korea. Alisha Anand then presented the findings of UNIDIR's ongoing project on responsible AI efforts, which mapped and comparatively analyzed national AI principles.
The side-event concluded with an opportunity for questions to the panellists, which led to a lively discussion on the implications of the diversion and misuse of AI tools and the importance of multi-stakeholder approaches to responsible development and use.
For further information on the Responsible Innovation and AI activities of the Office for Disarmament Affairs, please contact Mr. Charles Ovink, Political Affairs Officer, at email@example.com.