Experts explore the legal and ethical implications of algorithmic bias

On 26 July 2022, a webinar convened by the United Nations Office for Disarmament Affairs (UNODA), under “Project 3: Facilitation of discussions on under-explored, emerging & cross-cutting issues of relevance to the CCW”, gathered experts to delve into the legal and ethical implications of algorithmic bias in the area of lethal autonomous weapons systems (LAWS).

The webinar was opened by its moderator, Mr. Vincent Boulanin, Director of the Governance of Artificial Intelligence Programme at the Stockholm International Peace Research Institute. Mr. Boulanin underscored the increasing significance of algorithmic bias, not only in civilian domains but also within the intricate landscape of military applications. He highlighted the societal challenges associated with biased algorithms and articulated the need for a comprehensive understanding of their technical, legal, ethical, and governance implications in military technologies. Mr. Jacek Sawicz, Deputy Head of Mission at the Consulate of the Republic of Bulgaria, expressed gratitude to UNODA for coordinating the seminar, underlining the significance of the topic.

During the webinar, Dr. Julia Stoyanovich, Associate Professor in the Department of Computer Science and Engineering at New York University's Tandon School of Engineering and the Center for Data Science, explored the technical intricacies of algorithmic bias, categorizing it into pre-existing, technical, and emergent biases. Her presentation underscored the challenges arising from biased data and its potential to lead to catastrophic harm.

Mr. Jonathan Kwik, Academic Partner of the International Committee of the Red Cross, navigated the legal implications of algorithmic bias during the webinar. He clarified that bias, per se, may not be inherently bad or illegal, especially within the framework of International Humanitarian Law. Mr. Kwik delved into the legal challenges associated with biases, particularly false positives, stressing concerns about the availability and quality of military training data. The tension between upholding legal requirements and ensuring an effective system poses a significant challenge, highlighting the delicate balance needed in the development and deployment of military AI.

The webinar also addressed the question of epistemology and ethical human-AI interaction. Dr. Kate Devitt, Adjunct Professor at the QUT Centre for Robotics, highlighted the significance of understanding AI biases through extensive training and background experiences, especially in military systems. Dr. Devitt’s exploration touched upon the ethical obligations of being in a specific epistemic state and presented practical ways to assure ethical human-AI systems given biases before, during, and after deployment.

Another panelist during the webinar was Mrs. Ariel Conn, Director of Operations and External Affairs of Global Shield and Consultant on issues relating to autonomous weapons, AI policy and ethics, and catastrophic risk. Mrs. Conn addressed the multifaceted challenges of algorithmic bias in LAWS. She introduced ten categories of such challenges, highlighting the inherent difficulty of completely removing bias and the interconnectedness of algorithmic bias with broader challenges in autonomous weapon systems.

The last panelist, Dr. Liran Antebi, Research Fellow at the Institute for National Security Studies, provided insights into the implications of algorithmic bias for military AI governance, drawing lessons from non-military applications. She discussed efforts to regulate bias in commercial and civil domains, emphasizing the importance of data transparency and outlining critical aspects related to unintentional bias, deliberate data bias, and potential exploitation by non-State actors.

The webinar facilitated a comprehensive exploration of algorithmic bias in military technologies, urging interdisciplinary approaches. Insights shared during the event emphasized the urgency of addressing these challenges to ensure the responsible development and deployment of lethal autonomous weapons systems. Moving forward, a collective effort from policymakers, technologists, and ethicists is imperative to establish frameworks that mitigate bias and uphold ethical standards in the evolving landscape of military AI. The webinar was convened under EU Council decision (CFSP) 2021/1694 in support of the universalization, implementation and strengthening of the Convention on Certain Conventional Weapons (CCW).

Text prepared by Anila Hysaj