Autonomous Weapon Systems: Understanding Learning Algorithms and Bias

October 13th, 2017

On 5 October 2017, the United Nations Institute for Disarmament Research (UNIDIR) hosted a side event, “Autonomous Weapons Systems: Learning Algorithms and Bias,” at the United Nations Headquarters in New York. The event welcomed a panel of four experts for a high-level thematic discussion on data bias in machine learning and its potential effects on everyday technology as well as on militarized weapon systems.

Moderated by Kerstin Vignard, Chief of Operations and Deputy to the Director at UNIDIR, the panel discussion opened with Vignard commenting on the growing awareness of algorithmic bias: the reality that the data used in algorithms is not objective or neutral, as is often assumed. In fact, Vignard explained that recent studies argue that machine learning not only mirrors the biases inherent in the data set from which a machine draws its information, but amplifies them. Vignard also discussed the ways algorithms are inherently biased and how that bias can manifest in autonomous weapon systems.
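
To make the amplification point concrete, here is a minimal sketch in Python, using entirely hypothetical numbers rather than anything presented at the event: a model that simply maximizes accuracy on data containing a modest disparity between two groups ends up producing a far larger disparity in its predictions.

```python
# A hypothetical illustration of bias amplification: a 20-point gap in
# the training data becomes a 100-point gap in the model's output.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labels: group A is "positive" 60% of the time, group B 40%.
labels_a = rng.random(10_000) < 0.60
labels_b = rng.random(10_000) < 0.40

# With no other signal available, a model maximizes accuracy by always
# predicting the majority label within each group.
preds_a = np.ones_like(labels_a)   # always "positive" for group A
preds_b = np.zeros_like(labels_b)  # always "negative" for group B

print(f"positive rate in data:        A={labels_a.mean():.2f}  B={labels_b.mean():.2f}")
print(f"positive rate in predictions: A={preds_a.mean():.2f}  B={preds_b.mean():.2f}")
# The data's 0.60 vs 0.40 disparity is amplified to 1.00 vs 0.00.
```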

Harvard-educated mathematician and author Cathy O’Neil provided an overview of the topic of algorithmic bias. O’Neil argued that the most important element of an algorithm is what it defines as a successful outcome or output from the data it processes. In determining how an algorithm reaches this successful output, the programmer makes value judgements about how the algorithm should process data points. O’Neil observed that “data is a reflection of imperfect, flawed humans”, and success is by extension defined by humans before a machine even begins to compute the data it is provided. This raises ethical and moral questions about what level of programmer-introduced bias is acceptable.
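
O’Neil’s point that success is a human choice can likewise be illustrated with a small, hypothetical sketch (the data, thresholds, and success criteria below are illustrative assumptions, not examples given by the panel): the same model scores yield different “best” decision thresholds depending on which definition of success the programmer selects.

```python
# A hypothetical illustration: the "best" decision threshold depends
# entirely on how the programmer chooses to define success.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=5_000)                            # model scores
truth = rng.random(5_000) < 1 / (1 + np.exp(-2 * scores))  # noisy ground truth

def accuracy(t):
    return np.mean((scores >= t) == truth)

def recall(t):
    return np.mean(scores[truth] >= t)

thresholds = np.linspace(-2, 2, 81)

# Definition 1: success means the highest overall accuracy.
best_by_accuracy = thresholds[np.argmax([accuracy(t) for t in thresholds])]

# Definition 2: success means catching at least 95% of true positives,
# e.g. when a missed detection is judged far worse than a false alarm.
best_by_recall = max(t for t in thresholds if recall(t) >= 0.95)

print(f"threshold under 'maximize accuracy': {best_by_accuracy:+.2f}")
print(f"threshold under 'recall >= 0.95':    {best_by_recall:+.2f}")
# Neither threshold is objective; each encodes a human judgement about
# which kinds of error matter more.
```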

David Danks, Head of the Department of Philosophy and Psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, whose research largely focuses on the intersection of philosophy, cognitive science, and machine learning, furthered O’Neil’s arguments by applying them to current technologies such as autonomous vehicles. Speaking specifically to autonomous weapons systems, Danks highlighted the complexity of coding these systems, noting that international laws and conventions pertaining to war and human rights are ambiguous and vague, which makes it unclear how to appropriately define success algorithmically.

Finally, Ambassador Amandeep Gill of the Permanent Mission of India, who has a background in engineering, cautioned against jumping to conclusions about what makes for good success criteria. Ambassador Gill noted that when working with the complexities of autonomous weapon systems on the battlefield, the human user must possess a deep understanding of the way an autonomous weapon is programmed and why it responds and reacts in a particular way. Without such an understanding, the human user cannot safeguard the machine’s outputs against unaccounted-for algorithmic bias or other malfunctions. Ambassador Gill stated that this enhanced relationship can only come from further testing of autonomous weapon systems and increased training for their users.

Text and Photo by Emily Addison