On 7 March 2023, UNODA Geneva hosted a webinar on the notion of “meaningful human control” with regard to emerging technologies in the area of lethal autonomous weapons systems (LAWS). Financially supported by the European Union, the webinar was organized as a side event to this year’s first session of the Group of Governmental Experts (GGE) on LAWS.
The notion of “human control” with regard to weapons systems is not new. It has been referred to by States, civil society, academic institutions, and international organizations alike. The term “meaningful human control” was first introduced in the LAWS discussions in the framework of the CCW in 2013 and has since gained traction as a framing concept for the Group’s discussions on autonomy in weapons systems.
This webinar unpacked the concept of “meaningful human control” by exploring its meaning, operationalization, testing, and real-life applications. It presented the views of professionals conducting the most recent academic and applied research on meaningful human control and working on concrete products and services, with the aim of deepening participants’ understanding of a term that has become central to the discussions of the GGE on LAWS.
The moderator, Ms. Tania Banuelos Mejia (Political Affairs Officer, UNODA), welcomed all participants, briefly introduced the topic, and outlined the webinar’s structure. She then introduced her co-moderator, Ms. Iona Puscas (Researcher on AI, UNIDIR), and welcomed Ms. Lene Lindholdt Hove Rietveld, Senior Expert of the Department for Defence and Security Policy of the European External Action Service (EEAS). In her opening remarks, Ms. Lindholdt Hove Rietveld elaborated on the EU’s 2021 decision to strengthen and support the CCW. She emphasized that the GGE is the best forum for discussions on LAWS and the challenges related to this issue, and then highlighted that the concept of meaningful human control is a central issue requiring further clarification, as it poses a requirement to ensure that systems comply with international humanitarian law (IHL).
The first speaker, Ms. Petra Rešlová (Research Assistant, Institute of International Relations Prague), addressed the guiding question of how to determine a sufficient level of meaningful human control. To answer it, Ms. Rešlová presented factors that influence the degree to which human control can be considered meaningful. These include (1) technological aspects, e.g., related to a line of communication between the system and a human operator; (2) conditional aspects, e.g., target restrictions; and (3) decision-making aspects, which concern the human factor and elements, such as biases, that may influence the quality of decision-making.
The second speaker, Mr. Jonathan Kwik (Faculty of Law, University of Amsterdam), aimed at establishing a framework with concrete and workable solutions for military practice, for which defining meaningful human control is crucial. Mr. Kwik emphasized that his framework is conceptualized around two central elements: the weapon system and the operational environment. In this regard, it is important to recognize that, as a two-directional framework, it must take into account both the impact of the system on the environment and the impact of the environment on the system. In his research, Mr. Kwik identified areas related to meaningful human control where convergence and divergence exist and built his framework accordingly. Consequently, four facets emerged that are essential for establishing and maintaining meaningful human control: (1) understanding/awareness; (2) involvement in the life-cycle; (3) prediction; and (4) accountability.
Ms. Iona Puscas then introduced the second panel, which focused on how human control can be achieved in practice. The first speaker of the second panel, Mr. Jurriaan van Diggelen (Senior Research Scientist on Artificial Intelligence and Program Leader on Human-Machine Teaming, TNO), analyzed how meaningful human control over a system can be proven. Meaningful human control can be a means to ensure that system behavior is aligned with human values and that misbehavior can be compensated for by humans. Measures for this include, e.g., ensuring that human operators have situational awareness, comprehend the system, and can predict outcomes. Mr. van Diggelen described examples to demonstrate precisely when meaningful human control can be retained and when it can no longer be retained. Meaningful human control can, for example, no longer be guaranteed when a change occurs in the environment for which the system has not been sufficiently trained. It can, however, be retained in such situations if the system has been sufficiently trained to operate in changing environments.
The second speaker, Mr. David Sadek (VP, Research, Technology & Innovation at Thales – AI & Information Processing), was asked by Ms. Puscas to sketch strategic principles for trustworthy AI. Mr. Sadek emphasized that meaningful human control cannot exist without trustworthy technology, especially AI technology. For his framework, Mr. Sadek elaborated on four criteria: (1) validity, which concerns, e.g., the seamless execution of tasks by a system; (2) explainability, which concerns the ability of a system to provide understandable justifications for its decision-making process; (3) security, which concerns the vulnerability of a system and the importance of designing resilient systems; and (4) responsibility, which concerns the importance of ensuring compliance with legal and ethical regulatory frameworks.
During the Q&A session, moderated by Ms. Banuelos Mejia and Ms. Puscas, speakers discussed the relevance and implications of a constant line of communication between a system and a human operator as a safeguard to ensure meaningful human control, while considering situations in which communication may be broken. In such instances, measures such as training to guarantee meaningful human control are crucial. Aspects related to time-critical decisions during military operations were also highlighted, as well as the relevance of trust. Speakers further discussed risk-mitigation measures related to human supervision of autonomous weapons systems, such as recognizing automation bias and ensuring adequate training so that operators can understand a system’s decision-making process.
Overall, the webinar provided valuable insights into how meaningful human control can be understood and how specific factors can influence the quality of human control. The examples and potential real-life scenarios illustrated by the speakers helped the audience better understand hypothetical use cases and related ethical dilemmas. The Q&A session, though brief, was equally insightful; extending its time frame could have been a good opportunity to discuss specific aspects more closely or generate a discussion on relevant challenges.
Story drafted by Clarissa Neder
UNIDIR. 2014. “The Weaponization of Increasingly Autonomous Technologies: Considering how Meaningful Human Control might move the discussion forward.” [Online] Available at: https://unidir.org/sites/default/files/publication/