This curated blog series aims to raise the profile of efforts that cross boundaries between the civilian-focused ‘responsible AI’ community and the arms control and non-proliferation communities. The series provides a platform to disseminate insights, ideas and solutions for identifying and addressing risks associated with the misuse of civilian AI, reflecting the diversity of thought and perspectives in the AI field. If you are interested in contributing as a guest author to this series, please contact us at

#3 The CELL approach: What we can learn from how a working group on Issues of AI and Autonomy in Defense Systems operates

By Lisa Titus & Ariel Conn

As a society we are rapidly acknowledging the importance of Responsible AI, but putting that commitment into practice can be a fraught endeavour. It can be risky for practitioners, policy makers, and concerned members of the public to ask hard questions or to admit when there is something we don’t understand. Moreover, issues related to the responsible design, development, and use of emerging technologies are often politicized, which can make it difficult to connect and make progress on their regulation and governance. Together, these factors can lead us to collectively ignore important questions, or to keep discussions at too high a level of generality to enable progress towards useful and relevant solutions. It is therefore highly important that we not only try to address ethical questions related to AI use, but that we innovate on our methods for collectively making progress on these questions. This post introduces a new process for groups to use in grappling with ethical questions, which I call CELL, and offers lessons learned from our experience, highlighting items to consider when addressing these topics.

Motivation from Autonomous Weapons Systems

The above represent just some of the challenges I experienced while working on one of today’s most pressing issues: the ethics of autonomous weapons systems. In collaboration with colleagues at the University of Pennsylvania, I organized a conference on the topic at the International Conference on Robotics and Automation (ICRA) in 2022, and also did some informal advising, helping leaders in Penn’s General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory address student concerns about the ethics of autonomous weapons systems and how it might relate to their work. I am a philosopher of artificial intelligence and robotics, and I was surprised to see how afraid the next generation of engineers was to engage on these issues, especially in a public setting like a conference. Working on these issues, I also saw how legal and policy experts often struggled to understand and apply their expertise to cutting-edge systems.

On the heels of these experiences, I had the opportunity to participate in the kind of working group that I believe can really help us resolve these challenges. This working group was established by the Standards Association of the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organization. Its Standards Association is a global leader in setting standards and best practices for technology. Our group was originally convened to add a technical perspective to issues related to the ethics of increased autonomy in defense systems, but through the integration of expert perspectives from different fields over an extended period of time, I realized that the value-add went far beyond this objective.

Led by Ariel Conn, Ingvild Bode, and Rachel Azafrani, this Industry Connection Research Group on Issues of Autonomy and AI in Defense Systems first produced a whitepaper in 2021 outlining ethical and technical challenges in the development, use, and governance of autonomous weapons systems. I joined the group in 2022. Our task was to better understand the challenges identified in the 2021 paper and to increase clarity on how to start developing solutions. We have a forthcoming whitepaper that examines the challenges posed at each stage of the lifecycle of an autonomous weapon system: its development, use, refinement, and retirement. I hope that many stakeholders will find our whitepaper useful. What I want to do here is to help the working group make a different contribution: to articulate the benefits of the CELL process we employed and to motivate its adoption in other cases where cutting-edge AI technology meets complex ethical, legal, social, and policy issues.


This working group used what I call the CELL process: it was CLOSED-door; it included EXPERTS and stakeholders from different disciplines; it met for a LONG period of time; and it was structured by a LIFECYCLE analysis, which starts with the design and development of a technology and tracks it through various stages all the way to implementation, refinement, and ultimately retirement.

Why a CLOSED-door environment?

With issues as charged as the ethics of autonomous weapons systems, being able to ask questions and try out ideas in an environment where you are not worried about repercussions is critical, so ensuring a “safe space” for such discussion is key to this approach. Closed-door in this sense simply means that the meetings are not open to the public and that participants can speak freely, without worry that ideas will be attributed against their will. In our case, we were able to get clarification on technical aspects of relevant technologies, ask questions about factual history, and get off-the-record assessments of real-world events. We were able to give our unfiltered opinions and receive honest feedback about the way we were thinking through the issues. We were able to be honest and vulnerable about what we didn’t know. While transparency in decision-making is of course important, and there are many other valid approaches with a mixture of elements, the charged nature of these issues means we also need safe spaces of this type, especially in the earlier stages of understanding and addressing these issues.

Why an interdisciplinary group of EXPERTS and stakeholders?

Nobody should claim to understand everything about the wave of AI transformation that we are currently riding. The level of expertise we need to understand how these cutting-edge technologies work requires multiple degrees and the time to keep up with their rapid evolution. Similarly, understanding today’s complex social, legal, and political landscape that is being affected by these technologies requires deep knowledge and experience. We need experts in all of these areas, from diverse backgrounds, and we also need experts at helping people communicate with one another, drawing global insights, and developing a shared vocabulary and knowledge base. (By the way, this is something philosophers of artificial intelligence in particular are well-positioned to offer, but that’s a topic for a different blog post.)

Why a LONG-term format like a months- or years-long working group?

In order for participants to learn from one another and deepen their understanding of the different facets of the issues, we need repeated meetings over an extended period of time. This develops trust and rapport within the group and also allows time for information to sink in, for people to learn new concepts and vocabulary, and for the members to begin applying what they’ve learned from the group to their own everyday projects within their own domains of expertise.

Why a LIFECYCLE analysis?

To keep conversations relevant to the goal of making progress on ethical, social, and political issues related to the ethics of AI technology (in our case autonomous weapons systems), we need a structure that helps us get down into the specifics: to see where technical information might be crucial to governance discussions, and to understand where opportunities for improving processes and guardrails might lie. While there are perhaps many ways to do this, I found the structure of a lifecycle analysis particularly useful. Focusing on lifecycle stages helped us look at both the technical and the governance issues that arise at each stage, and it helped us identify common issues that run throughout the lifecycle. By iteratively drilling down into each stage and zooming out to connect it to the lifecycle as a whole, the process helped experts ask questions that expanded their current understanding. The process also gave the experts concrete issues and problems to apply their expertise to, helping us all better understand the technical as well as the legal, ethical, and other risks. Because of this, the lifecycle analysis paves the way for action-guiding, relevant policy.

Using the CELL approach

One example of how CELL helped to advance our group’s discussions was in thinking through issues related to Human-System Integration (HSI). Participants who were military and legal experts from different countries helped the group understand that systems need to be designed so that commanders and operators can foresee whether the effects of an attack would be lawful. Attention to the lifecycle helped us identify that this ethical requirement generates other requirements at earlier stages: for example, designers need to understand and take into account the kinds of real-world contexts in which their systems might be used, as well as the psychological and other factors that might affect the operation of a system in real-time contexts. Once this point was made, engineers and other group members with technical expertise could weigh in on the feasibility of various requirements for design plans and testing and monitoring protocols. Depending on those assessments, legal experts could again weigh in on whether the conceptualized protocols indeed satisfied legal conditions. This kind of interaction was enormously helped by having a closed-door group of experts carry out a lifecycle analysis over a long time period.

In this way, attention to the lifecycle enabled experts to both contribute at key moments and ask clarifying questions of other members. The closed-door nature of these discussions enabled members to do this work freely and to give their honest assessments about lawfulness or technical feasibility. Finally, it would have been impossible to generate the depth of discussion or understanding our group achieved without meeting iteratively over a long time frame so that experts from different fields could incorporate and respond to the insights of their peers.

I hope that readers interested in the CELL process will look out for our forthcoming whitepaper, “Lifecycle Framework for Autonomy and AI in Defense Systems,” which is soon to be published through the IEEE SA. In it, you’ll see this model at work across the stages of the lifecycle of a hypothetical uncrewed underwater vehicle (UUV).

Beyond defense: other cases for the CELL approach

I think CELL would be useful in many other kinds of applications as well, especially those involving generative AI. For example, perhaps it could help us avoid incidents like the one that recently occurred with Google Gemini. When prompted to provide an image of a 1943 German soldier, Gemini produced images that included a Black soldier and an Asian woman soldier. This is not only historically inaccurate but deeply offensive, considering the horrific racism of Nazi Germany during World War II. The designers were looking to avoid racial and other biases in image generation generally (for example, always producing a picture of a white man when asked for a doctor), which is important, but the result was that Gemini’s image generation feature produced several instances of this kind of offensive historical inaccuracy.

To help avoid issues like this in the future, it is important to enable participation from diverse backgrounds and underrepresented groups, as well as experts in the humanities who have deep knowledge of the history, social impact, and manifestations of racism and other kinds of bias. Such experts could more readily anticipate situations that might require nuanced treatment and help frame problems and articulate solutions. But they need to be in long-term, closed-door conversation with engineers and governance experts so that the team can frankly discuss potential concerns and collectively engage in iterative problem-solving. For example, focused attention on the technical details of the use stage of Gemini’s image generator might have led to discussions about ethical issues related to prompt transformation, in which the system takes the user’s prompt and transforms it into a modified one before feeding it into the main model. Prompt transformation processes can make system outputs both more useful and more ethical, but they need to be responsive to contextually determined ethical demands, such as the need for an accurate portrayal of demographic characteristics of Nazi soldiers. A diverse group of experts, meeting over a long period of time in a closed-door environment to discuss ethical and related technical issues over the lifecycle of the system, may very well be able to improve outcomes for Gemini and its users.
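To make the prompt-transformation idea concrete, here is a minimal sketch of such a step. This is a hypothetical illustration, not Gemini’s actual pipeline: the function name, the marker list, and the appended instructions are all my own assumptions; real systems would use trained classifiers rather than keyword matching.

```python
def transform_prompt(user_prompt: str) -> str:
    """Rewrite a user's image prompt before it reaches the image model.

    Hypothetical sketch of a prompt-transformation step: detect context,
    then decide whether to inject diversity instructions or demand
    historical accuracy instead.
    """
    # Markers suggesting the prompt refers to a specific historical
    # setting where demographic accuracy is a contextual ethical demand.
    historical_markers = ["1943", "nazi", "medieval", "founding fathers"]
    is_historical = any(m in user_prompt.lower() for m in historical_markers)

    if is_historical:
        # Contextual demand: portray the period accurately.
        return user_prompt + ", historically accurate depiction"
    # Generic prompts (e.g. "a doctor") get diversity instructions to
    # counteract dataset bias.
    return user_prompt + ", depicting a diverse range of people"


print(transform_prompt("a doctor in a hospital"))
print(transform_prompt("a 1943 German soldier"))
```

The design point is that the transformation cannot be one-size-fits-all: it must branch on context, which is exactly where interdisciplinary expertise in a CELL-style group could shape the branching logic.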

This is just one example of how the CELL approach can be applied to challenging ethical issues in the development of technology beyond autonomous weapons systems. I believe it could be adapted for many other technologies as well, and so will prove widely useful. And yet, I think that we are still at the beginning. As companies, governments, and other stakeholders work to keep pace with technological – and especially AI – development, we’ll need to keep innovating our assessment processes alongside our technologies.

#2 Unpacking the concerns around AI and biotechnology

By Jules Palayer

The risks stemming from the convergence of AI and biotechnology received a lot of attention in the international conversation on the governance of AI in 2023. The Bletchley Declaration, issued during the UK AI Safety Summit on 1 November 2023, pointed to the need to address the “substantial risks” posed by AI in the domain of biotechnology. One day before that, the G7 issued a Code of Conduct for actors developing advanced AI that emphasised the importance of identifying, evaluating, and mitigating the risks stemming from the use of AI in biology. In addition, interest in this topic from the research community reached a new peak in 2023 with the publication of numerous blog posts and reports. Why has the use of AI in biology been elevated to the top of the policy agenda in 2023?

This blog post offers some keys to understanding why this topic recently became central to the policy and expert debate on the risks of misuse and diversion of (civilian) AI. It discusses where these concerns come from, whether they are founded, and how the AI community can help address them.

What is it all about? Biosecurity risk in the age of AI

All the recent policy declarations and reports published on the topic imply that advances of AI in the bio sector will improve medicine, pharmacology, bioengineering, and our understanding of biological processes. However, they also signal that the same advances could be misused for bioterrorism, biological warfare, and bio-crime.

These worries are not new. In fact, biosecurity pundits have been discussing for years whether AI – along with other technological advances like synthetic biology – could facilitate the development and production of harmful bio-agents and their delivery mechanisms. The Stockholm International Peace Research Institute already took stock of that expert discussion in a 2019 report. So, how come the discussion around this risk (re)gained traction in 2023?

Why now? LLMs and Generative AI

Much of the renewed interest is linked to the recent breakthroughs in generative AI, especially in the form of chatbots powered by Large Language Models (LLMs). These advances are expected to provide great opportunities for medicine and biological research overall. For instance, although some ethical and technical challenges remain, chatbots could improve healthcare in many ways, from introducing virtual nurses to supporting medical diagnosis. In addition, generative AI could boost drug discovery, help create new biological entities, and enhance existing ones. However, these advancements could also be misused in ways that would exacerbate existing, or create new, biosecurity risks. Experts worry about two scenarios in particular.

The first is that chatbots powered by LLMs could make the development and dissemination of harmful bio-agents easier. These tools, trained on large textual datasets, could reportedly lower the barrier to access critical knowledge to produce harmful bio-agents and help ill-intended individuals troubleshoot problems during the development of pathogens. The fact that unregulated chatbots are widely available is considered an aggravating factor. It could allow a larger pool of actors to access critical information about the development and dissemination of bio-agents.

The second is that generative AI tools used for pharmaceutical research and bioengineering could be misused to create new harmful bio-agents or make existing ones more lethal, contagious, or antibiotic resistant. If the intent were there, there is a wide range of AI-enabled biological tools that could be misused. For instance, resourceful state or non-state actors could exploit the possibilities AI brings for the comprehension of biological processes for the development of more harmful or more targeted bioweapons.

These misuse scenarios seem generally undisputed; the extent to which the international community should worry about them is, however, subject to more debate.

So, how worried should we be? Low likelihood, high impact

In December 2023, the Responsible Innovation in AI for Peace and Security project organised a multi-stakeholder dialogue on the ‘Risk stemming from the use of AI in Biology’, which gathered a group of experts to reflect on that very question in light of the recent policy interest in the topic. The key takeaway was that the international community should neither overestimate nor underestimate these risks. Here is why:

The use of LLM-powered chatbots to create harmful bio-agents has been the subject of recent experiments at MIT and at RAND. In these scenario exercises, participants with limited scientific and technical knowledge were asked to use chatbots to help them plan a bio-attack. Both experiments pointed to elements that could potentially enhance an actor’s capacity to engage in the production of a bio-agent for harmful purposes. However, they mostly shed light on the limitations of this technology.

Chatbots do not give access to all the technical and tacit knowledge necessary for the development and production of harmful bio-agents. These chatbots are trained on data openly available on the internet, and fortunately not all the information necessary for such an ill-intended enterprise is available online. Moreover, the output of such tools may not be fully reliable. Chatbots are known to ‘hallucinate’: they can generate statements that have no scientific foundation. In the context of the production of harmful bio-agents, this unreliability could entail major safety risks and create insurmountable technical bottlenecks in the development process for non-experts. The malicious use of chatbots by state and non-state actors to develop and produce harmful bio-agents would therefore require not only highly skilled individuals but also access to more complex facilities and equipment than just the chatbot.

The importance of expert knowledge is also central in the second scenario. Few people have the expertise, skills, and access to the equipment required to make AI-powered bioengineering work, and fewer will be able and willing to use these resources for harmful purposes. Moreover, many technical bottlenecks in the development of bio-agents remain. For example, one of the most important hurdles in developing a pathogen using AI is to synthesise a viable living organism in the physical world from digital information.

In sum, advances in AI may generate new misuse cases, but they do not by themselves fundamentally increase the likelihood of bio-crime, bioterrorism, and biological warfare. With or without AI, the development and use of harmful bio-agents remains a complex and dangerous endeavour reserved for highly motivated actors who can mobilize significant financial and technical resources.

That said, the fact that AI does not radically transform the risk equation does not mean that the AI community should turn a blind eye to the potential misuses of AI in biology.

What can the AI community do about the misuse of AI?

The AI community can help to strengthen bio governance and contribute to a culture of biosecurity and biosafety, centered on but not limited to the misuse scenarios explored here. It can take practical steps both in the development and in the deployment of AI models that could pose biosecurity risks.

In the development of models, these steps can take the form of risk assessment processes, which may lead to technical fixes. For example, the experiments at RAND and MIT showed how red-teaming exercises could help foresee how actors would seek to retrieve harmful information from chatbots. Another avenue to mitigate the risks posed by LLM chatbots is data curation. This would consist of removing data likely to contain critical information for the development or enhancement of a bio-agent, for example from peer-reviewed articles in the field of virology or gene editing. Finally, to prevent open-source models from being accessed and retrofitted for malicious purposes, a potential solution is to create self-destruct codes that activate if the model is tampered with.
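As a rough illustration of the data curation step, the sketch below filters a training corpus against a list of flagged terms. The term list, document format, and function names are placeholders of my own invention; a real biosecurity screen would rely on expert-maintained lists and trained classifiers rather than substring matching.

```python
# Hypothetical pre-training curation pass: drop documents that match
# a curated list of biosecurity-sensitive terms before the corpus is
# used to train an LLM. The flagged terms here are illustrative only.
FLAGGED_TERMS = ["gain-of-function protocol", "pathogen enhancement"]


def is_safe(document: str) -> bool:
    """Return True if the document matches none of the flagged terms."""
    text = document.lower()
    return not any(term in text for term in FLAGGED_TERMS)


def curate(corpus: list[str]) -> list[str]:
    """Keep only documents that pass the screen, reporting how many fell out."""
    kept = [doc for doc in corpus if is_safe(doc)]
    print(f"removed {len(corpus) - len(kept)} of {len(corpus)} documents")
    return kept


corpus = [
    "A history of vaccination campaigns.",
    "Step-by-step pathogen enhancement notes.",  # filtered out by the screen
]
curated = curate(corpus)
```

Even this toy version shows the central trade-off: every term added to the screen also removes legitimate scientific material, which is why curation decisions benefit from biosecurity expertise rather than being left to engineers alone.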

The deployment of the technology offers additional avenues for intervention. Private companies and researchers developing generative AI tools may, for instance, restrict access through know-your-customer mechanisms, licensing, and other best practices. Start-ups, small laboratories, and companies with limited biosecurity risk expertise can benefit from resources like biosecurity risk assessment tools or the Common Mechanism for DNA synthesis screening to help mitigate potential risks. In the same vein as the Tianjin Biosecurity Guidelines, AI practitioners should work together to create and implement guidelines that foster a culture of responsibility and protect against the misuse of AI in biology. Additionally, the AI community should proactively engage with national and international institutions and conventions, such as the Biological Weapons Convention (BWC), to ensure that regulatory bodies are aware of potential risks and can take appropriate actions to mitigate them.

Much of the recent focus on the risks stemming from the convergence of AI and biotechnology can be explained by the recent breakthroughs in generative AI. The conversation on the possible misuse of generative AI has cast the conversation on biosecurity in a new light. Generative AI has not yet fundamentally transformed the risk of biological weapons use, but its potential misuse in that space is not a risk that should be ignored, especially since it would not take much for the AI community to help prevent it.

#1 The misuse of (civilian) AI could threaten peace and security, and the AI community can do something about it

By Charles Ovink and Vincent Boulanin

In March 2022, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.

The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.

The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue States, non-State armed groups, criminal organizations or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.

People working in the field of life sciences are already well attuned to the problem of misuse of peaceful research, thanks to decades of engagement between arms-control experts and scientists.

The same cannot be said of the AI community, and it is well past time for it to catch up.

We serve with two organizations that take this cause very seriously, the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute. We’re trying to bring our message to the wider AI community, notably future generations of AI practitioners, through awareness-raising and capacity-building activities.

AI can improve many aspects of society and human life, but like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination, and a host of other possibilities. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150, by some accounts—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.

The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as health care, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyberattacks, terrorism, or military operations is rarely considered, unless very superficially.

This is a major gap in the conversation on responsible AI that must be filled.

Most of the actors engaged in the responsible AI conversation work on AI for purely civilian end uses, so it is perhaps not surprising that they overlook peace and security. There’s already a lot to worry about in the civilian space, from potential infringements of human rights to AI’s growing carbon footprint.

AI practitioners may believe that peace and security risks are not their problem, but rather the concern of States. They might also be reluctant to discuss such risks in relation to their work or products due to reputational concerns, or for fear of inadvertently promoting the potential for misuse.

The diversion and misuse of civilian AI technology are, however, not problems that the AI community can or should shy away from. There are very tangible and immediate risks.

Civilian technologies have long been a go-to for malicious actors, because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.

The fact that AI is an intangible and widely available technology with great general-use potential makes the risk of misuse particularly acute. In the cases of nuclear power technology or the life sciences, the human expertise and material resources needed to develop and weaponize the technology are generally hard to access. In the AI domain there are no such obstacles. All you need may be just a few clicks away.

As one of the researchers behind the chemical weapon paper explained in an interview: “You can go and download a toxicity data set from anywhere. If you have somebody who knows how to code in Python and has some machine-learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic data sets.”

We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences potentially far-ranging. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyberattacks and disinformation—and now, for the first time, in warfare. During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian president Volodymyr Zelenskyy telling his troops to surrender.

The weaponization of civilian AI innovations is also one of the most likely ways that autonomous weapons systems (AWS) could materialize. Non-State actors could exploit advances in computer vision and autonomous navigation to turn hobby drones into homemade AWS. These could not only be highly lethal and disruptive (as depicted in the Future of Life Institute’s advocacy video Slaughterbots) but also very likely violate international law, ethical principles, and agreed standards of safety and security.

Another reason the AI community should get engaged is that the misuse of civilian products is not a problem that States can easily address on their own, or purely through intergovernmental processes. This is not least because governmental officials might lack the expertise to detect and monitor technological developments of concern. What’s more, the processes through which States introduce regulatory measures are typically highly politicized and may struggle to keep up with the speed at which AI tech is advancing.

Moreover, the tools that States and intergovernmental processes have at their disposal to tackle the misuse of civilian technologies, such as stringent export controls and safety and security certification standards, may also jeopardize the openness of the current AI innovation ecosystem. From that standpoint, not only do AI practitioners have a key role to play, but it is strongly in their interest to play it.

AI researchers can be a first line of defence, as they are among the best placed to evaluate how their work may be misused. They can identify and try to mitigate problems before they occur—not only through design choices but also through self-restraint in the diffusion and trade of the products of research and innovation.

AI researchers may, for instance, decide not to share specific details about their research (the researchers that repurposed the drug-testing AI did not disclose the specifics of their experiment), while companies that develop AI products may decide not to develop certain features, restrict access to code that might be used maliciously, or add by-design security measures such as antitamper software, geofencing, and remote switches. Or they may apply the know-your-customer principle through the use of token-based authentication.
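A minimal sketch of the know-your-customer idea via token-based authentication follows. The vetting step, token store, and function names are all hypothetical; the point is only that every request becomes attributable to a vetted customer, enabling audit and revocation.

```python
import secrets

# Hypothetical registry: tokens are issued only after a customer has
# been vetted (identity checks, stated use case, etc.).
_issued_tokens: dict[str, str] = {}


def issue_token(customer_id: str) -> str:
    """Issue an API token to a customer who has passed vetting."""
    token = secrets.token_hex(16)
    _issued_tokens[token] = customer_id
    return token


def handle_request(token: str, query: str) -> str:
    """Serve a model query only for authenticated, attributable callers."""
    customer = _issued_tokens.get(token)
    if customer is None:
        return "error: unknown token - access denied"
    # Every served request can be logged against a known customer,
    # which supports later audit and token revocation.
    return f"result for {customer}: processed {query!r}"


token = issue_token("lab-042")
print(handle_request(token, "benign query"))
print(handle_request("forged-token", "benign query"))
```

Gating access this way does not stop a determined insider, but it raises the cost of anonymous misuse, which is exactly the kind of partial, by-design measure the paragraph above describes.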

Such measures will certainly not eliminate the risks of misuse entirely—and they may also have drawbacks—but they can at least help to reduce them. These measures can also help keep at bay potential governmental restrictions, for example on data sharing, which could undermine the openness of the field and hold back technological progress.

To engage with the risks that the misuse of AI poses to peace and security, AI practitioners do not have to look further than existing recommended practices and tools for responsible innovation. There is no need to develop an entirely new tool kit or set of principles. What matters is that peace and security risks are regularly considered, particularly in technology-impact assessments. The appropriate risk-mitigation measures will flow from there.

Responsible AI innovation is not a silver bullet for all the societal challenges brought by advances in AI. However, it is a useful and much-needed approach, especially when it comes to peace and security risks. It offers a bottom-up approach to risk identification, in a context where the multipurpose nature of AI makes top-down governance approaches difficult to develop and implement, and possibly detrimental to progress in the field.

Certainly, it would be unfair to expect AI practitioners alone to foresee and to address the full spectrum of possibilities through which their work could be harmful. Governmental and intergovernmental processes are absolutely necessary, but peace and security, and thus all our safety, are best served by the AI community getting on board. The steps AI practitioners can take do not need to be very big, but they could make all the difference.

Authors’ note: This post was originally published by IEEE Spectrum in August 2022. All content is the responsibility of the authors, and does not necessarily reflect the views of their organizations.

Funding Statement

This programme was made possible by generous support from the European Union.