Blog

This curated blog series aims to raise the profile of efforts that cross boundaries between the civilian-focused ‘responsible AI’ and arms control and non-proliferation communities. The series provides a platform to disseminate insights, ideas and solutions on identifying and addressing risks associated with the misuse of civilian AI, representing the diversity of thought and perspectives present in the AI field. If you are interested in contributing as a guest author to this series, please contact us at charles.ovink@un.org.


#5 Peace and Security as an Ethical Value in the Age of AI: The Place of Civil Society

By Prof. Emma Ruttkamp-Bloem

What would it mean for peace and security to be seen as an ethical value for AI principles? What is necessary for us to have a world in which AI contributes to peace and security? Who is responsible for creating and sustaining that status?

“… since wars begin in the minds of men [sic], it is in the minds of men [sic] that the defenses of peace must be constructed” (UNESCO Constitution[1]).

The use of AI and AI-enabled systems in the military spans various sectors, from information and cybersecurity to surveillance and warfare. The threats and related perils that the use of AI in the military domain holds for peace and security are diverse and, worse, not always immediately apparent.[2] Equally, civilian AI can also be misused with adverse implications for peace and security, whether through political disinformation, cyberattacks, terrorism, military operations or other means, and those working on AI in the civilian sector are too often unaware of the risks that civilian AI technology poses to international peace and security.

There are many multilateral initiatives currently addressing the responsible use of AI technology in service of international peace and security. These can be divided into two broad categories, with the first focusing on military applications of AI. In this category we can count the Global Commission on Responsible AI in the Military Domain (GC REAIM) and the US Department of State-led ‘Political Declaration on Responsible Military Use of AI and Autonomy’. The second category focuses on the responsible development and deployment of AI in the civilian sphere. It includes initiatives such as the EU AI Act, the G7 Hiroshima process on advanced AI systems, the UN Secretary-General’s AI Advisory Body, the OECD AI Principles, the ongoing discussions around the Global Digital Compact and the AI Safety Summit.

While the existing efforts focus principally on States as the central actors in addressing the responsible use of AI, it is crucial to remember that civil society also plays an important role in this area. In fact, it is the resilience of civil society to potential harm from AI that determines how effective any efforts at protection against harm will be, as well as the type and quality of benefits that could be gained from AI.

Certainly, it is first and foremost the duty of governments to ensure international peace and security, but, as pointed out in the UNESCO Constitution, “… a peace based exclusively upon the political and economic arrangements of governments would not be a peace which could secure the unanimous, lasting, and sincere support of the peoples of the world, and … the peace must therefore be founded, if it is not to fail, upon the intellectual and moral solidarity of [human]kind”. In other words, the argument is that peace is not a status that can be provided by governments alone, but a status that comes into being continuously as the result of bottom-up (civil society) reflection and action, as much as of top-down (government) action. In the context of AI and international peace and security, I argue that this resilience of civil society, this ‘social resilience’, should be founded on the “intellectual and moral solidarity” of the kind spoken of in the UNESCO Constitution.

I suggest that international peace and security should be a value in ethical approaches to AI technology. To actualize this principle, intellectual and moral solidarity, in particular,[3] should be demonstrated among members of civil society to enable the kind of resilience needed for peaceful and secure environments in which to pursue lives of wellbeing. That international peace and security has not typically been identified as a value in AI ethics conversations up to now is perhaps the geo-strategic elephant in the room; nonetheless, the time has come for civil society to take up its role as one of the key actors in ensuring international peace and security in the era of AI.

To illustrate why I see intellectual and moral solidarity as the actualizer of international peace and security, I offer an argument from capability theory in welfare philosophy (e.g., Sen 1999[4], Nussbaum 2011[5]) that demonstrates the interconnectedness of all AI actors[6] when we consider what a life of wellbeing in the era of AI entails. Capabilities are elements that are needed to build lives of value. These are combinations of processes, tools, skills, behaviors, and organization, and include, for instance, aspects of human lives such as practical wisdom, control over one’s environment, and bodily health (Nussbaum 2011). Both Nussbaum and Sen view capabilities as close to human rights. Nussbaum (2003)[7] refers to capabilities such as “political liberties, the freedom of association, the free choice of occupation, and a variety of economic and social rights” (Nussbaum 2003, 36). She adds that “capabilities, like human rights, supply a moral and humanly rich set of goals for development” (ibid.).

It is in this sense that Sen (1999) argues that each person in every society has an ‘entitlement’ to all possible combinations of capabilities required for lives of dignity and freedom. For Nussbaum (2011), the entitlements at issue are political, as they place duties on governments to ensure every person has what they need to achieve a life of dignity.

If we now return to the need for intellectual and moral solidarity in ensuring peace, I invite you to think of AI ethics principles, such as the right to privacy, fairness and inclusivity, proportionality, etc., as capabilities that every person requires for a life of meaning and dignity in the era of AI. I invite you to view these capabilities as moral entitlements that place duties on all AI actors (including governments) to ensure that no harm comes to any person while engaging with AI technology during any stage of its lifecycle (research, design, development, deployment, and use).

This means that every member of civil society, in the various roles in which they interact with AI technology – as researcher (e.g., academics), designer or developer (e.g., member of industry), deployer (e.g., as member of a business adopting AI technology, or as member of government), or user (any member of civil society in any role) – should take responsibility for, and be held accountable for, upholding AI ethics principles as building blocks for lives of wellbeing and actualizers of the values that drive them. If international peace and security is a value, and intellectual and moral solidarity the principle that actualizes it, then civil society, of which every AI actor is a member in some capacity, has a crucial role in the pursuit of responsible AI that supports international peace and security.

The use of AI technology can both enable peace and threaten it. An approach that views international peace and security as an ethical value, realized in terms of shared humanity and interconnectedness, promises to mobilize civil society in the quest for responsible AI. If a central responsibility is placed on all those who interact with AI technology to safeguard and protect the rights of all to live lives of value in the era of AI, this might be a very good start to building socially resilient societies. Such societies have the potential to carry the “defenses of peace”[8] in their minds as they will be the ones directly responsible for ensuring the moral entitlements of AI actors to achieve lives of meaning and wellbeing in support of peace.

[1] https://www.unesco.org/en/legal-affairs/constitution

[2] See, for instance, the 2023 Gladstone assessment of security risks from weaponized and misaligned AI technology and the resultant US action plan. https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf

[3] Much can – and should – be written on what is meant by intellectual and moral solidarity. The scope of this essay does not allow for that here, but for my purposes I understand it to mean that humans act from a feeling of interconnectedness and shared humanity. This is reminiscent of philosophies such as Ubuntu and Daoism.

[4] Sen, A. (1999). Development as Freedom, Oxford University Press, New York.

[5] Nussbaum, M.C. (2011). Creating Capabilities: The Human Development Approach, Harvard University Press, Cambridge MA.

[6] I define an AI actor as any person who engages with AI technology during any of the stages of an AI system lifecycle, following the UNESCO Recommendation on the Ethics of AI definition.

[7] Nussbaum, M.C. (2003). Capabilities as Fundamental Entitlements: Sen and Social Justice. Feminist Economics 9(2-3): 33-59.

[8]  https://www.unesco.org/en/legal-affairs/constitution


#4 AI Missteps Could Unravel Global Peace and Security

To mitigate risks, developers need more training

By Vincent Boulanin, Charles Ovink, Julia Stoyanovich, Raja Chatila, Abhishek Gupta, Ludovic Righetti, Frank Dignum, Edson Prestes

This post was originally published by IEEE Spectrum’s “The Institute” on 21 July 2024. All content is the responsibility of the authors, and does not necessarily reflect the views of their organizations.

Many in the civilian artificial intelligence community don’t seem to realize that today’s AI innovations could have serious consequences for international peace and security. Yet AI practitioners—whether researchers, engineers, product developers, or industry managers—can play critical roles in mitigating risks through the decisions they make throughout the life cycle of AI technologies.

There are a host of ways by which civilian advances of AI could threaten peace and security. Some are direct, such as the use of AI-powered chatbots to create disinformation for political-influence operations. Large language models also can be used to create code for cyberattacks and to facilitate the development and production of biological weapons.

Other ways are more indirect. AI companies’ decisions about whether to make their software open-source, and under which conditions, for example, have geopolitical implications. Such decisions determine how states or nonstate actors access critical technology, which they might use to develop military AI applications, potentially including autonomous weapons systems.

AI companies and researchers must become more aware of the challenges, and of their capacity to do something about them.

Change needs to start with AI practitioners’ education and career development. Technically, there are many options in the responsible innovation toolbox that AI researchers could use to identify and mitigate the risks their work presents. They must be given opportunities to learn about such options, including IEEE 7010: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being, IEEE 7007-2021: Ontological Standard for Ethically Driven Robotics and Automation Systems, and the National Institute of Standards and Technology’s AI Risk Management Framework.

What Needs to Change in AI Education

Responsible AI requires a spectrum of capabilities that are typically not covered in AI education. AI should no longer be treated as a pure STEM discipline but rather as a transdisciplinary one that requires technical knowledge, yes, but also insights from the social sciences and humanities. There should be mandatory courses on the societal impact of technology and responsible innovation, as well as specific training on AI ethics and governance.

Those subjects should be part of the core curriculum at both the undergraduate and graduate levels at all universities that offer AI degrees.

If education programs provide foundational knowledge about the societal impact of technology and the way technology governance works, AI practitioners will be empowered to innovate responsibly and be meaningful designers and implementers of AI regulations.

Changing the AI education curriculum is no small task. In some countries, modifications to university curricula require approval at the ministry level. Proposed changes can be met with internal resistance due to cultural, bureaucratic, or financial reasons. Meanwhile, the existing instructors’ expertise in the new topics might be limited.

An increasing number of universities now offer the topics as electives, however, including Harvard, New York University, Sorbonne University, Umeå University, and the University of Helsinki.

There’s no need for a one-size-fits-all teaching model, but there’s certainly a need for funding to hire dedicated staff members and train them.

Adding Responsible AI to Lifelong Learning

The AI community must develop continuing education courses on the societal impact of AI research so that practitioners can keep learning about such topics throughout their career.

AI is bound to evolve in unexpected ways. Identifying and mitigating its risks will require ongoing discussions involving not only researchers and developers but also people who might directly or indirectly be impacted by its use. A well-rounded continuing education program would draw insights from all stakeholders.

Some universities and private companies already have ethical review boards and policy teams that assess the impact of AI tools. Although the teams’ mandate usually does not include training, their duties could be expanded to make courses available to everyone within the organization. Training on responsible AI research shouldn’t be a matter of individual interest; it should be encouraged.

Organizations such as IEEE and the Association for Computing Machinery could play important roles in establishing continuing education courses because they’re well placed to pool information and facilitate dialogue, which could result in the establishment of ethical norms.

Engaging With the Wider World

We also need AI practitioners to share knowledge and ignite discussions about potential risks beyond the bounds of the AI research community.

Fortunately, there are already numerous groups on social media that actively debate AI risks, including the misuse of civilian technology by state and nonstate actors. There are also niche organizations focused on responsible AI that look at the geopolitical and security implications of AI research and innovation. They include the AI Now Institute, the Centre for the Governance of AI, Data and Society, the Distributed AI Research Institute, the Montreal AI Ethics Institute, and the Partnership on AI.

Those communities, however, are currently too small and not sufficiently diverse, as their most prominent members typically share similar backgrounds. Their lack of diversity could lead the groups to ignore risks that affect underrepresented populations.

What’s more, AI practitioners might need help and tutelage in how to engage with people outside the AI research community—especially with policymakers. Articulating problems or recommendations in ways that nontechnical individuals can understand is a necessary skill.

We must find ways to grow the existing communities, make them more diverse and inclusive, and make them better at engaging with the rest of society. Large professional organizations such as IEEE and ACM could help, perhaps by creating dedicated working groups of experts or setting up tracks at AI conferences.

Universities and the private sector also can help by creating or expanding positions and departments focused on AI’s societal impact and AI governance. Umeå University recently created an AI Policy Lab to address the issues. Companies including Anthropic, Google, Meta, and OpenAI have established divisions or units dedicated to such topics.

There are growing movements around the world to regulate AI. Recent developments include the creation of the U.N. High-Level Advisory Body on Artificial Intelligence and the Global Commission on Responsible Artificial Intelligence in the Military Domain. The G7 leaders issued a statement on the Hiroshima AI process, and the British government hosted the first AI Safety Summit last year.

The central question before regulators is whether AI researchers and companies can be trusted to develop the technology responsibly.

In our view, one of the most effective and sustainable ways to ensure that AI developers take responsibility for the risks is to invest in education. Practitioners of today and tomorrow must have the basic knowledge and means to address the risk stemming from their work if they are to be effective designers and implementers of future AI regulations.

Authors’ note: Authors are listed by level of contributions. The authors were brought together by an initiative of the U.N. Office for Disarmament Affairs and the Stockholm International Peace Research Institute launched with the support of a European Union initiative on Responsible Innovation in AI for International Peace and Security.

About IEEE Spectrum

IEEE Spectrum is an award-winning technology magazine and the flagship publication of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. With roots going back to 1884, the IEEE organizes research conferences, publishes engineering journals, and is responsible for major technology standards, including most famously Ethernet and Wi-Fi. Spectrum’s mission has remained the same since its founding in 1964: its charter is to keep the public and over 400,000 IEEE Members informed about major trends and developments in technology, engineering, and science. IEEE news stories, features, special reports, podcasts, videos, and infographics engage readers with clear explanations about emerging concepts and developments with details they cannot get elsewhere.


#3 The CELL approach: What we can learn from the way a working group on Issues of AI and Autonomy in Defense Systems works.

By Lisa Titus & Ariel Conn

As a society we are rapidly acknowledging the importance of Responsible AI, but putting that commitment into practice can be a fraught endeavour. It can be risky for practitioners, policy makers, and concerned members of the public to ask hard questions or to admit when there is something we don’t understand. Moreover, issues related to the responsible design, development, and use of emerging technologies are often politicized, which can make it difficult to connect and make progress on their regulation and governance. The combination of these factors can lead to a collective ignoring of important questions, or to keeping discussions at too high a level of generality to enable progress towards useful and relevant solutions. It is therefore highly important that we not only try to address ethical questions related to AI use, but that we innovate on our methods for collectively making progress on these questions. This post introduces a new process for groups to use in grappling with ethical questions, which I call CELL, and aims to provide a “lessons learned” from our experience that highlights items to consider when addressing these topics.

Motivation from Autonomous Weapons Systems

The above represent just some of the challenges I experienced while working on one of today’s most pressing issues: the ethics of autonomous weapons systems. In collaboration with colleagues at the University of Pennsylvania, I organized a conference on the topic at the International Conference on Robotics and Automation (ICRA) in 2022, and also did some informal advising, helping leaders in Penn’s General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory to address some student concerns about the ethics of autonomous weapons systems and how it might relate to their work. I am a philosopher of artificial intelligence and robotics, and I was surprised to see how afraid the next generation of engineers were to engage on these issues, especially in a public setting like a conference. Working on these issues, I also saw how legal and policy experts often struggled to understand and apply their expertise to cutting-edge systems.

On the heels of these experiences, I had the opportunity to participate in the kind of working group that I believe can really help us resolve these challenges. This working group was established by the Standards Association of the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest professional organization; its Standards Association is a global leader in helping to set standards and best practices for technology. Our group was originally convened to help add a technical perspective to issues related to the ethics of increased autonomy in defense systems, but through the integration of expert perspectives from different fields over an extended period of time, I realized that the value-add went far beyond this objective.

Led by Ariel Conn, Ingvild Bode, and Rachel Azafrani, this Industry Connections Research Group on Issues of Autonomy and AI in Defense Systems first produced a whitepaper in 2021 outlining ethical and technical challenges in the development, use, and governance of autonomous weapons systems. I joined the group in 2022. Our task was to better understand the challenges identified in the 2021 paper and to increase clarity on how to start developing solutions. We have a forthcoming whitepaper that helps us better understand the challenges posed at each stage of the lifecycle of the development, use, refinement, and retirement of an autonomous weapon system. I hope that many stakeholders will find our whitepaper useful. What I want to do here is to help the working group make a different contribution – to articulate the benefits of the CELL process we employed and to motivate its adoption in other cases where cutting-edge AI technology meets complex ethical, legal, social, and policy issues.

CELL

This working group used what I call the CELL process: it was CLOSED-door, it included EXPERTS and stakeholders from different disciplines, it met for a LONG period of time, and it was structured by a LIFECYCLE analysis, which starts with the design and development of a technology and tracks it through various stages all the way to implementation, refinement, and ultimately retirement.

Why a CLOSED-door environment?

With issues as charged as the ethics of autonomous weapons systems, being able to ask questions and try out ideas in an environment where you are not worried about repercussions is critical, and so ensuring a “safe space” for such discussion is key to this approach. Closed-door in this sense simply means that the meetings are not open to the public, and that participants can speak freely, without worries that ideas will be attributed against their will. In our example, we were able to get clarification on technical aspects of relevant technologies, ask questions about factual history, and get off-the-record assessments of real-world events. We were able to give our unfiltered opinions and get honest feedback about the way we were thinking through the issues. We were able to be honest and vulnerable about what we didn’t know. While transparency in decision-making is of course important, and there are many other valid approaches with a mixture of elements, because of the charged nature of these issues we also need safe spaces of this type, especially in the earlier stages of understanding and addressing these issues.

Why an interdisciplinary group of EXPERTS and stakeholders?

Nobody should claim to understand everything about the wave of AI transformation that we are currently riding. The level of expertise we need to understand how these cutting-edge technologies work requires multiple degrees and the time to keep up with their rapid evolution. Similarly, understanding today’s complex social, legal, and political landscape that is being affected by these technologies requires deep knowledge and experience. We need experts in all of these areas, from diverse backgrounds, and we also need experts at helping people communicate with one another, drawing global insights, and developing a shared vocabulary and knowledge base. (By the way, this is something philosophers of artificial intelligence in particular are well-positioned to offer, but that’s a topic for a different blog post.)

Why a LONG-term format like a months- or years-long working group?

In order for participants to learn from one another and deepen their understanding of the different facets of the issues, we need repeated meetings over an extended period of time. This develops trust and rapport within the group and also allows time for information to sink in, for people to learn new concepts and vocabulary, and for the members to begin applying what they’ve learned from the group to their own everyday projects within their own domains of expertise.

Why a LIFECYCLE analysis?

To keep conversations relevant to the goal of making progress on ethical, social, and political issues related to the ethics of AI technology (in our case autonomous weapons systems), we need a structure that helps us get down into the specifics, to see where technical information might be crucial to governance discussions, and to understand where opportunities for improving processes and guardrails might be. While there are perhaps many ways to do this, I found the structure of a lifecycle analysis to be particularly useful. Focusing on lifecycle stages helped us look at both the technical and the governance issues that arise with each stage, and it helped us to identify common issues that run throughout the lifecycle. By iteratively drilling down into each stage and zooming out to connect it to the lifecycle as a whole, the process helped experts ask questions that expanded their current understanding. The process also gave the experts concrete issues and problems to apply their expertise to, helping us all better understand the technical as well as the legal, ethical, and other risks. Because of this, the lifecycle analysis paves the way for action-guiding, relevant policy.

Using the CELL approach

One example of how CELL helped to advance our group’s discussions was in thinking through issues related to Human-System Integration (HSI). Participants who were military and legal experts from different countries helped the group understand that systems need to be designed so that commanders and operators can foresee whether the effects of an attack would be lawful. Attention to the lifecycle helped us identify that this ethical requirement generates other requirements at earlier stages: for example, that designers need to understand and take into account the kinds of real-world contexts in which their systems might be used, and the psychological and other factors that might affect the operation of the system in real-time contexts. Once this point was made, engineers and other group members with technical expertise could then weigh in on the feasibility of various requirements for design plans and for testing and monitoring protocols. Depending on those assessments, legal experts could again weigh in on whether the conceptualized protocols indeed satisfied legal conditions. This kind of interaction was enormously helped by having a closed-door group of experts carry out a lifecycle analysis over a long time period.

In this way, attention to the lifecycle enabled experts to both contribute at key moments and ask clarifying questions of other members. The closed-door nature of these discussions enabled members to do this work freely and to give their honest assessments about lawfulness or technical feasibility. Finally, it would have been impossible to generate the depth of discussion or understanding our group achieved without meeting iteratively over a long time frame so that experts from different fields could incorporate and respond to the insights of their peers.

I hope that readers interested in the CELL process will look out for our forthcoming whitepaper, “Lifecycle Framework for Autonomy and AI in Defense Systems,” which is soon to be published through the IEEE SA. In it, you’ll see this model at work across the stages of the lifecycle of a hypothetical uncrewed underwater vehicle (UUV).

The CELL approach beyond defense cases

I think CELL would be useful in many other kinds of applications as well, especially those involving generative AI. For example, perhaps it could help us avoid incidents like one that recently occurred with Google Gemini. When prompted to provide an image of a 1943 German soldier, Gemini produced images that included a black soldier and an Asian woman soldier. This is not only historically inaccurate, but deeply offensive considering the horrific racism of Nazi Germany during World War II. The designers were looking to avoid racial and other biases in image generation generally (for example always producing a picture of a white man when asked for a doctor), which is important, but the result was that Gemini’s image generation feature produced several similar instances of this kind of offensive historical inaccuracy.

To help avoid issues like this in the future, it is important to enable participation from diverse backgrounds and underrepresented groups, as well as experts in the humanities who have deep knowledge of the history, social impact, and manifestations of racism and other kinds of bias. Such experts could more readily anticipate situations that might require nuanced treatment and help frame problems and articulate solutions. But they need to be in long-term, closed-door conversation with engineers and governance experts so that the team can frankly discuss potential concerns and collectively engage in iterative problem-solving. For example, focused attention on the technical details of the use stage of Gemini’s image generator might have led to discussions about ethical issues related to prompt transformation, in which the system takes the user’s prompt and transforms it into a modified one before feeding it into the main model. Prompt transformation processes can make system outputs both more useful and more ethical, but they need to be responsive to contextually determined ethical demands, such as the need for an accurate portrayal of demographic characteristics of Nazi soldiers. A diverse group of experts, meeting over a long period of time in a closed-door environment to discuss ethical and related technical issues over the lifecycle of the system, may very well be able to improve outcomes for Gemini and its users.
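
To make the idea of prompt transformation a little more concrete, here is a minimal, purely illustrative Python sketch of what such a step might look like. The cue list, function name, and rewriting rules are my own assumptions for the sake of illustration; they are not a description of how Gemini or any real system actually works.

```python
# Minimal sketch of a prompt-transformation step (illustrative only).
# The cue list and rewrite rules are hypothetical assumptions.

HISTORICAL_CUES = ("1943", "nazi", "wehrmacht", "world war ii")

def transform_prompt(user_prompt: str) -> str:
    """Rewrite a user prompt before it reaches the image model.

    Toy rule: only append a generic diversity instruction when the prompt
    does not ask for a specific historical context, so that accuracy
    requirements are not silently overridden.
    """
    lowered = user_prompt.lower()
    historically_specific = any(cue in lowered for cue in HISTORICAL_CUES)

    if historically_specific:
        # Preserve the prompt and flag it for context-aware handling
        # (e.g. human review or a historically grounded generation mode).
        return user_prompt + " [render with historical accuracy]"
    # Generic prompts receive the usual debiasing instruction.
    return user_prompt + " [depict a diverse range of people]"

if __name__ == "__main__":
    print(transform_prompt("a doctor talking to a patient"))
    print(transform_prompt("a German soldier in 1943"))
```

The point of the sketch is simply that transformation logic is a design choice, and that making it sensitive to context such as historical specificity is exactly the kind of question a diverse, long-term group could surface early.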

This is just one example of how the CELL approach can be applied to challenging ethical issues in the development of technology beyond autonomous weapons systems. I believe it could be adapted for many other technologies as well, and so will prove widely useful. And yet, I think that we are still at the beginning. As companies, governments, and other stakeholders work to keep pace with technological – and especially AI – development, we’ll need to keep innovating our assessment processes alongside our technologies.


#2 Unpacking the concerns around AI and biotechnology

By Jules Palayer

The risks stemming from the convergence of AI and biotechnology received a lot of attention in the international conversation on the governance of AI in 2023. The Bletchley Declaration, issued during the UK AI Safety Summit on 1 November 2023, pointed to the need to address the “substantial risks” posed by AI in the domain of biotechnology. One day before that, the G7 issued a Code of Conduct for actors developing advanced AI that emphasised the importance of identifying, evaluating, and mitigating the risks stemming from the use of AI in biology. In addition, interest in this topic from the research community reached a new peak in 2023 with the publication of numerous blog posts and reports. Why has the use of AI in biology been elevated to the top of the policy agenda in 2023?

This blog post offers some keys to understanding why this topic recently became central in the policy and expert debate on the risks of misuse and diversion of (civilian) AI. It discusses where these concerns come from, whether they are founded, and how the AI community can help to address them.

What is it all about? Biosecurity risk in the age of AI

All the recent policy declarations and reports published on the topic imply that advances of AI in the bio sector will improve medicine, pharmacology, bioengineering, and our understanding of biological processes. However, they also signal that the same advances could be misused for bioterrorism, biological warfare, and bio-crime.

These worries are not new. In fact, biosecurity pundits have been discussing for years whether AI – along with other technological advances like synthetic biology – could facilitate the development and production of harmful bio-agents and their delivery mechanisms. The Stockholm International Peace Research Institute already took stock of that expert discussion in a 2019 report. So, how come the discussion around this risk (re)gained traction in 2023?

Why now? LLMs and Generative AI

Much of the renewed interest is linked to the recent breakthroughs of generative AI, especially in the form of chatbots powered by Large Language Models (LLMs). These advances are expected to provide great opportunities for medicine and biological research overall. For instance, although some ethical and technical challenges remain, chatbots could improve healthcare in many ways, from introducing virtual nurses to assisting with medical diagnosis. In addition, generative AI could boost drug discovery, help create new biological entities and enhance existing ones. However, these advancements could also be misused in ways that would exacerbate existing, or create new, biosecurity risks. Experts worry about two scenarios in particular.

The first is that chatbots powered by LLMs could make the development and dissemination of harmful bio-agents easier. These tools, trained on large textual datasets, could reportedly lower the barrier to accessing the critical knowledge needed to produce harmful bio-agents and help ill-intentioned individuals troubleshoot problems during the development of pathogens. The fact that unregulated chatbots are widely available is considered an aggravating factor. It could allow a larger pool of actors to access critical information about the development and dissemination of bio-agents.

The second is that generative AI tools used for pharmaceutical research and bioengineering could be misused to create new harmful bio-agents or to make existing ones more lethal, contagious, or antibiotic resistant. If the intent were there, there is a wide range of AI-enabled biological tools that could be misused. For instance, resourceful state or non-state actors could exploit the possibilities that AI brings for the comprehension of biological processes to develop more harmful or more targeted bioweapons.

These misuse scenarios seem generally undisputed; the extent to which the international community should worry about them is, however, subject to more debate.

So, how worried should we be? Low likelihood, high impact

In December 2023, the Responsible Innovation in AI for Peace and Security project organised a multi-stakeholder dialogue on the ‘Risk stemming from the use of AI in Biology’ which gathered a group of experts to reflect on that very question in light of the recent policy interest in the topic. The key takeaway was that the international community should neither overestimate nor underestimate these risks. Here is why:

The use of LLM-powered chatbots to create harmful bio-agents has been the object of recent experiments at MIT and at RAND. In these scenario exercises, participants with limited scientific and technical knowledge were asked to use chatbots to help them plan a bio-attack. Both experiments have pointed at elements that could potentially enhance an actor’s capacity to engage in the production of a bio-agent with harmful purposes. However, they have mostly shed light on the limitations of this technology.

Chatbots do not give access to all the technical and tacit knowledge necessary for the development and production of harmful bio-agents. These chatbots are trained on data openly available on the internet, and fortunately not all the information necessary for such an ill-intentioned enterprise is available online. Moreover, the output of such tools may not be fully reliable. Chatbots are known to ‘hallucinate’: they can generate statements that have no scientific foundation. In the context of the production of harmful bio-agents, this unreliability could entail major safety risks and create insurmountable technical bottlenecks in the development process for non-experts. The malicious use of chatbots by state and non-state actors to develop and produce harmful bio-agents would therefore require not only highly skilled individuals but also access to more complex facilities and equipment than just the chatbot.

Expert knowledge is also central in the second scenario. Few people have the expertise, skills, and access to the equipment required to make AI-powered bioengineering work, and fewer still will be able and willing to use these resources for harmful purposes. Moreover, many technical bottlenecks in the development of bio-agents remain. For example, one of the most important hurdles in developing a pathogen using AI is to synthesise a viable living organism in the physical world from digital information.

In sum, advances of AI may generate new misuse cases, but they do not by themselves fundamentally increase the likelihood of bio-crime, bioterrorism, and biological warfare. With or without AI, the development and use of harmful bio-agents remains a complex and dangerous endeavour reserved to highly motivated actors who can mobilize significant financial and technical resources.

That said, the fact that AI does not radically transform the risk equation does not mean that the AI community should turn a blind eye to the potential misuses of AI in biology.

What can the AI community do about the misuse of AI?

The AI community can help to strengthen bio governance and contribute to a culture of biosecurity and biosafety, centered on but not limited to the misuse scenarios explored here. It can take practical steps both in the development and in the deployment of AI models that could pose biosecurity risks.

In the development of models, these steps can take the form of risk assessment processes, which may lead to technical fixes. For example, the experiments at RAND and MIT showed how red-teaming exercises could help foresee how actors would seek to retrieve harmful information from chatbots. Another avenue to mitigate the risks posed by LLM-powered chatbots is data curation. This would consist of removing data likely to contain critical information for the development or enhancement of a bio-agent, for example from peer-reviewed articles in the field of virology or gene editing. Finally, to prevent open-source models from being accessed and retrofitted for malicious purposes, a potential solution is to create self-destruct codes that activate if the model is tampered with.
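
As a rough illustration of what data curation could involve, here is a minimal Python sketch that filters a training corpus against a blocklist of flagged terms. The term list, corpus format, and function names are hypothetical assumptions for this example; real curation pipelines would combine expert review, trained classifiers, and provenance checks rather than simple keyword matching.

```python
# Minimal sketch of a data-curation filter (illustrative only).
# The blocklist and corpus format are hypothetical placeholders.

from typing import Iterable, List

# Hypothetical list of topics flagged by biosecurity experts.
TERMS_OF_CONCERN = ["pathogen enhancement protocol", "gain-of-function procedure"]

def curate_corpus(documents: Iterable[str]) -> List[str]:
    """Drop documents matching any flagged term before training."""
    kept = []
    for doc in documents:
        lowered = doc.lower()
        if any(term in lowered for term in TERMS_OF_CONCERN):
            continue  # excluded from the training set
        kept.append(doc)
    return kept

if __name__ == "__main__":
    corpus = [
        "A review of vaccine platforms for seasonal influenza.",
        "Notes on a gain-of-function procedure ...",
    ]
    curated = curate_corpus(corpus)
    print(f"{len(curated)} of {len(corpus)} documents kept")
```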

The deployment of the technology offers additional avenues for intervention. Private companies and researchers developing generative AI tools may, for instance, restrict access through know-your-customer mechanisms, licensing, and other best practices. Start-ups, small laboratories, and companies with limited biosecurity risk expertise can benefit from resources like biosecurity risk assessment tools or the Common Mechanism for DNA synthesis screening to help mitigate potential risks. In the same vein as the Tianjin Biosecurity Guidelines, AI practitioners should work together to create and implement guidelines that foster a culture of responsibility and protect against the misuse of AI in biology. Additionally, the AI community should proactively engage with national and international institutions and conventions, such as the Biological Weapons Convention (BWC), to ensure that regulatory bodies are aware of potential risks and can take appropriate actions to mitigate them.

Much of the recent focus on the risks stemming from the convergence of AI and biotechnology can be explained by the recent breakthroughs in generative AI. The conversation on the possible misuse of generative AI has cast the conversation on biosecurity in a new light. Generative AI has not yet fundamentally transformed the risk of biological weapons use, but its potential misuse in that space is not a risk that should be ignored, all the more so as it would not take much for the AI community to help prevent it.


#1 The misuse of (civilian) AI could threaten peace and security, and the AI community can do something about it

By Charles Ovink and Vincent Boulanin

In March 2022, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.

The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.

The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue States, non-State armed groups, criminal organizations or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.

People working in the field of life sciences are already well attuned to the problem of misuse of peaceful research, thanks to decades of engagement between arms-control experts and scientists.

The same cannot be said of the AI community, and it is well past time for it to catch up.


We serve with two organizations that take this cause very seriously, the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute. We’re trying to bring our message to the wider AI community, notably future generations of AI practitioners, through awareness-raising and capacity-building activities.

AI can improve many aspects of society and human life, but like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination, and a host of other possibilities. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150, by some accounts—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.

The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as health care, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyberattacks, terrorism, or military operations is rarely considered, unless very superficially.

This is a major gap in the conversation on responsible AI that must be filled.

Most of the actors engaged in the responsible AI conversation work on AI for purely civilian end uses, so it is perhaps not surprising that they overlook peace and security. There’s already a lot to worry about in the civilian space, from potential infringements of human rights to AI’s growing carbon footprint.

AI practitioners may believe that peace and security risks are not their problem, but rather the concern of States. They might also be reluctant to discuss such risks in relation to their work or products due to reputational concerns, or for fear of inadvertently promoting the potential for misuse.

The diversion and misuse of civilian AI technology are, however, not problems that the AI community can or should shy away from. There are very tangible and immediate risks.

Civilian technologies have long been a go-to for malicious actors, because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.

The fact that AI is an intangible and widely available technology with great general-use potential makes the risk of misuse particularly acute. In the cases of nuclear power technology or the life sciences, the human expertise and material resources needed to develop and weaponize the technology are generally hard to access. In the AI domain there are no such obstacles. All you need may be just a few clicks away.

As one of the researchers behind the chemical weapon paper explained in an interview: “You can go and download a toxicity data set from anywhere. If you have somebody who knows how to code in Python and has some machine-learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic data sets.”

We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences potentially far-ranging. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyberattacks and disinformation—and now, for the first time, in warfare. During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian president Volodymyr Zelenskyy telling his troops to surrender.

The weaponization of civilian AI innovations is also one of the most likely ways that autonomous weapons systems (AWS) could materialize. Non-State actors could exploit advances in computer vision and autonomous navigation to turn hobby drones into homemade AWS. These could not only be highly lethal and disruptive (as depicted in the Future of Life Institute’s advocacy video Slaughterbots) but also very likely violate international law, ethical principles, and agreed standards of safety and security.

Another reason the AI community should get engaged is that the misuse of civilian products is not a problem that States can easily address on their own, or purely through intergovernmental processes. This is not least because governmental officials might lack the expertise to detect and monitor technological developments of concern. What’s more, the processes through which States introduce regulatory measures are typically highly politicized and may struggle to keep up with the speed at which AI tech is advancing.

Moreover, the tools that States and intergovernmental processes have at their disposal to tackle the misuse of civilian technologies, such as stringent export controls and safety and security certification standards, may also jeopardize the openness of the current AI innovation ecosystem. From that standpoint, not only do AI practitioners have a key role to play, but it is strongly in their interest to play it.

AI researchers can be a first line of defence, as they are among the best placed to evaluate how their work may be misused. They can identify and try to mitigate problems before they occur—not only through design choices but also through self-restraint in the diffusion and trade of the products of research and innovation.

AI researchers may, for instance, decide not to share specific details about their research (the researchers that repurposed the drug-testing AI did not disclose the specifics of their experiment), while companies that develop AI products may decide not to develop certain features, restrict access to code that might be used maliciously, or add by-design security measures such as antitamper software, geofencing, and remote switches. Or they may apply the know-your-customer principle through the use of token-based authentication.
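
As a rough illustration of the know-your-customer idea mentioned above, here is a minimal Python sketch of token-based access gating for a model API. The token store, verified-user records, and model call are hypothetical placeholders, not a description of any existing product or service.

```python
# Minimal sketch of know-your-customer gating via API tokens (illustrative).
# The token registry, risk tiers, and model call are hypothetical.

REGISTERED_USERS = {
    # token -> verified organisation and permitted usage tier
    "tok-2f9a": {"org": "University lab (verified)", "tier": "research"},
}

def run_model(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[model output for: {prompt}]"

def handle_request(token: str, prompt: str) -> str:
    """Serve a request only for tokens tied to a verified identity."""
    user = REGISTERED_USERS.get(token)
    if user is None:
        return "Access denied: unregistered token (complete KYC onboarding first)."
    # Log the request against the verified identity for later audit.
    print(f"audit: {user['org']} | tier={user['tier']} | prompt={prompt!r}")
    return run_model(prompt)

if __name__ == "__main__":
    print(handle_request("tok-2f9a", "Summarise export-control rules for hobby drones."))
    print(handle_request("unknown-token", "Anything at all."))
```

The design point is simply that access decisions and audit trails are tied to a verified identity rather than to an anonymous API key, which is one practical way to operationalize the know-your-customer principle.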

Such measures will certainly not eliminate the risks of misuse entirely—and they may also have drawbacks—but they can at least help to reduce them. These measures can also help keep at bay potential governmental restrictions, for example on data sharing, which could undermine the openness of the field and hold back technological progress.

To engage with the risks that the misuse of AI poses to peace and security, AI practitioners do not have to look further than existing recommended practices and tools for responsible innovation. There is no need to develop an entirely new tool kit or set of principles. What matters is that peace and security risks are regularly considered, particularly in technology-impact assessments. The appropriate risk-mitigation measures will flow from there.

Responsible AI innovation is not a silver bullet for all the societal challenges brought by advances in AI. However, it is a useful and much-needed approach, especially when it comes to peace and security risks. It offers a bottom-up approach to risk identification, in a context where the multipurpose nature of AI makes top-down governance approaches difficult to develop and implement, and possibly detrimental to progress in the field.

Certainly, it would be unfair to expect AI practitioners alone to foresee and to address the full spectrum of possibilities through which their work could be harmful. Governmental and intergovernmental processes are absolutely necessary, but peace and security, and thus all our safety, are best served by the AI community getting on board. The steps AI practitioners can take do not need to be very big, but they could make all the difference.

Authors’ note: This post was originally published by IEEE Spectrum in August 2022. All content is the responsibility of the authors, and does not necessarily reflect the views of their organizations.


Funding Statement

This programme was made possible by the generous support of the European Union.