Blog

This curated blog series aims to raise the profile of efforts that cross boundaries between the civilian-focused ‘responsible AI’ community and the arms control and non-proliferation communities. The series provides a platform to share insights, ideas and solutions on identifying and addressing the risks associated with the misuse of civilian AI, reflecting the diversity of thought and perspectives present in the AI field. If you are interested in contributing as a guest author to this series, please contact us at charles.ovink@un.org.

#2 Unpacking the concerns around AI and biotechnology

By Jules Palayer

The risks stemming from the convergence of AI and biotechnology received considerable attention in the international conversation on the governance of AI in 2023. The Bletchley Declaration, issued during the UK AI Safety Summit on 1 November 2023, pointed to the need to address the “substantial risks” posed by AI in the domain of biotechnology. One day earlier, the G7 issued a Code of Conduct for actors developing advanced AI that emphasised the importance of identifying, evaluating, and mitigating the risks stemming from the use of AI in biology. In addition, interest in this topic within the research community reached a new peak in 2023 with the publication of numerous blog posts and reports. Why was the use of AI in biology elevated to the top of the policy agenda in 2023?

This blog post offers some keys to understanding why this topic recently became central in the policy and expert debate on the risks of misuse and diversion of (civilian) AI. It discusses where these concerns come from, whether they are founded, and how the AI community can help to address them.

What is it all about? Biosecurity risk in the age of AI

All the recent policy declarations and reports published on the topic imply that advances in AI in the bio sector will improve medicine, pharmacology, bioengineering, and our understanding of biological processes. However, they also signal that the same advances could be misused for bioterrorism, biological warfare, and bio-crime.

These worries are not new. In fact, biosecurity pundits have been discussing for years whether AI – along with other technological advances like synthetic biology – could facilitate the development and production of harmful bio-agents and their delivery mechanisms. The Stockholm International Peace Research Institute already took stock of that expert discussion in a 2019 report. So, how come the discussion around this risk (re)gained traction in 2023?

Why now? LLMs and Generative AI

Much of the renewed interest is linked to the recent breakthroughs in generative AI, especially in the form of chatbots powered by Large Language Models (LLMs). These advances are expected to provide great opportunities for medicine and biological research overall. For instance, although some ethical and technical challenges remain, chatbots could improve healthcare in many ways, from introducing virtual nurses to supporting medical diagnosis. In addition, generative AI could boost drug discovery and help create new biological entities and enhance existing ones. However, these advancements could also be misused in ways that would exacerbate existing, or create new, biosecurity risks. Experts worry about two scenarios in particular.

The first is that chatbots powered by LLMs could make the development and dissemination of harmful bio-agents easier. These tools, trained on large textual datasets, could reportedly lower the barrier to accessing the critical knowledge needed to produce harmful bio-agents and help ill-intentioned individuals troubleshoot problems during the development of pathogens. The fact that unregulated chatbots are widely available is considered an aggravating factor, as it could allow a larger pool of actors to access critical information about the development and dissemination of bio-agents.

The second is that generative AI tools used for pharmaceutical research and bioengineering could be misused to create new harmful bio-agents or make existing ones more lethal, contagious, or antibiotic-resistant. If the intent were there, there is a wide range of AI-enabled biological tools that could be misused. For instance, resourceful state or non-state actors could exploit the possibilities AI offers for understanding biological processes to develop more harmful or more targeted bioweapons.

These misuse scenarios are generally undisputed; the extent to which the international community should worry about them is, however, subject to more debate.

So, how worried should we be? Low likelihood, high impact

In December 2023, the Responsible Innovation in AI for Peace and Security project organised a multi-stakeholder dialogue on the ‘Risk stemming from the use of AI in Biology’, which gathered a group of experts to reflect on that very question in light of the recent policy interest in the topic. The key takeaway was that the international community should neither overestimate nor underestimate these risks. Here is why:

The use of LLM-powered chatbots to create harmful bio-agents has been the subject of recent experiments at MIT and at RAND. In these scenario exercises, participants with limited scientific and technical knowledge were asked to use chatbots to help them plan a bio-attack. Both experiments pointed to elements that could potentially enhance an actor’s capacity to engage in the production of a bio-agent for harmful purposes. However, they mostly shed light on the limitations of this technology.

Chatbots do not give access to all of the technical and tacit knowledge necessary for the development and production of harmful bio-agents. These chatbots are trained on data openly available on the internet, and fortunately not all the information necessary for such an ill-intended enterprise is available online. Moreover, the output of such tools may not be fully reliable. Chatbots are known to ‘hallucinate’: they can generate statements that have no scientific foundation. In the context of the production of harmful bio-agents, this unreliability could entail major safety risks and create insurmountable technical bottlenecks in the development process for non-experts. The malicious use of chatbots by state and non-state actors to develop and produce harmful bio-agents would therefore require not only highly skilled individuals but also access to more complex facilities and equipment than just the chatbot.

Expert knowledge is also central to the second scenario. Few people have the expertise, skills, and access to the equipment required to make AI-powered bioengineering work, and fewer still will be able and willing to use these resources for harmful purposes. Moreover, many technical bottlenecks in the development of bio-agents remain. For example, one of the most important hurdles in developing a pathogen using AI is synthesising a viable living organism in the physical world from digital information.

In sum, advances in AI may generate new misuse cases, but they do not by themselves fundamentally increase the likelihood of bio-crime, bioterrorism, and biological warfare. With or without AI, the development and use of harmful bio-agents remains a complex and dangerous endeavour reserved for highly motivated actors who can mobilize significant financial and technical resources.

That said, the fact that AI does not radically transform the risk equation does not mean that the AI community should turn a blind eye to the potential misuses of AI in biology.

What can the AI community do about the misuse of AI?

The AI community can help to strengthen bio governance and contribute to a culture of biosecurity and biosafety, centered on but not limited to the misuse scenarios explored here. It can take practical steps both in the development and in the deployment of AI models that could pose biosecurity risks.

In the development of models, these steps can take the form of risk assessment processes that may lead to technical fixes. For example, the experiments at RAND and MIT showed how red-teaming exercises could help foresee how actors would seek to retrieve harmful information from chatbots. Another avenue to mitigate the risks posed by LLM-powered chatbots is data curation. This would consist of removing data likely to contain critical information for the development or enhancement of a bio-agent, for example from peer-reviewed articles in the fields of virology or gene editing. Finally, to prevent open-source models from being accessed and retrofitted for malicious purposes, a potential solution is to create self-destruct codes that activate if the model is tampered with.
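
To make the data curation idea more concrete, here is a minimal sketch of what keyword-based curation of a training corpus could look like. Everything in it is illustrative: the flagged terms, file names and one-JSON-document-per-line format are assumptions rather than any established screening standard, and real curation would combine expert review with far more robust screening than simple keyword matching.

```python
# Minimal, illustrative sketch of keyword-based curation of a training corpus.
# The flagged terms, file names and record format are placeholders only.

import json

# Hypothetical list of terms that should trigger exclusion or manual review.
FLAGGED_TERMS = {"example-sensitive-term-1", "example-sensitive-term-2"}


def should_exclude(text: str) -> bool:
    """Return True if the document mentions any flagged term."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)


def curate(input_path: str, output_path: str) -> None:
    """Copy the corpus, dropping records flagged for expert review."""
    with open(input_path) as src, open(output_path, "w") as dst:
        for line in src:
            record = json.loads(line)            # one JSON document per line
            if should_exclude(record.get("text", "")):
                continue                         # hold back for expert review
            dst.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    curate("corpus.jsonl", "curated_corpus.jsonl")
```

The point of the sketch is simply that curation decisions can be made systematically and documented before training, rather than left implicit in whatever data happens to be scraped.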

The deployment of the technology offers additional avenues for intervention. Private companies and researchers developing generative AI tools may, for instance, restrict access through know-your-customer mechanisms, licensing, and other best practices. Start-ups, small laboratories, and companies with limited biosecurity risk expertise can benefit from resources like biosecurity risk assessment tools or the Common Mechanism for DNA synthesis screening to help mitigate potential risks. In the same vein as the Tianjin Biosecurity Guidelines, AI practitioners should work together to create and implement guidelines that foster a culture of responsibility and protect against the misuse of AI in biology. Additionally, the AI community should proactively engage with national and international institutions and conventions, such as the Biological Weapons Convention (BWC), to ensure that regulatory bodies are aware of potential risks and can take appropriate actions to mitigate them.

Much of the recent focus on the risks stemming from the convergence of AI and biotechnology can be explained by the recent breakthroughs in generative AI. The debate on the possible misuse of generative AI has cast the biosecurity conversation in a new light. Generative AI has not yet fundamentally transformed the risk of biological weapons use, but its potential misuse in that space is not a risk that should be ignored, all the more so as it would not take much for the AI community to help prevent it.


#1 The misuse of (civilian) AI could threaten peace and security, and the AI community can do something about it

By Charles Ovink and Vincent Boulanin

In March 2022, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.

The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.

The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue States, non-State armed groups, criminal organizations or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.

People working in the field of life sciences are already well attuned to the problem of misuse of peaceful research, thanks to decades of engagement between arms-control experts and scientists.

The same cannot be said of the AI community, and it is well past time for it to catch up.


We serve with two organizations that take this cause very seriously, the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute. We’re trying to bring our message to the wider AI community, notably future generations of AI practitioners, through awareness-raising and capacity-building activities.

AI can improve many aspects of society and human life, but like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination and a host of other possibilities. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150, by some accounts—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.

The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as health care, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyberattacks, terrorism or military operations is rarely considered, unless very superficially.

This is a major gap in the conversation on responsible AI that must be filled.

Most of the actors engaged in the responsible AI conversation work on AI for purely civilian end uses, so it is perhaps not surprising that they overlook peace and security. There’s already a lot to worry about in the civilian space, from potential infringements of human rights to AI’s growing carbon footprint.

AI practitioners may believe that peace and security risks are not their problem, but rather the concern of States. They might also be reluctant to discuss such risks in relation to their work or products due to reputational concerns, or for fear of inadvertently promoting the potential for misuse.

The diversion and misuse of civilian AI technology are, however, not problems that the AI community can or should shy away from. There are very tangible and immediate risks.

Civilian technologies have long been a go-to for malicious actors, because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.

The fact that AI is an intangible and widely available technology with great general-use potential makes the risk of misuse particularly acute. In the cases of nuclear power technology or the life sciences, the human expertise and material resources needed to develop and weaponize the technology are generally hard to access. In the AI domain there are no such obstacles. All you need may be just a few clicks away.

As one of the researchers behind the chemical weapon paper explained in an interview: “You can go and download a toxicity data set from anywhere. If you have somebody who knows how to code in Python and has some machine-learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic data sets.”

We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences potentially far-ranging. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyberattacks and disinformation—and now, for the first time, in warfare. During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian president Volodymyr Zelenskyy telling his troops to surrender.

The weaponization of civilian AI innovations is also one of the most likely ways that autonomous weapons systems (AWS) could materialize. Non-State actors could exploit advances in computer vision and autonomous navigation to turn hobby drones into homemade AWS. These could not only be highly lethal and disruptive (as depicted in the Future of Life Institute’s advocacy video Slaughterbots) but also very likely violate international law, ethical principles, and agreed standards of safety and security.

Another reason the AI community should get engaged is that the misuse of civilian products is not a problem that States can easily address on their own, or purely through intergovernmental processes. This is not least because governmental officials might lack the expertise to detect and monitor technological developments of concern. What’s more, the processes through which States introduce regulatory measures are typically highly politicized and may struggle to keep up with the speed at which AI tech is advancing.

Moreover, the tools that States and intergovernmental processes have at their disposal to tackle the misuse of civilian technologies, such as stringent export controls and safety and security certification standards, may also jeopardize the openness of the current AI innovation ecosystem. From that standpoint, not only do AI practitioners have a key role to play, but it is strongly in their interest to play it.

AI researchers can be a first line of defence, as they are among the best placed to evaluate how their work may be misused. They can identify and try to mitigate problems before they occur—not only through design choices but also through self-restraint in the diffusion and trade of the products of research and innovation.

AI researchers may, for instance, decide not to share specific details about their research (the researchers that repurposed the drug-testing AI did not disclose the specifics of their experiment), while companies that develop AI products may decide not to develop certain features, restrict access to code that might be used maliciously, or add by-design security measures such as antitamper software, geofencing, and remote switches. Or they may apply the know-your-customer principle through the use of token-based authentication.
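
As a rough illustration of that last point, the sketch below shows how token-based authentication could sit in front of a hosted model so that only vetted customers can query it. It is a minimal sketch under stated assumptions: the function names, the in-memory token store and the placeholder model call are all hypothetical, and a production system would add persistent storage, token expiry, rate limiting and auditing.

```python
# Minimal, illustrative sketch of token-based access control in front of a
# hosted model. All names and the in-memory token store are hypothetical.

import secrets
from typing import Optional

# Tokens are issued only after a (hypothetical) offline vetting of the
# customer, i.e. the know-your-customer step happens before any token exists.
_issued_tokens = {}  # token -> customer identifier


def issue_token(customer_id: str) -> str:
    """Issue an access token once the customer has passed vetting."""
    token = secrets.token_urlsafe(32)
    _issued_tokens[token] = customer_id
    return token


def authorize(token: str) -> Optional[str]:
    """Return the customer identifier if the token is valid, else None."""
    return _issued_tokens.get(token)


def handle_request(token: str, prompt: str) -> str:
    """Gate a model call behind token authentication and basic logging."""
    customer = authorize(token)
    if customer is None:
        return "Access denied: unknown or revoked token."
    # Every request is attributable to a vetted customer, which is what makes
    # auditing and revocation possible if misuse is detected.
    print(f"request from {customer}: {len(prompt)} characters")
    return run_model(prompt)


def run_model(prompt: str) -> str:
    # Placeholder standing in for the actual model call.
    return f"(model output for: {prompt})"


if __name__ == "__main__":
    demo_token = issue_token("vetted-lab-001")
    print(handle_request(demo_token, "benign example prompt"))
    print(handle_request("not-a-real-token", "benign example prompt"))
```

The specific mechanism matters less than the property it provides: access that is attributable and revocable, which is what gives know-your-customer controls their teeth.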

Such measures will certainly not eliminate the risks of misuse entirely—and they may also have drawbacks—but they can at least help to reduce them. These measures can also help keep at bay potential governmental restrictions, for example on data sharing, which could undermine the openness of the field and hold back technological progress.

To engage with the risks that the misuse of AI poses to peace and security, AI practitioners do not have to look further than existing recommended practices and tools for responsible innovation. There is no need to develop an entirely new tool kit or set of principles. What matters is that peace and security risks are regularly considered, particularly in technology-impact assessments. The appropriate risk-mitigation measures will flow from there.

Responsible AI innovation is not a silver bullet for all the societal challenges brought by advances in AI. However, it is a useful and much-needed approach, especially when it comes to peace and security risks. It offers a bottom-up approach to risk identification, in a context where the multipurpose nature of AI makes top-down governance approaches difficult to develop and implement, and possibly detrimental to progress in the field.

Certainly, it would be unfair to expect AI practitioners alone to foresee and to address the full spectrum of possibilities through which their work could be harmful. Governmental and intergovernmental processes are absolutely necessary, but peace and security, and thus all our safety, are best served by the AI community getting on board. The steps AI practitioners can take do not need to be very big, but they could make all the difference.

Authors’ note: This post was originally published by IEEE Spectrum in August 2022. All content is the responsibility of the authors, and does not necessarily reflect the views of their organizations.


Funding Statement

This programme was made possible by the generous support of the European Union.