LLMs: A Potential Tool for Biological Attacks
A recent RAND Corporation study reveals that large language models can help plan biological attacks, down to calculating budgets. Importantly, however, these AI models cannot create bioweapons themselves.
To put the threat in perspective, immediate fear of biological weapons is not yet warranted. The RAND study highlights their historically limited effectiveness compared with other weaponry: past attempts have often failed because of misconceptions about the pathogens involved. For instance, the study cites Aum Shinrikyo's unsuccessful attempt in the mid-1990s to contaminate a wide area with botulinum toxin. With the advancement of LLMs, however, these models could bridge exactly such knowledge gaps for would-be attackers.
First, the AI outlined the properties a biological weapon needs to be effective, identifying at least three key criteria. The first is an optimal rate of infection: it should outpace the speed at which a government's bureaucratic machinery operates, so that by the time decisions are made about isolation and social distancing measures, the viral or bacterial infection has already escalated into an epidemic. The second is the potency of the infection. A common cold, for instance, can spread rapidly, infecting a significant portion of the population within mere hours, but this would mainly benefit pharmacies stocked with dozens of effective cold remedies rather than cause serious harm. The third is a lack of preparedness for the infection. If pharmaceutical warehouses are filled with vaccines for a particular virus or bacterium, a bio-attack is likely to fail before it even begins. A biological weapon should therefore involve a pathogen for which vaccines are unavailable, treatments are ineffective, or supplies are simply insufficient.
Subsequently, RAND ran two trial scenarios. In the first, the LLM, whose name was not disclosed in the report, worked with the agents that cause anthrax, smallpox, and plague, assessing their potential impact as biological weapons. The analysis considered the possible affected areas and the anticipated number of fatalities. The AI also evaluated how introducing carriers for the pathogens, such as infected rodents and insects, could enhance the lethal effect.
In the second scenario, RAND asked for an assessment of various mechanisms for delivering botulinum toxin. The LLM initially offered two options, food products and aerosols, ultimately choosing aerosols and even concocting a “cover story” for acquiring the necessary pathogen under the guise of conducting scientific research. RAND characterized these results as nearly a perfect blueprint for a biological attack.
However, the authors note that these conclusions still require further validation. RAND's experts now intend to compare the LLM-developed plan with information already freely available on the internet, a step essential to gauging the LLM's genuine capacity to plan a biological attack. Should the LLM-generated “recipes” turn out to increase the odds of malicious actors causing widespread damage, RAND may fully or partially restrict access to the research findings. The non-profit also plans to inform both authorities and AI providers of the study's outcomes.
The agenda for the global AI safety summit, scheduled for November in the United Kingdom, already includes discussion of the threat AI poses in the creation of biological weapons. Dario Amodei, co-founder and CEO of the AI startup Anthropic, has previously warned that artificial intelligence systems could enable the development of biological weapons within the next two to three years.