AI and Biothreats: Exploring the Potential Risks of AI-Assisted Biothreat Creation
OpenAI, the research organization behind the powerful language model GPT-4, has released a new study exploring whether AI could assist in the creation of biological threats. The study, which involved both biologists and students, found that GPT-4 offers only "slight improvements" in accuracy compared to existing internet resources for creating biological threats.
The study is part of OpenAI's Preparedness Framework, which aims to assess and mitigate the potential risks of advanced AI capabilities, particularly "frontier risks": non-traditional threats that are not yet well understood or anticipated by society. One such frontier risk is the ability of AI systems, particularly large language models (LLMs), to help malicious actors develop and execute biological attacks, such as synthesizing pathogens or toxins.
Methodology and Results:
To evaluate this risk, the researchers conducted a human evaluation with 100 participants: 50 biology experts with doctoral degrees and professional wet-lab experience, and 50 student-level participants who had taken at least one university-level biology course. Within each group, participants were randomly assigned to either a control arm, with access to the internet only, or a treatment arm, with access to both the internet and GPT-4. Each participant was then asked to complete a series of tasks covering the end-to-end process of creating a biological threat: conceptualization, acquisition, amplification, formulation, and release.
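The design described above amounts to a stratified randomized controlled trial: each stratum (experts, students) is split at random into a control and a treatment arm. As a rough illustration only, the Python sketch below shows one way such an assignment could be implemented; the participant labels, equal arm sizes, and seeding are assumptions for illustration, not details from OpenAI's actual procedure.

```python
import random

# Hypothetical participant pools mirroring the study's strata
# (50 experts, 50 students); illustrative only.
experts = [f"expert_{i}" for i in range(50)]
students = [f"student_{i}" for i in range(50)]

def assign_arms(pool, seed):
    """Randomly split one stratum into equal control and treatment arms."""
    rng = random.Random(seed)
    shuffled = pool[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "control": shuffled[:half],    # internet access only
        "treatment": shuffled[half:],  # internet access plus GPT-4
    }

arms = {"experts": assign_arms(experts, seed=0),
        "students": assign_arms(students, seed=1)}
print({stratum: {arm: len(members) for arm, members in groups.items()}
       for stratum, groups in arms.items()})
```

Stratifying before randomizing ensures that expertise level is balanced across arms, so any measured uplift can be attributed to GPT-4 access rather than to differences in participant background.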
The researchers measured participants' performance on five indicators: accuracy, completeness, innovation, time taken, and self-rated difficulty. They found that, aside from a slight improvement in accuracy among the student-level group, GPT-4 did not significantly improve performance on any indicator. They also noted that GPT-4 often produced erroneous or misleading responses, which could actually hinder the process of creating a biological threat.
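To make concrete what "not significantly" means here, the sketch below compares hypothetical accuracy scores between the two arms using a two-sided Mann-Whitney U test. The 10-point scale, the simulated scores, and the choice of test are all assumptions for illustration; the write-up above does not specify the study's actual grading rubric or statistical procedure.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical accuracy scores on a 10-point scale for one participant
# stratum; the real study's rubric and statistical test may differ.
rng = np.random.default_rng(0)
control = rng.normal(loc=5.0, scale=1.5, size=25).clip(1, 10)    # internet only
treatment = rng.normal(loc=5.3, scale=1.5, size=25).clip(1, 10)  # internet + GPT-4

# Two-sided Mann-Whitney U test: is the observed uplift distinguishable
# from chance variation between the two arms?
stat, p_value = mannwhitneyu(control, treatment, alternative="two-sided")
uplift = treatment.mean() - control.mean()
print(f"mean accuracy uplift: {uplift:+.2f} points, p = {p_value:.3f}")
# A small mean uplift with p above a pre-registered threshold (e.g. 0.05)
# would be reported as "not statistically significant".
```

The key point is that a small average uplift can easily arise from noise in a sample of this size, which is why the researchers report significance rather than raw score differences.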
Conclusion:
The researchers concluded that current-generation LLMs such as GPT-4 do not meaningfully increase the risk of biological threat creation beyond what existing internet resources already enable. However, they cautioned that this finding is not definitive, as future LLMs may become more capable and therefore more dangerous. They also emphasized the need for continued research and community deliberation on this topic, along with improved evaluation methods and ethical guidelines for addressing AI-enabled biosecurity risks.
This study aligns with a previous red-team exercise conducted by the RAND Corporation, which likewise found no statistically significant difference in the viability of biological attack plans generated with LLM assistance versus those produced using the internet alone. Both studies, however, acknowledge the limitations of their methods and the rapid evolution of AI technology, which may change the risk landscape in the near future.
OpenAI is not the only organization concerned about the potential misuse of AI for biological attacks. Numerous academic and policy experts have highlighted the issue and called for further research and regulation. As AI systems become more powerful and accessible, the need to remain vigilant and prepared grows increasingly urgent.