OpenAI Says GPT-4 Poses Little Risk of Helping Create Bioweapons

The artificial intelligence startup carried out tests as part of efforts to understand and prevent any “catastrophic” risks from its technology.


OpenAI’s most powerful artificial intelligence software, GPT-4, poses “at most” a slight risk of helping people create biological threats, according to early tests the company carried out to better understand and prevent potential “catastrophic” harms from its technology.

For months, lawmakers and even some tech executives have raised concerns about whether AI can make it easier for bad actors to develop biological weapons, such as using chatbots to find information on how to plan an attack. In October, President Joe Biden signed an executive order on AI that directed the Department of Energy to ensure AI systems don’t pose chemical, biological or nuclear risks. That same month, OpenAI formed a “preparedness” team, which is focused on minimizing these and other risks from AI as the fast-developing technology gets more capable.