OpenAI warns that its new ChatGPT Agent has the ability to aid dangerous bioweapon development
Briefly

OpenAI's ChatGPT Agent is a new AI feature designed to automate tasks such as data gathering and travel booking. However, safety researchers warn that it could also help users with no prior expertise create biological or chemical threats. OpenAI has classified the model as a 'high' biorisk, signaling an increased likelihood of misuse for harmful purposes, including biological or chemical terror events carried out by non-state actors. As a precaution, the company has implemented extra safeguards, acknowledging that the risks associated with the tool are real and warrant comprehensive mitigations.
OpenAI's newest product promises to make it easier for someone to automatically gather data, create spreadsheets, book travel, spin up slide decks—and, just maybe, build a biological weapon.
The real-world implications of this could mean that biological or chemical terror events by non-state actors become more likely and frequent, according to OpenAI's 'Preparedness Framework,' which the company uses to track and prepare for new risks of severe harm from its frontier models.
"Some might think that biorisk is not real, and models only provide information that could be found via search. Based on our evaluations and those of our experts, the risk is very real," said Keren Gu, a safety researcher at OpenAI. While the company conducted extensive assessments, she noted, the model's capabilities raise significant concerns about potential misuse.
Read at Fortune