
“When I joined it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.”
Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab's goal of building AGI, but said creating a super-intelligent computer model without the right safety measures in place wouldn't fit with the mission of the organization she originally joined.
Campbell pointed to an incident where Microsoft deployed a version of the company's GPT-4 model in India through its Bing search engine before the model had been evaluated by the company's Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”
OpenAI's attorneys also had Campbell admit that in her “speculative opinion,” OpenAI's safety approach is superior to that of xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year.
A federal court in Oakland heard testimony from a former OpenAI employee and board member who said efforts to push AI products into the marketplace compromised the company's AI safety commitments. Rosie Campbell joined an AGI readiness team in 2021 and left in 2024 after her team was disbanded; another safety-focused team was also shut down. She testified that the organization became more product-focused over time, and that building a super-intelligent model without proper safety measures would not align with the original mission. She cited an incident in which Microsoft deployed a version of GPT-4 in India before it had been evaluated by the Deployment Safety Board, emphasizing the need to set strong safety precedents. OpenAI declined to comment on its current AGI alignment approach.
Read at TechCrunch