AI safety researchers from OpenAI, Anthropic, and several nonprofit organizations have condemned the safety culture at xAI, the billion-dollar AI startup founded by Elon Musk. Their criticism follows a string of recent incidents, including antisemitic comments made by xAI's chatbot, Grok. Shortly afterward, xAI launched Grok 4, which has been found to echo Musk's personal politics in its answers. Critics argue that xAI's practices fall short of industry norms, pointing in particular to the company's failure to publish safety reports, known as system cards, which leaves outsiders unable to verify what safety evaluations, if any, were performed.
I appreciate the scientists and engineers at xAI but the way safety was handled is completely irresponsible. Thread below.
xAI's decision not to publish system cards, the industry-standard reports that detail a model's training methods and safety evaluations, makes it impossible to tell what safety testing its models have undergone.