Everything that could go wrong with Trump's AI safety tests, according to experts
""Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall."
"CAISI frequently gains access to models with 'reduced or removed safeguards,' allowing for a more thorough evaluation of national security-related capabilities and risks."
The Trump administration has reversed its earlier stance against AI regulation by signing agreements with Google DeepMind, Microsoft, and xAI for safety testing of their AI models. The shift follows concerns Anthropic raised about the risks of releasing one of its Claude models. The Center for AI Standards and Innovation (CAISI) acknowledges that these agreements build on Biden-era policies. CAISI has conducted roughly 40 evaluations of AI models, stressing that rigorous measurement science is essential to understanding AI's national security implications.
Read at Ars Technica