
""Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. "These expanded industry collaborations help us scale our work in the public interest at a critical moment.""
"CAISI will study models that have reduced or removed safeguards to better understand their unmitigated capabilities, focusing on national security-related risks and capabilities."
"Prior to evaluating U.S.-based AI models, CAISI recently examined Chinese model DeepSeek, concluding it underperformed in several areas like accuracy, security and cost efficiency."
The Center for AI Standards and Innovation (CAISI) will conduct security testing on AI models from Google DeepMind, Microsoft, and xAI, evaluating the models' national security risks and capabilities in classified environments. The testing builds on previous agreements and aligns with the Trump administration's AI Action Plan. CAISI's evaluations will include models with reduced safeguards in order to understand their unmitigated capabilities. Initial reactions from industry groups have been supportive of the collaboration.
Read at Nextgov.com