Google, Microsoft, and xAI allow the government to test AI models in advance
Briefly

"The evaluations will be conducted by the Center for AI Standards and Innovation (CAISI), part of the U.S. Department of Commerce, focusing on national security risks, including cybersecurity, biosecurity, and the potential use of AI in chemical weapons."
"According to CAISI Director Chris Fall, independent and technically rigorous evaluation methods are necessary to fully understand the impact of frontier AI on national security, enabling quicker and larger-scale security reviews as AI technology evolves."
"The announcement comes at a time when the Trump administration has taken a cautious stance toward regulating artificial intelligence, aiming to prevent strict oversight from hindering innovation while maintaining a technological lead over China."
Google, Microsoft, and xAI have agreed to share unreleased AI models with the U.S. government for testing by the Center for AI Standards and Innovation (CAISI), part of the U.S. Department of Commerce. The initiative aims to assess national security risks associated with AI, including cybersecurity and biosecurity, with CAISI serving as a central hub for evaluating these systems. The collaboration follows earlier agreements with OpenAI and Anthropic. CAISI Director Chris Fall emphasized the need for rigorous, independent evaluations to understand AI's impact on national security amid growing concern in Washington about AI risks.
Read at Techzine Global