Microsoft, Google, xAI give US access to AI models for security testing
Briefly

"The US government will be allowed to evaluate the models before deployment and conduct research to assess their capabilities and security risks. This agreement fulfills a pledge made in July to partner with technology companies to vet their AI models for national security risks."
"Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials aim to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed."
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications, CAISI Director Chris Fall said in a statement. The move builds on 2024 agreements with OpenAI and Anthropic under President Joe Biden."
The US government has reached an agreement with major tech companies, including Microsoft, Google, and xAI, to gain access to their AI models for national security testing. Under the deal, officials will be allowed to evaluate the models before deployment and conduct research into their capabilities and security risks. The agreement fulfills a pledge made in July to partner with technology companies on vetting AI models, and builds on 2024 agreements with OpenAI and Anthropic. The goal is to identify potential threats, ranging from cyberattacks to military misuse, before these systems are widely deployed.
Read at www.aljazeera.com