
"The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others prior to their being made publicly available."
"According to a release from CAISI, which is part of the department's National Institute of Standards and Technology (NIST), it will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security."
"Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, said the CAISI agreements signal a shift toward proactive security for agentic AI by enabling government-led testing of advanced models before and after deployment. This should help strengthen visibility into autonomous behaviors while accelerating the development of standards to mitigate risks."
The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce under NIST, has established agreements with Google DeepMind, Microsoft, and xAI to evaluate AI models before public deployment. These companies join Anthropic and OpenAI, which signed similar agreements nearly two years ago. CAISI will conduct pre-deployment evaluations and targeted research to assess frontier AI capabilities and advance AI security. Microsoft emphasized that such agreements are essential for building trust in advanced AI systems. The initiative represents a shift toward proactive security for agentic AI: government-led testing of advanced models before and after deployment should strengthen visibility into autonomous behaviors and accelerate the development of risk-mitigation standards.
#ai-model-vetting #government-regulation #ai-safety-standards #pre-deployment-evaluation #proactive-security
Read at Computerworld