AI agents can't pull off fully autonomous cyberattacks - yet

"AI agents and other systems can't yet conduct cyberattacks fully on their own - but they can help criminals in many stages of the attack chain, according to the International AI Safety report. The second annual report, chaired by the Canadian computer scientist Yoshua Bengio and authored by more than 100 experts across 30 countries, found that over the past year, developers of AI systems have vastly improved their ability to help automate and perpetrate cyberattacks."
"Perhaps the best, and scariest, evidence of that finding appeared in Anthropic's November 2025 report about Chinese cyberspies abusing its Claude Code AI tool to automate most elements of attacks directed at around 30 high-profile companies and government organizations. Those attacks succeeded in "a small number of cases." "At least one real-world incident has involved the use of semi-autonomous cyber capabilities, with humans intervening only at critical decision points," according to the AI safety report. "Fully autonomous end-to-end attacks, however, have not been reported.""
AI systems cannot yet carry out entire cyberattacks autonomously, but they can assist criminals across many stages of the attack chain, and developers have markedly improved AI's ability to automate and perpetrate attacks over the past year. Chinese cyberspies abused an AI coding tool to automate most elements of attacks against roughly 30 high-profile companies and government organizations, succeeding in a small number of cases. At least one real-world incident involved semi-autonomous cyber capabilities, with humans intervening only at critical decision points; fully autonomous end-to-end attacks have not been reported. AI is especially useful for scanning software for vulnerabilities and for writing malicious code, and in competitions AI systems have autonomously identified 77 percent of synthetic vulnerabilities.
Read at The Register