
"AI agents have huge potential, balanced by equally big risks. What's becoming apparent is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions."
"OpenClaw is only as useful as the access it is given to files, accounts, browsers, network devices, and, most significant of all, credentials."
"The test assumed that a user had given OpenClaw full access to their computer, that they regularly controlled the agent over Telegram, and that their Telegram account had been hijacked."
"The testers were able to reset the agent, causing it to forget its guardrails and comply with the request to retrieve an OAuth token."
Agentic platforms, particularly OpenClaw, pose hidden risks inside enterprises because they can expose sensitive information. Tests by Okta Threat Intelligence showed that AI agents can be manipulated into overriding their own security protocols: an attacker was able to trick OpenClaw into revealing OAuth tokens despite built-in guardrails. The research highlights how vulnerable AI agents become when granted extensive access, and underscores the need for stronger security controls in AI deployments.
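The failure mode described above — an agent "forgetting" its guardrails after a reset — is characteristic of guardrails that live only in the model's conversational context. A common mitigation is to enforce credential protection outside that context entirely, for example by filtering the agent's outbound messages at the process boundary. The sketch below is purely illustrative and assumed (it is not OpenClaw's actual API, and the patterns are not exhaustive); it shows the general idea of a hard output filter that survives any reset of the agent's state.

```python
import re

# Hypothetical sketch, NOT OpenClaw's real interface: a hard output
# filter applied outside the agent's conversational state, so that
# resetting the agent cannot make it "forget" these rules the way
# prompt-level guardrails were forgotten in the Okta test.

# Illustrative patterns for common credential formats (not exhaustive).
CREDENTIAL_PATTERNS = [
    re.compile(r"ya29\.[0-9A-Za-z_\-]+"),        # Google OAuth access token
    re.compile(r"gh[pousr]_[0-9A-Za-z]{36,}"),   # GitHub token formats
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key ID
]

def redact_credentials(message: str) -> str:
    """Redact anything that looks like a credential before the agent's
    reply leaves the process, regardless of what the model was told."""
    for pattern in CREDENTIAL_PATTERNS:
        message = pattern.sub("[REDACTED]", message)
    return message

print(redact_credentials("token is ya29.a0AfH6SMBx"))
# -> token is [REDACTED]
```

The design point is that the filter runs in ordinary code on every outbound message, so no amount of prompt manipulation or agent resetting can disable it.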
Read at Computerworld