How Agentic AI Can Break In The Real World | AdExchanger
Briefly

How Agentic AI Can Break In The Real World | AdExchanger
"As enterprise platforms rush to add conversational bots into workflows, they're also inadvertently giving those agents broad access to sensitive information - and, in some cases, letting bots chat freely in a way no privacy or marketing team would ever approve. This is exactly the type of hidden pitfall Aaron Costello, chief of SaaS security research at AppOmni, hunts for."
"There were already more than enough vulnerabilities out there to keep security teams like Costello's quite busy. And now AI is adding to the myriad ways things can go sideways. Teamwork turns toxic One recent and somewhat unsettling example Costello uncovered late last year involves a set of weaponized AI agents within ServiceNow that were designed to collaborate on tasks."
Enterprise platforms are rapidly adding conversational bots that often receive broad access to sensitive data and conversational privileges. Risk-detection tools that plug into cloud platforms can identify when customer settings create security holes. Collaborative AI agents can read tickets, access CRM records, and update systems, which enables normal automation but also allows data-exfiltration pipelines when malicious instructions are embedded in service requests. Simple text added to a ticket can instruct agents to ignore original rules and export confidential information, requiring no exploit or advanced hacking. Security teams must assume agent workflows can be abused and restrict agent permissions and instruction sources accordingly.
Read at AdExchanger
Unable to calculate read time
[
|
]