How Big Four firm KPMG is protecting itself from AI agents going rogue
Briefly

"One of the biggest concerns is probably how do you make sure that you allow them to have the autonomy to do the valuable things we need them to do, but to stop them from going wild or taking over."
"A robust set of controls is really important. Businesses need to clearly define what their agents are allowed to do and ensure monitoring systems can detect when they stray beyond those boundaries."
"Every KPMG agent has its own unique identifier and a systems card, allowing the firm to log and monitor actions, trace decision-making, and track interactions with other agents."
"Red-teaming, running simulated risk scenarios, is another key step in stress-testing systems before things go wrong."
AI agents are set to be widely deployed in 2026, moving beyond simple chatbots to complex autonomous systems. Organizations are concerned about the unpredictability of these agents and the risks they pose. KPMG has developed a framework to mitigate those risks, emphasizing the importance of defining agent boundaries and monitoring agent actions. Each agent has a unique identifier for tracking, with oversight provided by an AI operations center, and stress-testing through simulated risk scenarios helps ensure agents operate safely without constant human intervention.
Read at www.businessinsider.com