
"AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the program to carry out a broader variety of actions. An agent can be as simple as a chatbot, such as ChatGPT, that has access to a corporate database via a technique like retrieval-augmented generation (RAG), or it can involve a more complex arrangement, such as the bot invoking a wide array of function calls to various programs simultaneously."
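The two patterns described above, a RAG-style lookup and a tool-invoking agent, can be sketched in a few lines. This is a minimal illustration only; every name here (`retrieve_context`, `lookup_order_status`, `run_agent`) is hypothetical, and a real agent would hand the retrieved context to an LLM rather than use the stubbed decision logic below.

```python
# Minimal sketch of the two agent patterns described above.
# All names are hypothetical; this is not any vendor's API.

def retrieve_context(query: str, documents: list[str]) -> list[str]:
    """RAG-style retrieval: pick documents sharing words with the query."""
    terms = set(query.lower().split())
    return [d for d in documents if terms & set(d.lower().split())]

def lookup_order_status(order_id: str) -> str:
    """A tool the agent can invoke via a function call (stubbed)."""
    return f"order {order_id}: shipped"

TOOLS = {"lookup_order_status": lookup_order_status}

def run_agent(query: str, documents: list[str]) -> str:
    """Augment the prompt with retrieved context, then let the
    (simulated) model decide whether to call a tool."""
    context = retrieve_context(query, documents)
    # A real agent would send context + query to an LLM; here we fake
    # the model's decision: any mention of "order" triggers a tool call.
    if "order" in query.lower():
        return TOOLS["lookup_order_status"]("A-123")
    return "Answer grounded in: " + "; ".join(context)

print(run_agent("What is the return policy?", ["Return policy: 30 days"]))
print(run_agent("Where is my order?", []))
```

The security concern in the article follows directly from this structure: the agent, not the human, decides which tool to call and with what credentials.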
"As enterprises begin implementing artificial intelligence agents, senior executives are on alert about the technology's risks but also unprepared, according to Nikesh Arora, chief executive of cybersecurity giant Palo Alto Networks. "There is beginning to be a realization that as we start to deploy AI, we're going to need security," said Arora at a media briefing in which I participated. "And I think the most amount of consternation is around the agent part," he said, "because customers are concerned that if they don't have visibility to the agents, if they don't understand what credentials agents have, it's going to be the Wild West in their enterprise platforms.""
Enterprises are beginning to deploy artificial intelligence agents that can access external resources and act like human workers, creating new security and identity management challenges. Executives express concern about lack of visibility into agents and uncertainty over agent credentials, raising risks of uncontrolled access across enterprise platforms. Agents range from chatbots using retrieval-augmented generation to complex orchestrators invoking many function calls across systems. Commercial software increasingly embeds agentic automation that performs human tasks. The expanded agent attack surface demands improved identity, credentialing, and monitoring capabilities. Part of the response will involve leveraging AI agents themselves to automate and strengthen security controls.
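The credentialing and monitoring capabilities mentioned above can be sketched as a small policy check: each agent holds a scoped credential, and every action it attempts is recorded for audit. This is an illustrative sketch under assumed names (`AgentCredential`, `perform`, `audit_log`), not a real identity product's API.

```python
# Hypothetical sketch of scoped agent credentials with an audit log.
# Names are illustrative, not drawn from any real identity system.
import time

class AgentCredential:
    def __init__(self, agent_id: str, allowed_actions: set[str]):
        self.agent_id = agent_id
        self.allowed_actions = allowed_actions

audit_log: list[dict] = []

def perform(cred: AgentCredential, action: str, target: str) -> bool:
    """Allow the action only if the credential's scope covers it,
    and record every attempt so the agent's behavior is visible."""
    allowed = action in cred.allowed_actions
    audit_log.append({"agent": cred.agent_id, "action": action,
                      "target": target, "allowed": allowed,
                      "ts": time.time()})
    return allowed

cred = AgentCredential("billing-bot", {"read:invoices"})
perform(cred, "read:invoices", "inv-42")    # permitted and logged
perform(cred, "delete:invoices", "inv-42")  # denied, but still logged
```

Scoping credentials per agent and logging denied attempts addresses both concerns Arora raises: knowing what credentials each agent holds, and having visibility into what it actually does.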
Read at ZDNET