You need to take responsibility for security when using AI Agents!
DarkReading.com reported that “Agentic AI deployments are becoming an imperative for organizations of all sizes looking to boost productivity and streamline processes, especially as major platforms like Microsoft and Salesforce build agents into their offerings. In the rush to deploy and use these helpers, it's important that businesses understand that there's a shared security responsibility between vendor and customer that will be critical to the success of any agentic AI project.” The October 17, 2025 article entitled “AI Agent Security: Whose Responsibility Is It?” (https://www.darkreading.com/cybersecurity-operations/ai-agent-security-awareness-responsibility) included these comments:
The stakes in ignoring security are potentially high: last month for instance, AI security vendor Noma detailed how it discovered "ForcedLeak," a critical severity vulnerability chain in Salesforce's agentic AI offering Agentforce, which could have allowed a threat actor to exfiltrate sensitive CRM data from a customer with improper security controls through an indirect prompt injection attack. Although Salesforce addressed the issue through updates and access control recommendations, ForcedLeak is but one example of the potential for agents to leak sensitive data, either through improper access controls, ingested secrets, or a prompt injection attack.
It's not an easy task to add agentic AI security to the mix; it's already challenging enough to determine where responsibility and culpability lie with traditional software and cloud deployments. With something like AI, where the technology can be hastily rolled out (by both vendor and customer alike) and is constantly evolving, establishing those barriers can prove even more complex.
Moreover, organizations are addressing other security awareness challenges like phishing, and have had to contend with determining the best way to offload as much risk as possible from the user, rather than relying on said user to catch every single malicious email. For phishing, that may take the form of physical FIDO keys and secure email gateways.
This is similarly relevant for AI agents, which are imperfect autonomous processes that users may rely on to access sensitive information, grant excessive permissions to, or use to route insecure processes without proper oversight. Training users on how to use — or not use — their agent helpers is thus just one more layer of difficulty for security teams.
Good advice!