AI Agents attacking Salesforce is not a surprise!
DarkReading.com reported that “Salesforce Web forms can be manipulated by the company's "Agentforce" autonomous agent into exfiltrating customer relationship management (CRM) data — a concerning development as legacy software-as-a-service (SaaS) providers race to integrate agentic AI into their platforms to zhuzh up the user experience and generate buzz among investors.” The September 25, 2025 article entitled “Salesforce AI Agents Forced to Leak Sensitive Data” (https://www.darkreading.com/vulnerabilities-threats/salesforce-ai-agents-leak-sensitive-data) included these comments:
Agentforce is an agentic AI platform built into the Salesforce ecosystem, which allows users to spin up autonomous agents for most conceivable tasks. As the story often goes though, the autonomous technology appears to be the victim of the complexity of AI prompt training, according to researchers at Noma Security.
To wit: The researchers have identified a critical vulnerability chain in Agentforce, carrying a 9.4 out of 10 score on the CVSS vulnerability-severity scale. In essence it's a cross-site scripting (XSS) play for the AI era — an attacker plants a malicious prompt into an online form, and when an agent later processes it, it leaks internal data. In keeping with all of the other prompt injection proofs-of-concept (PoCs) coming out these days, Noma has named its trick "ForcedLeak."
To mitigate the risk, users should add any additional external URLs that users rely on to the Salesforce Trusted URLs list or to their AI agent's instructions. This includes things like external feedback forms (like forms.google.com), external knowledge bases, or any third-party websites your agents need to link to, according to Salesforce.
Good advice, and be careful of these AI Agents!