Can cybersecurity really protect us from AI-driven Non-Human Identities (NHI)?
CSOonline reported that “Machine identities pose a big security risk for enterprises, and that risk will be magnified dramatically as AI agents are deployed. According to a report by cybersecurity vendor CyberArk, machine identities — also known as non-human identities (NHI) — now outnumber humans by 82 to 1, and their number is expected to increase exponentially. By comparison, in 2022, machine identities outnumbered humans by 45 to 1.” The July 2, 2025 article entitled “How cybersecurity leaders can defend against the spur of AI-driven NHI” (https://www.csoonline.com/article/4009316/how-cybersecurity-leaders-can-defend-against-the-spur-of-ai-driven-nhi.html) included these comments about “Generative AI and AI agents increase NHI risks”:
According to the CyberArk survey, AI is expected to be the top source of new identities with privileged and sensitive access in 2025. It’s no surprise that 82% of companies say their use of AI creates access risks. Many generative AI technologies are so easy to deploy that business users can do it without input from IT, and without security oversight. Almost half of all organizations, 47%, say that they aren’t able to secure and manage shadow AI.
AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly “it opens up the door to a lot of bad things to happen.”
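To make that misconfiguration concern concrete, here is a minimal sketch (all names are hypothetical, not taken from the article or any particular product) of a deny-by-default scope check that an orchestrator could run before an AI agent touches an enterprise tool. The point of the deny-by-default design is that a mistake tends to fail closed: a missing scope blocks the agent rather than quietly widening its access.

```python
# Minimal sketch (hypothetical names): deny-by-default scope check that an
# orchestrator could run before letting an AI agent invoke an enterprise tool.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    # Explicitly granted scopes; anything not listed here is denied.
    scopes: set[str] = field(default_factory=set)

def invoke_tool(agent: AgentIdentity, tool: str, required_scope: str, action):
    """Run `action` only if the agent holds the scope the tool requires."""
    if required_scope not in agent.scopes:
        # Fail closed with a clear error instead of silently proceeding.
        raise PermissionError(f"{agent.name} lacks scope '{required_scope}' for {tool}")
    return action()

# Example: a reporting agent may read CRM data but was never granted email access.
report_agent = AgentIdentity(name="quarterly-report-agent", scopes={"crm:read"})
print(invoke_tool(report_agent, "crm", "crm:read", lambda: "fetched accounts"))  # allowed
# invoke_tool(report_agent, "smtp", "mail:send", lambda: "sent mail")            # raises PermissionError
```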
Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences.
This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes. In addition to individual agents, agentic AI systems can also include access to data and tools, as well as security and risk guardrails.
“In old scripts the code is static and you can look at the behavior, look at the code, and you know that this thing should be connecting,” Taylor says. “In AI, the code changes itself… Agentic AI is cutting edge. And sometimes you step over that edge, and it can cut.”
This isn’t a purely theoretical threat. In May, Anthropic released the results of the security testing on its latest Claude models. In one test, Claude was allowed access to company emails, so that it could serve as a useful assistant. In reading the emails, Claude discovered information about its own impending replacement with a newer AI system, and also that the engineer in charge of this replacement was having an affair. In 84% of the tests, Claude attempted to blackmail the engineer so that it wouldn’t be replaced. Anthropic said it put guardrails in place to keep this kind of thing from happening, but it hasn’t released the results of any tests on those guardrails.
This should raise significant concerns for any company giving AI direct access to email systems.
Unanticipated behaviors are just the start. According to CSA, another challenge with agents is the unstructured nature of their communications. Traditional applications communicate through extremely predictable, well-defined channels and formats. AI agents can communicate with other agents and systems using plain language, making that traffic hard to monitor with traditional security techniques.
What do you think about this?