According to EMA's latest survey report on Agentic AI Identities, enterprises have crossed a line that’s easy to miss in the rush of time-to-market: AI agents are being given the ability to act inside production systems, often before guardrails are in place.
What usually happens is a team ships an internal agent to help with tasks such as pulling context from a ticket, searching a knowledge base, drafting a response, maybe opening a follow-up task. It’s useful. People start trusting it. Then someone asks for “one more capability,” because the real time-saver isn’t just writing the response, it’s having the agent actually do the thing it was asked to do. So the agent gets access to a tool. Then another. Then a privileged API.
Somewhere along that path, you’ve introduced a new kind of operator, one that can take steps across systems without the same friction we’ve historically relied on to keep power in check.
The EMA report puts a number on the governance side of that story: 79% of organizations lack written policies for governing AI agents but have already deployed them into production. This isn’t a “future-state” problem; it’s a present one.
When agents don’t fit the identity model, organizations force them to
Once an agent can act, the next question is: what identity is it acting as? EMA found that 60.5% of organizations are using a hybrid human/service-account management model for agent identities. That is an open admission that the old categories don’t map cleanly to autonomous behavior.
That hybrid approach can keep projects moving, but it also creates familiar failure modes in a new disguise. Permissions grow because the agent’s tasks grow, approvals don’t capture intent, and audit trails don’t clearly tell the story of why those calls happened. Forcing agents into identity containers designed for humans or static service accounts compromises both security and auditability.
The report shows why so many teams feel behind the curve. Majorities say their IAM stack is not robust or ready for agents across security, scale, compliance, and resiliency. And that’s before you add the complexity of modern identity sprawl. Organizations report using an average of three IAM platforms, with 34% using four or more, making unified policy even harder to implement.
The side door: agent tooling that spreads faster than architecture reviews
Even if your official agent program is centrally managed, there’s a second storyline unfolding in developer workflows: tools that connect AI assistants to enterprise systems.
Clutch Security’s report on Model Context Protocol (MCP) describes “silent, explosive adoption” and backs it with enterprise deployment data. Their numbers are worth paying attention to because they illustrate how quickly agent-adjacent access can proliferate when the installation path is simple and the payoff is immediate. In their enterprise sample, 15.28% of employees were running at least one local MCP server, and 86% of MCP users chose local servers over remote alternatives.
That local preference is the crux. Local servers don’t live in a neatly controlled runtime; they live where developers work. Clutch reports that 95% of local MCP servers run on employee endpoints “where security tools have no visibility” with the remaining 5% showing up in places like CI pipelines and cloud workloads, which they describe as “more dangerous” because they’re persistent and privileged.
In other words: the same forces driving agent adoption (speed, convenience, time-to-value) also drive architecture decisions that reduce oversight, which is exactly the gap EMA warns about when it notes that “human-in-the-loop” oversight often can’t keep up with an agent’s actions.
Even if you don’t use MCP at all, the pattern should feel familiar: when the “connect AI to tools” layer is expanding that fast, governance that relies on committees, quarterly reviews, or manual approvals simply isn’t built for the pace.
The story ends the same way unless you interrupt it
Which brings the story back to EMA’s core finding: organizations are deploying autonomous capability faster than they can define and enforce the identity rules for it. The longer that gap persists, the more likely it is that agent power expands through convenience rather than design.
If you’re only managing the agents you can name, you’re likely missing the bigger control surface: the identities and entitlements those agents (and their tool connectors) use to reach real systems, including service accounts, API tokens, OAuth grants, cloud roles, and delegated privileges.
That “access path” is where IAM has to be explicit and enforceable. It must be able to assess who is making the request, what permissions that identity actually holds, what conditions must be true (scope, resource, time, network, approval), and what authoritative audit evidence ties each action back to a specific agent, session, and policy decision.
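As a rough sketch of what “explicit and enforceable” can mean in practice (not EMA’s framework or any specific product’s API; the field names, policy rules, and helpers below are illustrative assumptions), a policy check on an agent’s tool call might look something like this:

```typescript
// Illustrative sketch only: names and rules are assumptions, not a vendor API.
// The point: every agent action is evaluated against explicit conditions and
// leaves audit evidence tied to a specific agent, session, and decision.

type AgentRequest = {
  agentId: string;         // which agent identity is acting
  sessionId: string;       // the delegated session it is acting under
  tool: string;            // e.g. "tickets.update"
  resource: string;        // e.g. "ticket:4711"
  scopesGranted: string[]; // entitlements attached to this agent identity
  requestedAt: Date;
  networkZone: "internal" | "vpn" | "public";
  approvedBy?: string;     // human approver, if the policy demands one
};

type Decision = { allow: boolean; reason: string };

// Hypothetical policy: scope must cover the tool, calls only from trusted
// networks within an allowed time window, and privileged tools require an
// explicit human approval.
function evaluate(req: AgentRequest): Decision {
  if (!req.scopesGranted.includes(req.tool)) {
    return { allow: false, reason: "scope does not cover tool" };
  }
  if (req.networkZone === "public") {
    return { allow: false, reason: "untrusted network" };
  }
  const hour = req.requestedAt.getUTCHours();
  if (hour < 6 || hour > 20) {
    return { allow: false, reason: "outside allowed time window" };
  }
  if (req.tool.startsWith("admin.") && !req.approvedBy) {
    return { allow: false, reason: "privileged tool requires approval" };
  }
  return { allow: true, reason: "all conditions satisfied" };
}

// Audit evidence: one record per decision, tying action -> agent -> session -> policy outcome.
function audit(req: AgentRequest, decision: Decision): void {
  console.log(JSON.stringify({
    agentId: req.agentId,
    sessionId: req.sessionId,
    tool: req.tool,
    resource: req.resource,
    decision,
    at: req.requestedAt.toISOString(),
  }));
}
```

The specific rules matter less than the shape: every tool call passes through a policy decision, and every decision leaves a record that names the agent, the session, and the reason it was allowed or denied.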
If you’ve been in identity long enough, you know how this movie usually plays out. The early phase is optimism. The middle phase is exceptions. The last phase is an incident review where someone asks, “Why did it have that access?”
The good news is that the way out of this is not a thousand-page policy. It’s a practical set of patterns that answer questions such as:
- What should an agent identity look like when tasks are dynamic?
- How do you prevent privilege escalation when autonomy is the feature?
- What does a defensible audit trail look like when “intent” matters as much as “action”?
- Where do you draw the line between helpful automation and unbounded authority?
Join our fireside chat webinar, “Your AI Agents are already running, but who’s securing them?” featuring George Fletcher, Identity Standards Architect, and Jeff Hickman, Ory’s Head of Customer Engineering. They’ll share what they’re seeing as agents move into production, and the practical identity and security patterns organizations can implement now.