The hidden Identity crisis in agentic commerce
Agentic commerce is here, but authorization isn't. How AI platforms rushing to integrate with major brands are creating a systemic identity security crisis.


As AI platforms rush to integrate with major brands, who's ensuring these agents have proper authorization to act on your behalf?
In early October 2025, the AI platform wars hit a new milestone. OpenAI announced that ChatGPT now serves 800 million weekly active users and unveiled partnerships with Uber, Expedia, Walmart, Spotify, and Booking.com. Google's Gemini reached 450 million monthly users, while Meta AI crossed the billion-user mark. The message is clear: AI assistants are becoming the new interface layer between consumers and commerce.
Mark Mahaney, Evercore ISI's Head of Internet Research, captured the moment on CNBC: "This dramatically changes the internet landscape... in the space of the next two or three years." He described this as the arrival of "agentic commerce," where AI systems act autonomously on your behalf, handling everything from ride bookings to travel reservations without requiring you to click through screens.
But beneath the partnership announcements and user growth metrics lies a critical question: When you tell an AI assistant to "book me a ride home," who's verifying that the agent actually has permission to spend your money?
The traditional auth model wasn't built for agentic AI
Traditional e-commerce follows a simple pattern: log in once, then confirm each transaction. You authenticate to Uber, select your destination, review the price, and tap to confirm. Each step requires explicit consent because humans are always in the loop.
Agentic commerce inverts this. As Mahaney explained, AI systems "automate it a lot better, personalize a lot better" without making you "always go through the same prompts." You state intent in natural language, and the agent handles authentication, selection, booking, and payment. The human moves from active participant to passive overseer.
When you connect your Uber account to an AI platform, you're giving that platform's agents standing permission to book rides whenever they interpret your words as requesting one. Multiply this across dozens of services, millions of users, and multiple competing AI platforms, and the scope of the identity challenge becomes clear.
When Identity systems fail at scale
The risks aren't theoretical. Industry analysis suggests agent-related security incidents are becoming one of the fastest-growing threat vectors. If an attacker compromises your session on any AI platform, they inherit your delegated permissions across every integrated service. With static OAuth tokens (still the standard for most integrations), there's no mechanism to say "yes, you can book a $30 ride to work, but no, you can't book a $300 ride to another city at 3 AM."
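To make that gap concrete, here is a minimal TypeScript sketch contrasting what a static OAuth grant typically records with what a constrained delegation grant to an agent would need to record. The type names and fields are illustrative assumptions, not any platform's actual API.

```typescript
// Illustrative only: these types are assumptions, not any platform's real API.

// What a static OAuth grant typically captures: a scope string and an expiry.
// Nothing here limits amount, destination, or time of day.
interface StaticOAuthGrant {
  scope: string;     // e.g. "rides:create"
  expiresAt: Date;
}

// What a constrained delegation grant to an AI agent would need to capture.
interface AgentDelegationGrant {
  userId: string;                      // the delegating user
  agentId: string;                     // the AI agent acting on their behalf
  action: string;                      // e.g. "rides:create"
  maxAmountUsd: number;                // per-transaction spending cap
  allowedDestinations: string[];       // verified addresses only
  allowedHoursUtc: [number, number];   // e.g. [13, 23] for business hours
  requiresTrustedDevice: boolean;
  expiresAt: Date;
}
```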
The problem compounds in enterprise settings. An employee asks their AI assistant to "book the next flight to Chicago for the client meeting." To execute this securely, the system must verify the employee's corporate travel authorization, check their policy tier, confirm Chicago is an approved destination, apply the correct expense codes, and potentially route high-value bookings through manager approval. But most current integrations treat the AI agent and the employee as the same identity, meaning the agent could theoretically make bookings the employee never authorized.
Traditional identity and access management (IAM) systems weren't designed for autonomous agents. When agents and users share permissions, agents may execute unintended actions even if those actions technically fall within the user's permission scope. The system can't distinguish between "the user clicked this button" and "the agent interpreted ambiguous natural language and decided to take this action."
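One existing building block for making that distinction is the actor ("act") claim from OAuth 2.0 Token Exchange (RFC 8693), which lets a token name both the delegating user and the party acting on their behalf. The sketch below shows the idea; all claim values are hypothetical.

```typescript
// Sketch of JWT payload claims using the "act" (actor) claim from OAuth 2.0
// Token Exchange (RFC 8693). All values are hypothetical. The point: the
// subject (the user) and the actor (the agent) are separate, auditable identities.
const delegatedTokenClaims = {
  iss: "https://auth.example.com",    // hypothetical issuer
  sub: "user:alice",                  // the user on whose behalf work is done
  aud: "https://rides.example.com",   // hypothetical resource server
  scope: "rides:create",
  act: {
    sub: "agent:chat-assistant-42",   // the AI agent actually making the call
  },
  exp: 1767225600,                    // token expiry (Unix timestamp)
};
```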
What Identity infrastructure requires
The solution requires rethinking identity infrastructure from first principles. AI agents need to be treated as a distinct identity class, separate from the users who delegate authority to them.
This starts with moving beyond simple role-based access control to context-aware decisions that evaluate multiple attributes (time of day, transaction amount, location, device trust level) before granting permission. Relationship-based access control (inspired by Google's Zanzibar) allows systems to understand the connections between users, agents, and resources, enabling questions like "does this agent have a valid delegation chain from this user for this specific action?"
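As a rough illustration of the relationship-based model, the sketch below stores Zanzibar-style tuples and answers a delegation check against them. The tuple format and helper are assumptions made for this example, not the syntax of any particular engine.

```typescript
// Zanzibar-inspired relationship tuples. Format and helper are illustrative
// assumptions, not the API of a real authorization engine.
type RelationTuple = { object: string; relation: string; subject: string };

const tuples: RelationTuple[] = [
  // Alice owns her connected ride-hailing account.
  { object: "rides:account:alice", relation: "owner", subject: "user:alice" },
  // Alice delegated ride booking on that account to her AI agent.
  { object: "rides:account:alice", relation: "ride_booker", subject: "agent:chat-assistant-42" },
];

// "Does this agent have a valid delegation from this user for this action?"
function check(object: string, relation: string, subject: string): boolean {
  return tuples.some(
    (t) => t.object === object && t.relation === relation && t.subject === subject
  );
}

// check("rides:account:alice", "ride_booker", "agent:chat-assistant-42") -> true
```

A production engine would also resolve indirect relationships (for example, the agent may book rides only while the delegating user still owns the account) rather than matching single tuples.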
A properly designed system enforces policies like: "AI Agent can book Uber rides to addresses in the user's verified locations list, maximum $75 per ride, only during weekday business hours, and only when originating from a trusted device." This represents a dynamic evaluation considering multiple factors before authorizing each action, not a single OAuth scope.
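Expressed as code, that policy becomes a per-request evaluation rather than a one-time scope grant. The sketch below is a simplified illustration; the field names, thresholds, and business-hours window are assumptions.

```typescript
// Context-aware check for an agent-initiated ride booking. Names and
// thresholds are illustrative assumptions, not a specific product's policy.
interface RideRequest {
  destination: string;
  amountUsd: number;
  requestedAt: Date;
  deviceTrusted: boolean;
}

interface RidePolicy {
  verifiedDestinations: string[];
  maxAmountUsd: number;               // e.g. 75
  businessHoursUtc: [number, number]; // e.g. [13, 23]
}

function authorizeRide(req: RideRequest, policy: RidePolicy): boolean {
  const day = req.requestedAt.getUTCDay();    // 0 = Sunday ... 6 = Saturday
  const hour = req.requestedAt.getUTCHours();
  return (
    policy.verifiedDestinations.includes(req.destination) &&
    req.amountUsd <= policy.maxAmountUsd &&
    day >= 1 && day <= 5 &&                   // weekdays only
    hour >= policy.businessHoursUtc[0] &&
    hour < policy.businessHoursUtc[1] &&
    req.deviceTrusted
  );
}
```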
Service providers like Uber and Expedia need policy engines that evaluate complex authorization rules. AI platforms need fine-grained delegation management interfaces that give users visibility and control. Enterprises need to extend Zero Trust architectures to cover AI agents as privileged identities requiring continuous verification.
Users need transparency. Every platform should show exactly what you've delegated, to which agents, with what limitations, and provide one-click revocation. The current state, where most users have no idea what permissions they've granted, won't scale.
The interoperability problem
We're not building one agentic commerce ecosystem; we're building several. ChatGPT, Gemini, and Meta AI are racing to integrate with the same service providers. Users will delegate permissions across multiple platforms. Your employer standardizes on Gemini for work while you use ChatGPT personally and your family coordinates through Meta AI.
Without standardized protocols, this fragments into a security nightmare. Users lose track of authorizations. Service providers must implement authorization correctly with each platform separately. Enterprises struggle to maintain consistent security postures. When something goes wrong, forensic analysis becomes nearly impossible without a unified audit trail.
The platforms that contribute to open standards will enable the safest experiences. The Model Context Protocol (MCP), championed by Anthropic and gaining adoption across the AI ecosystem, represents a critical step forward. MCP provides a standardized way for AI systems to securely connect to external data sources and tools while maintaining proper context and security boundaries. It's designed specifically to address the challenge of AI agents needing controlled access to resources—exactly the problem agentic commerce creates at scale.
However, MCP primarily handles the communication layer. What's still underdeveloped are comprehensive identity and authorization standards that work on top of protocols like MCP. This is where open-source identity infrastructure becomes critical: enabling consistent authorization patterns across platforms, integrating with emerging standards like MCP, and avoiding vendor lock-in.
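In practice, that looks like an authorization layer sitting in front of every agent tool invocation. The sketch below is purely conceptual: it does not use the actual MCP SDK, and every type and function name is an assumption.

```typescript
// Conceptual authorization gate in front of an agent tool call. This does not
// use the real MCP SDK; all types and functions are assumptions for illustration.
type Decision = { allow: boolean; reason: string };

interface ToolCall {
  tool: string;                          // e.g. "uber.book_ride"
  args: Record<string, unknown>;
  actor: { userId: string; agentId: string };
}

// Stand-in for a call out to a policy engine or identity service.
async function authorize(call: ToolCall): Promise<Decision> {
  // A real implementation would evaluate delegation tuples, policy constraints,
  // and request context (amount, time, device) before deciding.
  if (call.tool !== "uber.book_ride") return { allow: false, reason: "unknown tool" };
  return { allow: true, reason: "delegation and policy checks passed" };
}

async function handleToolCall(
  call: ToolCall,
  execute: (c: ToolCall) => Promise<unknown>
): Promise<unknown> {
  const decision = await authorize(call);
  // Record both identities so audits can tell user actions from agent actions.
  console.log(JSON.stringify({ ...call.actor, tool: call.tool, ...decision }));
  if (!decision.allow) throw new Error(`Denied: ${decision.reason}`);
  return execute(call);
}
```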
Why modern IAM is the competitive advantage
Mahaney's point resonates: "If you didn't have a website, eventually you got screwed." Agentic commerce represents a similar inflection point. The companies that build proper identity infrastructure now will own the market. Those that don't will spend years retrofitting authorization into systems designed without it, losing customers to competitors who can safely enable agentic features.
The window to build this correctly is narrow. Once patterns become entrenched—once millions of users have granted broad, static permissions to AI agents—unwinding that technical debt becomes nearly impossible without breaking existing integrations. The time to implement fine-grained authorization, policy-based access control, and relationship-based permissions is before you're managing hundreds of millions of delegated sessions.
For identity providers, modern identity and access management (IAM) isn't defensive infrastructure—it's what enables product differentiation. Imagine being able to tell enterprise customers: "Our AI integration includes granular authorization that respects your corporate policies, step-up authentication for sensitive transactions, and comprehensive audit logs that distinguish between user and agent actions." That's not a compliance checkbox; it's a competitive advantage that wins deals, especially as the first security incidents hit competitors who rushed to integrate without proper authorization.
For enterprises, the question isn't whether to extend identity governance to AI agents—it's whether you'll do it proactively or reactively. The same principles governing human access must apply to agents: least privilege, separation of duties, continuous verification. The enterprises deploying IAM solutions capable of handling agentic demands today are the ones that will safely enable employee productivity gains tomorrow. Those waiting for "clearer standards" will watch competitors move faster while they're stuck in security review.
For AI platforms, building on modern identity infrastructure from the start means you can offer enterprise customers the deployment models they actually need: integration with their existing identity providers, support for their authorization policies, and audit trails that satisfy their compliance requirements. The platforms treating identity as a first-class concern will win enterprise deals. Those treating it as an afterthought will face procurement blockers.
The identity infrastructure layer for agentic commerce must provide authorization that evaluates context dynamically, relationship-based permissions that understand delegation chains, and comprehensive audit capabilities that track every agent action. Every transaction flowing through an AI agent depends on identity systems that can answer: "Is this agent actually authorized to do what it's trying to do on behalf of this user?"
The solutions being built today aren't proprietary black boxes. They're open, standards-based identity infrastructure that works across platforms, integrates with MCP and emerging protocols, connects to existing enterprise systems, and gives organizations control over their authorization logic. Because the future of agentic commerce depends on identity infrastructure that's as flexible and dynamic as the agents themselves—and the companies that deploy it first will define the market.
The race that matters
The numbers tell the story: 800 million ChatGPT users, 450 million Gemini users, 1 billion Meta AI users. These platforms have the scale to reshape digital commerce. Partnerships with Uber, Expedia, Walmart, and Spotify signal that major brands are taking this seriously.
But scale without security creates systemic risk. The 2024 OAuth breaches demonstrated what happens when attackers steal tokens with delegated permissions. Extend that scenario to agents with financial transaction capabilities across dozens of services and multiple AI platforms, and the potential damage becomes ecosystem-wide.
The companies that succeed won't necessarily integrate first or fastest. They'll treat AI agent identity as a first-class security concern, implement fine-grained authorization respecting user intent, give users transparent control over delegated permissions, and work toward open standards enabling safe interoperability.
This moment requires collaboration, not just competition. The identity infrastructure for agentic commerce can't be solved by any single player. It requires service providers, AI platforms, enterprises, and the identity industry working together on standards that protect users while enabling innovation.
The race is on. But cutting corners on authorization isn't a shortcut to market leadership—it's a shortcut to the next major security incident.
Mark Mahaney is Senior Managing Director and Head of Internet Research at Evercore ISI. This analysis draws from his October 15, 2025 interview on CNBC's "Market Alert: Rise of Agentic Commerce" segment and incorporates research from leading cybersecurity and authentication platforms.
Further reading

How a redirect broke login with Apple for a full day

How Apple broke "Sign in with Apple" with an unannounced and silent redirect

The future of Identity: How Ory and Cockroach Labs are building infrastructure for agentic AI

Ory and Cockroach Labs announce partnership to deliver the distributed identity and access management infrastructure required for modern identity needs and securing AI agents at global scale.