

What Will It Take for Consumers to Trust Agentic Commerce?

Arjun Bhargava

Co-founder and CEO @ Rye

10 minute read

Learn how trust in agentic commerce is built — from secure agentic transactions and tokenization to fraud detection and what merchants should evaluate before going live.

TL;DR / Key Takeaways

  • 70% of U.S. consumers say they're open to AI agents handling shopping tasks on their behalf (PYMNTS, January 2026) — but only 17% are comfortable completing a purchase through AI (ChannelEngine, January 2026). Closing this trust gap is what unlocks consumer adoption and real transaction volume in the channel.

  • Payment security is the top consumer concern. One-third of respondents in Riskified's global survey cited it as their biggest worry about autonomous AI purchases.

  • In a well-architected system, AI agents never see raw card data. Tokenized payment infrastructure — already deployed by Visa, Mastercard, and Stripe — keeps PCI scope off the agent developer's plate entirely.

  • Merchant fraud systems are blocking legitimate agent purchases because they rely on human-traffic signals that don't exist in agentic transactions. But early production data is encouraging — Stripe reported near-zero fraud rates on agentic transactions with retailers like Coach and Kate Spade. The bigger emerging risk is new fraud vectors: agents manipulated by bad actors, and chargebacks from purchases customers authorized but didn't fully expect.

  • For merchants, the evaluation criteria that matter most are consent models, agent identity verification, order reliability, and accountability when something goes wrong.

The Trust Gap in Agentic Commerce

A January 2026 PYMNTS Intelligence survey of 2,299 U.S. adults found that 70% of consumers are open to AI agents handling shopping tasks on their behalf. That's a dramatic shift from early 2025, when an Omnisend study found two-thirds of consumers wouldn't let AI buy for them even if it meant a better deal. Five months later, Omnisend ran the survey again. The number had dropped to one-third.

Part of this shift is generational: according to research from eMarketer, a majority of Gen Z consumers are already comfortable letting AI make purchases for them. They've grown up with algorithmic recommendations shaping their entertainment, social feeds, and financial decisions. Extending that trust to commerce feels like a natural progression.

But openness isn't trust. Even in Omnisend's more optimistic July 2025 data, 85% of shoppers still reported concerns about AI shopping, with privacy and data security topping the list.

A ChannelEngine survey of 4,500 global shoppers, also from January 2026, found that while 58% have used AI tools to research products, only 17% feel comfortable completing a purchase through AI. Salsify's 2026 consumer research echoed the gap: 22% of shoppers incorporate AI into their buying journeys, but just 14% trust AI recommendations alone to make a purchase. And Riskified's global survey put the top concern plainly — one-third of respondents cited payment security as their biggest worry about autonomous AI purchases.

The pattern is clear. Consumers are interested. They're experimenting. But they're not handing over purchasing authority — not until the infrastructure proves it can handle their money safely.

There's a useful parallel here. Twenty-five years ago, typing a credit card number into a web form felt reckless. PayPal built its early growth on making that feel safe — through buyer protection policies and the simple assurance that a seller would never see your financial data. Agentic commerce faces a version of the same challenge, with an added layer: the buyer isn't just trusting a website. They're trusting an AI agent to act on their behalf.

So what will it take to close this gap? Three things: secure payment architecture, fraud systems that can distinguish good agents from bad ones, and transparency that gives consumers real control.

How Agentic Checkout Security Actually Works

The most common consumer fear about agentic commerce — that an AI agent will mishandle payment data — reflects a misunderstanding of how modern agentic checkout is designed. In a well-architected system, the AI agent never sees your card number. At all.

Here's the flow: when a consumer provides payment information, it goes directly to a PCI-compliant vault operated by a payment provider like Stripe. The vault returns an opaque token — a randomized string with no extractable card data. That token is all the AI agent ever sees or handles. When the agent completes a checkout, it passes the token — not the card number — to the payment layer. Raw card data is decrypted and applied only at the final step, entirely within the PCI-compliant boundary.
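The flow above can be sketched in a few lines of code. This is a toy model, not any provider's actual API: `PaymentVault`, `PaymentToken`, and `agent_checkout` are hypothetical names standing in for the PCI-compliant vault (a provider like Stripe), the opaque token, and the agent's checkout step.

```python
import secrets
from dataclasses import dataclass

@dataclass
class PaymentToken:
    """Opaque reference to a card stored in a PCI-compliant vault."""
    value: str  # randomized string; contains no extractable card data

class PaymentVault:
    """Stands in for the PCI-compliant vault. Card data never leaves it."""

    def __init__(self):
        self._store = {}  # token value -> card data, internal to the vault

    def tokenize(self, card_number: str) -> PaymentToken:
        # Consumer's card goes straight to the vault; an opaque token comes back.
        token = PaymentToken(value="tok_" + secrets.token_hex(12))
        self._store[token.value] = card_number
        return token

    def charge(self, token: PaymentToken, amount_cents: int) -> bool:
        # Raw card data is resolved only here, inside the PCI boundary.
        return token.value in self._store

def agent_checkout(vault: PaymentVault, token: PaymentToken, amount_cents: int) -> bool:
    # The agent handles only the opaque token -- never the card number.
    return vault.charge(token, amount_cents)

vault = PaymentVault()
token = vault.tokenize("4242424242424242")  # consumer -> vault, bypassing the agent
assert agent_checkout(vault, token, 5_000)
```

The point of the sketch is the boundary: the agent, the LLM behind it, and the merchant all operate on `token.value` alone, and only the vault can map it back to a card.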

This means the AI agent, the LLM powering it, the developer who built it, and the merchant receiving the order all operate without ever touching sensitive payment data. Cardholder data has no place in an LLM — once a secret enters a model context, it can't be guaranteed to stay confined. Tokenization eliminates this risk by design. For developers integrating agentic checkout infrastructure — like Rye's Universal Checkout API — this reduces PCI compliance burden to the lightest self-assessment category. Tokens in, confirmed orders out. No card data in the developer's systems.

The tokenization layer is evolving fast. Visa's Intelligent Commerce framework and Mastercard's Agent Pay both use issuer-backed agentic tokens that tie an AI agent's purchasing authority to a specific consumer, with spending limits and merchant restrictions built into the credential itself. Mastercard rolled Agent Pay out to all U.S. cardholders in late 2025. Both networks are actively integrating with AI platforms. On the protocol side, Stripe's Shared Payment Tokens (SPTs) — the payment primitive behind the Agentic Commerce Protocol — are scoped to a single transaction, time-limited, and revocable, so an agent can never spend more than the buyer authorized or outlive its intended use.
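The constraints that make these tokens safe can be modeled directly. The sketch below is in the spirit of Stripe's Shared Payment Tokens as described above (single transaction, time-limited, revocable, capped), but it is not the real SPT API; the `ScopedToken` class and its fields are illustrative only.

```python
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    """Model of a scoped agentic payment credential."""
    max_amount_cents: int   # agent can never spend more than this
    expires_at: float       # unix timestamp; token dies after this
    allowed_merchant: str   # restricted to a single merchant
    used: bool = False      # single transaction only
    revoked: bool = False   # buyer can revoke at any time

    def authorize(self, merchant: str, amount_cents: int) -> bool:
        if self.revoked or self.used:
            return False
        if time.time() > self.expires_at:
            return False
        if merchant != self.allowed_merchant:
            return False
        if amount_cents > self.max_amount_cents:
            return False
        self.used = True  # consume the token: it cannot outlive its intended use
        return True

token = ScopedToken(
    max_amount_cents=10_000,
    expires_at=time.time() + 900,  # valid for 15 minutes
    allowed_merchant="acme-store",
)
assert token.authorize("acme-store", 5_000)      # within scope: allowed
assert not token.authorize("acme-store", 5_000)  # already used: denied
```

Every check is enforced by the credential itself, so even a compromised or misbehaving agent can't exceed what the buyer authorized.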

The practical takeaway for consumers: agentic checkout, when built on tokenized infrastructure, is at least as secure as saving your card on any online retailer — and in many cases more secure, because the attack surface is smaller, not larger.

The Fraud Problem — and Why Legitimate Orders Get Blocked

Security architecture solves the payment data question. But there's a second trust problem that operates in the opposite direction: merchant fraud systems that block legitimate agent purchases.

Most online retailers use fraud detection systems designed to stop automated abuse — bots that scalp inventory, scrape prices, test stolen credit cards, drain loyalty programs, or commit payment fraud. Analysts estimate the annual toll of bot-driven fraud in e-commerce at more than $180 billion. These systems are effective and necessary. But they rely on signals tuned to human traffic: browser fingerprinting, mouse movements, device characteristics, navigation patterns. As Stripe noted in their recent lessons from building for the first generation of agentic commerce, those signals vanish in an agentic world where there's no human buyer on the frontend. The systems that were built to protect merchants are now blocking the very transactions merchants should want.

The result: many agent-initiated orders get flagged or canceled outright. When this happens, the fallback is typically a human stepping in to complete the purchase manually, at $1–3 per order, a cost that isn't recouped if the buyer gives up on a slow process. That destroys the latency and cost economics that make agentic commerce worthwhile.

The good news: early production data suggests the fraud risk itself is lower than feared. Stripe reported that since launching its Agentic Commerce Suite with retailers like Coach, Kate Spade, and Ashley Furniture, fraud rates on agentic transactions have been near zero. The reason is that even if a purchase is "new" to a given merchant, the underlying customer and their payment method typically aren't new to the payment network — which gives an immediate source of history and risk context that traditional per-site fraud models don't have.

That said, agentic commerce does introduce genuinely new fraud vectors. Agents can be manipulated by bad actors to place risky orders or bypass normal guardrails. And a more subtle problem is emerging: customers disputing charges from agent-initiated purchases they authorized but didn't fully expect — creating chargebacks that are technically legitimate but operationally costly.

The industry is working to close these gaps from multiple directions.

On the standards side, agent identity verification frameworks like Visa's Trusted Agent Protocol (TAP) are designed to let merchants distinguish authenticated AI agents from unknown automated traffic. The Agentic Commerce Consortium, a coalition of 20+ companies led by Basis Theory, is defining standards for agent authorization and merchant opt-in.

On the infrastructure side, providers like Rye route agent transactions through residential, geo-proximal IPs and tune interaction profiles to resemble consumer behavior — ensuring that legitimate purchases by real consumers through their chosen agents aren't rejected by fraud systems that haven't yet adapted to agentic checkout. It's no different from someone asking a friend to buy something on their behalf — these are real people, buying real products, through their chosen agents.

The trajectory here is clear. As fraud systems shift from human-traffic heuristics to network-level identity verification and scoped payment tokens, the false-positive rate should drop and the new vectors will get addressed. But in the near term, reliability of order completion is the most important trust signal for consumers. An agent that actually completes the purchase — every time, without cancellations — earns trust faster than any marketing message.

What Merchants Should Evaluate — and What Consumers Need to See

The merchant side and the consumer side of trust are two halves of the same equation. Merchants need infrastructure that earns consumer confidence; consumers need visibility and control before they'll hand over purchasing authority. Here's what matters on both sides.

Consent and control. How does a consumer authorize an agent to act on their behalf — and how much latitude does the agent get? The emerging standard involves explicit opt-in with defined spending limits, merchant restrictions, and the ability to revoke access at any time. Programmable card controls — like the ones built into Visa Intelligent Commerce and Mastercard Agent Pay — let consumers set maximum transaction amounts, restrict purchasing to specific merchant categories, and require explicit approval above a threshold. The PYMNTS study found that consumer interest rises sharply when systems let users preview actions, approve transactions, and undo decisions. In the Shift Browser 2026 AI Consumer Survey, 44% of consumers said they're afraid of AI taking unauthorized autonomous actions. Guardrails aren't a feature. They're a prerequisite.
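A minimal sketch of how programmable controls like these compose: a per-transaction cap, a merchant-category allowlist, and an approval threshold. The `AgentMandate` name and fields are hypothetical, not drawn from Visa's or Mastercard's actual APIs.

```python
from dataclasses import dataclass

@dataclass
class AgentMandate:
    """Consumer-defined limits on what an agent may buy."""
    max_txn_cents: int             # hard per-transaction cap
    allowed_categories: set        # e.g. {"grocery", "books"}
    approval_threshold_cents: int  # above this, ask the human first

def evaluate(mandate: AgentMandate, category: str, amount_cents: int) -> str:
    """Return 'deny', 'needs_approval', or 'allow' for a proposed purchase."""
    if category not in mandate.allowed_categories:
        return "deny"
    if amount_cents > mandate.max_txn_cents:
        return "deny"
    if amount_cents > mandate.approval_threshold_cents:
        return "needs_approval"  # preview + explicit consumer approval
    return "allow"

mandate = AgentMandate(
    max_txn_cents=20_000,
    allowed_categories={"grocery", "books"},
    approval_threshold_cents=5_000,
)
assert evaluate(mandate, "books", 2_000) == "allow"
assert evaluate(mandate, "books", 8_000) == "needs_approval"
assert evaluate(mandate, "electronics", 2_000) == "deny"
```

Note the middle outcome: "needs approval" is what turns a guardrail into the preview-and-approve experience the PYMNTS study says consumers want.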

Agent identity verification. Merchants need a way to verify that an incoming transaction is from an authenticated agent acting on behalf of a real consumer — not an unknown script. Frameworks like Visa TAP and the Agentic Commerce Consortium's authorization standards are building this layer. Merchants should evaluate whether their infrastructure provider supports these verification methods today.
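Conceptually, agent identity verification means each request carries proof that it came from a registered agent. The sketch below illustrates the idea with a shared-secret HMAC signature; Visa TAP's actual mechanics differ, and the agent registry and field names here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical registry populated during agent onboarding.
REGISTERED_AGENTS = {"agent_rye_001": b"shared-secret-from-onboarding"}

def sign_request(agent_id: str, body: bytes, key: bytes) -> str:
    """Agent side: sign the request body with the onboarding key."""
    return hmac.new(key, agent_id.encode() + body, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, body: bytes, signature: str) -> bool:
    """Merchant side: accept only signatures from registered agents."""
    key = REGISTERED_AGENTS.get(agent_id)
    if key is None:
        return False  # unknown automated traffic: no registered identity
    expected = hmac.new(key, agent_id.encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time comparison

body = b'{"sku": "shoe-42", "qty": 1}'
sig = sign_request("agent_rye_001", body, REGISTERED_AGENTS["agent_rye_001"])
assert verify_agent("agent_rye_001", body, sig)    # authenticated agent
assert not verify_agent("unknown_bot", body, sig)  # rejected
```

The property that matters for merchants is the same regardless of mechanism: a verifiable, revocable identity per agent, instead of guessing from traffic heuristics.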

Transparency. Consumers should see what the agent found, why it recommends a product, what the total cost is (including shipping and tax), and what it's about to purchase — before the purchase happens. The Shift Browser survey found that nearly half of respondents are comfortable with autonomous features as long as there's clear oversight. The operative phrase: "as long as."

Reliability and error handling. Consumer trust erodes fast when an agent makes a bad purchase, fails to complete checkout, or presents stale pricing. The questions to ask: what's the order reliability rate? What happens when a checkout fails? How is inventory freshness maintained?
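One concrete answer to "what happens when a checkout fails" is bounded retries with an auditable human escalation rather than a silent drop. This is a generic sketch of that pattern, not Rye's implementation; all names are hypothetical.

```python
class TransientCheckoutError(Exception):
    """A failure worth retrying (timeout, temporary fraud flag, etc.)."""

def place_order_with_fallback(attempt_checkout, max_retries: int = 2) -> dict:
    """Retry transient failures, then escalate instead of silently failing."""
    for _ in range(max_retries + 1):
        try:
            return {"status": "completed", "order": attempt_checkout()}
        except TransientCheckoutError:
            continue  # backoff between attempts elided for brevity
    # Escalation leaves a visible record for the buyer, never a vanished order.
    return {"status": "escalated_to_human"}

calls = {"n": 0}
def flaky_checkout():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TransientCheckoutError()
    return "order_123"

assert place_order_with_fallback(flaky_checkout)["status"] == "completed"
```

The design choice worth noting: the consumer-facing outcome is always one of a small set of explicit states, which is what makes reliability measurable in the first place.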

Accountability and recourse. When an AI agent makes a mistake — wrong item, wrong size, unauthorized purchase — who's responsible? The answer requires clear refund paths, dispute resolution processes, human escalation, and audit trails that show exactly what the agent did and why. The Rye ChatGPT app, for example, shows real-time pricing and stock availability before purchase, keeps card data fully tokenized through Stripe, and routes through established buyer-protection infrastructure — a pattern that other implementations should follow. Because transactions are completed directly with the merchant, the buyer gets order confirmation and shipping updates from the retailer — not from the agent or the infrastructure layer. The merchant stays the merchant of record, which means consumers retain the same buyer protections they'd have with any direct purchase.

Companies that absorb the early risk of agent errors — rather than pushing liability onto consumers — will build trust fastest.

Frequently Asked Questions

How secure are agentic transactions?

In a well-architected system, AI agents never handle raw card data — payment information goes directly to a PCI-compliant vault, and the agent operates only with opaque tokens. The attack surface is actually smaller than in traditional e-commerce.

What is agentic tokenization?

Agentic tokenization replaces sensitive payment data with a non-sensitive token scoped with spending limits, merchant restrictions, and expiration rules. Visa and Mastercard have both launched agentic tokenization frameworks — Intelligent Commerce and Agent Pay — that tie tokens to specific consumer-agent relationships.

How can trust be built in agentic commerce?

Trust is built on three pillars: security (tokenized payments, PCI compliance at the infrastructure layer), transparency (showing consumers what the agent is doing before it acts), and accountability (clear refund paths and human escalation when something goes wrong).

What's the best PCI-compliant approach for agentic commerce?

Third-party tokenization: the consumer's card data goes directly to a PCI-compliant vault (Stripe, Basis Theory, or similar), and the agent only handles opaque tokens. This keeps PCI scope off the agent developer's plate entirely.

How can merchants verify AI agent identity during checkout?

Emerging standards like Visa's Trusted Agent Protocol (TAP) and the Agentic Commerce Consortium's authorization framework use cryptographic verification to distinguish legitimate agents from unknown automated traffic. Merchant adoption is early but accelerating.

Why is safe agentic shopping important for merchants?

AI agents represent a new, high-intent acquisition channel — when agent-initiated orders are reliable and secure, merchants gain access to buyers they wouldn't have reached otherwise. When fraud systems block legitimate orders or agent errors lead to disputes, the cost falls on the merchant.

What makes agentic checkout solutions essential for digital commerce?

Agentic checkout solves the "last mile" of AI shopping: the gap between recommending a product and actually completing the purchase. Checkout that works across merchants, handles payment security through tokenization, and resolves shipping and tax in real time is what turns AI shopping from a research tool into a transactional channel.

What Comes Next

Consumer trust in agentic commerce is moving fast — faster than most people expected. The shift from two-thirds refusing AI purchases to one-third in just five months tells you where the trajectory points. But trajectory isn't the same as arrival.

The infrastructure layer has to earn this trust. That means tokenized payments that keep card data out of AI systems entirely. Fraud systems that distinguish good agents from bad ones. Transparency and control that let consumers set the rules. And accountability when something goes wrong.

At Rye, we've built the Universal Checkout API to handle the hardest part of this stack: completing real purchases, securely, across any merchant on the internet. If you're building an AI agent that needs to buy things, start with the docs or try the Rye app inside ChatGPT.

This post was originally published on September 8, 2025 and was last updated on March 15, 2026 with new consumer survey data, expanded coverage of tokenization and fraud detection, and an updated FAQ.

Stop the redirect.
Start the revenue.
