Table of Contents
Latency: The invisible barrier between commerce agents and adoption
Sophia Willows
Head of Engineering @ Rye
Aug 27, 2025
Agentic checkouts stall when they lean on humans or over-rely on LLMs. Rye’s Universal Checkout API stays fast with deterministic, self-healing workflows.
Agents are supposed to make things faster and easier. That’s the whole point. Yet when it comes to buying something, many so-called agentic systems take longer and are less reliable than if the shopper just did it themselves.
When automation fails, as it often does, humans get tapped to manually complete the checkout. That technically fulfills the request but fails the expectation by taking far too long. It’s expensive, too, costing up to $1 per order, and shoppers who are kept waiting may not complete the purchase at all.
To clarify, here we’re talking about synchronous checkout experiences. For async, where a user has authorized an agent to decide on their behalf within certain parameters (total cost, shipping speed, etc.), speed matters less.
Why checkout slows down
Getting to a true total cost means more than quoting a SKU price. An agent has to run the checkout flow end to end, with a real destination address, to fetch shipping and tax calculations. That’s where delays creep in. Popups interrupt. Layouts shift. Extra steps appear for fraud checks. Each hiccup adds seconds, and seconds matter when a shopper’s waiting for a price to approve.
The minimum standard for synchronous agentic shopping is to be as fast as a human shopper. Faster is better. If an agent takes minutes to return shipping and tax costs for confirmation, users are prone to cancel the process and either buy the item directly or give up entirely.
Rye's approach to checkout latency
We’ve made several technological decisions in the interest of speed. Paradoxically for an agent, that largely means reducing our use of AI: it takes longer for an LLM to figure things out than to run a well-architected deterministic process.
We minimize our reliance on LLMs in two ways. On the first visit to any given e-commerce site, we reduce the DOM to the elements relevant to a transaction, so the LLM is not distracted or derailed by superfluous elements—it’s taken a lot of work to get really good at squashing popups. Then, we generate plans for even faster transactions, so subsequent runs completely avoid LLMs. If a site changes, we pull in the LLM to repair only the failing step. This “self-healing” loop means checkout stays resilient and responsive.
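The plan-then-heal loop described above can be sketched roughly as follows. This is a toy illustration, not Rye’s implementation: the names (`Step`, `Page`, `run_plan`, `repair_step`) and the repair heuristic are invented, and the model call is stubbed out.

```python
# Toy sketch of a self-healing checkout plan: replay cached deterministic
# steps, and invoke an LLM-style repair only for the single step that
# breaks when a site changes. All names here are illustrative.
from dataclasses import dataclass


class StepFailed(Exception):
    """A cached selector no longer matches the page."""


@dataclass
class Step:
    selector: str  # e.g. a CSS selector recorded on the first LLM-guided visit
    action: str    # "click", "fill", ...


@dataclass
class Page:
    selectors: set  # selectors that currently exist on the site

    def execute(self, step):
        if step.selector not in self.selectors:
            raise StepFailed(step.selector)


def repair_step(page, step):
    """Stand-in for the LLM repair call: find a working replacement
    selector for the same action. In a real system this is the only
    point where a model is invoked."""
    new_selector = next(iter(page.selectors))  # toy heuristic
    return Step(selector=new_selector, action=step.action)


def run_plan(plan, page):
    repaired = 0
    for i, step in enumerate(plan):
        try:
            page.execute(step)                  # fast, deterministic path
        except StepFailed:
            plan[i] = repair_step(page, step)   # heal only this step
            page.execute(plan[i])
            repaired += 1
    return repaired  # number of steps that needed a model call
```

The point of the structure is that a healed step is written back into the plan, so the very next run is fully deterministic again and pays no LLM latency.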
When we do use an LLM, we use proprietary trace data from successful agent executions to distill the relevant strengths of more capable models (such as GPT-4.1) into smaller ones (like Llama 4 Maverick).
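As a rough illustration of how distillation data could be assembled from traces, successful runs become (prompt, completion) pairs: the reduced DOM plus the goal as the input, and the action the capable model took as the target for the smaller model. The trace schema and field names below are assumptions, not Rye’s actual format.

```python
# Illustrative only: turn successful agent traces into fine-tuning
# examples for a smaller model. Field names are assumed, not Rye's schema.
def traces_to_dataset(traces):
    examples = []
    for trace in traces:
        if not trace["checkout_succeeded"]:
            continue  # only distill from runs that actually worked
        for step in trace["steps"]:
            examples.append({
                # Input: the goal plus the transaction-relevant DOM subset
                "prompt": f"Goal: {trace['goal']}\nDOM: {step['reduced_dom']}",
                # Target: the action the more capable model chose
                "completion": step["action"],
            })
    return examples
```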
We’ve also invested in direct integrations where scale justifies it. Amazon and Shopify account for a large share of e-commerce volume, so we’ve optimized those flows to be quite a bit faster than checking out manually.
Current and future performance for offer latency
Today, that strategy shows up clearly in our SLAs. We prioritize offer latency, because it’s the wait for a final price that has the biggest impact on the shopper’s experience. On Amazon, we average about 5 seconds for both offer latency and checkout, with 99% reliability. Shopify holds at 5 seconds for offer latency and 20 seconds for checkout, with 96% reliability. In both cases, fewer than 5% of orders require manual fulfillment.
The trajectory is equally important. Within three weeks of launch, we aim to bring Shopify checkout latency closer to 15 seconds, and within six weeks we expect it near 12 seconds. Reliability across open-web AI flows climbs from 65% today toward 90% in that same window, while offer latency shrinks to well under a minute.
Shoppers are already cautious about trusting agents to get the right product and deliver as promised. If the process is also slower than doing it themselves, they have little reason to give agents a chance. Rye’s Universal Checkout is fast enough to be useful, resilient enough to scale, and reliable enough for developers to confidently integrate.
Our SLA roadmap and benchmarks are in the docs, along with a walkthrough of how to integrate Universal Checkout and deliver fast, reliable orders without the wait.