
Rye applies several rate limits per account. This page documents the limits in effect, what each one covers, the response shape when a limit is hit, and how to handle them in your integration.
Rye’s official SDKs handle rate limits for you out of the box. They read the RateLimit headers, retry 429s with backoff, and surface non-retriable errors. If you’re using an SDK, you generally don’t need to write any of the handling described below.

Limits at a glance

Rate limits are scoped per account. All of an account’s API keys share the same buckets. All limits below are defaults; if your use case needs more headroom, contact us before you launch to production.
| Bucket | Default limit | What it covers |
| --- | --- | --- |
| Mutations | 5 requests/sec | Endpoints that change state: creating/confirming/paying for checkout intents, payment gateway sessions, etc. |
| Reads | 10 requests/sec | General GET endpoints: events, shipments, brands, merchants, developer settings reads. |
| Product lookup | 10 requests/sec | The product lookup endpoint. Tracked in its own bucket so high-volume product queries don’t compete with checkout traffic. |
| Checkout intents | 50/day | Total checkout intents created per day (POST /api/v1/checkout-intents and POST /api/v1/checkout-intents/purchase). Applied in addition to the mutations limit. |
| Concurrent agents | 10 in flight | Number of in-progress checkout intents whose orders are being placed by Rye’s agent. New intents are rejected while you’re at the cap. |
A single request can count against more than one bucket. Creating a checkout intent, for example, draws from the mutations/sec bucket, the intents/day bucket, and the concurrent-agents cap.
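If you would rather stay under the per-second buckets than react to 429s, you can pace requests client-side. Here is a minimal token-bucket sketch sized to the default limits above; the `TokenBucket` class and `buckets` map are illustrative names, not part of any Rye SDK:

```javascript
// Minimal client-side token bucket. Tokens refill continuously at
// `ratePerSec`; take() resolves once a token is available, so callers
// are paced to at most ratePerSec requests per second on average.
class TokenBucket {
  constructor(ratePerSec) {
    this.ratePerSec = ratePerSec;
    this.tokens = ratePerSec; // start full
    this.lastRefill = Date.now();
  }

  async take() {
    for (;;) {
      const now = Date.now();
      this.tokens = Math.min(
        this.ratePerSec,
        this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec
      );
      this.lastRefill = now;
      if (this.tokens >= 1) {
        this.tokens -= 1;
        return;
      }
      // Wait roughly long enough for the missing fraction of a token.
      const waitMs = (1 - this.tokens) * (1000 / this.ratePerSec);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}

// One bucket per limit from the table above (illustrative defaults).
const buckets = {
  mutations: new TokenBucket(5),
  reads: new TokenBucket(10),
  productLookup: new TokenBucket(10),
};
```

Before a mutation you would `await buckets.mutations.take()`; because a single request can be charged to several server-side buckets, pace against the strictest one that applies.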

Response headers

Every response includes IETF draft-8 RateLimit headers so you can pace your client without trial-and-error:
  • RateLimit-Policy: the configured limit and window for the bucket the request was charged to (e.g. "default";q=10;w=1).
  • RateLimit: your remaining quota and seconds until the window resets (e.g. "default";r=7;t=1).
When a request is rejected, the response is HTTP 429 Too Many Requests with a JSON body:
{ "message": "Too many requests per second, please try again later." }
For the concurrent-agents cap, the message instead reads:
{ "message": "You have reached the limit for running concurrent agents(10). Please try again later. After more agents are available." }
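If you want to read these headers yourself, the draft-8 values are small enough to pull apart with a one-off parser. A sketch (the `parseRateLimitItem` helper is our name, not an SDK export):

```javascript
// Parse a draft-8 RateLimit or RateLimit-Policy value such as
// `"default";r=7;t=1` into { name, params }, e.g.
// { name: "default", params: { r: 7, t: 1 } }.
function parseRateLimitItem(value) {
  const [name, ...rest] = value.split(";").map((part) => part.trim());
  const params = {};
  for (const part of rest) {
    const [key, val] = part.split("=");
    params[key] = Number(val);
  }
  return { name: name.replace(/^"|"$/g, ""), params };
}
```

With that, `parseRateLimitItem(response.headers.get("ratelimit"))` gives you `params.r` (remaining) and `params.t` (seconds until reset), and the same helper works on `RateLimit-Policy` with `params.q` and `params.w`.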

Handling 429s

  1. Read the headers. When RateLimit’s r= (remaining) reaches 0, pause new requests until t= seconds have elapsed.
  2. Back off on 429. Retry with exponential backoff plus jitter. A first retry after 1–2 seconds is usually enough for the per-second buckets.
  3. Treat the daily and concurrent-agent limits as soft ceilings on your throughput, not as transient errors — retrying immediately won’t help. Queue work and drain it as quota frees up, or request a higher limit.
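For the concurrent-agents cap in particular, one way to queue work and drain it as quota frees up is a small in-process semaphore that never lets more than 10 intents be in flight. A sketch, assuming a single process (the `Semaphore` class and `agentSlots` name are illustrative):

```javascript
// Cap in-flight work at `max` slots; callers beyond the cap queue up
// and each freed slot is handed directly to the next waiter.
class Semaphore {
  constructor(max) {
    this.free = max;
    this.waiters = [];
  }

  async run(task) {
    if (this.free > 0) {
      this.free -= 1;
    } else {
      // Wait until a finishing task hands us its slot.
      await new Promise((resolve) => this.waiters.push(resolve));
    }
    try {
      return await task();
    } finally {
      const next = this.waiters.shift();
      if (next) next(); // pass the slot straight to the next waiter
      else this.free += 1;
    }
  }
}

// Mirrors the default concurrent-agents limit of 10.
const agentSlots = new Semaphore(10);
// agentSlots.run(() => createCheckoutIntent(...)) queues the 11th intent
// instead of sending it and getting rejected.
```

This only works within one process; if you create intents from several workers, you would need a shared counter (e.g. in Redis) instead.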

Example

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Extract the `t=` (seconds-until-reset) parameter from a draft-8
// `RateLimit` header value, e.g. `"default";r=0;t=1` -> 1.
function parseRateLimitReset(headerValue) {
  if (!headerValue) return null;
  const match = /(?:^|;)\s*t=(\d+)/.exec(headerValue);
  return match ? parseInt(match[1], 10) : null;
}

async function callRye(request, attempt = 1) {
  const response = await fetch(request);

  // Retry up to 5 times, then surface the 429 to the caller instead of
  // looping forever.
  if (response.status === 429 && attempt < 5) {
    const resetSeconds = parseRateLimitReset(response.headers.get("ratelimit")) ?? 1;
    // Add up to 250ms of jitter so concurrent clients don't all retry on the same tick.
    const jitterMs = Math.random() * 250;
    await sleep(resetSeconds * 1000 + jitterMs);
    // Note: pass a URL string here (or rebuild the Request each attempt) —
    // a Request object's body can only be consumed once.
    return callRye(request, attempt + 1);
  }

  return response;
}

Increasing your limits

If your integration regularly bumps against any of these, reach out with a rough volume estimate (peak requests/sec, daily intent count, concurrent in-flight orders).