AI Integration Guide
Wire the ASINSpotlight Scraping API into Cursor, Claude Code, Continue, ChatGPT — with prompt recipes that produce idiomatic, quota-aware code.
What this is
A pragmatic guide for building on the ASINSpotlight Scraping API with AI coding tools. The API is REST + JSON with a single header for auth — exactly the shape that LLMs handle well. This page exists because technically correct AI-generated code is often idiomatically off: it ignores quota signals, paginates the wrong way, confuses product details with offers. The prompts and patterns below close that gap.
For the formal contract, point your AI at the OpenAPI spec:
https://www.asinspotlight.com/scraping-api-docs/openapi.json

Cursor, Claude Code, Continue, Windsurf, and ChatGPT all consume OpenAPI natively. Every prompt recipe below names the spec URL up front so the assistant grounds its implementation in the real schemas.
5-minute quickstart
Generate a key at board.asinspotlight.com/dashboard/api and set it in your environment:
export ASINSPOTLIGHT_API_KEY="sk_live_..."

Run one request:
curl -H "x-api-key: $ASINSPOTLIGHT_API_KEY" \
"https://api.asinspotlight.com/v1/product?asin=B0B3ZD8QXJ&marketplace=us"You should see something like:
{
  "success": true,
  "page_type": "product",
  "data": {
    "asin": "B0B3ZD8QXJ",
    "title": "Soundcore Q20i Headphones",
    "bb_price": 39.99,
    "rating": 4.6,
    "reviews": 58800,
    "bsr": 12,
    "in_stock": true,
    "bought_past_month": 20000
  },
  "meta": {
    "marketplace": "us",
    "timing_ms": 1842,
    "request_url": "https://www.amazon.com/dp/B0B3ZD8QXJ",
    "timestamp": "2026-05-03T10:14:22.318Z",
    "request_id": "550e8400-e29b-41d4-a716-446655440000",
    "usage": {"requests_consumed": 1, "requests_remaining": 49999}
  }
}

That's the contract. Every successful response carries meta.usage.requests_remaining; read it on every call to make your code quota-aware without a separate check.
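The same call in code is a one-function wrapper. A minimal sketch, assuming Node 18+ (global `fetch`) and the key in the environment; `fetchProduct` is an illustrative name, not part of the API:

```javascript
// Minimal product fetch: one call, one quota read.
// Assumes Node 18+ (global fetch) and ASINSPOTLIGHT_API_KEY set in the env.
const BASE = "https://api.asinspotlight.com/v1";

async function fetchProduct(asin, marketplace = "us") {
  const res = await fetch(
    `${BASE}/product?asin=${encodeURIComponent(asin)}&marketplace=${marketplace}`,
    { headers: { "x-api-key": process.env.ASINSPOTLIGHT_API_KEY } }
  );
  const body = await res.json();
  if (!body.success) throw new Error(`API error (HTTP ${res.status})`);
  // Every successful response carries the remaining quota; log or act on it here.
  console.log(`remaining: ${body.meta.usage.requests_remaining}`);
  return body.data;
}
```

Keeping the quota read inside the wrapper means every caller is quota-aware for free.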
Endpoints at a glance
| Job | Path | Use it for |
|---|---|---|
| Get product details | GET /v1/product?asin=…&marketplace=… | Title, Buy Box price, BSR, rating, reviews, brand, category, monthly demand. Sub-second on typical PDPs. |
| Search by keyword | GET /v1/search?keyword=…&marketplace=… | Page 1 of organic search results. Each entry already has price, rating, reviews, monthly demand, ASIN — no follow-up product call needed to shortlist. |
| Get every seller | GET /v1/offers?asin=…&marketplace=… | Full Buy Box panel: every seller, price, shipping, FBA/FBM, rating. Real-time, not cached. |
| Scrape any URL | POST /v1/scrape | Arbitrary Amazon pages by URL, e.g. search result pages beyond page 1 (&page=N) that the typed endpoints don't cover. |
Prompt recipes
Each recipe is ready to paste into Cursor, Claude Code, ChatGPT, or any other coding assistant. They name the spec URL, the env var, and the gotcha the AI typically misses.
1. Cross-marketplace price comparison
Build a script that compares prices for a list of ASINs across multiple Amazon marketplaces using the ASINSpotlight Scraping API.
OpenAPI spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
API base URL: https://api.asinspotlight.com (note: api., not www.)
Auth: API key in the `x-api-key` header (read from env var `ASINSPOTLIGHT_API_KEY`).
Endpoint: GET /v1/product?asin=...&marketplace=...
Inputs:
- List of ASINs (e.g., ["B0B3ZD8QXJ", "B0CQXMXJC5"])
- List of marketplace codes (e.g., ["us", "uk", "de"])
For each (asin, marketplace) pair, fetch product details and collect: asin, marketplace, title, bb_price, in_stock, rating.
Output: CSV with columns asin, marketplace, title, bb_price, in_stock, rating.
Constraints:
- Stop early and surface the error if a request returns 401, 429, or 5xx.
- After each successful response, read `meta.usage.requests_remaining` and abort if it drops below 50.
- Don't run more than 5 requests in parallel; the account has a parallel-request limit.

What AI typically gets right: the request loop, env-var auth, JSON parsing.
What it typically misses: reading meta.usage, capping parallelism. Both are spelled out in the prompt for that reason.
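The parallelism cap is the piece assistants drop most often. A small pool helper keeps at most 5 requests in flight; this is an illustrative sketch, not something the API ships:

```javascript
// Run an async task for every item, with at most `limit` in flight at once.
async function mapWithLimit(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    // Each worker pulls the next unclaimed index until the list is drained.
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}

// Usage sketch (getProduct is your own fetch wrapper, a hypothetical name):
// const pairs = asins.flatMap((a) => marketplaces.map((m) => [a, m]));
// const rows = await mapWithLimit(pairs, 5, ([asin, mkt]) => getProduct(asin, mkt));
```

The results array preserves input order even though tasks finish out of order.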
2. Daily keyword tracker
Build a daily keyword tracker that records the top 10 ASINs for a list of keywords using the ASINSpotlight Scraping API.
OpenAPI spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
API base URL: https://api.asinspotlight.com (note: api., not www.)
Auth: API key in the `x-api-key` header (env var `ASINSPOTLIGHT_API_KEY`).
Endpoint: GET /v1/search?keyword=...&marketplace=...
For each keyword, fetch the search results page, take the first 10 entries from `data.shallow_parts`, and write one row per (keyword, rank, asin, title, price, rating, reviews, bought_past_month, captured_at).
Storage: append to a SQLite table `keyword_rankings` keyed on (keyword, marketplace, captured_at, rank).
Run cadence: once per day, intended to be invoked by cron.
Don't:
- Don't try to "follow up" with a /v1/product call for each search result — `shallow_parts` already contains everything we need.
- Don't paginate past page 1; we only care about the top 10.
- Don't retry on 4xx errors; only on 5xx and `CAPTCHA_DETECTED` (with backoff).

The trap: AI often generates a follow-up /v1/product call for each search result, doubling credit consumption. The shallow_parts payload already includes everything a daily tracker needs.
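The retry rule in the prompt reduces to a small predicate plus a backoff schedule. A sketch with illustrative names (`shouldRetry`, `backoffMs` are not API functions):

```javascript
// Retry only on 5xx and CAPTCHA_DETECTED; never on other 4xx.
function shouldRetry(status, errorCode) {
  if (errorCode === "CAPTCHA_DETECTED") return true;
  return status >= 500 && status < 600;
}

// Exponential backoff delays in ms: 1s, 2s, 4s for attempts 0, 1, 2.
function backoffMs(attempt) {
  return 1000 * 2 ** attempt;
}
```

Feeding every failure through one predicate keeps the "no 4xx retries" rule from leaking out of a single call site.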
3. Buy Box ownership monitor
Build a Buy Box ownership monitor for a watchlist of ASINs using the ASINSpotlight Scraping API.
OpenAPI spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
API base URL: https://api.asinspotlight.com (note: api., not www.)
Auth: API key in the `x-api-key` header (env var `ASINSPOTLIGHT_API_KEY`).
Endpoint: GET /v1/offers?asin=...&marketplace=...
For each ASIN, call /v1/offers and identify the current Buy Box owner — the seller in `data.product_sellers_info` whose price equals the lowest `price + shipping_price` and who has `is_fba: true` (the typical Amazon Buy Box rule).
Compare against the previously recorded owner stored in a JSON file `buybox_state.json` keyed by asin. When the owner changes, log a line to stdout with: timestamp, asin, previous owner, new owner, new price.
Run as a loop with a 10-minute interval between full sweeps.
Don't:
- Don't conflate /v1/product and /v1/offers — `/v1/product` returns the Buy Box price but not the full seller list. You need /v1/offers for ownership.
- Don't ignore `stock_qty: 0` sellers; filter those out before picking a winner.
- Don't run all watchlist requests at once; respect the parallel limit (default ~5).

The trap: assistants often pull buybox / bb_price off /v1/product and call it the Buy Box owner. That field tells you the Buy Box price, not who holds it. Owner data only lives in /v1/offers.
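The selection rule spelled out in the prompt is easy to get wrong inline, so it helps to isolate it. A sketch using the field names from the prompt (`price`, `shipping_price`, `is_fba`, `stock_qty`); `seller_name` is a hypothetical field for illustration, and the real Amazon Buy Box algorithm is more complex than this heuristic:

```javascript
// Pick the likely Buy Box owner from data.product_sellers_info:
// in stock, FBA, and the lowest landed price (price + shipping_price).
function pickBuyBoxOwner(sellers) {
  const eligible = sellers.filter(
    (s) => s.stock_qty !== 0 && s.is_fba === true
  );
  if (eligible.length === 0) return null;
  return eligible.reduce((best, s) =>
    s.price + s.shipping_price < best.price + best.shipping_price ? s : best
  );
}
```

Filtering before reducing means an out-of-stock seller with a teaser price can never win.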
4. ASIN list batch refresh
Build an ASIN-list batch refresh that updates cached product data using the ASINSpotlight Scraping API.
OpenAPI spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
API base URL: https://api.asinspotlight.com (note: api., not www.)
Auth: API key in the `x-api-key` header (env var `ASINSPOTLIGHT_API_KEY`).
Endpoint: GET /v1/product?asin=...&marketplace=...
Inputs: a JSON file `watchlist.json` of `{asin, marketplace, last_refreshed_at}` records.
Refresh strategy:
1. Sort the list by `last_refreshed_at` ascending (oldest first).
2. For each entry, call /v1/product. On a successful 200, write the response data and the new timestamp back to the file.
3. After every response, read `meta.usage.requests_remaining`. If it would dip below the configured floor (default 1000), stop the run and log how many entries were skipped.
4. Run requests in batches of 5 in parallel.
Error handling:
- `PAGE_NOT_FOUND` (404): mark the entry as `status: gone` and continue. PAGE_NOT_FOUND consumes 1 credit — that's expected.
- `CAPTCHA_DETECTED` (503): retry up to 3 times with exponential backoff.
- `RATE_LIMIT_EXCEEDED` (429): wait 30 seconds and retry the same entry.
- Any other error: log and stop.

The trap: naive batch jobs ignore the parallel-request limit and the quota floor. Both surface clearly via 429s and meta.usage; the prompt tells the AI to consult them.
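The error-handling rules above map cleanly onto one dispatcher. The error codes come from the recipe; `refreshAction` and the `action` strings are illustrative names:

```javascript
// Decide what a batch refresh should do with a failed entry.
function refreshAction(errorCode, attempt) {
  if (errorCode === "PAGE_NOT_FOUND") {
    return { action: "mark-gone" }; // costs 1 credit, expected
  }
  if (errorCode === "CAPTCHA_DETECTED") {
    return attempt < 3
      ? { action: "retry", delayMs: 1000 * 2 ** attempt } // exponential backoff
      : { action: "stop" };
  }
  if (errorCode === "RATE_LIMIT_EXCEEDED") {
    return { action: "retry", delayMs: 30000 }; // wait 30s, retry same entry
  }
  return { action: "stop" }; // anything else: log and stop
}
```

Returning a plain action object keeps the policy testable without touching the network.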
5. Search-to-database pipeline (with pagination)
Build a search-to-database pipeline that captures the full first 5 pages of results for a keyword using the ASINSpotlight Scraping API.
OpenAPI spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
API base URL: https://api.asinspotlight.com (note: api., not www.)
Auth: API key in the `x-api-key` header (env var `ASINSPOTLIGHT_API_KEY`).
The /v1/search endpoint returns page 1 by default. To walk further pages, use POST /v1/scrape with an explicit Amazon search URL containing `&page=N`:
POST /v1/scrape
Body: {"url": "https://www.amazon.com/s?k=wireless+headphones&page=2", "marketplace": "us"}
Steps:
1. Call GET /v1/search?keyword=...&marketplace=us — this is page 1. Read `data.last_page_number` to know the upper bound.
2. For pages 2..min(5, last_page_number), call POST /v1/scrape with the page-N URL.
3. Append every entry to a Postgres table `search_results` with columns (keyword, marketplace, page, rank, asin, title, price, rating, reviews, captured_at).
Don't:
- Don't try `?page=2` on the typed /v1/search endpoint — that endpoint doesn't accept a `page` parameter.
- Don't fetch past `last_page_number`: Amazon will return an empty result set and you'll waste credits.

The trap: AI tries /v1/search?keyword=...&page=2 because that's the obvious shape. The typed search endpoint doesn't accept page. To walk pages, use POST /v1/scrape with an explicit Amazon URL; the prompt makes that swap explicit.
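The page walk itself reduces to two small helpers. A sketch with illustrative names; the US storefront is hardcoded here, and mapping other marketplaces to their Amazon domains is assumed to happen elsewhere:

```javascript
// Pages 2..min(maxPages, lastPageNumber), each fetched via POST /v1/scrape.
// Page 1 comes from the typed GET /v1/search call.
function pagesToScrape(lastPageNumber, maxPages = 5) {
  const top = Math.min(maxPages, lastPageNumber);
  const pages = [];
  for (let p = 2; p <= top; p++) pages.push(p);
  return pages;
}

// Explicit Amazon search URL for page N (US storefront shown).
function searchPageUrl(keyword, page) {
  return `https://www.amazon.com/s?k=${encodeURIComponent(keyword)}&page=${page}`;
}
```

`pagesToScrape` returns an empty list when `last_page_number` is 1, which naturally skips the /scrape calls for thin keywords.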
Wiring it into AI tools
Cursor
Add a project rule referencing the OpenAPI spec. Create .cursor/rules/asinspotlight.md:
---
description: ASINSpotlight Amazon Scraping API
globs: ["**/*.py", "**/*.ts", "**/*.js"]
alwaysApply: false
---
When working with the ASINSpotlight Scraping API:
- Spec: https://www.asinspotlight.com/scraping-api-docs/openapi.json
- Auth: `x-api-key` header, value from env var `ASINSPOTLIGHT_API_KEY`
- Base URL: https://api.asinspotlight.com/v1
- Endpoints: /product (asin, marketplace), /search (keyword, marketplace), /offers (asin, marketplace), POST /scrape (body: {url, marketplace})
- Read `meta.usage.requests_remaining` after every successful response
- Cap parallelism at 5 unless told otherwise
- For search pagination beyond page 1, use POST /scrape with an explicit Amazon URL containing `&page=N`
Claude Code
Pass the spec URL in your prompt or add it to CLAUDE.md at the project root. Claude Code will fetch and reason over it on first reference. Same instructions as the Cursor rule above work as a CLAUDE.md section.
Continue.dev
Add the spec to ~/.continue/config.json as a documentation source:
{
  "docs": [
    {
      "title": "ASINSpotlight Scraping API",
      "startUrl": "https://www.asinspotlight.com/scraping-api-docs/openapi.json",
      "rootUrl": "https://www.asinspotlight.com/scraping-api-docs/openapi.json"
    }
  ]
}

Then reference it in chat with @docs ASINSpotlight Scraping API.
ChatGPT (web)
Paste the OpenAPI URL into the conversation and ask ChatGPT to fetch it. With browsing enabled it pulls the spec and uses it as the contract for any code it writes. Works the same in Claude (claude.ai) and Gemini.
Other tools (Windsurf, Cline, codex CLI, …)
Any agent that can fetch URLs or accept an OpenAPI document will integrate the same way: point it at https://www.asinspotlight.com/scraping-api-docs/openapi.json and reuse the prompt recipes above.
Patterns to imitate
These are the idioms that separate quota-aware production code from naive scripts.
Quota-aware loops
Every successful response carries meta.usage.requests_remaining. Don't poll a separate endpoint to decide whether to keep going — read it on each call:
for (const asin of asins) {
  const res = await fetchProduct(asin);
  if (!res.success) {
    handleError(res.error.code);
    continue;
  }
  store(res.data);
  if (res.meta.usage.requests_remaining < FLOOR) break;
}

Marketplace switching
marketplace is the single param that controls which Amazon storefront is queried. Pass it explicitly on every call — don't rely on the us default unless you mean it. The valid codes are listed in the spec's Marketplace schema.
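One way to make the parameter impossible to forget is to require it at the call site. A sketch; `productUrl` is an illustrative helper, and valid marketplace codes live in the spec's Marketplace schema:

```javascript
// Build a /v1/product URL; marketplace is required, with no silent "us" fallback.
function productUrl(asin, marketplace) {
  if (!marketplace) {
    throw new Error("marketplace is required: no implicit default");
  }
  const q = new URLSearchParams({ asin, marketplace });
  return `https://api.asinspotlight.com/v1/product?${q}`;
}
```

Throwing early turns the "everything silently hit amazon.com" bug into a loud failure on the first call.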
Error handling that doesn't burn credits
Most error responses don't consume credits. The exception is PAGE_NOT_FOUND (404) — you're billed because the parser still ran. Plan for it: a refresh job hitting deleted ASINs slowly drains quota at 1 credit each, so flag and stop refreshing those entries.
Pagination via /scrape
/v1/search returns page 1. For pages 2+, use POST /v1/scrape with the explicit Amazon URL https://www.amazon.com/s?k=…&page=N. The response shape matches SearchData.
Pitfalls AI-generated code commonly hits
- Confusing `/v1/product` with `/v1/offers`. Product gives you the Buy Box price; offers gives you the Buy Box owner and every other seller. Don't mix them up.
- Trying `?page=N` on `/v1/search`. The typed search endpoint accepts only `keyword` and `marketplace`. Use `/scrape` for further pages.
- Polling instead of reading `meta.usage`. The remaining credit count is on every successful response. There's nothing else to call.
- Ignoring 429 vs 503. 429 means too many in-flight requests or no credits: back off or stop. 503 with code `CAPTCHA_DETECTED` is transient: retry with exponential backoff.
- Forgetting the `marketplace` param. Without it, every call hits amazon.com, a silent bug for European or Japanese workflows. Always pass it explicitly.
- Treating `PAGE_NOT_FOUND` as free. It costs 1 credit (the parser ran). Refresh jobs against stale watchlists can quietly drain quota.
- Blindly retrying 4xx. Validation errors (400) and auth errors (401) won't fix themselves. Only 429, 502, and 503 are worth retrying.
Where to go next
- OpenAPI spec — formal contract, machine-readable
- Narrative documentation — human-readable reference with response shapes and examples
- Generate an API key