Paste a URL. Describe the data in plain English. Scrapr generates a stable, parameterised REST endpoint you can call from any language.
Free tier · 50 credits (5 runs) · no credit card required
from scrapr import Scrapr
s = Scrapr(api_key="sk_live_...")
scraper = s.scrapers.create(
    url="https://amazon.in/s?k=shoes",
    description="All products: title, price, rating",
)
data = scraper.run(query="nike", page=1)
print(data["count"], "products")
{
  "success": true,
  "count": 48,
  "source": "api",
  "duration_ms": 312,
  "data": [
    {"title": "Nike Air Zoom", "price": 89.99, "rating": 4.6},
    {"title": "Adidas Ultra", "price": 129.99}
  ]
}
}
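Because each scraper is exposed as a plain HTTP endpoint, the SDK is optional. Here is a minimal sketch of building the same run call with only the standard library; note that the host, path shape, and auth header below are assumptions for illustration, not documented API surface — your scraper's real URL is shown in the dashboard.

```python
# Hypothetical sketch: calling a Scrapr-generated endpoint over plain HTTP.
# The base URL, path, and header name are assumptions, not documented API.
import json
import urllib.parse
import urllib.request

def build_request(scraper_id: str, api_key: str, **params) -> urllib.request.Request:
    """Build (but do not send) a GET request for one scraper run."""
    base = "https://api.scrapr.example"  # assumed host
    qs = urllib.parse.urlencode(params)
    url = f"{base}/v1/scrapers/{scraper_id}/run?{qs}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_request("scr_123", "sk_live_...", query="nike", page=1)
print(req.full_url)
# To actually execute it: json.load(urllib.request.urlopen(req))["data"]
```

Any HTTP client in any language can make the same call; the SDK only adds convenience.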
Scrapr cascades through API endpoints, embedded JSON, DOM selectors, and LLM extraction.
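Conceptually, the cascade is a fallback chain: try the cheapest strategy first and fall through on failure. The sketch below uses dummy strategies to show the shape of the idea; it is an illustration, not Scrapr's actual internals.

```python
# Illustrative fallback cascade (not Scrapr's real implementation):
# try each extraction strategy in order until one returns data.
from typing import Callable, Optional

def cascade(strategies: list[tuple[str, Callable[[str], Optional[list]]]], url: str):
    for name, extract in strategies:
        result = extract(url)
        if result is not None:        # a strategy "fails" by returning None
            return name, result
    raise RuntimeError("all strategies failed")

# Dummy strategies for demonstration: the first two fail.
demo = [
    ("api",           lambda u: None),
    ("embedded_json", lambda u: None),
    ("css_selectors", lambda u: [{"title": "Nike Air Zoom"}]),
    ("llm",           lambda u: [{"title": "fallback"}]),
]
name, data = cascade(demo, "https://example.com")
print(name, data)  # css_selectors [{'title': 'Nike Air Zoom'}]
```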
Full browser with fingerprint spoofing, navigator override, WebGL noise, canvas randomisation, and consent popup dismissal.
Auto-detects the best strategy: direct API endpoint, embedded JSON, shadow DOM, or CSS selectors.
Each scraper becomes a versioned REST endpoint. Parameters, retries, and auto-pagination are handled.
URL, path, query, body, headers, and cookies are all exposed as typed parameters. Schema is inferred from your description.
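To make "typed parameters" concrete, here is a minimal sketch of coercing caller-supplied values against an inferred schema. The schema format and field names are hypothetical; Scrapr's real inferred schemas may look different.

```python
# Minimal sketch of typed-parameter coercion. The schema shape is
# hypothetical, not Scrapr's actual inferred-schema format.
INFERRED_SCHEMA = {"query": str, "page": int, "min_rating": float}

def coerce_params(raw: dict) -> dict:
    typed = {}
    for key, value in raw.items():
        if key not in INFERRED_SCHEMA:
            raise KeyError(f"unknown parameter: {key}")
        typed[key] = INFERRED_SCHEMA[key](value)  # cast to the declared type
    return typed

print(coerce_params({"query": "nike", "page": "2"}))
# {'query': 'nike', 'page': 2}
```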
Built-in retries with exponential backoff, jitter, and per-domain rate-limit tracking. Distributed queues via Celery + Redis.
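Exponential backoff with full jitter is a standard retry pattern; a minimal sketch of the delay schedule follows. The base and cap values here are illustrative defaults, not Scrapr's actual settings.

```python
# Full-jitter exponential backoff sketch: each retry sleeps a uniform
# random duration in [0, min(cap, base * 2**attempt)]. Illustrative only.
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Return a jittered delay for the given retry attempt."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

for attempt in range(5):
    ceiling = min(30.0, 0.5 * 2 ** attempt)
    print(f"attempt {attempt}: sleep up to {ceiling:.1f}s, e.g. {backoff_delay(attempt):.2f}s")
```

Jitter spreads retries out so that many clients failing at once do not hammer the target in lockstep.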
Create, rotate, and revoke keys from the dashboard. Per-key usage tracking and limit enforcement.
Any public URL — product listings, news feeds, dashboards, SPAs, or APIs behind auth walls.
Plain English. The LLM infers selectors, pagination logic, and parameter schemas.
A stable versioned REST endpoint with typed params. Call it from any language.
“We were spending 3-4 hours a week maintaining brittle XPath selectors for a price-monitoring pipeline. Scrapr replaced it with a single API call. It's been rock solid for six months.”
“I built a real estate listings scraper in literally 30 seconds. The LLM figured out pagination, session cookies, and the JSON schema on its own. I still don't fully understand how it works.”
“The stealth mode is the real differentiator. We were getting blocked every other day with Puppeteer. Switched to Scrapr and haven't seen a CAPTCHA in three months.”
“As a solo dev I can't afford a scraping infra team. Scrapr gives me enterprise-grade extraction with a 5-line Python script. The ROI is absurd.”
“We migrated our entire data pipeline from Scrapy + ScrapingBee to Scrapr in a weekend. Fewer moving parts, better success rates, and the API is just cleaner.”
“I've tried literally every scraping tool on the market. Scrapr is the only one where I can hand it a URL and a sentence describing what I want, and it just works.”
Once your credits run out, extraction requests return a 402 error. You can top up instantly from the dashboard; credits never expire.
Yes. Every request runs through a full Playwright browser with fingerprint spoofing, consent-dialog dismissal, and automatic waiting for lazy-loaded content.
It receives the page HTML and your plain-English description, then generates CSS selectors or XPaths. If the page changes, the LLM adapts on the next run.
Absolutely. We do not train models on your data, sell it, or share it with third parties. Scraped results are encrypted at rest and purged after 30 days.
All paid packs include automatic proxy rotation across residential and datacenter pools. You can also bring your own proxies.
All packs are priced in USD. We accept cards, UPI, and net banking via Razorpay (with PayPal for international). Enterprise invoicing available on Max packs.

50 free credits. No credit card. Live in 30 seconds.