Quickstart
Scrapr turns any URL into a REST API. Under a minute end to end:
- Sign in to the dashboard.
- Click + New scraper. Paste a URL and describe the data.
- Create an API key in the Keys tab.
- Call your endpoint from anywhere.
Base URL
https://api.scrapr.dev
Authentication
All execution endpoints require a Scrapr API key. Send it either as a bearer token or via the x-api-key header.
Authorization: Bearer sk_live_xxxxxxxxxxxxxxxxxxxx
# or
x-api-key: sk_live_xxxxxxxxxxxxxxxxxxxx
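Either form works. A minimal Python sketch of the two header styles (the SCRAPR_API_KEY variable name and placeholder key are assumptions for illustration):

```python
import os

# Read the key from the environment; "sk_live_example" is only a placeholder.
key = os.environ.get("SCRAPR_API_KEY", "sk_live_example")

# Two equivalent ways to authenticate an execution request:
bearer_headers = {"Authorization": f"Bearer {key}"}
api_key_headers = {"x-api-key": key}
```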
Dashboard endpoints (POST /v1/scrapers, /v1/keys, /v1/me, /v1/billing) are authenticated with the Supabase session JWT instead; the official frontend handles this automatically.
Generate a scraper
POST /v1/scrapers
Authorization: Bearer <supabase_jwt>
Content-Type: application/json
{
"url": "https://news.ycombinator.com",
"description": "All post titles, points, authors and URLs",
"name": "HN front page"
}

Returns a scraper id, the inferred extraction spec, and the executable endpoint URL.
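The same request from Python, as a sketch (assumes the requests library; create_scraper is an illustrative helper, not part of an official SDK):

```python
import requests

def create_scraper(jwt, url, description, name):
    """POST /v1/scrapers, authenticated with a Supabase session JWT."""
    r = requests.post(
        "https://api.scrapr.dev/v1/scrapers",
        headers={"Authorization": f"Bearer {jwt}"},
        json={"url": url, "description": description, "name": name},
        timeout=60,
    )
    r.raise_for_status()
    # The response carries the scraper id, inferred spec, and endpoint URL.
    return r.json()
```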
Execute a scraper
This is the main API — call it from your backend, frontend, AI agent, or cron.
cURL
curl -X POST https://api.scrapr.dev/v1/scrapers/<id>/run \
  -H "Authorization: Bearer $SCRAPR_API_KEY"
JavaScript
const res = await fetch(
`https://api.scrapr.dev/v1/scrapers/${id}/run`,
{ method: "POST", headers: { Authorization: `Bearer ${key}` } }
);
const { data } = await res.json();

Python
import os, requests
r = requests.post(
f"https://api.scrapr.dev/v1/scrapers/{scraper_id}/run",
headers={"Authorization": f"Bearer {os.environ['SCRAPR_API_KEY']}"},
timeout=60,
)
data = r.json()["data"]

Override URL
Pass url in the body to run the same spec against a different page (e.g. pagination):
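As a sketch, a pagination loop that reuses one spec across pages (run_scraper is an illustrative helper; assumes the requests library and an API key in SCRAPR_API_KEY):

```python
import os
import requests

def run_scraper(scraper_id, override_url=None):
    """POST /v1/scrapers/<id>/run, optionally overriding the target URL."""
    body = {"url": override_url} if override_url else None
    r = requests.post(
        f"https://api.scrapr.dev/v1/scrapers/{scraper_id}/run",
        headers={"Authorization": f"Bearer {os.environ['SCRAPR_API_KEY']}"},
        json=body,
        timeout=60,
    )
    r.raise_for_status()
    return r.json()

# Pages 1-3 against the same spec:
# for p in range(1, 4):
#     run_scraper("<id>", f"https://news.ycombinator.com/news?p={p}")
```

The raw request body is just: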
{ "url": "https://news.ycombinator.com/news?p=2" }

Response schema
{
"success": true,
"scraper_id": "...",
"duration_ms": 412,
"source": "html", // or "api" | "json_embedded"
"count": 30,
"data": [ { /* your fields */ } ]
}

On failure
{
"success": false,
"scraper_id": "...",
"error": "ExtractionError: items_selector matched 0 nodes",
"duration_ms": 820
}

Errors
| Status | Meaning |
|---|---|
| 401 | Missing / invalid API key |
| 402 | Plan limit reached — upgrade in dashboard |
| 404 | Scraper not found or not yours |
| 422 | Could not generate a scraper for that input |
| 502 | Target website failed to load |
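Putting the response schema and the status table together, a client-side handler might look like this (stdlib-only sketch; the sample payload and next_action helper are illustrative, not part of Scrapr):

```python
import json

# A hard-coded sample matching the success schema above.
sample = (
    '{"success": true, "scraper_id": "abc123", "duration_ms": 412,'
    ' "source": "html", "count": 2,'
    ' "data": [{"title": "Post A"}, {"title": "Post B"}]}'
)
payload = json.loads(sample)
rows = payload["data"] if payload["success"] else []

# Coarse policy for the HTTP statuses in the table above.
RETRYABLE = {502}                     # target website failed to load
CALLER_ERRORS = {401, 402, 404, 422}  # fix the key, plan, scraper id, or input

def next_action(status):
    if status == 200:
        return "ok"
    if status in RETRYABLE:
        return "retry"
    if status in CALLER_ERRORS:
        return "fail"
    return "unknown"
```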
Rate limits
Each pack grants a credit budget, and every execution consumes credits, so runs are capped per pack. See the Pricing table below for per-pack limits.
Examples
Product listing
// description: "All product titles, prices, images and ratings"
{ "title": "Sneakers X1", "price": 2499, "image": "https://...", "rating": 4.6 }

News articles
{ "title": "...", "author": "...", "published_at": "...", "url": "..." }

Job board
{ "title": "...", "company": "...", "location": "...", "salary": "...", "url": "..." }

Pricing
| Pack | Price | Credits | Runs | Scrapers |
|---|---|---|---|---|
| Free | ₹0 | 50 credits | 5 runs | Unlimited |
| Nano | ₹749 | 500 credits | ~50 runs | Unlimited |
| Starter | ₹2,399 | 2,000 credits | ~200 runs | Unlimited |
| Pro | ₹6,599 | 7,000 credits | ~700 runs | Unlimited |
| Max | ₹9,899 | 12,000 credits | ~1,200 runs | Unlimited |
Questions? hello@scrapr.dev