Documentation

Quickstart

Scrapr turns any URL into a REST API. Under a minute end to end:

  1. Sign in to the dashboard.
  2. Click + New scraper. Paste a URL and describe the data.
  3. Create an API key in the Keys tab.
  4. Call your endpoint from anywhere.

Base URL

https://api.scrapr.dev

Authentication

All execution endpoints require a Scrapr API key. Send it either as a bearer token or in an x-api-key header.

Authorization: Bearer sk_live_xxxxxxxxxxxxxxxxxxxx
# or
x-api-key: sk_live_xxxxxxxxxxxxxxxxxxxx

Dashboard endpoints (POST /v1/scrapers, /v1/keys, /v1/me, /v1/billing) use the Supabase session JWT. The official frontend handles this automatically.
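
For example, both header styles from Python (a minimal sketch; the scraper id is a placeholder for one of yours):

import os, requests

key = os.environ["SCRAPR_API_KEY"]
scraper_id = "..."  # placeholder: an id from your dashboard
url = f"https://api.scrapr.dev/v1/scrapers/{scraper_id}/run"

# Option 1: bearer token
requests.post(url, headers={"Authorization": f"Bearer {key}"})

# Option 2: x-api-key header
requests.post(url, headers={"x-api-key": key})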

Generate a scraper

POST /v1/scrapers
Authorization: Bearer <supabase_jwt>
Content-Type: application/json

{
  "url": "https://news.ycombinator.com",
  "description": "All post titles, points, authors and URLs",
  "name": "HN front page"
}

Returns a scraper id, the inferred extraction spec, and the executable endpoint URL.
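
The same request from Python, as a minimal sketch (supabase_jwt is assumed to hold your Supabase session token; in practice the official frontend makes this call for you):

import requests

supabase_jwt = "..."  # assumed: your Supabase session JWT

resp = requests.post(
    "https://api.scrapr.dev/v1/scrapers",
    headers={"Authorization": f"Bearer {supabase_jwt}"},
    json={
        "url": "https://news.ycombinator.com",
        "description": "All post titles, points, authors and URLs",
        "name": "HN front page",
    },
    timeout=60,
)
scraper = resp.json()  # includes the scraper id and endpoint URL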

Execute a scraper

This is the main API: call it from your backend, frontend, AI agent, or a cron job.

cURL

curl -X POST https://api.scrapr.dev/v1/scrapers/<id>/run \
  -H "Authorization: Bearer $SCRAPR_API_KEY"

JavaScript

const res = await fetch(
  `https://api.scrapr.dev/v1/scrapers/${id}/run`,
  { method: "POST", headers: { Authorization: `Bearer ${key}` } }
);
const { data } = await res.json();

Python

import os, requests

r = requests.post(
    f"https://api.scrapr.dev/v1/scrapers/{scraper_id}/run",
    headers={"Authorization": f"Bearer {os.environ['SCRAPR_API_KEY']}"},
    timeout=60,
)
data = r.json()["data"]

Override URL

Pass url in the body to run the same spec against a different page (e.g. pagination):

{ "url": "https://news.ycombinator.com/news?p=2" }

Response schema

{
  "success": true,
  "scraper_id": "...",
  "duration_ms": 412,
  "source": "html",       // or "api" | "json_embedded"
  "count": 30,
  "data": [ { /* your fields */ } ]
}

On failure

{
  "success": false,
  "scraper_id": "...",
  "error": "ExtractionError: items_selector matched 0 nodes",
  "duration_ms": 820
}
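
Because extraction failures come back in the body, it is safer to branch on success than to index data directly. A minimal sketch, assuming r is the requests.Response from the Python example above:

payload = r.json()
if payload["success"]:
    rows = payload["data"]
    print(f'{payload["count"]} rows from {payload["source"]} in {payload["duration_ms"]}ms')
else:
    raise RuntimeError(f'scraper {payload["scraper_id"]} failed: {payload["error"]}')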

Errors

Status   Meaning
401      Missing or invalid API key
402      Plan limit reached; upgrade in the dashboard
404      Scraper not found or not yours
422      Could not generate a scraper for that input
502      Target website failed to load
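
HTTP-level failures use status codes rather than a success: false body, so it helps to check the status before parsing. A sketch under the same assumptions as above:

if r.status_code == 401:
    raise RuntimeError("missing or invalid API key; check SCRAPR_API_KEY")
if r.status_code == 402:
    raise RuntimeError("plan limit reached; upgrade in the dashboard")
if r.status_code in (404, 422, 502):
    raise RuntimeError(f"Scrapr returned {r.status_code}: {r.text}")
r.raise_for_status()  # any other unexpected status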

Rate limits

Each plan has a monthly execution budget measured in credits. See the Pricing table below for per-plan credits and approximate run counts.

Examples

Product listing

// description: "All product titles, prices, images and ratings"
{ "title": "Sneakers X1", "price": 2499, "image": "https://...", "rating": 4.6 }

News articles

{ "title": "...", "author": "...", "published_at": "...", "url": "..." }

Job board

{ "title": "...", "company": "...", "location": "...", "salary": "...", "url": "..." }

Pricing

Pack      Price    Credits   Runs     Scrapers
Free      ₹0       50        5        Unlimited
Nano      ₹749     500       ~50      Unlimited
Starter   ₹2,399   2,000     ~200     Unlimited
Pro       ₹6,599   7,000     ~700     Unlimited
Max       ₹9,899   12,000    ~1,200   Unlimited

Questions? hello@scrapr.dev