
# Cheerio Scraper on Fast Marketplace

Setup and exact call parameters for AI agents using the Cheerio Scraper service on Fast Marketplace.

## Setup (skip if you already have Fast Marketplace set up)
1. Open the marketplace skill: https://marketplace.fast.xyz/skill.md
2. Fund a Fast wallet on mainnet.
3. Provide start URLs and a Cheerio-compatible pageFunction to extract the fields you need.
4. Keep the returned job token and poll the marketplace job route for completion.
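The submit-and-poll loop in steps 3 and 4 can be sketched as below. The job route path and the status values are assumptions, not confirmed by this doc; check the marketplace skill page for the real path, and inject your own HTTP client.

```python
import time

# Hypothetical job route -- the real polling path comes from the marketplace
# skill doc, not from this sketch.
JOB_ROUTE = "https://api.marketplace.fast.xyz/api/jobs/{token}"

def poll_job(fetch, token, interval_s=2.0, timeout_s=60.0):
    """Poll the marketplace job route until the job finishes.

    `fetch` is any callable taking a URL and returning a parsed JSON dict;
    injecting it keeps this sketch free of a specific HTTP library.
    The "SUCCEEDED"/"FAILED" status values are assumed terminal states.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch(JOB_ROUTE.format(token=token))
        if job.get("status") in ("SUCCEEDED", "FAILED"):
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"job {token} did not finish within {timeout_s}s")
```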

## Available Endpoints

### Crawl HTML ($0.0001 USDC)
```bash
curl -X POST "https://api.marketplace.fast.xyz/api/apify-cheerio/crawl-html" \
  -H "Content-Type: application/json" \
  -d '{
  "startUrls": [
    { "url": "https://example.com/blog" }
  ],
  "linkSelector": "a.article-link",
  "pageFunction": "async function pageFunction(context) { const { $, request } = context; return { url: request.url, title: $(\"title\").text().trim() }; }",
  "maxCrawlPages": 50
}'
```

Use this instead of the browser-based Web Scraper when the target site is static enough for raw HTTP and you want lower cost and higher throughput.
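Because the `pageFunction` is a JavaScript string embedded in JSON embedded in a shell-quoted argument, it is easy to break the quoting by hand. A minimal sketch of building the same crawl-html body in Python, letting `json.dumps` handle the nested quotes (field names mirror the curl example above):

```python
import json

# Write the pageFunction as a plain multi-line string; json.dumps escapes
# the inner double quotes so the body stays valid JSON.
page_function = """async function pageFunction(context) {
  const { $, request } = context;
  return { url: request.url, title: $("title").text().trim() };
}"""

body = json.dumps({
    "startUrls": [{"url": "https://example.com/blog"}],
    "linkSelector": "a.article-link",
    "pageFunction": page_function,
    "maxCrawlPages": 50,
})
# `body` is now a JSON string ready to POST to the crawl-html endpoint.
```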

### Extract Page ($0.0001 USDC)
```bash
curl -X POST "https://api.marketplace.fast.xyz/api/apify-cheerio/extract-page" \
  -H "Content-Type: application/json" \
  -d '{
  "startUrls": [
    { "url": "https://example.com/blog/post-1" }
  ],
  "pageFunction": "async function pageFunction(context) { const { $, request } = context; return { url: request.url, title: $(\"title\").text().trim(), author: $(\".author\").text().trim() }; }",
  "maxCrawlPages": 1
}'
```

Use this when you have a small set of known static URLs and want fast, targeted extraction.
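For a batch of known URLs, the same body can be built programmatically. Setting `maxCrawlPages` to the number of start URLs is an assumption extrapolated from the single-URL example above (`maxCrawlPages: 1` for one URL):

```python
import json

# Sketch: one extract-page payload covering several known static URLs.
known_urls = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
    "https://example.com/blog/post-3",
]

body = json.dumps({
    "startUrls": [{"url": u} for u in known_urls],
    "pageFunction": (
        "async function pageFunction(context) {"
        " const { $, request } = context;"
        " return { url: request.url,"
        ' title: $("title").text().trim(),'
        ' author: $(".author").text().trim() }; }'
    ),
    # Assumed: cap the crawl at exactly the URLs we supplied.
    "maxCrawlPages": len(known_urls),
})
```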

For paid endpoints, the first call returns HTTP 402 Payment Required. Authorize the payment with your Fast wallet, then retry the same request with the payment signature header attached.
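The 402 flow can be sketched as below. The header name `X-Payment-Signature` and the shape of the 402 response body are assumptions; substitute your Fast wallet SDK's real signing call and whatever header the marketplace actually expects.

```python
# Hypothetical header name -- confirm the real one in the marketplace docs.
PAYMENT_HEADER = "X-Payment-Signature"

def call_paid_endpoint(post, sign_payment, url, body):
    """POST once; on HTTP 402, authorize payment and retry with the signature.

    `post(url, body, headers)` returns (status_code, response_body);
    `sign_payment(payment_details)` returns a signature string.
    Both are injected so this sketch stays HTTP-library- and SDK-agnostic.
    """
    status, resp = post(url, body, {})
    if status != 402:
        return status, resp
    # Assumed: the 402 response body carries the payment requirements.
    signature = sign_payment(resp)
    return post(url, body, {PAYMENT_HEADER: signature})
```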