How SerpBear works
SerpBear uses third-party website scrapers like ScrapingAnt, ScrapingRobot, or your own proxy IPs to scrape Google search results and check if your domain appears in the results for a given keyword — and at what position.
SERP Scraper
The scraper fetches Google Search results for each tracked keyword and checks whether your domain is present and at what ranking position. SerpBear supports a wide range of scraping backends including ScrapingAnt, ScrapingRobot, SerpAPI, SearchAPI, Serper, ValueSerp, HasData, Serply, CrazySerp, and custom proxy IP lists.
Each scraper service is configured globally from the Settings → Scraper panel. Once configured, scraping runs automatically on your chosen schedule.
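For the custom proxy option, a proxy list is typically a newline-separated set of proxy URLs. As a minimal sketch (the exact format SerpBear accepts may differ; `parseProxyList` is an illustrative name, not part of SerpBear's API), parsing such a list with Node's built-in `URL` class could look like:

```typescript
// Parse a newline-separated proxy list into URL objects, skipping blank
// lines. Assumes standard protocol://user:pass@host:port proxy URLs.
function parseProxyList(raw: string): URL[] {
  return raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => new URL(line));
}
```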
Scrape Strategy (Google num=100 Change)
In 2025, Google disabled the num=100 query parameter in SERP requests, which had previously allowed fetching up to 100 results in a single request. With this change, Google now returns a maximum of 10 results per request (one page). This meant keywords ranked beyond position 10 would always show as "Not in First 100", even if they actually ranked on pages 2–10.
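The pagination arithmetic behind this is straightforward. As a sketch (function names here are illustrative, not SerpBear's internals): each page holds 10 results, a result's absolute position determines its page, and a page is requested via Google's `start` offset parameter.

```typescript
const RESULTS_PER_PAGE = 10;

// 1-based page number that contains a given 1-based SERP position.
// Position 1-10 -> page 1, position 11-20 -> page 2, etc.
function pageForPosition(position: number): number {
  return Math.ceil(position / RESULTS_PER_PAGE);
}

// `start` offset used to request a given page
// (page 1 -> start=0, page 2 -> start=10, ...).
function startOffsetForPage(page: number): number {
  return (page - 1) * RESULTS_PER_PAGE;
}
```

So a keyword ranking at position 25 sits on page 3, which is fetched with `start=20` — and checking all 100 positions now costs up to ten requests instead of one.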
To work around this, SerpBear introduced a flexible Scrape Strategy system that controls how many pages are scraped per keyword refresh.
The Three Strategies
| Strategy | Behaviour |
|---|---|
| Basic (default) | Scrapes only the first page (10 results). Fastest, uses fewest API credits. Best if most of your keywords rank on page 1. |
| Custom | Scrapes a fixed number of pages you configure (1–10 pages, up to 100 results). Predictable credit usage. |
| Smart | Scrapes the page where the keyword was last seen, plus one neighboring page on each side. Optionally falls back to all 10 pages if the keyword is not found near its last known position. Most efficient for keywords scattered across multiple pages. |
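The Smart strategy's page selection can be sketched as follows. This is an illustrative reconstruction from the description above, not SerpBear's actual code: it takes the page where the keyword was last seen plus one neighboring page on each side, clamped to the valid range, and falls back to page 1 for a keyword with no known position.

```typescript
const RESULTS_PER_PAGE = 10;
const MAX_PAGES = 10;

// Pages to scrape under the Smart strategy: the last-seen page plus one
// neighbor on each side, clamped to 1..MAX_PAGES. A keyword never seen
// before (null position) starts with page 1 only.
function smartPages(lastPosition: number | null): number[] {
  if (lastPosition === null) return [1];
  const center = Math.ceil(lastPosition / RESULTS_PER_PAGE);
  const pages: number[] = [];
  for (let p = center - 1; p <= center + 1; p++) {
    if (p >= 1 && p <= MAX_PAGES) pages.push(p);
  }
  return pages;
}
```

A keyword last seen at position 25 would trigger scrapes of pages 2, 3, and 4 (30 results) rather than all 10 pages; the optional fallback then covers the case where the keyword has moved far from its last known position.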
Note: The scraper services SerpAPI and SearchAPI natively support fetching 100 results in a single request and are not affected by this change — they continue to use their own `num=100` behavior and bypass the pagination system entirely.

Configuration Levels
The scrape strategy can be set at two levels:
- Global — configured in Settings → Scraper → Scrape Strategy. Applies to all domains by default.
- Per-domain — configured in each domain's Domain Settings → Scraping tab. Overrides the global strategy for that specific domain. Set to "Use Global Setting" to defer to the global configuration.
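Resolving the two levels follows the usual override pattern. As a minimal sketch (the type and field names are assumptions for illustration, not SerpBear's actual schema): a per-domain value wins unless it is set to defer to the global setting.

```typescript
type Strategy = "basic" | "custom" | "smart";

interface DomainScrapeSettings {
  // "useGlobal" mirrors the "Use Global Setting" option in Domain Settings.
  scrapeStrategy: Strategy | "useGlobal";
}

// Per-domain setting overrides the global one; "useGlobal" defers to it.
function effectiveStrategy(
  domain: DomainScrapeSettings,
  globalStrategy: Strategy
): Strategy {
  return domain.scrapeStrategy === "useGlobal"
    ? globalStrategy
    : domain.scrapeStrategy;
}
```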
Skipped Results in the UI
When a strategy limits how many pages are scraped, positions that were not checked are marked as skipped. In the keyword details view:
- A summary banner shows how many results were scraped vs. skipped (e.g. "10 results scraped • 90 positions skipped").
- Consecutive skipped positions are collapsed into a single dashed row (e.g. "Pages 2–10: 90 results skipped").
- The "not found" badge dynamically reflects the actual number of results checked — for example, "Not in First 10" for Basic strategy instead of the old hardcoded "Not in First 100".
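The summary banner and badge text described above follow directly from how many positions were actually checked. A minimal sketch, with illustrative function names and assuming positions beyond the checked count (out of 100) are the skipped ones:

```typescript
const TOTAL_POSITIONS = 100;

// Banner text: scraped vs. skipped counts, e.g.
// "10 results scraped • 90 positions skipped".
function skippedSummary(resultsChecked: number): string {
  const skipped = TOTAL_POSITIONS - resultsChecked;
  if (skipped <= 0) return `${TOTAL_POSITIONS} results scraped`;
  return `${resultsChecked} results scraped • ${skipped} positions skipped`;
}

// "Not found" badge reflects the positions actually checked, instead of a
// hardcoded "Not in First 100".
function notFoundBadge(resultsChecked: number): string {
  return `Not in First ${resultsChecked}`;
}
```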
CRON Jobs
SerpBear runs three cron jobs:
- A Scraper Cron that runs every day at midnight, or at your chosen interval (daily, weekly, or monthly), and updates the SERP positions of all tracked keywords.
- A Retry Cron that retries failed SERP scrapes every hour. It can be enabled from the app settings panel.
- An Email Notification Cron that emails you the current keyword positions at your chosen interval (daily, weekly, or monthly).
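The interval settings map naturally onto standard five-field cron expressions. The exact expressions below are an assumption for illustration, not taken from SerpBear's scheduler code:

```typescript
type Interval = "daily" | "weekly" | "monthly";

// Assumed mapping from interval setting to a cron expression
// (minute hour day-of-month month day-of-week).
const CRON_EXPRESSIONS: Record<Interval, string> = {
  daily: "0 0 * * *",   // every day at midnight
  weekly: "0 0 * * 0",  // Sundays at midnight
  monthly: "0 0 1 * *", // first day of each month at midnight
};

function cronExpression(interval: Interval): string {
  return CRON_EXPRESSIONS[interval];
}
```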