

Build a Keyword Rank Tracker in Python Using Google SERP API (Free Tool)



TL;DR

  • Build a Python rank tracker using Scrapingdog’s Google Search API.
  • Setup: install requests & pandas; use a keyword list; fetch top 100 US results per keyword.
  • Process: extract your domain’s position and URL, then save to rank.csv; optionally schedule daily runs and email alerts.
  • Automates SEO tracking and competitor checks; scalable and cost-effective.

If you’re into SEO or part of a marketing team, you’re likely familiar with rank-tracking tools such as Semrush or Ahrefs. They work well, but they are expensive.

In this tutorial, we will create our rank-tracking tool with Scrapingdog’s SERP API and Python.

Why Rank Tracking Matters for SEO

Most SEO tools tell you what keywords you could rank for. Rank tracking tells you what’s actually happening, and there’s a big difference.

When you publish a page, optimize a title tag, or build backlinks, you have no way of knowing if any of it worked unless you’re watching your rankings. Rank tracking is that feedback loop. Without it, you’re doing SEO blind.

Rankings Change Without Warning

Search rankings are not static. A keyword you ranked #4 for last month might be #11 today because a competitor published a better article or Google rolled out an update. Organic traffic lags behind ranking changes by days or weeks; by the time you notice a drop in Analytics, you’ve already lost ground. Daily rank tracking lets you catch these shifts early.

You Need to Track Competitors Too

Your ranking doesn’t exist in isolation. If a competitor jumps from #8 to #3 on a keyword you care about, you need to know that. A rank tracker lets you monitor any domain on any keyword, so you always know where you’re winning and where you’re losing ground.

Validate What’s Actually Working

Every time you make an on-page change (rewriting a title, improving content depth, adding internal links), rank tracking is how you confirm whether it worked. Without it, every SEO decision is based on instinct. With it, you start building a playbook of what actually moves the needle for your site.

Why Build Your Own?

Tools like Semrush and Ahrefs are great but expensive: $100–$500/month is hard to justify if you’re only tracking a handful of keywords. Building your own tracker in Python using Scrapingdog’s Google SERP API gives you the same core data at a fraction of the cost, with full control over scheduling, storage, and logic.

Logic for Building a Keyword Position Tracker

To build a rank-tracking tool, we need access to live organic Google search results, which we can get from the Google Search API. Once we have that data, we can find the rank of any website for any keyword with a simple GET request.

However, a target website might not always rank on the first page of Google results. This is where pagination becomes essential. Google typically displays 10 results per page, so if a website ranks beyond position 10, we need to paginate through multiple pages to find it. By incrementing the page parameter in our API request, we can crawl deeper into the search results, checking page 2, page 3, and beyond until we either find the target domain or reach a predefined search depth limit (e.g., the top 100 results, or 10 pages).

This approach ensures accurate rank tracking even for keywords where the website ranks lower, while also keeping API usage efficient by stopping the search as soon as the target URL is found.
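The stop-early search described above can be sketched in a few lines of Python. This is a simplified outline, not the final script: `fetch_page` is a hypothetical helper standing in for one API call, and the `rank`/`link` fields mirror the response format used later in this tutorial.

```python
# Sketch of the search-depth logic described above. `fetch_page` is a
# hypothetical helper that stands in for one API call and returns that
# page's organic results (up to 10 dicts with "link" and "rank" keys).
def find_rank(fetch_page, target_domain, max_pages=10):
    """Scan paginated SERP results until the domain appears or the depth limit is hit."""
    for page in range(max_pages):
        for result in fetch_page(page):
            if target_domain in result.get("link", ""):
                return result["rank"], result["link"]  # stop early: no extra pages fetched
    return None  # not found in the top max_pages * 10 results
```

The moment the domain appears, the function returns, so no further pages (and no further API credits) are spent on that keyword.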

Prerequisite

  • We will need a trial account on Scrapingdog.
  • Python should be pre-installed on your machine. If it is not, you can download it from python.org.

Now, start by creating a project folder; you can name it anything you like. This is where we’ll store our Python script and the scraped data.

mkdir tracker
cd tracker

Next, install the required libraries. We’ll use requests to make GET requests to ScrapingDog’s Google Search API, and pandas to organize and export the results (more on that shortly).

pip install requests pandas

Once the dependencies are installed, create a new Python file inside the folder. For this tutorial, we’ll call it rank.py.

touch rank.py

Now let’s start coding.

How to Build a SERP Rank Checker in Python

Before writing any code, make sure you have your Scrapingdog API key ready. If you don’t have one yet, you can grab the key from the dashboard.

import requests
import pandas as pd

keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    for page in range(0, 10):  # pages 0-9 = top 100 results
        params = {
            "api_key": api_key,
            "query": keyword,
            "country": "us",
            "page": page
        }
        response = requests.get(url, params=params)

        if response.status_code == 200:
            data = response.json()
            print(data)
        else:
            print(f"Request failed for '{keyword}' page {page}: {response.status_code}")

We store our target keywords in a list and loop through each one. Since Google now returns 10 results per page, we loop through pages 0–9 to cover the top 100 results. The page parameter is zero-indexed, so page=0 is the first page, page=1 is the second, and so on.

Note: The country parameter lets you track rankings for any specific market. Change "us" to "gb", "in", "au" etc. to track rankings in different countries.
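For example, a small helper can build one request per market. This is a sketch reusing the params shape from the snippet above; the keyword and API key are placeholders.

```python
# Build request parameters for the same keyword across several markets.
markets = ["us", "gb", "in", "au"]

def params_for(keyword, country, page=0, api_key="your-api-key"):
    return {"api_key": api_key, "query": keyword, "country": country, "page": page}

param_sets = [params_for("web scraping api", c) for c in markets]
print([p["country"] for p in param_sets])  # ['us', 'gb', 'in', 'au']
```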

Now let’s actually extract the ranking data from the response:

import requests
import pandas as pd

results = []
target_domain = "scrapingdog.com"  # Change this to your domain

keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    found = False

    for page in range(0, 10):  # pages 0-9 = top 100 results
        if found:
            break  # No need to check further pages once domain is found

        params = {
            "api_key": api_key,
            "query": keyword,
            "country": "us",
            "page": page
        }
        response = requests.get(url, params=params)

        if response.status_code == 200:
            data = response.json()
            organic = data.get('organic_results', [])

            for result in organic:
                if target_domain in result.get('link', ''):
                    results.append({
                        "keyword": keyword,
                        "position": result["rank"],
                        "page": result["link"]
                    })
                    found = True
                    break
        else:
            print(f"Request failed for '{keyword}' page {page}: {response.status_code}")
            break

    if not found:
        results.append({
            "keyword": keyword,
            "position": "Not in top 100",
            "page": "-"
        })

The key difference from before is the nested loop: we now paginate through up to 10 pages to cover 100 results. Notice the found flag at the outer loop level: the moment your domain is found on any page, we break out of both loops immediately and stop spending API credits on that keyword.

This is important for cost efficiency. If your domain ranks #3 for a keyword, the script finds it on page 0 and moves on; it doesn’t burn 9 more API calls unnecessarily.
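To put that in rough numbers for the 5-keyword list used here (illustrative arithmetic, not measured usage):

```python
num_keywords = 5
max_pages = 10

# Worst case: no keyword is found in the top 100, so all pages are fetched.
worst_case = num_keywords * max_pages
# Best case: every keyword ranks on page 1 (page=0), so one call each.
best_case = num_keywords

print(worst_case, best_case)  # 50 5
```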

Finally, we can use pandas to store the data in a CSV file.

import requests
import pandas as pd
from datetime import date

results = []
target_domain = "scrapingdog.com"

keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    found = False

    for page in range(0, 10):
        if found:
            break

        params = {
            "api_key": api_key,
            "query": keyword,
            "country": "us",
            "page": page
        }
        response = requests.get(url, params=params)

        if response.status_code == 200:
            data = response.json()
            organic = data.get('organic_results', [])

            for result in organic:
                if target_domain in result.get('link', ''):
                    results.append({
                        "date": date.today(),
                        "keyword": keyword,
                        "position": result["rank"],
                        "page": result["link"]
                    })
                    found = True
                    break
        else:
            print(f"Request failed for '{keyword}' page {page}: {response.status_code}")
            break

    if not found:
        results.append({
            "date": date.today(),
            "keyword": keyword,
            "position": "Not in top 100",
            "page": "-"
        })

df = pd.DataFrame(results)
df.to_csv('rank.csv', index=False, encoding='utf-8')
print(f"Done! Tracked {len(results)} keywords.")

We’ve added a date field to every row. When you run this script daily and append results over time, you’ll have a full historical record of exactly when your rankings changed. Without dates, your CSV is just a snapshot. With dates, it becomes a trend you can act on.
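Once a few daily runs have accumulated, pandas can reshape that log into a trend table. This is a sketch using sample rows shaped like the rank.csv columns above:

```python
import pandas as pd

# Sample of what rank.csv accumulates over two daily runs.
df = pd.DataFrame([
    {"date": "2026-01-01", "keyword": "best serp api", "position": 7},
    {"date": "2026-01-01", "keyword": "web scraping api", "position": 12},
    {"date": "2026-01-02", "keyword": "best serp api", "position": 5},
    {"date": "2026-01-02", "keyword": "web scraping api", "position": 11},
])

# One row per date, one column per keyword: a ready-made trend view.
trend = df.pivot(index="date", columns="keyword", values="position")
print(trend)
```

From here it is one more line to plot a keyword's trajectory or flag any position that moved by more than a few spots overnight.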

Rank Tracking Result

Schedule Daily Runs with Email Alerts

Running the script manually every day defeats the purpose. Let’s automate it using the schedule library and send yourself an email summary each morning.

First, install the dependency:

pip install schedule

Then update your script:

import requests
import pandas as pd
import schedule
import time
import smtplib
import os
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders
from datetime import date

target_domain = "scrapingdog.com"
keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

EMAIL_SENDER = "your@gmail.com"
EMAIL_PASSWORD = "your-app-password"  # Use an app password, not your main Gmail password
EMAIL_RECEIVER = "your@gmail.com"


def track_rankings():
    results = []

    for keyword in keywords:
        found = False

        for page in range(0, 10):
            if found:
                break

            params = {
                "api_key": api_key,
                "query": keyword,
                "country": "us",
                "page": page
            }
            response = requests.get(url, params=params)

            if response.status_code == 200:
                data = response.json()
                organic = data.get('organic_results', [])

                for result in organic:
                    if target_domain in result.get('link', ''):
                        results.append({
                            "date": date.today(),
                            "keyword": keyword,
                            "position": result["rank"],
                            "page": result["link"]
                        })
                        found = True
                        break
            else:
                print(f"Request failed for '{keyword}' page {page}: {response.status_code}")
                break

        if not found:
            results.append({
                "date": date.today(),
                "keyword": keyword,
                "position": "Not in top 100",
                "page": "-"
            })

    # Append mode for historical tracking
    df = pd.DataFrame(results)
    file_exists = os.path.isfile('rank.csv')
    df.to_csv('rank.csv', mode='a', header=not file_exists, index=False, encoding='utf-8')

    send_email(results)
    print(f"[{date.today()}] Tracking complete.")


def send_email(results):
    body = f"Rank Tracking Report — {date.today()}\n\n"
    for r in results:
        body += f"{r['keyword']}: Position {r['position']}\n"

    msg = MIMEMultipart()
    msg['From'] = EMAIL_SENDER
    msg['To'] = EMAIL_RECEIVER
    msg['Subject'] = f"Daily Rank Report — {date.today()}"
    msg.attach(MIMEText(body, 'plain'))

    with open('rank.csv', 'rb') as f:
        part = MIMEBase('application', 'octet-stream')
        part.set_payload(f.read())
        encoders.encode_base64(part)
        part.add_header('Content-Disposition', 'attachment; filename="rank.csv"')
        msg.attach(part)

    with smtplib.SMTP_SSL('smtp.gmail.com', 465) as server:
        server.login(EMAIL_SENDER, EMAIL_PASSWORD)
        server.sendmail(EMAIL_SENDER, EMAIL_RECEIVER, msg.as_string())

    print("Email sent!")


# Run every day at 8:00 AM
schedule.every().day.at("08:00").do(track_rankings)

print("Scheduler running... Press Ctrl+C to stop.")
while True:
    schedule.run_pending()
    time.sleep(60)

The CSV uses append mode (mode='a') so every daily run adds new rows rather than overwriting yesterday’s data. The email sends the full CSV as an attachment so you have everything in your inbox each morning.
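If you’d rather not keep a Python process running around the clock, a cron entry is a common alternative on Linux/macOS. This is a sketch; the script path and interpreter path are assumptions you should adjust for your machine.

```shell
# Open your crontab with `crontab -e` and add:
# minute hour day month weekday  command
0 8 * * * /usr/bin/python3 /home/you/tracker/rank.py >> /home/you/tracker/cron.log 2>&1
```

If you schedule with cron, call track_rankings() directly at the bottom of rank.py and drop the schedule loop, so each run executes once and exits.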

Gmail Tip: Don’t use your regular Gmail password in the script.

Go to your Google Account → Security → App Passwords and generate a dedicated app password for this script.

This keeps your main account secure and prevents authentication issues while sending emails programmatically.
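One way to follow that advice in code is to load the credentials from environment variables instead of hardcoding them in the script. This is a sketch; the variable names are our own convention.

```python
import os

def load_email_credentials():
    """Pull SMTP credentials from the environment instead of the script body."""
    sender = os.environ.get("EMAIL_SENDER", "")
    password = os.environ.get("EMAIL_PASSWORD", "")
    if not sender or not password:
        raise RuntimeError(
            "Set EMAIL_SENDER and EMAIL_PASSWORD first, e.g.\n"
            '  export EMAIL_SENDER="your@gmail.com"\n'
            '  export EMAIL_PASSWORD="your-app-password"'
        )
    return sender, password
```

This also keeps the password out of version control if you ever commit the script.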

That’s the complete rank tracker. With pagination handling, early exit on match, and historical CSV logging, you now have a production-ready script that tracks your keyword positions daily, at a fraction of what Semrush or Ahrefs would charge.

Rank Tracking using No-Code Tools (n8n)

After Google removed the num=100 parameter, which used to return 100 search results in a single request, we also built a rank tracker using n8n.

If you’re a no-code enthusiast looking to do rank tracking at an economical cost, that tutorial can help.

If you’d like to read through the full workflow, check out this blog, where you’ll also find the blueprint for the automation.

Key Takeaways:

  • The tutorial explains how to build an automated keyword rank tracker in Python using a Google SERP API.
  • It shows how to send search queries, parse the response, and extract the ranking position of your target domain.
  • The script tracks organic search results and helps monitor keyword position changes over time.
  • You can schedule the script to run daily and store results to create historical ranking data.
  • This method eliminates manual rank checking and provides a scalable way to monitor SEO performance.

Conclusion

Keyword rank tracking is a crucial part of any search engine optimization (SEO) strategy, enabling businesses to monitor their performance and refine their approaches to stay ahead of the competition.

Using Python and the Google Search API, you can automate the process and efficiently gather accurate ranking data.

This not only saves time but also provides insights that empower data-driven decisions for improving visibility and driving organic traffic.

Whether you monitor your own rankings or analyze competitors, this combination provides a scalable and cost-effective solution.

Frequently Asked Questions (FAQs)

1. Can I track competitor domains too?

Yes. You can replace target_domain with any competitor’s domain and track their rankings for the same keywords. The script doesn’t care whose domain it checks — it simply scans the SERP results and reports the position if found.
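For instance, a small helper can record positions for several domains in one pass over the organic results. This is a sketch; the sample results mimic the link/rank fields used throughout the tutorial.

```python
domains = ["scrapingdog.com", "competitor-a.com", "competitor-b.com"]

def positions_for(organic, domains):
    """Return {domain: rank} for each tracked domain that appears in the results."""
    found = {}
    for result in organic:
        for d in domains:
            if d not in found and d in result.get("link", ""):
                found[d] = result["rank"]
    return found

sample = [
    {"link": "https://competitor-a.com/blog", "rank": 2},
    {"link": "https://scrapingdog.com/", "rank": 5},
]
print(positions_for(sample, domains))  # competitor-a.com at 2, scrapingdog.com at 5
```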

2. Is scraping Google search results legal?

Google search results contain publicly accessible data. However, you should always follow platform terms and use scraping responsibly. Using an API reduces technical risks like aggressive scraping or server overload.

3. Can I track rankings for any country?

Yes. The country parameter lets you track rankings in specific markets such as “us”, “gb”, “in”, “au”, etc. This makes it easy to monitor performance across different geographic regions.

4. How many API credits does this script use per day?

It depends on how many keywords you track and how deep you paginate. In the worst case (10 pages per keyword), it uses up to 10 API calls per keyword per day. But if your domain ranks on page 1, it stops early and uses fewer credits.

5. Is rank tracking with an API accurate?

Yes. The script pulls live organic Google results directly from the API, so the ranking positions reflect real-time search data. The accuracy depends on the selected country and query parameters, just like any professional SEO tool.

My name is Manthan Koolwal and I am the founder of scrapingdog.com. I love creating scrapers and seamless data pipelines.
Manthan Koolwal
