Zoopla is one of the UK’s leading property portals, packed with real estate listings, rental data, and housing market insights. Whether you’re a developer building a property comparison tool or a researcher tracking market trends, accessing this data programmatically can be incredibly useful.
In this guide, we’ll show you how to scrape property listings from Zoopla using Python, covering everything from navigating pagination to extracting key details like price, location, and property type, all while being mindful of best scraping practices. If you’re just starting out with web scraping, I highly recommend reading Web Scraping with Python. It’s a great resource for building a strong foundation.
Requirements To Scrape Data From Zoopla
I assume that you have already installed Python on your machine. If not, you can download it from here. Now, create a folder with any name you like; I am naming mine zoopla.

mkdir zoopla
Install these three libraries inside this folder:

- requests for making HTTP connections to the target website.
- BeautifulSoup for parsing the raw HTML.
- Pandas for storing the data in a CSV file.
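All three can be installed with pip in a single command (beautifulsoup4 is the pip package name for BeautifulSoup):

```shell
pip install requests beautifulsoup4 pandas
```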
Final step before moving to the coding part would be to sign up for a web scraping API. In this tutorial, we are going to use Scrapingdog.
Now, we should create a Python file where we can write the scraping code. I am naming the file zoopla.py.
Let's Start Scraping Zoopla with Python
Before we start coding the scraper, take a moment to read Scrapingdog’s documentation; it’ll give you a clear idea of how we can use the API to scrape Zoopla at scale.
Once you have read the documentation, it is better to decide what information you want to extract from Zoopla in advance.
We are going to scrape the price, address, and description of the property. Let’s find the location of each data point in the DOM.
The price is stored inside a p tag with the class _64if862.
The description is stored inside a p tag with the class m6hnz63.
The address is located inside the address tag itself.
Each property is located inside a div tag with the class dkr2t86.
Let’s code now.
Downloading raw HTML
import requests
from bs4 import BeautifulSoup
import pandas as pd

l = []
obj = {}

params = {
    'api_key': 'your-api-key',
    'url': 'https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home&pn=2',
    'dynamic': 'false',
}

response = requests.get("https://api.scrapingdog.com/scrape", params=params)
print(response.status_code)
This is the basic Python code where we are making a GET request to the Scrapingdog API. Remember to use your own API key in the above code. Once you run the code, you should see a 200 status code, indicating a successful scrape.
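In practice, not every request returns 200 on the first try. A small helper like the hypothetical `should_retry` below is one way to decide whether a failed call is worth re-issuing; the status-code choices here are an assumption on my part, not part of the Scrapingdog API:

```python
def should_retry(status_code: int) -> bool:
    """Return True for status codes worth retrying: rate limits (429) and transient 5xx errors."""
    return status_code == 429 or 500 <= status_code < 600

# A successful scrape (200) and a hard client error (404) need no retry;
# rate limiting and server hiccups are usually transient.
print(should_retry(200))  # False
print(should_retry(429))  # True
print(should_retry(503))  # True
```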
Parsing the data with BeautifulSoup
import requests
from bs4 import BeautifulSoup
import pandas as pd

l = []
obj = {}

params = {
    'api_key': 'your-api-key',
    'url': 'https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home&pn=2',
    'dynamic': 'false',
}

response = requests.get("https://api.scrapingdog.com/scrape", params=params)
soup = BeautifulSoup(response.text, 'html.parser')

allData = soup.find_all("div", {"class": "dkr2t86"})

for data in allData:
    try:
        obj["price"] = data.find("p", {"class": "_64if862"}).text
    except:
        obj["price"] = None
    try:
        obj["address"] = data.find("address", {"class": "m6hnz62"}).text
    except:
        obj["address"] = None
    try:
        obj["description"] = data.find("p", {"class": "m6hnz63"}).text
    except:
        obj["description"] = None
    l.append(obj)
    obj = {}

print(l)
Handling Pagination
When you click on the second page at the bottom of the page, the URL becomes https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home&pn=2. As you can see, a page parameter named pn appears in the URL, which can be used to change the page number.
We have to run a for loop that can automatically change the page number within the URL and make a separate API call on every iteration.
import requests
from bs4 import BeautifulSoup
import pandas as pd

l = []
obj = {}

for i in range(1, 11):
    params = {
        'api_key': 'your-api-key',
        'url': 'https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home&pn={}'.format(i),
        'dynamic': 'false',
    }
    response = requests.get("https://api.scrapingdog.com/scrape", params=params)
    soup = BeautifulSoup(response.text, 'html.parser')
    allData = soup.find_all("div", {"class": "dkr2t86"})
    for data in allData:
        try:
            obj["price"] = data.find("p", {"class": "_64if862"}).text
        except:
            obj["price"] = None
        try:
            obj["address"] = data.find("address", {"class": "m6hnz62"}).text
        except:
            obj["address"] = None
        try:
            obj["description"] = data.find("p", {"class": "m6hnz63"}).text
        except:
            obj["description"] = None
        l.append(obj)
        obj = {}

print(l)
Saving data to CSV
Using the pandas library, we can save this data to a CSV file. Let’s see how it can be done.
df = pd.DataFrame(l)
df.to_csv('zoopla.csv', index=False, encoding='utf-8')
- Creates a DataFrame df from the list l.
- Saves the DataFrame to a CSV file named zoopla.csv.
- Disables the index column in the CSV using index=False.
Once you run it, you will find a CSV file named zoopla.csv.
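To confirm the file round-trips cleanly, you can read it back with pandas. The records below are illustrative stand-ins for the scraped data, shaped like the dictionaries our scraper builds:

```python
import pandas as pd

# Sample records shaped like the scraped output (values are made up for illustration).
l = [
    {"price": "£425,000", "address": "Example Road, London", "description": "2 bed flat"},
    {"price": "£800,000", "address": "Sample Street, London", "description": "3 bed house"},
]

df = pd.DataFrame(l)
df.to_csv("zoopla.csv", index=False, encoding="utf-8")

# Read the file back and verify the rows and columns survived intact.
check = pd.read_csv("zoopla.csv")
print(len(check))           # 2
print(list(check.columns))  # ['price', 'address', 'description']
```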
Complete Code
You can, of course, scrape other details as well, but for now the complete code looks like this.
import requests
from bs4 import BeautifulSoup
import pandas as pd

l = []
obj = {}

params = {
    'api_key': 'your-api-key',
    'url': 'https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home',
    'dynamic': 'false',
}

response = requests.get("https://api.scrapingdog.com/scrape", params=params)
soup = BeautifulSoup(response.text, 'html.parser')

allData = soup.find_all("div", {"class": "dkr2t86"})

for data in allData:
    try:
        obj["price"] = data.find("p", {"class": "_64if862"}).text
    except:
        obj["price"] = None
    try:
        obj["address"] = data.find("address", {"class": "m6hnz62"}).text
    except:
        obj["address"] = None
    try:
        obj["description"] = data.find("p", {"class": "m6hnz63"}).text
    except:
        obj["description"] = None
    l.append(obj)
    obj = {}

df = pd.DataFrame(l)
df.to_csv('zoopla.csv', index=False, encoding='utf-8')
Get Structured Data without Parsing using Scrapingdog AI Scraper
If you want structured data without writing parsing logic or using BeautifulSoup, you can use AI parsing instead. Just submit a simple query, and the data gets parsed for you, no custom code required. You can read more about this parameter over here.
import requests

response = requests.get("https://api.scrapingdog.com/scrape", params={
    'api_key': 'your-api-key',
    'url': 'https://www.zoopla.co.uk/for-sale/property/london/?q=london&search_source=home&pn=1',
    'dynamic': 'false',
    'ai_query': 'Give me the price of each property in json format',
    'ai_extract_rules': '{"price":"$xyz"}'
})
print(response.text)
This will generate a JSON response that looks like this.
{"price":"£425,000"},{"price":"£800,000"},{"price":"£575,000"},{"price":"£650,000"},{"price":"£1,100,000"},{"price":"£3,600,000"},{"price":"£450,000"},{"price":"£625,000"},{"price":"£1,350,000"},{"price":"£475,000"},{"price":"£950,000"},{"price":"£360,000"},{"price":"£685,000"},{"price":"£435,000"},{"price":"£1,695,000"},{"price":"£320,000"},{"price":"£450,000"},{"price":"£6
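The prices in this response are display strings. If you want to analyse them numerically, a small helper such as the hypothetical `parse_price` below can strip the currency symbol and thousands separators; this is an illustration I am adding, not part of the API response handling:

```python
def parse_price(price: str) -> int:
    """Convert a display price such as '£425,000' to an integer number of pounds."""
    return int(price.replace("£", "").replace(",", ""))

# Values taken from the sample response above.
prices = ["£425,000", "£800,000", "£1,100,000"]
print([parse_price(p) for p in prices])  # [425000, 800000, 1100000]
```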
Advantages of this approach
- No code required — Easily extract structured data without writing any parsing logic.
- Faster implementation — Skip HTML structure analysis and get results instantly.
- Handles messy HTML — AI can intelligently extract data even from inconsistent or nested layouts.
- Reduces maintenance — No need to update scrapers when websites change minor HTML tags or classes.
- Ideal for non-devs — Makes web scraping accessible to users without programming experience.
- Cleaner output — Directly get structured JSON or CSV-ready data.
Conclusion
Scraping property listings from Zoopla with Python and Scrapingdog opens up a world of possibilities for real estate analysis, price tracking, and market research. With libraries like requests and BeautifulSoup, or even automation tools like Puppeteer, you can extract valuable data at scale. Whether you're building a real estate dashboard or feeding data into an ML model, Python makes the process efficient and flexible.
Additional Resources
