
Web Scraping Google Search Results using Python

03-05-2021

Python is a versatile language that can be used for many different things. One neat little trick it can do is scrape Google search results.

Python can be used to scrape all kinds of data: conducting market research, tracking prices for a business, collecting listings from real estate websites like Zillow, and much more. The list is endless!

In this blog post, we’ll take a look at Python libraries to make this process simple.


Why Python for Google scraping?

Python is a simple yet flexible language, and it is easy to understand even if you are a beginner. Its community is also huge, which helps whenever you run into an error while coding.

Forums like StackOverflow and GitHub already have answers to most of the errors you might hit while scraping Google search results.

You can do countless things with Python, but for now we will focus on scraping Google search results with it.

Read More: Web scraping 101 with Python (A beginner-friendly tutorial)

Let’s Start Scraping Google Search Results with Python

In this post, we will scrape Google search results for any specific country using Python and a free residential proxy. But first, we will focus on creating a basic Python script that can scrape the first 10 results.

The end result will be JSON data consisting of a link, title, description, and position for each result. You can use this data for SEO, product verification, and more.
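To make that concrete, each entry in the output will have roughly this shape (the values below are illustrative placeholders, not real results):

{
    "link": "https://www.example.com/",
    "title": "Example result title",
    "description": "Example snippet text shown under the result...",
    "position": 1
}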

Prerequisites for scraping Google search results

Generally, Google scraping with Python is divided into two parts:

  1. Fetching data by making an HTTP request.
  2. Extracting essential data by parsing the HTML DOM.
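As a quick preview, here is a minimal sketch of those two steps; the rest of the post builds this out piece by piece, including headers, result parsing, and proxies:

import requests
from bs4 import BeautifulSoup

# Step 1: fetch the raw HTML with an HTTP GET request
html = requests.get("https://www.google.com/search?q=pizza")

# Step 2: parse the HTML DOM so we can extract the data we need
soup = BeautifulSoup(html.text, 'html.parser')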

Libraries & Tools

  1. Beautiful Soup is a Python library for pulling data out of HTML and XML files.
  2. Requests allows you to send HTTP requests very easily.
  3. A residential proxy, used to fetch the HTML of the target URL from different locations.

Setup

Our setup is pretty simple. Just create a folder and install Beautiful Soup and Requests. To create the folder and install the libraries, run the commands below. I am assuming that you have already installed Python 3.x.

mkdir scraper
pip install beautifulsoup4
pip install requests

Now, create a file inside that folder with any name you like. I am using google.py.

Import the libraries we just installed in that file.

from bs4 import BeautifulSoup
import requests

Preparing the Food

Now that we have all the ingredients ready, we can make a GET request to the target URL to fetch the raw HTML. We will scrape the Google search results using the Requests library as shown below.

We will first try to scrape 10 search results and then we will focus on country-specific results.

headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'}

url='https://www.google.com/search?q=pizza&ie=utf-8&oe=utf-8&num=10'
html = requests.get(url,headers=headers)
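Before parsing anything, it is worth confirming that the request actually succeeded, because Google returns a non-200 status code when it blocks or redirects you. A small check along these lines:

# Stop early if Google didn't return a normal response
if html.status_code != 200:
    print("Request failed with status code:", html.status_code)
    raise SystemExit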

Once the request succeeds, html.text contains the raw HTML of the target URL. Now, you have to use BeautifulSoup to parse it.

soup = BeautifulSoup(html.text, 'html.parser')

When you inspect the Google results page you will find that every result sits inside a div with the class "g". Of course, this name will change from time to time because Google doesn't like scrapers, so keep an eye on it.


We will extract all the divs with the class "g".

allData = soup.find_all("div",{"class":"g"})
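Because Google renames these classes from time to time, it helps to fail loudly when the selector stops matching instead of silently returning nothing. A quick sanity check might look like this:

# If the "g" class was renamed, find_all returns an empty list
if len(allData) == 0:
    print("No results found - the result class may have changed, inspect the page again.")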

Now, we will run a for loop to iterate over every item in the allData list.

g = 0
Data = []
l = {}
for i in range(0, len(allData)):
    link = allData[i].find('a').get('href')

    if link is not None:
        # keep only organic http(s) links and skip ad links containing 'aclk'
        if link.find('https') != -1 and link.find('http') == 0 and link.find('aclk') == -1:
            g = g + 1
            l["link"] = link
            try:
                l["title"] = allData[i].find('h3').text
            except:
                l["title"] = None

            try:
                l["description"] = allData[i].find("span", {"class": "aCOpRe"}).text
            except:
                l["description"] = None

            l["position"] = g

            Data.append(l)

            l = {}

        else:
            continue

    else:
        continue

print(Data)

Inside the for loop, we find the website link, title, and description. The link lives inside the a tag, the title inside the h3 tag, and the description inside a span tag with the class aCOpRe.


We have to filter the legitimate Google links out of the raw data, which is why we use the find() method to drop garbage and ad links; ad links can be spotted because they contain 'aclk' in the URL string. We then put each result's data into the dictionary l and append it to the list Data.

On printing the list Data, you will get a list of dictionaries, one per result, each containing the link, title, description, and position.

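Since the end goal is JSON, you can also write the list straight to a file with Python's built-in json module; a minimal sketch:

import json

# Save the scraped results as pretty-printed JSON
with open("results.json", "w") as f:
    json.dump(Data, f, indent=4)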

This method is not reliable, though, because Google will block you after a certain number of requests. We need some more advanced tools to overcome this problem.
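If you want to stretch the basic approach a little further before reaching for those tools, the simplest mitigation is to slow down and randomize the gap between requests. A rough sketch, reusing the headers defined above (the keyword list and delay range here are arbitrary):

import time
import random

keywords = ["pizza", "pasta", "burger"]
for keyword in keywords:
    url = 'https://www.google.com/search?q=' + keyword + '&num=10'
    html = requests.get(url, headers=headers)
    # wait a random 5-15 seconds between requests to look less like a bot
    time.sleep(random.uniform(5, 15))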

Know more: 10 Tips for Web Scraping To Avoid Getting Blocked

Scraping Google Search Results from Different Countries using Python

Now that we know how to scrape Google search results using Python (as we did in the previous section), we can move on to more advanced techniques. As you know, Google shows different results in different countries for the same keyword.

So, we will now scrape the Google results according to the country of origin. We will use a residential proxy to achieve this.

There are plenty of web scraping tools out there, and many of them can scrape Google search results too: you enter the URL you want to scrape, and they do the rest.

First, we will create a list of user agents so that we can rotate them on every request. For this tutorial, we will create a list of 10 user agents. If you want more, then you can find them here.

userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
    'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1'
]

Now, we need a residential proxy provider so that we can rotate proxies and change the origin of each request. When you sign up for Scrapingdog you get 1000 free requests. You can find the proxy documentation here.

You will find your proxy URL on the dashboard. We will create a proxy object to pass it on to the requests method.

# Placeholder credentials - replace YOUR_API_KEY (and the host, if it differs)
# with the proxy URL shown on your Scrapingdog dashboard
http_proxy  = "http://scrapingdog:YOUR_API_KEY-country=us@proxy.scrapingdog.com:8081"
https_proxy = "http://scrapingdog:YOUR_API_KEY-country=us@proxy.scrapingdog.com:8081"

proxyDict = {"http": http_proxy, "https": https_proxy}

We have used -country=us as a param in our proxy URL to get USA proxies. Similarly, you can use 'ca' for Canada, 'gb' for the United Kingdom, 'in' for India, and so on.
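If you want to switch countries programmatically, you can build the proxy string from a country code. This is only a sketch that assumes the -country=XX credential format shown above and a placeholder API key; check the proxy documentation for the exact syntax:

def proxy_for_country(country_code):
    # Placeholder API key and host - use the values from your own dashboard
    proxy = "http://scrapingdog:YOUR_API_KEY-country=" + country_code + "@proxy.scrapingdog.com:8081"
    return {"http": proxy, "https": proxy}

proxyDict = proxy_for_country("gb")   # e.g. 'us', 'ca', 'gb', 'in'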

We will use the random library to rotate user agents.

from random import randrange
headers={'User-Agent':userAgents[randrange(10)]}

html = requests.get(url,proxies=proxyDict,headers=headers)

And that’s it. All the rest of the code will remain the same as earlier.

As earlier, we will create a BeautifulSoup object and extract the same classes. But this time Google won't be able to block you, because you are using a new IP on every request.
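Putting the pieces together, here is a condensed sketch of the country-specific scraper, using the userAgents list from above, the placeholder proxy URL, and the result class names assumed earlier (both the proxy URL and the class names may need updating when you run it):

import requests
from bs4 import BeautifulSoup
from random import randrange

# Placeholder proxy URL - copy the real one from your Scrapingdog dashboard
proxy = "http://scrapingdog:YOUR_API_KEY-country=us@proxy.scrapingdog.com:8081"
proxyDict = {"http": proxy, "https": proxy}

headers = {'User-Agent': userAgents[randrange(len(userAgents))]}
url = 'https://www.google.com/search?q=pizza&ie=utf-8&oe=utf-8&num=10'

html = requests.get(url, proxies=proxyDict, headers=headers)
soup = BeautifulSoup(html.text, 'html.parser')

Data = []
for position, result in enumerate(soup.find_all("div", {"class": "g"}), start=1):
    link_tag = result.find('a')
    title_tag = result.find('h3')
    desc_tag = result.find("span", {"class": "aCOpRe"})
    Data.append({
        "link": link_tag.get('href') if link_tag else None,
        "title": title_tag.text if title_tag else None,
        "description": desc_tag.text if desc_tag else None,
        "position": position,
    })

print(Data)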

For the USA, you will get search results localized to the US, and for the United Kingdom the same query will return UK-specific results.

Similarly, you can check for other countries.

But if you want to avoid all this hassle, you can use our Web Scraping API to scrape Google search results in a single GET request.

Limitations of scraping Google search results with Python

Although Python is a great language for scraping Google search results, it has some limitations. Because it is a dynamic language, it is prone to runtime errors, and it does not handle multiple threads as well as some other languages.

Response times also tend to be slow when using plain Python for scraping Google search results.

Beyond that, you cannot keep scraping Google at a large scale with just Python, because Google will eventually block your script for sending such a large amount of traffic from one single IP.

With Scrapingdog’s API, you don’t have to maintain a web scraping script. Scrapingdog will handle all the hassle and deliver the data in a seamless manner. You can take a trial where the first 1000 requests are on us.


Using Google’s API to Scrape Google Search Results

Google offers its own API to extract data from its search engine. It is available at this link for anyone who wants to use it. However, its usefulness is very limited, for the following reasons:

  • The API is very costly: every 1000 requests cost you around $5, which doesn't make sense when you can do the same for free with web scraping tools.
  • The API has limited functionality: it is designed to search only a small group of websites; you can extend it to cover the whole web, but that costs extra setup time.
  • Limited information: the API returns only a small amount of data per result, so the extracted data may not be all that useful.

Benefits of Scraping Google Search Results

Web Scraping Google search results can provide a variety of benefits, including:

  1. Data collection: It can be used to collect large amounts of data on a particular topic or industry, which can be used for research, analysis, or business intelligence purposes.
  2. Competitive analysis: It can be used to gather information on a company’s competitors, such as their search rankings, ad spending, and keywords.
  3. SEO: It can be used to gather information on the keywords and phrases that are driving traffic to a website, which can be used to improve search engine optimization (SEO) and increase visibility in search results.
  4. Lead generation: It can be used to gather contact information for potential customers, such as email addresses and phone numbers.
  5. Pricing intelligence: It can be used to gather information on prices for products and services, which can be used to inform pricing decisions and stay competitive in the marketplace.
  6. Market research: It can be used to gather data on market trends, consumer sentiment, and industry developments, which can be used to inform strategic business decisions.

Read More: Web Scraping Use Cases

Conclusion

In this article, we learned how to scrape data from Google using Python and a residential proxy. Feel free to comment and ask me anything.

Thanks for reading!!

Frequently Asked Questions

How many free requests does Scrapingdog offer?

Scrapingdog offers 1000 requests with its free account. Check out the pricing plan for more options.

Which library did we use to extract Google search results?

In this tutorial, we used the BeautifulSoup library to extract Google search results.


Manthan Koolwal

My name is Manthan Koolwal and I am the founder of scrapingdog.com. I love creating scrapers and seamless data pipelines.