
Web Scraping Glassdoor To Extract Name, Title, Location & Salary Details with Python

03-08-2023

Web scraping Glassdoor can give you insights such as the salary you should expect when applying for a job.

Employers can scrape Glassdoor to improve their hiring strategy by benchmarking against the competition. The use cases for web scraping Glassdoor are endless, and in this article we will extract data from job listings.

Web Scraping Glassdoor

In this article, we are going to use Python to design a Glassdoor scraper. By the end of this tutorial, you will also be able to save the data in a CSV file.

What You Need To Scrape Glassdoor

For this article, we will need Python 3.x, and I am assuming that you have already installed it on your machine. Along with this, we have to install two third-party libraries, BeautifulSoup and Selenium, and download a ChromeDriver executable.

Here is how you install the libraries:

pip install selenium
pip install beautifulsoup4
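To confirm that both installs worked, you can run a quick import check. This is just a sanity check, not part of the scraper itself.

import bs4
import selenium

# Both imports should succeed and print version numbers.
print(bs4.__version__, selenium.__version__)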

Along with this, create a dedicated folder to keep the script file in, and create a Python file inside that folder. I am going to name it glassdoor.py

mkdir glass

What Job Details Will We Scrape from Glassdoor?

It is always recommended to decide what data you are going to scrape before writing a single line of code. As shown in the image below, we are going to scrape four items from each posted job.

Data Which We Will Be Scraping From Glassdoor
  1. Name of the Company
  2. Job Title
  3. Location
  4. Salary

First, we are going to extract the raw HTML from the website using Selenium, and then we are going to use the .find() and .find_all() methods of BS4 to parse the data out of that HTML.

Chromium will be used in coordination with Selenium to load the website.

Let’s Start Scraping Glassdoor Job Data

Before we start, let's run a small test to check that our setup works.

from bs4 import BeautifulSoup
from selenium import webdriver
import time

# Path to the ChromeDriver executable (a raw string, so the backslashes
# are not treated as escape sequences).
# Note: Selenium 4.10+ expects the path via a Service object instead:
# driver = webdriver.Chrome(service=Service(PATH))
PATH = r'C:\Program Files (x86)\chromedriver.exe'


# A list for all scraped jobs and a dict for the current job.
l=list()
o={}

target_url = "https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15.htm?clickSource=searchBox"

driver=webdriver.Chrome(PATH)

driver.get(target_url)

driver.maximize_window()
time.sleep(2)

# Grab the fully rendered HTML of the page.
resp = driver.page_source

driver.close()

Now, let me explain what we have done here, step by step.

  1. We imported the libraries that were installed earlier in this article.
  2. Then we declared PATH, which points to the ChromeDriver executable.
  3. An empty list and an empty dictionary to store the job data are also declared.
  4. target_url holds the URL of the target Glassdoor page.
  5. Then we created a browser instance using the .Chrome() method.
  6. Using the .get() method, we connect to the target webpage; Chromium loads this page.
  7. Using .maximize_window(), we expand the Chrome window to its maximum size.
  8. Then we use time.sleep() to wait before closing the Chrome instance, which gives the website time to load completely.
  9. Then, using .page_source, we collect the raw HTML of the page.
  10. Finally, we close the Chromium instance using the .close() method provided by the Selenium API.

Once you run this code, it should open a Chrome instance, load the page, and then close the browser. If that happens with your script, we can move ahead: our main setup is ready.
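Next, hand the raw HTML over to BeautifulSoup so we can search the DOM. This line also appears in the complete code further down.

soup = BeautifulSoup(resp, 'html.parser')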

One thing you will notice is that all these jobs sit under a ul tag with the class hover p-0 my-0 css-7ry9k1 exy0tjh5. Any one stable class from that list is enough to identify the tag, so we will search for css-7ry9k1 using the .find() method of BS4.

allJobsContainer = soup.find("ul",{"class":"css-7ry9k1"})

allJobs = allJobsContainer.find_all("li")

Using the .find() method we search for the ul tag, and then using the .find_all() method we search for all the li tags inside it.

Now, we can use a for loop to access all 30 jobs available on the page.

Sometimes Glassdoor will show you a page with no jobs. All you have to do is clear the cookies and try again.
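If that happens while the browser is still open, Selenium can clear the cookies for you before retrying. A small sketch, assuming the empty page is indeed cookie-related:

# Clear all cookies, then reload the target page and wait for it to render.
driver.delete_all_cookies()
driver.get(target_url)
time.sleep(2)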

Now, let’s find the location of each target element, one by one.

Inspecting the name of the company in the source code

As you can see, the name of the company can be found under a div tag with the class d-flex justify-content-between align-items-start. Let's parse it out of the raw HTML using BS4.

for job in allJobs:
    try:
        o["name-of-company"]=job.find("div",{"class":"d-flex justify-content-between align-items-start"}).text
    except:
        o["name-of-company"]=None

We will push o into the list l only after all four fields have been collected; that step appears a little further down.

Now let’s find the name of the job.

Inspecting the name of the job in the source code

You can find it inside an a tag with the class jobLink css-1rd3saf eigr9kq2. Let's use the same technique to find it.

try:
    o["name-of-job"]=job.find("a",{"class":"jobLink css-1rd3saf eigr9kq2"}).text
except:
    o["name-of-job"]=None

Inspecting the location of the job in the source code

In the above image, you can see the location is stored under a div tag with the class d-flex flex-wrap css-11d3uq0 e1rrn5ka2.

try:
    o["location"]=job.find("div",{"class":"d-flex flex-wrap css-11d3uq0 e1rrn5ka2"}).text
except:
    o["location"]=None

The last thing left is the salary, and this is the most important part for obvious reasons.

Inspecting the salary in the source code

You can see in the above image that the salary information can be found under the div tag with class css-3g3psg pr-xxsm.

try:
    o["salary"]=job.find("div",{"class":"css-3g3psg pr-xxsm"}).text
except:
    o["salary"]=None


l.append(o)
o={}

In the end, we push the object o into the list l and then reset o to an empty dictionary so the next job starts fresh.

Once you run the script and print the list l, you will see the name of the company, the name of the job, the location, and the salary for every listing.
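A single entry in the list will look roughly like this (the values below are illustrative, not a real listing):

{'name-of-company': 'Example Corp', 'name-of-job': 'Python Developer', 'location': 'New York, NY', 'salary': '$90K - $120K (Glassdoor est.)'}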

Let’s save this data to a CSV file

To save this data to a CSV file, all we have to do is install pandas. It takes just two lines of code to create a CSV file and store the data in it.

First, let’s install pandas.

pip install pandas

Then import it into our main glassdoor.py script.

import pandas as pd

Now, using the DataFrame() constructor, we convert our list l into rows and columns. Then, using the .to_csv() method, we write the DataFrame to a CSV file.

df = pd.DataFrame(l)
df.to_csv('jobs.csv', index=False, encoding='utf-8')

You can add these two lines once your list l is ready with all the data. Once the program has executed, you will find a CSV file named jobs.csv in your project folder.

Isn’t that simple? You were able to scrape and save the data to a CSV file in no time.

Complete Code

You can make more changes to scrape a little more information, but for now the complete code looks like this.

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import pandas as pd

# Path to the ChromeDriver executable (raw string to avoid escape issues).
PATH = r'C:\Program Files (x86)\chromedriver.exe'


l=list()
o={}

target_url = "https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15.htm?clickSource=searchBox"

driver=webdriver.Chrome(PATH)

driver.get(target_url)

driver.maximize_window()
time.sleep(2)

# Collect the rendered HTML and close the browser.
resp = driver.page_source
driver.close()

soup=BeautifulSoup(resp,'html.parser')

# The ul with class css-7ry9k1 holds all the job cards.
allJobsContainer = soup.find("ul",{"class":"css-7ry9k1"})

allJobs = allJobsContainer.find_all("li")

for job in allJobs:
    try:
        o["name-of-company"]=job.find("div",{"class":"d-flex justify-content-between align-items-start"}).text
    except:
        o["name-of-company"]=None

    try:
        o["name-of-job"]=job.find("a",{"class":"jobLink css-1rd3saf eigr9kq2"}).text
    except:
        o["name-of-job"]=None

    try:
        o["location"]=job.find("div",{"class":"d-flex flex-wrap css-11d3uq0 e1rrn5ka2"}).text
    except:
        o["location"]=None

    try:
        o["salary"]=job.find("div",{"class":"css-3g3psg pr-xxsm"}).text
    except:
        o["salary"]=None

    l.append(o)

    o={}

print(l)

df = pd.DataFrame(l)
df.to_csv('jobs.csv', index=False, encoding='utf-8')

The code is simple, neat, and straightforward.

What if you want to scrape all the pages from Glassdoor?

If you want to scrape all the jobs across all the pages on Glassdoor, the first thing to look at is the URL pattern.

First Page — https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15.htm?includeNoSalaryJobs=true

Second Page — https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15_IP2.htm?includeNoSalaryJobs=true&pgc=AB4AAYEAHgAAAAAAAAAAAAAAAfkQ90AAUgEBAQgW%2Fr3vuIzCm5wwBSiI3WKjWOqbueSQvnI%2BGizAAsjV8NiAL80nAjkvw3vucgztbs4IIrkoqerQ462C14jLJVNRIV0ihlakU7p20hMXIG4AAA%3D%3D

Third Page — https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15_IP3.htm?includeNoSalaryJobs=true&pgc=AB4AAoEAPAAAAAAAAAAAAAAAAfkQ90AAdwEBAQtEzo8VunEQLF8uBoWr%2BRnCsnMFj0JNOLbRUXIkLkFAzjjZlKDW1axVwiTVV%2BbXo8%2BX471WNF8IEWPMdAwCPhbzQe1T1HHMEVPYFwQLM8h1NnGMDPcEwo7tpQ7XL65R7DMDR26n0NhBU7lFGCODAwxNTsJRAAA%3D

As you can see, the only stable part of these URLs is https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15; the pgc token at the end changes unpredictably from page to page, so you cannot construct the next-page URL yourself. The practical solution is the .click() method provided by the Selenium API: we scroll down to the pagination controls and click the next button.

First scroll, then click.

This is how you scroll down the jobs column on any Glassdoor results page.

from selenium.webdriver.common.by import By

# Find the column that holds all the job cards and scroll it to the bottom.
# Newer Selenium versions removed find_element_by_xpath;
# find_element(By.XPATH, ...) works across versions.
scrolling_element = driver.find_element(By.XPATH, "//*[@id='MainCol']")
driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', scrolling_element)

With find_element(By.XPATH, ...) we find the column where all the jobs are listed.

Once you have scrolled down this element, you have to find the next-page button and click it.

You find it using the same Selenium call, find_element(By.XPATH, ...), and finally you use the .click() method to click it. This will take you to the next page.

time.sleep(3)
driver.find_element(By.XPATH, '//*[@id="MainCol"]/div[2]/div/div[1]/button[7]').click()
time.sleep(3)

Now, you have to run this in a loop to extract all the jobs for a particular location, as sketched below. I know it is a bit lengthy, but unfortunately this is the only way to scrape Glassdoor.
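Here is a minimal sketch of that loop, reusing the selectors from above. It assumes driver is already on the first results page and that the class names and XPaths shown earlier still match Glassdoor's markup; the page count is arbitrary.

from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
import time

def parse_jobs(page_html):
    # Parse one results page into a list of job dictionaries.
    soup = BeautifulSoup(page_html, 'html.parser')
    container = soup.find("ul", {"class": "css-7ry9k1"})
    jobs = []
    for job in (container.find_all("li") if container else []):
        o = {}
        for key, tag, cls in [
            ("name-of-company", "div", "d-flex justify-content-between align-items-start"),
            ("name-of-job", "a", "jobLink css-1rd3saf eigr9kq2"),
            ("location", "div", "d-flex flex-wrap css-11d3uq0 e1rrn5ka2"),
            ("salary", "div", "css-3g3psg pr-xxsm"),
        ]:
            element = job.find(tag, {"class": cls})
            o[key] = element.text if element else None
        jobs.append(o)
    return jobs

all_jobs = []
for page in range(5):  # scrape the first 5 pages; adjust as needed
    all_jobs.extend(parse_jobs(driver.page_source))
    # Scroll the jobs column to the bottom so the next button is visible.
    column = driver.find_element(By.XPATH, "//*[@id='MainCol']")
    driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', column)
    time.sleep(3)
    try:
        driver.find_element(By.XPATH, '//*[@id="MainCol"]/div[2]/div/div[1]/button[7]').click()
    except Exception:
        break  # next button not found; assume this was the last page
    time.sleep(3)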

But while scraping at scale, Glassdoor might limit your searches and block your IP. In that case, you have to use a web scraping API. Let's see how you can avoid getting blocked with a web scraping API like Scrapingdog.

Avoid Getting Blocked While Scraping Glassdoor at Scale with Scrapingdog

You can use Scrapingdog’s web scraping API to avoid getting blocked while scraping Glassdoor at scale.

You have to sign up for a free account to start using it. It takes just 10 seconds to get started with Scrapingdog, and the free account comes with 1,000 free API calls.

Scrapingdog Homepage

Once you sign up, you will be redirected to your dashboard. The dashboard will look somewhat like this.

Scrapingdog Dashboard

You have to use your own API key.

Now, paste your target Glassdoor page link on the left, set JS Rendering to No, and then click Copy Code on the right. Use this API URL in your Python script to scrape Glassdoor.

You will notice the code remains almost the same as above; we just have to change one thing, our target URL.

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import pandas as pd

PATH = r'C:\Program Files (x86)\chromedriver.exe'

l=list()
o={}

target_url = "https://api.scrapingdog.com/scrape?api_key=xxxxxxxxxxxxxxxxxxxxxxxx&url=https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15_IP3.htm?includeNoSalaryJobs=true&pgc=AB4AAoEAPAAAAAAAAAAAAAAAAfkQ90AAdwEBAQtEzo8VunEQLF8uBoWr%2BRnCsnMFj0JNOLbRUXIkLkFAzjjZlKDW1axVwiTVV%2BbXo8%2BX471WNF8IEWPMdAwCPhbzQe1T1HHMEVPYFwQLM8h1NnGMDPcEwo7tpQ7XL65R7DMDR26n0NhBU7lFGCODAwxNTsJRAAA%3D&dynamic=false"

driver=webdriver.Chrome(PATH)

driver.get(target_url)

driver.maximize_window()
time.sleep(2)

resp = driver.page_source
driver.close()

soup=BeautifulSoup(resp,'html.parser')

allJobsContainer = soup.find("ul",{"class":"css-7ry9k1"})

allJobs = allJobsContainer.find_all("li")

for job in allJobs:
    try:
        o["name-of-company"]=job.find("div",{"class":"d-flex justify-content-between align-items-start"}).text
    except:
        o["name-of-company"]=None

    try:
        o["name-of-job"]=job.find("a",{"class":"jobLink css-1rd3saf eigr9kq2"}).text
    except:
        o["name-of-job"]=None

    try:
        o["location"]=job.find("div",{"class":"d-flex flex-wrap css-11d3uq0 e1rrn5ka2"}).text
    except:
        o["location"]=None

    try:
        o["salary"]=job.find("div",{"class":"css-3g3psg pr-xxsm"}).text
    except:
        o["salary"]=None

    l.append(o)

    o={}

print(l)

df = pd.DataFrame(l)
df.to_csv('jobs.csv', index=False, encoding='utf-8')

As you can see, we have replaced the target URL of Glassdoor with the API URL of Scrapingdog. You have to use your own API key to run this script successfully.

With this script, you will be able to scrape Glassdoor at lightning-fast speed without getting blocked.
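Incidentally, since the Scrapingdog API returns the rendered HTML over plain HTTP, you do not strictly need Selenium here at all; a plain requests call will do. A sketch, assuming the same endpoint and parameters shown above (replace the api_key value with your own key):

import requests
from bs4 import BeautifulSoup

params = {
    "api_key": "xxxxxxxxxxxxxxxxxxxxxxxx",  # your Scrapingdog API key
    "url": "https://www.glassdoor.com/Job/new-york-python-jobs-SRCH_IL.0,8_IC1132348_KO9,15.htm",
    "dynamic": "false",
}
resp = requests.get("https://api.scrapingdog.com/scrape", params=params)
soup = BeautifulSoup(resp.text, "html.parser")
# Quick sanity check: did the job container come through?
print(soup.find("ul", {"class": "css-7ry9k1"}) is not None)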

Forget about getting blocked while scraping the Web

Try out Scrapingdog Web Scraping API with thousands of proxy servers and an entire headless Chrome cluster


Conclusion

In this post, we learned to scrape Glassdoor and store the data in a CSV file. We then worked out a way to scrape all the pages for a given location. You can obviously build your own logic on top of this, but the approach above is a pretty straightforward way to do it.

I have two more tutorials, on scraping the Indeed job portal with Python and scraping LinkedIn jobs using Python. You can check them out too!

Of course, I would recommend a web scraping API if you are planning to scrape at scale. With a plain script and no proxy rotation, you will be blocked in no time and your data pipeline will stall. For scraping millions of such postings, you can always use Scrapingdog.

I hope you liked this little tutorial, and if you did, please share it with your friends and on social media.


Manthan Koolwal

My name is Manthan Koolwal and I am the founder of scrapingdog.com. I love creating scrapers and seamless data pipelines.