# Install Chrome Browser and Chromedriver on Ubuntu 20.04

This guide will show you how to set up an Ubuntu Virtual Private Server (VPS) for web scraping with Selenium. Python and Selenium are very useful for scraping JavaScript-based websites that load content dynamically. Traditional scraping tools do not work with dynamic sites that render content on the fly. Selenium drives a real Chrome browser and moves through the website like a normal person would, clicking on buttons and links.
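
For example, once Chrome and chromedriver are installed (Steps 1 and 2 below), a script like the following can click an element just like a visitor would. This is a minimal sketch; the URL and the "load-more" element id are hypothetical:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Headless: the VPS has no display attached
options = Options()
options.headless = True
driver = webdriver.Chrome("/usr/bin/chromedriver", options=options)

# Load a page that renders its content with JavaScript
driver.get("https://example.com/")

# Click a button the way a human visitor would
# ("load-more" is a hypothetical element id)
driver.find_element(By.ID, "load-more").click()

driver.quit()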

This type of coding is also widely used for performance testing: stress testing a website by simulating many real users visiting it at once. The expensive stress-testing SaaS applications that charge an arm and a leg typically use this kind of code somewhere in their back-end to simulate website traffic.
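
As a rough illustration, you could simulate a handful of simultaneous visitors by giving each one its own headless browser in a separate thread. This is a minimal sketch of the idea, not a real load-testing tool, and the URL is a placeholder:

import threading
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def visit(url):
    # Each simulated user gets its own headless browser instance
    options = Options()
    options.headless = True
    driver = webdriver.Chrome("/usr/bin/chromedriver", options=options)
    driver.get(url)
    print(driver.title)
    driver.quit()

# Simulate five users hitting the site at the same time
threads = [threading.Thread(target=visit, args=("https://example.com/",)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()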

# Prerequisites

All these steps assume that you are already inside a virtual server. We are working with an Ubuntu 20.04 virtual server from DigitalOcean. This type of code should run from a server for many practical reasons, but if you are in doubt: web scraping is a borderline risky task that can get you banned from some sites, and you don't want your personal IP banned, so use a virtual server for scraping. You can get one from DigitalOcean.

# Step 1 - Download Chrome

Update your package lists.

sudo apt update

Download the Chrome package:

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

If you get a 'wget: command not found' error, wget is not installed on your machine. Simply install it by running:

sudo apt install wget

Then you can install Chrome from the downloaded package. The second command below pulls in any missing dependencies that dpkg reports:

sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get install -f

Check that Chrome installed correctly:

google-chrome --version
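
The output looks something like this (your exact version will differ):

Google Chrome 92.0.4515.107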

This version number is important; you will need it to get the matching chromedriver.

# Step 2 - Install Chromedriver

You can download chromedriver from the official downloads page at https://chromedriver.chromium.org/downloads. You need the correct version, so remember which version of Chrome you have from Step 1 above and download the matching chromedriver.

In my case, I had version 92.0.4515.107, so I need to click on the version that supports Chrome 92.

Click on the link and it will take you to the downloads page for that version. Download the build that suits your operating system; for me (running Ubuntu 20.04 on a DigitalOcean VPS), the correct file is chromedriver_linux64.zip. Right-click on the link and copy the link address.

Download the chromedriver to your VPS, making sure you replace this link with your own so it matches your version of Chrome.

wget https://chromedriver.storage.googleapis.com/92.0.4515.107/chromedriver_linux64.zip
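
If you would rather script this lookup, the same storage bucket exposes a LATEST_RELEASE_<major> endpoint that returns the newest driver for a given Chrome major version. A minimal sketch in Python, assuming Chrome is already installed from Step 1:

import subprocess
import urllib.request

# Read the installed Chrome version, e.g. "Google Chrome 92.0.4515.107"
output = subprocess.check_output(["google-chrome", "--version"], text=True)
major = output.strip().split()[-1].split(".")[0]

# Ask the bucket for the latest driver release matching that major version
url = f"https://chromedriver.storage.googleapis.com/LATEST_RELEASE_{major}"
version = urllib.request.urlopen(url).read().decode().strip()

print(f"https://chromedriver.storage.googleapis.com/{version}/chromedriver_linux64.zip")

Pass the printed URL to wget as shown above.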

You will get a zip file, which you need to unzip:

unzip chromedriver_linux64.zip
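
unzip lists what it extracts; the archive contains a single binary named chromedriver, so the output should look something like:

Archive:  chromedriver_linux64.zip
  inflating: chromedriver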

Make sure you run unzip in the directory where you downloaded the file; you can run ls to check the exact file name and copy-paste it.

You then need to move the binary to a standard location, so you can find it when you need it, and make it executable:

sudo mv chromedriver /usr/bin/chromedriver
sudo chown root:root /usr/bin/chromedriver
sudo chmod +x /usr/bin/chromedriver
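
You can confirm that the binary is on your PATH and matches your Chrome version:

chromedriver --version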

# Step 3 - Test Installation

Run the following command:

chromedriver --url-base=/wd/hub
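
If everything is installed correctly, chromedriver starts up and listens on a local port (9515 by default). The output should look roughly like this:

Starting ChromeDriver 92.0.4515.107 on port 9515
ChromeDriver was started successfully.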

You should see feedback that chromedriver is running (press Ctrl+C to stop it), but the best way to test the whole setup is with Python. The following code will scrape Google and print the title of the page. Create a Python file and add the following code:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Headless mode: a VPS has no display, so the browser
# must run without a visible window
options = Options()
options.headless = True

# Point Selenium at the chromedriver binary installed in Step 2.
# (On Selenium 4 and later, pass the path via a Service object instead.)
driver = webdriver.Chrome("/usr/bin/chromedriver", options=options)

# Load the page and print its title
driver.get("https://google.com/")
print(driver.title)
driver.quit()

Run the file and you should see the website title printed to the terminal. You are now ready to scrape websites.
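
For example, if you saved the code as scrape.py (the file name is arbitrary):

python3 scrape.py

The terminal should print the page title, in this case Google.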