Web Scraping with Scrapy: Advanced Examples

Contributed by: Zac Clancy, kite.com.
You can also read the originally published version of this article over at Kite’s blog.

Introduction to web scraping

Web scraping is one of the tools at a developer’s disposal when looking to gather data from the internet. While consuming data via an API has become commonplace, most websites online don’t offer an API for delivering data to consumers. In order to access the data they’re looking for, web scrapers and crawlers read a website’s pages and feeds, analyzing the site’s structure and markup language for clues. Generally speaking, information collected from scraping is fed into other programs for validation, cleaning, and input into a datastore, or it’s fed into other processes such as natural language processing (NLP) toolchains or machine learning (ML) models. There are several Python packages we could use, but we’ll focus on Scrapy for these examples. Scrapy makes it very easy to quickly prototype and develop web scrapers with Python.

Scrapy concepts

Before we start looking at specific examples and use cases, let’s brush up a bit on Scrapy and how it works.

  • Spiders: Scrapy uses Spiders to define how a site (or a bunch of sites) should be scraped for information. Scrapy lets us determine how we want the spider to crawl, what information we want to extract, and how we can extract it. Specifically, Spiders are Python classes where we’ll put all of our custom logic and behavior.
import scrapy

class NewsSpider(scrapy.Spider):
    name = 'news'
    ...
  • Selectors: Selectors are Scrapy’s mechanisms for finding data within the website’s pages. They’re called selectors because they provide an interface for “selecting” certain parts of the HTML page, and these selectors can be in either CSS or XPath expressions.
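For example, both of the following select the text of every link on a page, the first with a CSS expression and the second with XPath (response is the object Scrapy hands to a spider’s parsing callbacks):

response.css('a::text').getall()
response.xpath('//a/text()').getall()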
  • Items: Items hold the data extracted by selectors in a common data model. Since our goal is structured results from unstructured inputs, Scrapy provides an Item class which we can use to define how our scraped data should be structured and what fields it should have.
import scrapy

class Article(scrapy.Item): 
    headline = scrapy.Field() 
    ...

Reddit-less front page

Suppose we love the images posted to Reddit, but don’t want any of the comments or self posts. We can use Scrapy to make a Reddit Spider that will fetch all the photos from the front page and put them on our own HTML page which we can then browse instead of Reddit.

To start, we’ll create a RedditSpider which we can use to traverse the front page and handle custom behavior.

import scrapy

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = [
        'https://www.reddit.com'
    ]

Above, we’ve defined a RedditSpider, inheriting Scrapy’s Spider. We’ve named it reddit and have populated the class’ start_urls attribute with a URL to Reddit from which we’ll extract the images.

At this point, we’ll need to begin defining our parsing logic. We need to figure out an expression that the RedditSpider can use to determine whether it’s found an image. If we look at Reddit’s robots.txt file, we can see that our spider can’t crawl any comment pages without being in violation of the robots.txt file, so we’ll need to grab our image URLs without following through to the comment pages.
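Scrapy can also enforce this for us: the built-in ROBOTSTXT_OBEY setting makes the crawler download robots.txt and skip any disallowed URLs automatically. A minimal sketch of enabling it on our spider:

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    # Download robots.txt first and refuse to crawl disallowed URLs
    custom_settings = {'ROBOTSTXT_OBEY': True}
    ...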

By looking at Reddit, we can see that external links are included on the homepage directly next to the post’s title. We’ll update RedditSpider to include a parser to grab this URL. Reddit includes the external URL as a link on the page, so we should be able to just loop through the links on the page and find URLs that are for images.

class RedditSpider(scrapy.Spider):
    ...
    def parse(self, response):
        links = response.xpath('//a/@href')
        for link in links:
            ...

In the parse method on our RedditSpider class, we’ve started to define how we’ll parse our response for results. To start, we grab all of the href attributes from the page’s links using a basic XPath selector. Now that we’re enumerating the page’s links, we can start to analyze them for images.

def parse(self, response):
    links = response.xpath('//a/@href')
    for link in links:
        # Extract the URL text from the element
        url = link.get()
        # Check if the URL contains an image extension
        if any(extension in url for extension in ['.jpg', '.gif', '.png']):
            ...

To actually access the text information from the link’s href attribute, we use Scrapy’s .get() function, which returns the link destination as a string. Next, we check whether the URL contains an image file extension, using Python’s any() built-in function. This isn’t all-encompassing for all image file extensions, but it’s a start.
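As an aside, this substring test could be factored into a small helper. This is just a sketch with hypothetical names, and note that endswith() is stricter than the substring check above (it would miss URLs with trailing query strings):

# Hypothetical helper: treat a URL as an image link only if it
# ends with a known image extension (case-insensitive)
IMAGE_EXTENSIONS = ('.jpg', '.jpeg', '.gif', '.png')

def is_image_url(url):
    return url.lower().endswith(IMAGE_EXTENSIONS)

From here, we can push our images into a local HTML file for viewing: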

def parse(self, response):
    links = response.xpath('//img/@src')
    html = ''

    for link in links:
        # Extract the URL text from the element
        url = link.get()
        # Check if the URL contains an image extension
        if any(extension in url for extension in ['.jpg', '.gif', '.png']):
            html += '''
            <a href="{url}" target="_blank">
                <img src="{url}" height="33%" width="33%" />
            </a>
            '''.format(url=url)

    # Open an HTML file, save the results
    with open('frontpage.html', 'w') as page:
        page.write(html)

To start, we collect the HTML file contents as a string, which will be written to a file called frontpage.html at the end of the process. You’ll notice that instead of pulling the image location from '//a/@href', we’ve updated our links selector to use the image’s src attribute: '//img/@src'. This gives us more consistent results and selects only images.

As our RedditSpider’s parser finds images, it builds a link with a preview image and accumulates the markup in our html variable. Once we’ve collected all of the images and generated the HTML, we open the local HTML file (or create it) and overwrite it with our new HTML content; the with statement closes the file for us when the block ends. If we run scrapy runspider reddit.py, we can see that this file is built properly and contains images from Reddit’s front page.

But, it looks like it contains all of the images from Reddit’s front page – not just user-posted content. Let’s update our parse method a bit to exclude certain domains from our results.

If we look at frontpage.html, we can see that most of Reddit’s assets come from redditstatic.com and redditmedia.com. We’ll just filter those results out and retain everything else. With these updates, our RedditSpider class now looks like this:

import scrapy

class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = [
        'https://www.reddit.com'
    ]

    def parse(self, response):
        links = response.xpath('//img/@src')
        html = ''

        for link in links:
            # Extract the URL text from the element
            url = link.get()
            # Check if the URL contains an image extension and is
            # not served from one of Reddit's own asset domains
            if (any(extension in url for extension in ['.jpg', '.gif', '.png'])
                    and not any(domain in url for domain in ['redditstatic.com', 'redditmedia.com'])):
                html += '''
                <a href="{url}" target="_blank">
                    <img src="{url}" height="33%" width="33%" />
                </a>
                '''.format(url=url)

        # Open an HTML file, save the results
        with open('frontpage.html', 'w') as page:
            page.write(html)

We’re simply adding our list of excluded domains to a negated any() expression. These checks could be tweaked to read from a separate configuration file, local database, or cache, if need be.
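As a sketch of the configuration-file route, the exclusion list could live in a small JSON file; the file name and key below are ours, not anything Scrapy expects:

import json

# Hypothetical config file, e.g.:
# {"excluded_domains": ["redditstatic.com", "redditmedia.com"]}
with open('scraper_config.json') as config_file:
    EXCLUDED_DOMAINS = json.load(config_file)['excluded_domains']

The parse method would then test against EXCLUDED_DOMAINS instead of the hard-coded list.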

Extracting Amazon price data

If you’re running an ecommerce website, intelligence is key. With Scrapy we can easily automate the process of collecting information about our competitors, our market, or our listings.

For this task, we’ll extract pricing data from search listings on Amazon and use the results to provide some basic insights. If we visit Amazon’s search results page and inspect it, we notice that Amazon stores the price in a series of divs, most notably using a class called .a-offscreen. We can formulate a CSS selector that extracts the price off the page:

prices = response.css('.a-price .a-offscreen::text').getall()

With this CSS selector in mind, let’s build our AmazonSpider.

import scrapy

from re import sub
from decimal import Decimal

def convert_money(money):
    # Strip everything except digits and the decimal point
    return Decimal(sub(r'[^\d.]', '', money))

class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    start_urls = [
        'https://www.amazon.com/s?k=paint'
    ]

    def parse(self, response):
        # Find the Amazon price elements
        prices = response.css('.a-price .a-offscreen::text').getall()

        # Initialize some counters and stats objects
        stats = dict()
        values = []

        for price in prices:
            value = convert_money(price)
            values.append(value)

        # Sort our values before calculating
        values.sort()

        # Calculate price statistics
        stats['average_price'] = round(sum(values) / len(values), 2)
        stats['lowest_price'] = values[0]
        stats['highest_price'] = values[-1]
        stats['total_prices'] = len(values)

        print(stats)

A few things to note about our AmazonSpider class:

  • convert_money(): This helper converts strings formatted like ‘$45.67’ into a Python Decimal type that can be used for computations. It avoids locale issues by stripping every character that isn’t a digit or a decimal point, rather than matching a ‘$’ in the regular expression.
  • getall(): The .getall() function is a Scrapy function that works similarly to the .get() function we used before, but it returns all of the extracted values as a list.

Running the command scrapy runspider amazon.py in the project folder will dump output resembling the following:

{'average_price': Decimal('38.23'), 'lowest_price': Decimal('3.63'), 'highest_price': Decimal('689.95'), 'total_prices': 58}

It’s easy to imagine building a dashboard that allows you to store scraped values in a datastore and visualize data as you see fit.
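As a minimal sketch of that idea (the table and column names here are ours, not part of the tutorial), each run’s stats could be appended to a local SQLite database:

import sqlite3

def save_stats(stats, db_path='prices.db'):
    # Create the table on first run, then append one row per scrape
    conn = sqlite3.connect(db_path)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS price_stats '
        '(average TEXT, lowest TEXT, highest TEXT, total INTEGER)'
    )
    conn.execute(
        'INSERT INTO price_stats VALUES (?, ?, ?, ?)',
        (str(stats['average_price']), str(stats['lowest_price']),
         str(stats['highest_price']), stats['total_prices'])
    )
    conn.commit()
    conn.close()

Calling save_stats(stats) at the end of parse instead of print(stats) would accumulate a history that a dashboard could read from.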

Considerations at scale

As you build more web crawlers and continue to follow more advanced scraping workflows, you’ll likely notice a few things:

  1. Sites change, now more than ever.
  2. Getting consistent results across thousands of pages is tricky.
  3. Performance considerations can be crucial.

Sites change, now more than ever

On occasion, AliExpress, for example, will return a login page rather than search listings. Sometimes Amazon will decide to serve a CAPTCHA, or Twitter will return an error. While these errors can sometimes be transient flickers, others will require a complete re-architecture of your web scrapers. Nowadays, modern front-end frameworks are often pre-compiled for the browser, which can mangle class names and ID strings, and sometimes a designer or developer will change an HTML class name during a redesign. It’s important that our Scrapy crawlers are resilient, but keep in mind that changes will occur over time.
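For the transient failures, Scrapy’s built-in RetryMiddleware can help. A sketch using its standard settings on a spider (which status codes are worth retrying depends on the site):

class AmazonSpider(scrapy.Spider):
    ...
    custom_settings = {
        # Retry transient failures a few extra times before giving up
        'RETRY_TIMES': 5,
        'RETRY_HTTP_CODES': [429, 500, 502, 503, 504],
    }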

Getting consistent results across thousands of pages is tricky

Slight variations in user-inputted text can really add up. Think of all the different spellings and capitalizations you may encounter in just usernames. Pre-processing, normalizing, and standardizing text before performing an action or storing the value is best practice ahead of most NLP or ML processes, and pays off in more consistent results.
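A minimal normalization helper might look like the following; what counts as “normalized” depends on your pipeline, so treat this as a starting point:

import unicodedata

def normalize_text(text):
    # Unify Unicode representations, fold case, and collapse whitespace
    text = unicodedata.normalize('NFKC', text)
    text = text.casefold()
    return ' '.join(text.split())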

Performance considerations can be crucial

You’ll want to make sure you’re operating at least moderately efficiently before attempting to process 10,000 websites from your laptop one night. As your dataset grows, it becomes more and more costly to manipulate in terms of memory or processing power. In a similar regard, you may want to extract the text from one news article at a time, rather than downloading all 10,000 articles at once. As we’ve seen in this tutorial, performing advanced scraping operations is actually quite easy using Scrapy’s framework. Some advanced next steps might include loading selectors from a database and scraping using very generic Spider classes, or using proxies or modified user-agents to see if the HTML changes based on location or device type. Scraping in the real world becomes complicated because of all the edge cases; Scrapy provides an easy way to build this logic in Python.
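As a sketch of the “very generic Spider classes” idea, Scrapy lets us pass arguments to a spider from the command line with the -a flag, so neither the start URL nor the selector has to be hard-coded (the spider and argument names here are ours):

import scrapy

class GenericSpider(scrapy.Spider):
    name = 'generic'

    def __init__(self, start_url=None, css=None, *args, **kwargs):
        # start_url and css arrive from the command line via -a
        super().__init__(*args, **kwargs)
        self.start_urls = [start_url] if start_url else []
        self.css = css

    def parse(self, response):
        for value in response.css(self.css).getall():
            yield {'value': value}

Running scrapy runspider generic.py -a start_url='https://www.amazon.com/s?k=paint' -a css='.a-price .a-offscreen::text' would reproduce the price extraction without touching the code.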

Do you also wish to contribute to Data Science Briefings? Shoot us an e-mail (just reply to this one) and let’s get in touch!