
Scraping AJAX web pages (Part 4)

December 27, 2012

Don’t forget to check out the rest of the series too!

I managed to solve a problem that bugged me for a long time. Namely, (1) I want to download the generated source of an AJAX-powered webpage; (2) I want a headless solution, i.e. I want no browser window; and (3) I want to wait until the AJAX-content is fully loaded.

During the past 1.5 years I got quite close :) I could solve everything except issue #3. Now I’m proud to present a complete solution that satisfies all the criteria above.

#!/usr/bin/env python

import sys

from PySide.QtCore import *
from PySide.QtGui import *
from PySide.QtWebKit import QWebPage

SEC = 1000 # 1 sec. is 1000 msec.
USER_AGENT = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0'

class JabbaWebkit(QWebPage):
    # 'html' is a class variable
    def __init__(self, url, wait, app, parent=None):
        super(JabbaWebkit, self).__init__(parent)
        JabbaWebkit.html = ''

        if wait:
            # fixed timeout: give the AJAX calls 'wait' seconds, then quit
            QTimer.singleShot(wait * SEC, app.quit)
        else:
            # no timeout: quit as soon as the page signals loadFinished
            self.loadFinished.connect(app.quit)

        self.mainFrame().load(QUrl(url))

    def save(self):
        JabbaWebkit.html = self.mainFrame().toHtml()

    def userAgentForUrl(self, url):
        return USER_AGENT

def get_page(url, wait=None):
    # this trick lets get_page() be called several times in one process
    app = QApplication.instance() # check if a QApplication already exists
    if not app: # create a QApplication if it doesn't exist
        app = QApplication(sys.argv)
    #
    form = JabbaWebkit(url, wait, app)
    app.aboutToQuit.connect(form.save)
    app.exec_()
    return JabbaWebkit.html

#############################################################################

if __name__ == "__main__":
    url = 'http://simile.mit.edu/crowbar/test.html'
    print get_page(url)

It’s also on GitHub. The GitHub version contains more documentation and more examples.
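The `QApplication.instance()` check in `get_page()` is what makes repeated calls safe: Qt allows only one QApplication per process. The pattern can be sketched in plain Python (Python 3 syntax here; the `App` class is a hypothetical stand-in for QApplication, not part of Qt):

```python
class App(object):
    """Hypothetical stand-in for QApplication, to illustrate the check."""
    _instance = None

    def __init__(self):
        App._instance = self

    @classmethod
    def instance(cls):
        # QApplication.instance() likewise returns None before the first
        # construction, and the existing object afterwards
        return cls._instance


def get_app():
    # reuse the existing instance if there is one; constructing a second
    # QApplication in the same process is an error in Qt
    app = App.instance()
    if not app:
        app = App()
    return app


first = get_app()
second = get_app()
print(first is second)  # True: later calls reuse the same instance
```

This is why the script can be imported and `get_page()` called in a loop without Qt complaining about a second QApplication.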

[ reddit comments ]

Update (20121228)
Jabba-Webkit got included in Pycoder’s Weekly #46. Awesome.

Scraping AJAX web pages (Part 1)

April 15, 2011

Don’t forget to check out the rest of the series too!

Problem

You want to download a web page whose source is full of AJAX calls. You want the result that is shown in your browser, i.e. you want the generated (post-AJAX) source.

Example
Consider this simple page: test.html. If you open it in your browser, you’ll see the text “Hi Crowbar!”. However, if you download this page with wget for instance, in the source code you’ll see the text “Hi lame crawler”. Explanation: your browser downloads and then interprets the page. The executed JavaScript code updates the DOM of the page. A simple downloader like wget doesn’t interpret the source of a page; it just grabs it.
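To make the difference concrete, here is a toy version of such a page and a naive text extractor (both hypothetical, written with Python 3’s standard html.parser). The extractor only ever sees the static text, because the script is never executed:

```python
from html.parser import HTMLParser

# a toy version of test.html: the static text says "Hi lame crawler",
# and in a real browser the script would rewrite it to "Hi Crowbar!"
PAGE = """
<html><body>
<div id="msg">Hi lame crawler</div>
<script>document.getElementById('msg').innerHTML = 'Hi Crowbar!';</script>
</body></html>
"""

class TextGrabber(HTMLParser):
    """Collects text outside <script> tags, like a naive crawler would."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

p = TextGrabber()
p.feed(PAGE)
print(p.chunks)  # only the pre-AJAX text; the JavaScript never ran
```

This is exactly what wget gives you: the markup as served, with the JavaScript as dead text inside it.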

Solution #1

One way to get the post-AJAX source of a web page is to use Crowbar. “Crowbar is a web scraping environment based on the use of a server-side headless mozilla-based browser. Its purpose is to allow running javascript scrapers against a DOM to automate web sites scraping…”

When you launch Crowbar, it offers a RESTful web service listening by default on port 10000. Just open the page http://127.0.0.1:10000/. The trick behind Crowbar is that it turns a web browser into a web server.

Now we can download AJAX pages with wget the following way. Let’s get the previous test page:

wget "http://127.0.0.1:10000/?url=http://simile.mit.edu/crowbar/test.html" -O tricky.html

If you check the source of the saved file, you will see the post-AJAX source that you would normally see in a web browser. You can also pass some other parameters to the Crowbar web service; they are detailed here. The most important parameter is “delay”, which tells Crowbar how long it should wait after the page has finished loading before attempting to serialize its DOM. By default its value is 3000 msec, i.e. 3 sec. If the page you want to download contains lots of AJAX calls, consider increasing the delay, otherwise you will get an HTML source that is not fully expanded yet.
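If you script these requests, it may be safer to build the Crowbar URL with proper percent-encoding instead of pasting the target URL into the query string by hand. A small sketch (Python 3; the helper name is mine, and it assumes Crowbar’s default host and port):

```python
from urllib.parse import urlencode

CROWBAR = 'http://127.0.0.1:10000/'  # Crowbar's default address

def crowbar_url(target, delay_ms=3000):
    # 'delay' is Crowbar's own parameter: how long (in msec) to wait
    # after the page has finished loading before serializing the DOM
    return CROWBAR + '?' + urlencode({'url': target, 'delay': delay_ms})

# the target URL is percent-encoded inside the query string
print(crowbar_url('http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1', 15000))
```

The resulting URL can then be handed to wget, curl, or urllib as before.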

Use case:
I wanted to download the following page from the NCBI database: CP002059.1. The page is quite big (about 5 MB), thus I had to wait about 10 sec. to get it in my browser. From the command-line I could fetch it this way (I gave it some extra time to be sure):

wget "http://127.0.0.1:10000/?url=http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1&delay=15000" -O CP002059.1.html

Note: if you want to download data from NCBI, there is a better way.

Did you know?
In Firefox, if you look at the source of a page (View -> Page Source), you will see the downloaded (pre-AJAX) source. If you want to see the generated (post-AJAX) source, you can use the Web Developer add-on (View Source -> View Generated Source).

Also, still in Firefox, if you save a web page with File -> Save Page As… and you choose “Web Page, HTML only”, Firefox will save the original (pre-AJAX) source. If you want the fully expanded (generated) source, choose the option “Web Page, complete”.

Solution #2
Another solution is to write a program/script that uses the webkit open source browser engine. In an upcoming post I will show you how to do it with Python.

Appendix
Crowbar launch script for Linux:

#!/bin/bash

# my crowbar is installed here: /opt/crowbar
# location of this file: /opt/crowbar/start.sh
xulrunner --install-app xulapp
xulrunner xulapp/application.ini

Crowbar launch script for Windows (update of 20110601):

rem My crowbar is installed here: c:\Program Files\crowbar
rem Location of this file: c:\Program Files\crowbar\start.bat

"%XULRUNNER_HOME%\xulrunner.exe" --install-app xulapp
"%XULRUNNER_HOME%\xulrunner.exe" xulapp\application.ini

XULRunner for Windows can be downloaded from here.

/ discussion /