Posts Tagged ‘ajax’

Scraping AJAX web pages (Part 4)

December 27, 2012

Don’t forget to check out the rest of the series too!

I managed to solve a problem that bugged me for a long time. Namely: (1) I want to download the generated source of an AJAX-powered webpage; (2) I want a headless solution, i.e. no browser window; and (3) I want to wait until the AJAX content is fully loaded.

During the past 1.5 years I got quite close :) I could solve everything except issue #3. Now I’m proud to present a complete solution that satisfies all the criteria above.

#!/usr/bin/env python

import sys

from PySide.QtCore import *
from PySide.QtGui import *
from PySide.QtWebKit import QWebPage

SEC = 1000 # 1 sec. is 1000 msec.
USER_AGENT = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0'

class JabbaWebkit(QWebPage):
    # 'html' is a class variable
    def __init__(self, url, wait, app, parent=None):
        super(JabbaWebkit, self).__init__(parent)
        JabbaWebkit.html = ''

        if wait:
            QTimer.singleShot(wait * SEC, app.quit)
        else:
            self.loadFinished.connect(app.quit)

        self.mainFrame().load(QUrl(url))

    def save(self):
        JabbaWebkit.html = self.mainFrame().toHtml()

    def userAgentForUrl(self, url):
        return USER_AGENT

def get_page(url, wait=None):
    # the trick that lets us call this function several times in one process:
    app = QApplication.instance() # reuse the QApplication if it already exists
    if not app: # create a QApplication if it doesn't exist yet
        app = QApplication(sys.argv)
    #
    form = JabbaWebkit(url, wait, app)
    app.aboutToQuit.connect(form.save)
    app.exec_()
    return JabbaWebkit.html

#############################################################################

if __name__ == "__main__":
    url = 'http://simile.mit.edu/crowbar/test.html'
    print get_page(url)

It’s also on GitHub. The GitHub version contains more documentation and more examples.
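
Since get_page() reuses the existing QApplication instance, you can call it several times from the same process. A minimal usage sketch (the module name and the URLs are placeholders, not part of the script above):

# assumes the script above is saved as jabba_webkit.py;
# the URLs below are placeholders
from jabba_webkit import get_page

for url in ['http://example.com/a.html', 'http://example.com/b.html']:
    html = get_page(url, wait=5)   # give the AJAX calls 5 seconds
    print len(html)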


Update (20121228)
Jabba-Webkit got included in Pycoder’s Weekly #46. Awesome.

Scraping AJAX web pages (Part 3)

November 8, 2011

Don’t forget to check out the rest of the series too!

In Part 2 we saw how to download an Ajax-powered webpage. However, there was a problem with that approach: sometimes it terminated too quickly and fetched only part of a page. The problem with Ajax is that we cannot tell for sure when a page is completely downloaded.

So the solution is to integrate some waiting mechanism into the script. That is, we need the following: “open a given page, wait X seconds, then get the HTML source”. Hopefully all Ajax calls will finish within X seconds. You decide how many seconds to wait. Or, you can analyze the partially downloaded HTML and, if something is missing, wait some more.

Here I will use Splinter for this task. It opens a browser window that you can control from Python. Thanks to the browser, it can interpret Javascript. The only disadvantage is that the browser window is visible.

Example
Let’s see how to fetch the page CP002059.1. If you open it in a browser, you’ll see a status bar at the bottom that indicates the download progress. For me it takes about 20 seconds to fully get this page. By analyzing the content of the page, we can notice that the string “ORIGIN” appears just once, at the end of the page. So we’ll check its presence in a loop and wait until it arrives.

#!/usr/bin/env python

from time import sleep
from splinter.browser import Browser

url = 'http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1'

def main():
    browser = Browser()
    browser.visit(url)

    # variation A: poll until the marker text appears
    # (consider capping the number of iterations so this cannot loop forever)
    while 'ORIGIN' not in browser.html:
        sleep(5)

    # variation B:
    # sleep(30)   # if you think everything arrives in 30 seconds

    f = open("/tmp/source.html", "w")   # save the source in a file
    f.write(browser.html.encode("utf-8"))   # encode: the page may contain non-ASCII
    f.close()

    browser.quit()
    print '__END__'

#############################################################################

if __name__ == "__main__":
    main()

You might be tempted to check for the presence of ‘</html>’. However, don’t forget that the browser first downloads the plain source, from ‘<html><body>…’ to ‘</body></html>’. Only then does it start interpreting the source; any Ajax calls it finds are executed and expand content inside the body of the HTML. So ‘</html>’ is there right from the beginning.
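
If you go with variation A, it is worth capping the total waiting time so the loop cannot run forever when the marker text never shows up. A minimal sketch (the helper name is mine, not part of Splinter):

from time import sleep, time

def wait_for_text(browser, text, timeout=60, poll=5):
    # poll browser.html until `text` appears or `timeout` seconds elapse
    deadline = time() + timeout
    while text not in browser.html:
        if time() > deadline:
            return False   # gave up; the page is probably incomplete
        sleep(poll)
    return True

With this, variation A becomes wait_for_text(browser, 'ORIGIN'), and the return value tells you whether the page arrived in time.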

Future work
This is not bad but I’m still not fully satisfied. I’d like something like this but without any browser window. If you have a headless solution, let me know. I think it’s possible with PhantomJS and/or Zombie.js but I had no time yet to investigate them.

Scraping AJAX web pages (Part 2)

September 20, 2011

Don’t forget to check out the rest of the series too!

In this post we’ll see how to get the generated source of an HTML page. That is, we want to get the source with embedded Javascript calls evaluated.

Here is my solution:

#!/usr/bin/env python

"""
Simple webkit.
"""

import sys
from PyQt4 import QtGui, QtCore, QtWebKit

class SimpleWebkit(object):
    def __init__(self, url):
        self.url = url
        self.webView = QtWebKit.QWebView()

    def save(self):
        print self.webView.page().mainFrame().toHtml()
        sys.exit(0)

    def process(self):
        # connect the signal before loading so a fast loadFinished is not missed
        self.webView.loadFinished.connect(self.save)
        self.webView.load(QtCore.QUrl(self.url))

def process(url):
    app = QtGui.QApplication(sys.argv)
    s = SimpleWebkit(url)
    s.process()
    sys.exit(app.exec_())

#############################################################################

if __name__ == "__main__":
    #url = 'http://simile.mit.edu/crowbar/test.html'
    if len(sys.argv) > 1:
        process(sys.argv[1])
    else:
        print >>sys.stderr, "{0}: error: specify a URL.".format(sys.argv[0])
        sys.exit(1)

You can also find this script in my jabbapylib library.

Usage:

./simple_webkit.py 'http://dl.dropbox.com/u/144888/hello_js.html'

That is, just specify the URL of the page to be fetched. The generated HTML is printed to the standard output but you can easily redirect that to a file.
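
For example, to save the output in a file:

./simple_webkit.py 'http://dl.dropbox.com/u/144888/hello_js.html' > hello_js.html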

Pros
As you can see, it’s hyper simple. It uses a webkit instance to get and evaluate the page, which means that Javascript (and AJAX) calls will be executed. Also, the webkit instance is not visible in a window (headless browsing).

Cons
This solution is not yet perfect. The biggest problem is that AJAX calls can take some time and this script doesn’t wait for them. Actually, it cannot be known when all AJAX calls have terminated, so we cannot know for sure when the page is completely loaded :( The best way would be to integrate a waiting mechanism into the script, say “wait 5 seconds before printing the source”. Unfortunately I didn’t manage to add this feature. It should be done with QTimer somehow. If someone could add this functionality to the script, please let me know.
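
For the record, the QTimer idea can be sketched in a few lines; this is essentially what Part 4 above works out in full (the 5-second delay is an arbitrary choice):

import sys
from PyQt4 import QtGui, QtCore, QtWebKit

app = QtGui.QApplication(sys.argv)
view = QtWebKit.QWebView()
view.load(QtCore.QUrl('http://simile.mit.edu/crowbar/test.html'))
QtCore.QTimer.singleShot(5 * 1000, app.quit)   # stop the event loop after 5 sec
app.exec_()                                    # returns when the timer fires
print view.page().mainFrame().toHtml()         # whatever has loaded by then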

Challenge:
Try to download this page: CP002059.1. If you open it in Firefox, for instance, you’ll see a progress bar at the bottom. For me the complete download takes about 10 sec. The script above will only fetch the beginning of the page :( A hint: the end of the downloaded sequence is this:

ORIGIN
//

If you can modify the script above to work correctly with this particular page, let me know.

Another difficulty is how to integrate this downloader into a larger project. At the end, “app.exec_()” must be called, otherwise no output is produced. But if you call it, it terminates the script. My current workaround is to call this script as an external command and catch its output on stdout. If you have a better idea, let me know.
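
That workaround looks roughly like this (a sketch; it assumes the script is saved as simple_webkit.py, as in the usage example above):

import subprocess

# run the downloader as a child process and capture its stdout
p = subprocess.Popen(['./simple_webkit.py', 'http://simile.mit.edu/crowbar/test.html'],
                     stdout=subprocess.PIPE)
html, _ = p.communicate()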


Update (20110921)
I just found an even simpler solution here. And this one doesn’t exit(), so it can be integrated in another project easily (without the need for calling it as an external command). However, the “waiting problem” is still there.

What’s next
In the next part of this series we will see another way to download an AJAX page. In Part 3 we will address the problem of waiting X seconds for AJAX calls. Stay tuned.

Troubleshooting
If you get the following error message:

Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",

Then install this package:

sudo apt-get install gtk2-engines-pixbuf

This tip is from here.

Scraping AJAX web pages (Part 1)

April 15, 2011

Don’t forget to check out the rest of the series too!

Problem

You want to download a web page whose source is full of AJAX calls. You want the result that is shown in your browser, i.e. you want the generated (post-AJAX) source.

Example
Consider this simple page: test.html. If you open it in your browser, you’ll see the text “Hi Crowbar!”. However, if you download this page with wget, for instance, you’ll see the text “Hi lame crawler” in the source code. Explanation: your browser downloads and then interprets the page, and the executed JavaScript code updates the DOM of the page. A simple downloader like wget doesn’t interpret the source of a page; it just grabs it.
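
The page boils down to something like this (a reconstruction for illustration, not the actual file):

<html>
  <body>
    <div id="message">Hi lame crawler</div>
    <script type="text/javascript">
      // a browser executes this and rewrites the text; wget never does
      document.getElementById('message').innerHTML = 'Hi Crowbar!';
    </script>
  </body>
</html>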

Solution #1

One way to get the post-AJAX source of a web page is to use Crowbar. “Crowbar is a web scraping environment based on the use of a server-side headless mozilla-based browser. Its purpose is to allow running javascript scrapers against a DOM to automate web sites scraping…”

When you launch Crowbar, it offers a RESTful web service listening by default on port 10000. Just open the page http://127.0.0.1:10000/. The trick behind Crowbar is that it turns a web browser into a web server.

Now we can download AJAX pages with wget the following way. Let’s get the previous test page:

wget "http://127.0.0.1:10000/?url=http://simile.mit.edu/crowbar/test.html" -O tricky.html

If you check the source of the saved file, you will see the post-AJAX source that you would normally see in a web browser. You can also pass some other parameters to the Crowbar web service; they are detailed here. The most important parameter is “delay”, which tells Crowbar how long to wait after the page has finished loading before serializing its DOM. By default it is 3000 msec, i.e. 3 sec. If the page you want to download makes lots of AJAX calls, consider increasing the delay; otherwise you will get an HTML source that is not fully expanded yet.

Use case:
I wanted to download the following page from the NCBI database: CP002059.1. The page is quite big (about 5 MB), so I had to wait about 10 seconds to get it in my browser. From the command line I could fetch it this way (I gave it some extra time to be sure):

wget "http://127.0.0.1:10000/?url=http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1&delay=15000" -O CP002059.1.html

Note: if you want to download data from NCBI, there is a better way.

Did you know?
In Firefox, if you look at the source of a page (View -> Page Source), you will see the downloaded (pre-AJAX) source. If you want to see the generated (post-AJAX) source, you can use the Web Developer add-on (View Source -> View Generated Source).

Also, still in Firefox, if you save a web page with File -> Save Page As… and you choose “Web Page, HTML only”, Firefox will save the original (pre-AJAX) source. If you want the fully expanded (generated) source, choose the option “Web Page, complete”.

Solution #2
Another solution is to write a program/script that uses the webkit open source browser engine. In an upcoming post I will show you how to do it with Python.

Appendix
Crowbar launch script for Linux:

#!/bin/bash

# my crowbar is installed here: /opt/crowbar
# location of this file: /opt/crowbar/start.sh
xulrunner --install-app xulapp
xulrunner xulapp/application.ini

Crowbar launch script for Windows (update of 20110601):

rem My crowbar is installed here: c:\Program Files\crowbar
rem Location of this file: c:\Program Files\crowbar\start.bat

"%XULRUNNER_HOME%\xulrunner.exe" --install-app xulapp
"%XULRUNNER_HOME%\xulrunner.exe" xulapp\application.ini

XULRunner for Windows can be downloaded from here.

