Posts Tagged ‘scraping’

[manjaro] ping needs special privileges

September 5, 2015


$ ping -c 1
ping: icmp open socket: Operation not permitted


$ sudo chmod u+s `which ping`
$ ping -c 1
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=45 time=38.6 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 38.681/38.681/38.681/0.000 ms
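To double-check from a script that the setuid bit really stuck on the binary, a small Python sketch (the helper name is my own):

```python
import os
import stat

def has_setuid(path):
    """Return True if the setuid bit is set on 'path'."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# e.g. has_setuid('/usr/bin/ping')
```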
Categories: bash

Scraping AJAX web pages

December 27, 2012
Categories: Uncategorized

Scraping AJAX web pages (Part 4)

December 27, 2012 9 comments

Don’t forget to check out the rest of the series too!

I managed to solve a problem that bugged me for a long time. Namely, (1) I want to download the generated source of an AJAX-powered webpage; (2) I want a headless solution, i.e. I want no browser window; and (3) I want to wait until the AJAX-content is fully loaded.

During the past 1.5 years I got quite close :) I could solve everything except issue #3. Now I’m proud to present a complete solution that satisfies all the criteria above.

#!/usr/bin/env python

import sys

from PySide.QtCore import QUrl, QTimer
from PySide.QtGui import QApplication
from PySide.QtWebKit import QWebPage

SEC = 1000 # 1 sec. is 1000 msec.
USER_AGENT = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0'

class JabbaWebkit(QWebPage):
    # 'html' is a class variable
    def __init__(self, url, wait, app, parent=None):
        super(JabbaWebkit, self).__init__(parent)
        JabbaWebkit.html = ''

        # grab the final DOM just before the application quits
        app.aboutToQuit.connect(self.save)

        if wait:
            # give the AJAX calls 'wait' seconds to finish, then quit
            QTimer.singleShot(wait * SEC, app.quit)
        else:
            # no extra wait: quit as soon as the page has loaded
            self.loadFinished.connect(lambda ok: app.quit())

        self.mainFrame().load(QUrl(url))

    def save(self):
        JabbaWebkit.html = self.mainFrame().toHtml()

    def userAgentForUrl(self, url):
        return USER_AGENT

def get_page(url, wait=None):
    # here is the trick that allows calling it several times:
    app = QApplication.instance()    # check if a QApplication already exists
    if not app:                      # create one if it doesn't exist yet
        app = QApplication(sys.argv)
    JabbaWebkit(url, wait, app)
    app.exec_()    # blocks until app.quit() is called
    return JabbaWebkit.html


if __name__ == "__main__":
    url = ''
    print get_page(url)

It’s also on GitHub. The GitHub version contains more documentation and more examples.

[ reddit comments ]

Update (20121228)
Jabba-Webkit got included in Pycoder’s Weekly #46. Awesome.

Get IMDB ratings without any scraping

February 12, 2012

Update (20150712): If you know Python, check out the awesome IMDbPY library. It does the hard work for you; you just need to call a few simple functions. Here are the docs.

Update (20130130): the API seems to have moved; links below are updated accordingly.

You want to get some data (e.g. rating) of a movie from IMDB. How to do it without any web scraping?

Solution #1
Someone made a simple API for this task. You can search by ID or title.


The result is a JSON string that contains basic movie info, rating included.
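For illustration, here is roughly how such a query could be built and its reply parsed, in Python 3 syntax. The base URL is a placeholder (the service's address has changed over time), and the 't'/'i' parameter names and the 'Rating' key are assumptions of mine — check the API docs:

```python
import json
from urllib.parse import urlencode

API_URL = 'http://example.com/'  # placeholder -- substitute the API's real address

def build_query(title=None, imdb_id=None):
    # the 't' / 'i' parameter names are an assumption; check the API docs
    params = {'i': imdb_id} if imdb_id else {'t': title}
    return API_URL + '?' + urlencode(params)

def parse_rating(json_text):
    # the reply is a flat JSON object; the 'Rating' key name is an assumption
    return json.loads(json_text).get('Rating')
```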

Solution #2
IMDB has a secret API too, made for mobile applications. Here they say: “For use only by clients authorized in writing by IMDb. Authors and users of unauthorized clients accept full legal exposure/liability for their actions.” So what comes below is strictly for educational purposes.


The result is a JSON string. Find more info about this API here.

Related posts

Thanks reddit.

Update (20120222)
Python code for solution #1 is here.

Firebug: sitescraper’s best friend

November 23, 2011

When you do site scraping, you usually know exactly which part of a webpage you want to extract. The naive way is to download the page and analyze its source code, trying to identify the interesting part(s). But there is a better way: use Firebug.

Firebug is a Firefox add-on for web developers. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. The interesting part for us is the feature that you can point on any element of a webpage and Firebug shows you its exact location in the source. You can also get the CSS Path and/or the XPath of the given element.

First, install Firebug and restart the browser. In the top right corner of the browser you’ll see a little bug (part A in the figure below). Clicking on it brings up the Firebug console. On the console, click the 2nd icon from the left at the top (part B). Then click the element in the browser that you want to inspect (part C). The relevant HTML source code will be highlighted in the console (part D). Right-click on it and choose CSS Path / XPath from the popup menu. Now you only have to write a script that extracts this part of the page.
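Once Firebug has given you the path, extracting the element takes only a few lines of Python. A sketch with the standard library's ElementTree (lxml gives you full XPath and CSS selectors if you have it installed); the HTML snippet and the path here are made up for illustration:

```python
import xml.etree.ElementTree as ET

page = """<html><body>
  <div id="content"><span class="rating">8.7</span></div>
</body></html>"""

root = ET.fromstring(page)
# a simplified version of the path Firebug would report:
rating = root.find(".//div[@id='content']/span[@class='rating']").text
print(rating)
```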

Categories: firefox

Scraping AJAX web pages (Part 1)

April 15, 2011 7 comments

Don’t forget to check out the rest of the series too!


You want to download a web page whose source is full of AJAX calls. You want the result that is shown in your browser, i.e. you want the generated (post-AJAX) source.

Consider this simple page: test.html. If you open it in your browser, you’ll see the text “Hi Crowbar!”. However, if you download this page with wget for instance, in the source code you’ll see the text “Hi lame crawler”. Explanation: your browser downloads and then interprets the page. The executed JavaScript code updates the DOM of the page. A simple downloader like wget doesn’t interpret the source of a page; it just grabs it.

Solution #1

One way of getting the post-AJAX source of a web page is to use Crowbar. “Crowbar is a web scraping environment based on the use of a server-side headless mozilla-based browser. Its purpose is to allow running javascript scrapers against a DOM to automate web sites scraping…”

When you launch Crowbar, it offers a RESTful web service listening by default on port 10000. Just open the page. The trick behind Crowbar is that it turns a web browser into a web server.

Now we can download AJAX pages with wget the following way. Let’s get the previous test page:

wget "" -O tricky.html

If you check the source of the saved file, you will see the post-AJAX source that you would normally see in a web browser. You can also pass some other parameters to the Crowbar web service; they are detailed here. The most important one is “delay”, which tells Crowbar how long it should wait after the page has finished loading before attempting to serialize its DOM. Its default value is 3000 msec, i.e. 3 sec. If the page you want to download contains lots of AJAX calls, consider increasing the delay; otherwise you will get an HTML source that is not fully expanded yet.
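If you build such wget commands from a script, the Crowbar request URL can be assembled programmatically. A minimal Python 3 sketch: the “delay” parameter and the default port 10000 come from the text above, while using “url” as the name of the target-page parameter is my assumption — check the Crowbar docs:

```python
from urllib.parse import urlencode

CROWBAR = 'http://127.0.0.1:10000/'  # Crowbar's default address and port

def crowbar_url(page_url, delay_msec=3000):
    # 'url' as the target-page parameter name is an assumption;
    # 'delay' (in milliseconds) comes from the Crowbar docs
    return CROWBAR + '?' + urlencode({'url': page_url, 'delay': delay_msec})
```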

Use case:
I wanted to download the following page from the NCBI database: CP002059.1. The page is quite big (about 5 MB), thus I had to wait about 10 sec. to get it in my browser. From the command-line I could fetch it this way (I gave it some extra time to be sure):

wget "" -O CP002059.1.html

Note: If you want to download data from NCBI, there is a better way.

Did you know?
In Firefox, if you look at the source of a page (View -> Page Source), you will see the downloaded (pre-AJAX) source. If you want to see the generated (post-AJAX) source, you can use the Web Developer add-on (View Source -> View Generated Source).

Also, still in Firefox, if you save a web page with File -> Save Page As… and you choose “Web Page, HTML only”, Firefox will save the original (pre-AJAX) source. If you want the fully expanded (generated) source, choose the option “Web Page, complete”.

Solution #2
Another solution is to write a program/script that uses the WebKit open-source browser engine. In an upcoming post I will show you how to do it with Python.

Crowbar launch script for Linux:


#!/bin/bash
# my crowbar is installed here: /opt/crowbar
# location of this file: /opt/crowbar/
xulrunner --install-app xulapp
xulrunner xulapp/application.ini

Crowbar launch script for Windows (update of 20110601):

rem My crowbar is installed here: c:\Program Files\crowbar
rem Location of this file: c:\Program Files\crowbar\start.bat

"%XULRUNNER_HOME%\xulrunner.exe" --install-app xulapp
"%XULRUNNER_HOME%\xulrunner.exe" xulapp\application.ini

XULRunner for Windows can be downloaded from here.

/ discussion /

