$ ping -c 1 www.google.com
ping: icmp open socket: Operation not permitted

$ sudo chmod u+s `which ping`
$ ping -c 1 www.google.com
PING www.google.com (22.214.171.124) 56(84) bytes of data.
64 bytes from ee-in-f104.1e100.net (126.96.36.199): icmp_seq=1 ttl=45 time=38.6 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 38.681/38.681/38.681/0.000 ms
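The chmod u+s above sets the setuid bit, which makes the kernel run the executable with the privileges of the file's owner (root, in ping's case); opening a raw ICMP socket requires exactly that. A quick illustration on a scratch file (the path is just an example, not anything ping-related):

```shell
# create a scratch file and set its setuid bit
touch /tmp/setuid_demo
chmod u+s /tmp/setuid_demo

# in the long listing, the owner's execute slot now shows 's'
# (a capital 'S' means setuid is set but the file is not executable)
ls -l /tmp/setuid_demo
```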
Here I collect my posts that I wrote on this topic:
I managed to solve a problem that bugged me for a long time. Namely, (1) I want to download the generated source of an AJAX-powered webpage; (2) I want a headless solution, i.e. I want no browser window; and (3) I want to wait until the AJAX-content is fully loaded.
During the past 1.5 years I got quite close :) I could solve everything except issue #3. Now I’m proud to present a complete solution that satisfies all the criteria above.
#!/usr/bin/env python

import sys

from PySide.QtCore import *
from PySide.QtGui import *
from PySide.QtWebKit import QWebPage

SEC = 1000  # 1 sec. is 1000 msec.
USER_AGENT = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0'

class JabbaWebkit(QWebPage):
    # 'html' is a class variable
    def __init__(self, url, wait, app, parent=None):
        super(JabbaWebkit, self).__init__(parent)
        JabbaWebkit.html = ''
        if wait:
            QTimer.singleShot(wait * SEC, app.quit)
        else:
            self.loadFinished.connect(app.quit)
        self.mainFrame().load(QUrl(url))

    def save(self):
        JabbaWebkit.html = self.mainFrame().toHtml()

    def userAgentForUrl(self, url):
        return USER_AGENT

def get_page(url, wait=None):
    # here is the trick how to call it several times:
    app = QApplication.instance()    # check if a QApplication already exists
    if not app:                      # create a QApplication if it doesn't exist
        app = QApplication(sys.argv)
    #
    form = JabbaWebkit(url, wait, app)
    app.aboutToQuit.connect(form.save)
    app.exec_()
    return JabbaWebkit.html

#############################################################################

if __name__ == "__main__":
    url = 'http://simile.mit.edu/crowbar/test.html'
    print get_page(url)
It’s also on GitHub. The GitHub version contains more documentation and more examples.
[ reddit comments ]
Jabba-Webkit got included in Pycoder’s Weekly #46. Awesome.
When you do site scraping, you usually know exactly which part of a web page you want to extract. The naive way is to download the page and analyze its source code, trying to identify the interesting part(s). But there is a better way: use Firebug.
First, install Firebug and restart the browser. In the top right corner of the browser you'll see a little bug icon (part A on the figure below). Clicking on it opens the Firebug console. On the console, click the second icon from the left at the top (part B). Then click on the element in the browser that you want to inspect (part C). The relevant HTML source code gets highlighted in the console (part D). Right click on it and choose CSS Path / XPath from the popup menu. Now you only have to write a script that extracts this part of the page.
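Once Firebug has given you the path, a few lines of Python can dig out that fragment. Here is a minimal sketch with the standard library's ElementTree; the snippet and the path are made-up examples, and note that ElementTree only supports a subset of XPath and needs well-formed markup, so for real, messy HTML you would reach for lxml or BeautifulSoup instead:

```python
import xml.etree.ElementTree as ET

# a stand-in for the page source you downloaded
snippet = '<div><ul id="news"><li>first item</li><li>second item</li></ul></div>'
root = ET.fromstring(snippet)

# suppose Firebug reported something like /div/ul/li -- take the first match
item = root.find('./ul/li')
print(item.text)
```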
You want to download a web page whose source is full of AJAX calls. You want the result that is shown in your browser, i.e. the generated (post-AJAX) source.
When you launch Crowbar, it offers a RESTful web service listening by default on port 10000. Just open the page http://127.0.0.1:10000/. The trick behind Crowbar is that it turns a web browser into a web server.
Now we can download AJAX pages with wget the following way. Let’s get the previous test page:
wget "http://127.0.0.1:10000/?url=http://simile.mit.edu/crowbar/test.html" -O tricky.html
If you check the source of the saved file, you will see the post-AJAX source that you would normally see in a web browser. You can also pass some other parameters to the Crowbar web service, they are detailed here. The most important parameter is "delay", which tells Crowbar how long to wait after the page has finished loading before attempting to serialize its DOM. By default its value is 3000 msec, i.e. 3 sec. If the page you want to download contains lots of AJAX calls, consider increasing the delay, otherwise you will get an HTML source that is not fully expanded yet.
I wanted to download the following page from the NCBI database: CP002059.1. The page is quite big (about 5 MB), thus I had to wait about 10 sec. to get it in my browser. From the command-line I could fetch it this way (I gave it some extra time to be sure):
wget "http://127.0.0.1:10000/?url=http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1&delay=15000" -O CP002059.1.html
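If you drive Crowbar from a script instead of the shell, it is safer to percent-encode the target URL when you build the request. A small sketch (the helper name is my own, not part of Crowbar; it works on both Python 2 and 3):

```python
try:
    from urllib import urlencode          # Python 2
except ImportError:
    from urllib.parse import urlencode    # Python 3

CROWBAR = 'http://127.0.0.1:10000/'

def crowbar_url(url, delay=3000):
    """Build a Crowbar request URL; 'delay' is in milliseconds."""
    return CROWBAR + '?' + urlencode({'url': url, 'delay': delay})

print(crowbar_url('http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1', delay=15000))
```

The resulting string can then be fetched with urllib, wget, or anything else.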
Notes: If you want to download data from NCBI, there is a better way.
Did you know?
In Firefox, if you look at the source of a page (View -> Page Source), you will see the downloaded (pre-AJAX) source. If you want to see the generated (post-AJAX) source, you can use the Web Developer add-on (View Source -> View Generated Source).
Also, still in Firefox, if you save a web page with File -> Save Page As… and you choose “Web Page, HTML only”, Firefox will save the original (pre-AJAX) source. If you want the fully expanded (generated) source, choose the option “Web Page, complete”.
Another solution is to write a program/script that uses the open-source WebKit browser engine. In an upcoming post I will show you how to do it with Python.
Crowbar launch script for Linux:
#!/usr/bin/bash
# my crowbar is installed here: /opt/crowbar
# location of this file: /opt/crowbar/start.sh

xulrunner --install-app xulapp
xulrunner xulapp/application.ini
Crowbar launch script for Windows (update of 20110601):
rem My crowbar is installed here: c:\Program Files\crowbar
rem Location of this file: c:\Program Files\crowbar\start.bat

"%XULRUNNER_HOME%\xulrunner.exe" --install-app xulapp
"%XULRUNNER_HOME%\xulrunner.exe" xulapp\application.ini
XULRunner for Windows can be downloaded from here.
/ discussion /