You want to download a web page whose source is full of AJAX calls. You want the result that is shown in your browser, i.e. the generated (post-AJAX) source.
When you launch Crowbar, it offers a RESTful web service listening by default on port 10000. Just open the page http://127.0.0.1:10000/. The trick behind Crowbar is that it turns a web browser into a web server.
Now we can download AJAX pages with wget the following way. Let’s get the previous test page:
wget "http://127.0.0.1:10000/?url=http://simile.mit.edu/crowbar/test.html" -O tricky.html
If you check the source of the saved file, you will see the post-AJAX source that you would normally see in a web browser. You can also pass some other parameters to the Crowbar web service; they are detailed here. The most important parameter is “delay”, which tells Crowbar how long to wait after the page has finished loading before attempting to serialize its DOM. By default its value is 3000 msec, i.e. 3 sec. If the page you want to download makes lots of AJAX calls, consider increasing the delay; otherwise you will get an HTML source that is not yet fully expanded.
I wanted to download the following page from the NCBI database: CP002059.1. The page is quite big (about 5 MB), thus I had to wait about 10 sec. to get it in my browser. From the command-line I could fetch it this way (I gave it some extra time to be sure):
wget "http://127.0.0.1:10000/?url=http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1&delay=15000" -O CP002059.1.html
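The two wget commands above can be wrapped in a small helper so that the target URL is percent-encoded before it is embedded in the query string (a raw `&` or `=` in the target would otherwise be mistaken for Crowbar's own parameters). This is just a sketch; `crowbar_url` is a name I made up, not part of Crowbar:

```shell
# crowbar_url TARGET [DELAY]: print the Crowbar service URL for a target page.
# The delay defaults to Crowbar's own default of 3000 msec.
crowbar_url() {
  local target=$1 delay=${2:-3000}
  local encoded
  # encode the characters that would confuse the query string
  # (% must be encoded first, before it appears in replacements)
  encoded=$(printf '%s' "$target" \
    | sed -e 's/%/%25/g' -e 's/&/%26/g' -e 's/?/%3F/g' -e 's/=/%3D/g' -e 's/ /%20/g')
  printf 'http://127.0.0.1:10000/?url=%s&delay=%s' "$encoded" "$delay"
}

# Usage:
# wget "$(crowbar_url 'http://www.ncbi.nlm.nih.gov/nuccore/CP002059.1' 15000)" -O CP002059.1.html
```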
Notes: If you want to download data from NCBI, there is a better way.
Did you know?
In Firefox, if you look at the source of a page (View -> Page Source), you will see the downloaded (pre-AJAX) source. If you want to see the generated (post-AJAX) source, you can use the Web Developer add-on (View Source -> View Generated Source).
Also, still in Firefox, if you save a web page with File -> Save Page As… and you choose “Web Page, HTML only”, Firefox will save the original (pre-AJAX) source. If you want the fully expanded (generated) source, choose the option “Web Page, complete”.
Another solution is to write a program/script that uses the WebKit open-source browser engine. In an upcoming post I will show you how to do it with Python.
Crowbar launch script for Linux:
#!/usr/bin/bash
# my crowbar is installed here: /opt/crowbar
# location of this file: /opt/crowbar/start.sh
xulrunner --install-app xulapp
xulrunner xulapp/application.ini
Crowbar launch script for Windows (update of 20110601):
rem My crowbar is installed here: c:\Program Files\crowbar
rem Location of this file: c:\Program Files\crowbar\start.bat
"%XULRUNNER_HOME%\xulrunner.exe" --install-app xulapp
"%XULRUNNER_HOME%\xulrunner.exe" xulapp\application.ini
XULRunner for Windows can be downloaded from here.
In this post I explain how I managed to download the stock list of Yahoo.
Problem: I wanted to have a list of Yahoo stocks. More precisely, I only needed the stock IDs and the corresponding company names, e.g. “MSFT => Microsoft Corporation”. I was looking for such a list, but I didn’t find anything useful. So after a few hours I said to myself: “I will have to solve this problem by myself :(“.
Fortunately, Yahoo has a page that lists the industries. So the problem is actually extracting data from a bunch of HTML pages. Let’s see the steps that lead to the solution.
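As a sketch of the extraction step: assuming the industry pages link to each stock with markup like `<a href="/q?s=MSFT">Microsoft Corporation</a>` (an assumption about the old Yahoo markup, not something I verified), a grep/sed pipeline can pull the “ID => Name” pairs out of the saved HTML files:

```shell
# extract_quotes: read HTML on stdin and print "ID => Name" lines.
# The link pattern below is an assumed form of Yahoo's quote links.
extract_quotes() {
  grep -o '<a href="/q?s=[^"]*">[^<]*</a>' \
    | sed -e 's|.*s=\([^"]*\)">\([^<]*\)</a>|\1 => \2|'
}

# Usage (on the saved industry pages):
# cat *.html | extract_quotes | sort -u
```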
ffmpeg -i input.avi -f mp3 output.mp3
This command stopped working for me under Ubuntu 12.04. However, I had luck with “soundconverter” for extracting mp3 from flv files.
If you have problems with ffmpeg, try to compile it yourself.
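The single-file ffmpeg command above extends naturally to a whole directory of files; a minimal sketch, assuming ffmpeg is on the PATH:

```shell
# Convert every .flv in the current directory to an .mp3 with the same basename.
for f in *.flv; do
  [ -e "$f" ] || continue          # no .flv files: the glob stays literal, skip it
  ffmpeg -i "$f" -f mp3 "${f%.flv}.mp3"
done
```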