Archive for the ‘html’ Category

indent HTML

July 13, 2016

You have an ugly HTML file and you want to indent it nicely. For instance, you want to scrape something from it, but first it would be a good idea to indent the source.

The program “tidy” can do that. Create the following config file (tidy_config.txt):

indent: auto
indent-spaces: 2
quiet: yes
tidy-mark: no

Then call tidy the following way:

$ tidy -config tidy_config.txt ugly.html > nice.html
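If you have many files to clean, the same call is easy to script. A small Python wrapper, just as a sketch (the function names are mine, and the tidy binary must be on the PATH):

```python
import subprocess

def tidy_cmd(infile, config='tidy_config.txt'):
    """Build the tidy command line for a given input file and config."""
    return ['tidy', '-config', config, infile]

def tidy_file(infile, outfile, config='tidy_config.txt'):
    """Indent `infile` into `outfile` by running tidy."""
    with open(outfile, 'w') as out:
        return subprocess.call(tidy_cmd(infile, config), stdout=out)
```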

Tip from here.

Categories: html

HTML Best Practices for Beginners

October 28, 2013

If you need to create HTML pages from time to time, here are some great tips:

30 HTML Best Practices for Beginners

Categories: html

HTML: add syntax-highlight to textarea

September 26, 2013

You have an HTML form with a textarea where you want to accept some source code. You want to turn this simple textarea into a fancy input area that adds syntax highlighting.

I tried EditArea, and it suits my needs. See this SO page for more alternatives.


“EditArea is a free JavaScript editor for source code. This editor is designed to edit source code files in a textarea. The main goal is to allow text formatting, search and replace, and real-time syntax highlighting (for not too heavy text).” (source)

For Python support, I had to add these lines to the HTML source:

<script language="javascript" type="text/javascript" src="../editarea/edit_area/edit_area_full.js"></script>
<script language="javascript" type="text/javascript">
editAreaLoader.init({
    id: "src_input"             // textarea id
    ,start_highlight: true      // display with highlight mode on start-up
    ,syntax: "python"           // syntax to be used for highlighting
    ,replace_tab_by_spaces: 4   // insert 4 spaces when TAB is pressed
});
</script>

Related work

  • Ace (it seems to be a more professional solution)

Image Gallery from a list of URLs

December 18, 2012

I have several scrapers that extract images. How can I visualize them? One way is to open each one in a new browser tab, but that’s slow, and who wants several hundred tabs? Is there a way to browse these images in a single tab?

A primitive solution would be to create an HTML page that lists all the images one below the other. But again, what if you have lots of images?

A better way is to organize the images in a gallery. There are tons of image gallery generators out there but most of them work with local images. I want to browse remote images when only their URLs are available. So I made my own image gallery generator that works with URLs. Available on github.

There is also a live demo, check it out.

The software is written in Python. See the README file for usage examples.
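The idea behind such a generator is simple. Here is a minimal sketch (not the actual code from the github project; the function name and styling are mine): list every image as a thumbnail that links to the full-size remote image.

```python
def make_gallery(urls, title='Gallery'):
    """Build one HTML page showing every image URL as a thumbnail
    that links to the full-size image."""
    items = '\n'.join(
        '<a href="{0}"><img src="{0}" style="max-width: 200px; margin: 4px;"></a>'.format(u)
        for u in urls
    )
    return ('<html><head><title>{0}</title></head>\n'
            '<body>\n{1}\n</body>\n</html>'.format(title, items))
```

Write the returned string to a file and open it in a browser; no local copies of the images are needed.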

Categories: html, python

Put a text on the clipboard from your webpage

December 18, 2012

From an HTML page you want to copy some text to the clipboard by pressing a button.

Example: on a page you present a list of URLs. Next to each URL there is a button. If the user clicks on the button, the corresponding URL is copied to his/her clipboard.

You can use clippy for this task. “Clippy is a very simple Flash widget that makes it possible to place arbitrary text onto the client’s clipboard.”

Here is an HTML template that you must paste in your HTML: clippy.html. Simply replace “{{ clippy_text }}” and “{{ bgcolor }}” with the values you want.
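If the page is generated from Python, filling in the template is a simple string substitution. A sketch (the fragment below is a stand-in for the real contents of clippy.html, not a quote of it):

```python
def render_clippy(template, text, bgcolor='#FFFFFF'):
    """Replace the two placeholders of the clippy HTML template."""
    return (template.replace('{{ clippy_text }}', text)
                    .replace('{{ bgcolor }}', bgcolor))

# hypothetical fragment of the template:
fragment = '<param name="FlashVars" value="text={{ clippy_text }}"> bgcolor="{{ bgcolor }}"'
```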

Update (20130103)
GitHub also used clippy but recently they switched to ZeroClipboard. Here is their announcement.

Categories: html

Youtube audio player

November 20, 2012

I found a nice trick to embed just a part of the YouTube Flash player, so the player looks like an audio player. All you need is a little CSS trick.

Categories: firefox, google, html

Codecademy – learn HTML, CSS, Javascript

April 19, 2012

Codecademy is the easiest way to learn how to code. It’s interactive, fun, and you can do it with your friends.

Categories: html, javascript

Scraping AJAX web pages (Part 3)

November 8, 2011

Don’t forget to check out the rest of the series too!

In Part 2 we saw how to download an Ajax-powered webpage. However, there was a problem with that approach: sometimes it terminated too quickly, thus it fetched just part of a page. The problem with Ajax is that we cannot tell for sure when a page is completely downloaded.

So, the solution is to integrate some waiting mechanism into the script. That is, we need the following: “open a given page, wait X seconds, then get the HTML source”. Hopefully, all Ajax calls will have finished within X seconds. It is up to you to decide how many seconds to wait. Alternatively, you can analyze the partially downloaded HTML and, if something is missing, wait some more.

Here I will use Splinter for this task. It opens a browser window that you can control from Python. Thanks to the browser, it can interpret Javascript. The only disadvantage is that the browser window is visible.

Let’s see how to fetch the page CP002059.1. If you open it in a browser, you’ll see a status bar at the bottom that indicates the download progress. For me it takes about 20 seconds to fully get this page. By analyzing the content of the page, we can notice that the string “ORIGIN” appears just once, at the end of the page. So we’ll check its presence in a loop and wait until it arrives.

#!/usr/bin/env python

from time import sleep
from splinter.browser import Browser

url = ''   # put here the URL of the page to fetch

def main():
    browser = Browser()
    browser.visit(url)

    # variation A: poll until the marker string appears
    while 'ORIGIN' not in browser.html:
        sleep(5)

    # variation B:
    # sleep(30)   # if you think everything arrives in 30 seconds

    f = open("/tmp/source.html", "w")   # save the source in a file
    print >>f, browser.html
    f.close()

    browser.quit()
    print '__END__'


if __name__ == "__main__":
    main()

You might be tempted to check for the presence of ‘</html>’. However, don’t forget that the browser first downloads the plain source, from ‘<html><body>…’ to ‘</body></html>’. Then it starts to interpret the source, and if it finds some Ajax calls, they are executed and expand something in the body of the HTML. So ‘</html>’ is present right from the beginning.
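The same marker-polling idea can be wrapped in a small helper. This is a sketch of my own (wait_for and its parameters are not part of Splinter); it works with any function that returns the current page source:

```python
import time

def wait_for(get_html, marker, timeout=60, poll=2):
    """Call get_html() repeatedly until `marker` appears in the returned
    source or `timeout` seconds elapse. Returns the last source seen
    (which may still lack the marker if the timeout was hit)."""
    deadline = time.time() + timeout
    html = get_html()
    while marker not in html and time.time() < deadline:
        time.sleep(poll)
        html = get_html()
    return html
```

With Splinter this would be called as wait_for(lambda: browser.html, 'ORIGIN').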

Future work
This is not bad but I’m still not fully satisfied. I’d like something like this but without any browser window. If you have a headless solution, let me know. I think it’s possible with PhantomJS and/or Zombie.js but I had no time yet to investigate them.

Powerpoint is dead (HTML5 presentations with landslide)

September 23, 2011

Powerpoint is dead. Well, not yet, but for simple presentations the following tool works perfectly. This entry is based on Francisco Souza’s excellent post entitled “Creating HTML 5 slide presentations using landslide”. Here I give a short summary.

Landslide is a Python tool for converting marked-up texts to HTML5 slide presentations. The input text can be written in Markdown, reStructuredText, or Textile. A sample slideshow presenting landslide itself is here.

Sample input: (taken from the landslide project)
Sample output: presentation.html


sudo pip install landslide



If you want to share it on the Internet: “landslide -cr“.

Help: “landslide --help“.

To learn about the customization of the theme, refer to Francisco’s post.

Convert to PDF

landslide -d out.pdf

For this you need Prince XML, which is free for non-commercial use. Unfortunately the output is black and white with additional blank pages for notes. If you know how to have colored PDFs without the extra pages, let me know.

It’d be interesting to replace Prince XML with wkhtmltopdf. I made some tests but the output was not nice. I think it could be tweaked though.

Related stuff

Pandoc is a universal document converter.

If you need to convert files from one markup format into another, pandoc is your swiss-army knife. Need to generate a man page from a markdown file? No problem. LaTeX to Docbook? Sure. HTML to MediaWiki? Yes, that too. Pandoc can read markdown and (subsets of) reStructuredText, textile, HTML, and LaTeX, and it can write plain text, markdown, reStructuredText, HTML, LaTeX, ConTeXt, PDF, RTF, DocBook XML, OpenDocument XML, ODT, GNU Texinfo, MediaWiki markup, textile, groff man pages, Emacs org-mode, EPUB ebooks, and S5 and Slidy HTML slide shows. PDF output (via LaTeX) is also supported with the included markdown2pdf wrapper script.

Scraping AJAX web pages (Part 2)

September 20, 2011

Don’t forget to check out the rest of the series too!

In this post we’ll see how to get the generated source of an HTML page. That is, we want to get the source with embedded Javascript calls evaluated.

Here is my solution:

#!/usr/bin/env python

"""Simple webkit."""

import sys
from PyQt4 import QtGui, QtCore, QtWebKit

class SimpleWebkit():
    def __init__(self, url):
        self.url = url
        self.webView = QtWebKit.QWebView()

    def save(self):
        # print the rendered (post-Javascript) HTML to stdout, then quit
        print self.webView.page().mainFrame().toHtml()
        sys.exit(0)

    def process(self):
        QtCore.QObject.connect(self.webView, QtCore.SIGNAL("loadFinished(bool)"),
                               self.save)
        self.webView.load(QtCore.QUrl(self.url))

def process(url):
    app = QtGui.QApplication(sys.argv)
    s = SimpleWebkit(url)
    s.process()
    app.exec_()

if __name__ == "__main__":
    if len(sys.argv) > 1:
        process(sys.argv[1])
    else:
        print >>sys.stderr, "{0}: error: specify a URL.".format(sys.argv[0])
        sys.exit(1)

You can also find this script in my jabbapylib library.


./ ''

That is, just specify the URL of the page to be fetched. The generated HTML is printed to the standard output but you can easily redirect that to a file.

As you can see, it’s hyper simple. It uses a webkit instance to get and evaluate the page, which means that Javascript (and AJAX) calls will be executed. Also, the webkit instance is not visible in a window (headless browsing).

This solution is not yet perfect. The biggest problem is that AJAX calls can take some time and this script doesn’t wait for them. Actually, it cannot be known when all AJAX calls are terminated, so we cannot know for sure when the page is completely loaded :( The best way could be to integrate a waiting mechanism in the script, say “wait 5 seconds before printing the source”. Unfortunately I didn’t manage to add this feature. It should be done with QTimer somehow. If someone could add this functionality to this script, please let me know.

Try to download this page: CP002059.1. If you open it in Firefox for instance, at the bottom you’ll see a progress bar. For me the complete download takes about 10 sec. The script above will only fetch the beginning of the page :( Some help: the end of the downloaded sequence is this:


If you can modify the script above to work correctly with this particular page, let me know.

Another difficulty is how to integrate this downloader in a larger project. At the end, “app.exec_()” must be called, otherwise no output is produced. But if you call it, it terminates the script. My current workaround is to call this script as an external command and catch its output on stdout. If you have a better idea, let me know.
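My workaround looks something like this (a sketch; the script name is a placeholder for wherever you saved the downloader):

```python
import subprocess

def get_rendered_html(url, script='./simple_webkit.py'):
    """Run the webkit downloader as an external command and capture the
    rendered HTML that it prints to stdout."""
    p = subprocess.Popen([script, url], stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return out
```

Since app.exec_() terminates only the child process, the calling project keeps running.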

Resources used

Update (20110921)
I just found an even simpler solution here. And this one doesn’t exit(), so it can be integrated in another project easily (without the need for calling it as an external command). However, the “waiting problem” is still there.

What’s next
In the next part of this series we will see another way to download an AJAX page. In Part 3 we will address the problem of waiting X seconds for AJAX calls. Stay tuned.

If you get the following error message:

Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",

Then install this package:

sudo apt-get install gtk2-engines-pixbuf

This tip is from here.