Archive

Archive for June, 2014

Online learning: extend your CV

June 24, 2014 1 comment
Categories: Uncategorized

Stypi, a realtime editor

June 24, 2014 Leave a comment

“Stypi is a realtime editor that allows multiple users to make changes to a single document at the same time. All you need to do is share the URL with others to begin collaborating!

This editor also supports programming languages that you can access by clicking on the “</>” button on the top left. For more information on how to use Stypi please click the FAQ link on the bottom left.” (source)

Categories: Uncategorized

how to create a pull request on GitHub

June 20, 2014 Leave a comment

Explained step by step here: http://stackoverflow.com/questions/14680711/how-to-do-a-github-pull-request/14680805#14680805.

In short: fork the repo you want to contribute to; clone your fork to your local machine; make your changes, commit, then push them back to your fork. Then create the pull request in GitHub’s web interface: visit your repo and click the “pull request” button. Fill out the necessary information and create the pull request. The project’s owner will be notified about your contribution.
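
For reference, a minimal sketch of the command-line part of this workflow; the username yourname, the repo name project and the branch name my-fix are made-up placeholders:

# fork owner/project on the GitHub website first, then:
git clone https://github.com/yourname/project.git   # clone your fork locally
cd project
git checkout -b my-fix          # work on a separate branch
# ... edit files ...
git add -A                      # stage your changes
git commit -m "describe your change"
git push origin my-fix          # push the branch back to your fork
# finally, open your fork on github.com and click “pull request”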

Categories: Uncategorized

playing with Facebook’s graph search

June 19, 2014 Leave a comment

Here is a nice post on Facebook’s graph search with several examples: link.

Some commands:

  • People from [Your Town] who work at [Company Name]
  • Restaurants nearby liked by my friends
  • Local events this weekend
  • etc.

mirror a website with wget

June 15, 2014 Leave a comment

Problem
I want to crawl a website recursively and download it for local use.

Solution

wget -c --mirror -p --html-extension --convert-links --no-parent --reject "index.html*" $url

The options:

  • -c: continue getting partially downloaded files (if you stop the process with CTRL+C and relaunch it, it resumes where it left off)
  • --mirror: turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings
  • -p: get all images, etc. needed to display HTML page
  • --html-extension: save HTML docs with .html extensions
  • --convert-links: make links in downloaded HTML point to local files
  • --no-parent: Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
  • --reject "index.html*": don’t download index.html* files
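
For example, with a made-up URL (example.com is just a placeholder):

url=http://example.com/docs/
wget -c --mirror -p --html-extension --convert-links --no-parent --reject "index.html*" $url

The mirror is saved into a directory named after the host (example.com here), with the links rewritten so the pages can be browsed offline.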

Credits
Tips from here.

Get URLs only
If you want to spider a website and get the URLs only, check this post out. In short:

wget --spider --force-html -r -l2 $url 2>&1 | grep '^--' | awk '{ print $3 }'

Here -l2 sets the maximum recursion depth to level 2; you may have to change this value.
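
If you want to keep the URL list for later processing, simply redirect the output to a file:

wget --spider --force-html -r -l2 $url 2>&1 | grep '^--' | awk '{ print $3 }' > urls.txt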

Categories: bash

GNOME Disk Utility

So far I’ve used gparted to get information about partitions. But today I found another tool shipped with Ubuntu called GNOME Disk Utility. It is also good for getting info, but it can do some extra things too: create disk images, mount/unmount partitions, etc.

You can launch it with the “gnome-disks” command in the terminal. Or just type “Disks” in the Unity dash.


Categories: ubuntu