
Stuff I've done - the howtos list

I maintain a howtos folder where I put text files about things I've managed to do, or failed to do, on my computer. I thought it would be fun to turn the list of files into a human-readable list to give an idea of what I've done over the last 20 years or so of working with computers. It might even be a useful thing for recruiters to look at, so they know I'm telling the truth when I say I can probably cope with most programming/computing tasks. (Not that I'm looking for a new job right now.)

aapt compile
agata report
ant without java
archlinux rpi on linux
atom text editor personalisation
audio file fixing
audio file trim
bash commands
bazaar ng
bind install
cmp jboss
cobol compilers
config dell laptop
cron jobs
db clustering
dbg install
ddwrt on tplink TL-WR740N-v4
dns servers
dosemu on ubuntu breezy
dvd ripping and video encoding
fastcgi ruby
fedora 11 audio setup
fedora personal setup
firefox plugins
first router netgear 2004
git replay commits
gnome shell extensions
grip config
haxe install
horde and imp
importing outlook
install breezy
install edgy
install fedora
java install fedora
java unit testing
joomla ecommerce install
jruby rails
lamp hardening
live cd ubuntu
ltsp hw requirements
mail by telnet
mail server postfix dovecot and fetchmail
mambo install
merb and datamapper
mongrel clustering
moodle install
moodle scalability
mplayer install win32 codecs on fedora
nas fun plug
openbsd on virtualbox
openbsd on vmware
open dns servers
oscommerce overview
phpbb db structure
php compilation
php with iis
playonlinux install
quakejs build
rasbian rpi
raspberry pi setup
realplayer ripping
report generators
ruby intro
ruby on rails from source
ruby on rails migrations
ruby on rails tutorial
ruby on rails with xampp and fastcgi on ubuntu hoary
running dos games
running dos on linux
soft phones
spamassassin and postfix
squid minimal setup
ssl self signed certificate
surveys and data analysis
svn apache trac tortoisesvn eclipse
svn on site5
tomcat install
umbraco on docker
vim commands
webmin install
windows on linux
xampp addon
xdebug xampp
zaurus usb connect
zencart and


Mise en abyme ("placing into infinity" or "placing into the abyss"; see Wikipedia) has always fascinated me. I suppose it started with the Quaker Oats man (who I'm sure I've mentioned here before):


Though I remember the image from my childhood more vividly, with more red in it. Notice how he's holding a box with another Quaker Oats man just like him on it, who is in turn holding a box, ad infinitum.

The Laughing Cow is another food-related one.

It's also popular in the visual arts (Dalí's La Guerre, for example).

And literature (the play within a play of Hamlet, footnotes to a poem in Pale Fire which actually constitute the narrative etc.). And film (Synecdoche, New York is probably the best example, but it also happens in Adaptation and more recently in Inception: dreams within dreams, reflecting and influencing each other).

And obviously in nature and mathematics we have fractals. And in computer science recursive functions. And so on...

So: quite interesting, occasionally mind-bending.

I wondered whether I could extend this idea to web servers: could a web server present a page; and on that page, a link which would start another web server and load a page from it; the latter page being embedded in the first page, and also presenting a link which would start another web server then load a page from it; ad infinitum...

So I wrote such a thing in Ruby. It's attached to this blog entry. Here's a screenshot:

It could carry on until the resources of the computer ran out (here I started 19 web servers). It uses jQuery to load the content from the next web server into an iframe inside the current page. You need rack, backports, and mongrel to run it.
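A minimal sketch of the mechanism (not the attached script, which uses rack, backports, mongrel, and jQuery — port numbers, markup, and function names here are my own, illustrative choices): each toy HTTP server serves a page embedding the next server's page in an iframe, and each request spawns the next server, one port up.

```ruby
# Sketch only: each tiny server's page links to and embeds the page of the
# server on the next port up, and serving a request starts that next server.
require 'socket'

BASE_PORT = 3000  # hypothetical starting port

# HTML for the server on +port+: a link and an iframe both pointing at the
# next port, so each page embeds the next server's page inside itself.
def page_for(port)
  nxt = port + 1
  "<html><body><p>Server #{port - BASE_PORT + 1} (port #{port})</p>" \
  "<a href=\"http://localhost:#{nxt}/\">Start next server</a>" \
  "<iframe src=\"http://localhost:#{nxt}/\"></iframe></body></html>"
end

# Serve page_for(port) on +port+ in a background thread; each request also
# starts the next server, so loading the iframe spawns the next level down,
# ad infinitum (or until the machine runs out of resources).
def start_server(port)
  Thread.new do
    server = TCPServer.new(port) rescue Thread.exit  # port already in use
    loop do
      client = server.accept
      start_server(port + 1)  # spawn the next level before the iframe loads
      body = page_for(port)
      client.write("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n" \
                   "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
      client.close
    end
  end
end
```

Calling `start_server(BASE_PORT).join` would kick off the first level; everything after that happens lazily as each iframe is loaded.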

Just for fun.

A script for parsing work log files

Attached is my Ruby script for parsing log files I keep at work. I have to complete a weekly report, and this forms the basis of that; as I don't keep regular office hours (I flexi-work around child care), it also helps me keep track of the hours I've worked.

The basic principle of operation is as follows:

  1. I create a file for each working week, called something like week45.log
  2. During the week, I record bits of activity I do (see below); I also note down stuff I plan to do next (again, see below)
  3. At the end of the week, I run parse_work_log <file name> > summary.out on the file to produce a log of what I've done
  4. If anything goes wrong, I tend to edit the work log and regenerate, so I can just keep the log (not the summary)
  5. I cut and paste from the summary.out file into an email I send round

The format of the file is deliberately simple to make it easy to maintain; there are a handful of formatting rules. An example is shown below.


09:30-12:00    Researching this and that #research
12:00-13:00    -Lunch
13:00-14:00    More research
14:00-18:00    Writing some application #coding


09:30-10:00    Admin #Admin
10:00-10:15    -Break
10:00-12:00    Found out something rather marvellous #Very important research
12:00-12:45    -Lunch
12:45-18:00    Writing another application #coding

+NEXT Have lots of fun banging my head on a brick wall
+NEXT Reinstall my operating system

Which, when parsed, produces this on stderr:


Worked 15.25 hours

And this on stdout:

This week:

Admin:

  * Admin

Very important research:

  * Found out something rather marvellous

coding:

  * Writing some application
  * Writing another application

research:

  * Researching this and that

Next:

  * Have lots of fun banging my head on a brick wall
  * Reinstall my operating system

Notes on formatting:

  • The asterisks and dates are just there to make it more readable - they're basically ignored
  • An entry is a single line with the format HH:MM-HH:MM <...whitespace...> <text> <#optional tag>\n
  • The time span for an entry is added to the total time, unless the entry text contains -Lunch or -Break
  • # creates a tagged entry which gets included in the report; it appears as a bullet point with the tag as its heading; any entries with the same tag get aggregated under a single heading; tags and entries appear in the order they occur in the file
  • Any line starting with +NEXT gets listed in the Next section at the end; tags don't work on this (could, but don't at the moment)
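The rules above can be sketched in a few lines of Ruby (the attached script is the real implementation; the regex and function names here are my own):

```ruby
# Sketch of the entry rules: pull the HH:MM-HH:MM span and optional #tag out
# of a line, convert the span to hours, and skip -Lunch/-Break when totalling.
ENTRY = /\A(\d{2}):(\d{2})-(\d{2}):(\d{2})\s+(.*?)(?:\s+#(.+))?\z/

# Returns [hours, text, tag] for an entry line, or nil if it isn't one
# (e.g. blank lines, +NEXT lines, decorative date lines).
def parse_entry(line)
  m = ENTRY.match(line.strip) or return nil
  hours = (m[3].to_i * 60 + m[4].to_i - m[1].to_i * 60 - m[2].to_i) / 60.0
  [hours, m[5], m[6]]
end

# Total the worked hours in a block of log text, ignoring breaks.
def total_hours(text)
  text.lines.map { |l| parse_entry(l) }.compact
      .reject { |_, entry, _| entry.start_with?('-Lunch', '-Break') }
      .sum { |h, _, _| h }
end
```

Run against the two-day example above, `total_hours` gives the 15.25 hours reported on stderr.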

It is pretty primitive, but it does the job for me.

Installation: there isn't any, really. It works from the command line and needs Ruby. The licence? BSD, I guess.

BBC iPlayer Ruby code

I threw together some code for querying and parsing the BBC iPlayer search pages and emailing the results to you. You configure it by putting the names of the programmes you want to look out for into config.yaml, along with your email details, e.g.

  - mighty boosh
  - lead balloon
  - never better

  pass: password
  auth_type: login

Copy the sample config.yaml.dist to config.yaml in the same directory and edit.

I run the command line script via cron once a day by calling the cli.rb script with an --email switch, e.g. with the following line in crontab:

0 21 * * * /usr/bin/ruby /home/ell/dev/iplayer/cli.rb --email

You could as easily run it from a Windows scheduled task.

Dependencies are:

  • hpricot
  • rack

What it does is request the iPlayer search page with each search term, one after the other. If there are multiple pages of results, it fetches each of those too, aggregating the results. It will then email you a list of links to the programmes on the iPlayer site. One thing it does which the iPlayer search page doesn't do is sort the matching results by how long is left for you to watch them: the ones with the least amount of time left are at the top.
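The sort is simple enough to sketch (hash keys and function names here are hypothetical, not the script's actual structures): the availability text on the search page reads like "4 days left", so pull the number out and sort ascending, with unknown availability last.

```ruby
# Extract a sortable number of days from an availability string like
# "4 days left"; strings with no number sort to the end.
def days_left(availability)
  n = availability[/\d+/]
  n ? n.to_i : Float::INFINITY
end

# Order search results so the programmes about to expire come first.
def sort_by_expiry(results)
  results.sort_by { |r| days_left(r[:availability]) }
end
```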

You can also run it as a local web server on port 3334 with:

ruby server.rb

which is then accessible at http://localhost:3334. Nothing fancy: just an HTML page with the search results in it, using the same config as the command-line client. You can also call the page with extra search parameters to perform custom one-off searches.


It's not a serious project, just a convenience for me. GPL licence.

Last.fm Drupal module - now works with Drupal 5!

Another update: I noticed today that this module had disabled my site. I'm not sure why, but I think it may be because Last.fm wasn't responding. I'm still using it, but if you have it on your site and everything goes bad, it might be because Last.fm isn't available. I manually disabled it in the system table for my Drupal install and my site came back to life, so I know it was this module that caused the problem. I need to do some more debugging before it's suitable for production, so use it at your own risk, or exclude the block from the admin page so at least you can disable the module if it causes problems.

Update: I've just updated this slightly, so that the cached track listing is used if your recent tracks list from Last.fm is blank. This avoids getting an empty listing if you haven't been listening to music for a while. I also updated it a second time to prevent errors occurring if you get broken XML back from Last.fm (which happened to me once, and caused my whole site to crash).

I've had a couple of requests for my Last.fm Drupal module, in response to a previous blog post. This was also a good opportunity to learn how to write modules for Drupal 5 and learn the new forms API, which I've now done.

The new version is attached to the bottom of this post. It requires Drupal 5 and PHP 5 with the DOM extension enabled (this is used for parsing the XML feed from Last.fm). It won't check whether the DOM extension is enabled (it's not that clever yet), so it could cause issues (like your site breaking) if you try to install it on a system without PHP 5 and/or the DOM extension.

Install it as a standard module (drop it into the modules directory, then enable it in Administer > Site building > Modules). I've integrated it with the admin system (under Administer > Site configuration > Last.fm module settings), so you'll need to configure it there before it works. The settings required are:

  • Your Last.fm username.
  • The number of tracks you want to show (1-10).
  • The period for which you want to cache the list (5 minutes to 1 hour), or no caching. The listing is cached in your Drupal root directory (under the filename lastfm_cached.html) for efficiency. In cases where Last.fm doesn't respond quickly enough, you will get an error message (rather than it just leaving your site hanging).
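The caching behaviour works roughly like this sketch (in Ruby for brevity, though the module itself is PHP; the function name, fetcher, and TTL are placeholders): serve the cached listing while it's fresh, try Last.fm when it's stale, and fall back to the stale cache rather than an empty listing if the fetch fails or comes back blank.

```ruby
# Sketch of cache-with-fallback: +fetcher+ stands in for "fetch and parse
# the Last.fm XML feed", and may raise or return an empty string.
def recent_tracks(cache_file, ttl_seconds, fetcher)
  if File.exist?(cache_file) &&
     Time.now - File.mtime(cache_file) < ttl_seconds
    return File.read(cache_file)  # cache still fresh: don't touch Last.fm
  end
  fresh = begin
    fetcher.call
  rescue StandardError
    nil                           # treat a failed fetch like a blank reply
  end
  if fresh.nil? || fresh.empty?
    # Fall back to the stale cache so listeners don't see an empty block.
    File.exist?(cache_file) ? File.read(cache_file) : ''
  else
    File.write(cache_file, fresh) # refresh the cache for next time
    fresh
  end
end
```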

Then turn on the block with the title Recent tracks in the block admin. pages. And away you go! As proof it actually works, see the left-hand side of this site, where it's in action.

AxleGrease (aka ROROX)

This is a Ruby on Rails "add on" for XAMPP, including Ruby, Ruby on Rails and dependencies, plus some other goodies. It is maintained on RubyForge.


This is a Ruby library for working with Amazon S3. It is hosted on RubyForge.

My code for exporting Mozilla/Firefox bookmarks is now hosted on SourceForge.

Music manager

This is a relatively simple Python script for managing a mass of .ogg and .mp3 files. I developed it using Python 2.4.2, on Ubuntu "Breezy". You will need the python-pyvorbis, python-id3 and python2.4-id3lib packages (that's what they're called on Ubuntu, anyway; the Python libraries you need are ID3 and ogg.vorbis).

Features:
  • Rename files based on ID3 tags with weird characters removed, with optional space stripping and forcing to lowercase.
  • Clean ID3 tags on .ogg and .mp3 files: replaces weird and duplicated characters.
  • Organise files into folders using ID3 tag content. WARNING: this feature will remove empty directories from the start directory down. If you're worried, don't use the --output feature.

For usage instructions, run:

python --help

The script is released under the GNU General Public License (GPL). See below for a copy of this license.

