Mise-en-abîme ("placing into infinity" or "placing into the abyss", see Wikipedia) has always fascinated me. I suppose it started with the Quaker Oats man (who I'm sure I've mentioned here before):
Though I remember this image more vividly, and with reds, and I think from my childhood. Notice how he's holding a box with another Quaker Oats man just like him on it, and he's holding a box, ad infinitum.
The laughing cow is another food-related one (see http://lunettesrouges.blog.lemonde.fr/files/2007/10/mise-en-abyme.119332...).
Also popular in the visual arts (Dali's La Guerre, see http://www.ecriture-art.com/art/dalilaguerre.jpg).
And literature (the play within a play in Hamlet; the footnotes to a poem in Pale Fire, which actually constitute the narrative; etc.). And film (Synecdoche, New York is probably the best example, but it also happens in Adaptation and more recently in Inception: dreams within dreams, reflecting and influencing each other).
And obviously in nature and mathematics we have fractals. And in computer science recursive functions. And so on...
So, quite interesting, occasionally mind bending.
I wondered whether I could extend this idea to web servers: could a web server present a page; and on that page, a link which would start another web server and load a page from it; the latter page being embedded in the first page, and also presenting a link which would start another web server then load a page from it; ad infinitum...
So I wrote such a thing in Ruby. It's attached to this blog entry. Here's a screenshot:
It could carry on until the resources of the computer ran out (here I started 19 web servers). It uses jQuery to load the content from the next web server into an iframe inside the current page. You need rack, backports, and mongrel to run it.
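The mechanism can be sketched roughly like this. This is a hypothetical simplification, not the attached script (the real one uses Rack and mongrel); the port numbering, the /spawn path, and the element ids are my own illustrative choices:

```ruby
# Each server renders a page that can start the next server and embed
# its page in an iframe, ad infinitum (until ports or memory run out).
def page_for(port)
  next_port = port + 1
  <<-HTML
<html>
  <head><script src="jquery.js"></script></head>
  <body>
    <p>Web server on port #{port}</p>
    <!-- clicking asks this server to start the next one, then points
         the iframe at the new server's page -->
    <a href="#" onclick="$.get('/spawn', function() {
      $('#next').attr('src', 'http://localhost:#{next_port}/'); })">
      Start the next server
    </a>
    <iframe id="next" src="about:blank"></iframe>
  </body>
</html>
  HTML
end
```

Each page is self-similar: the embedded page is generated by exactly the same code, one port along.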
Just for fun.
Attached is my Ruby script for parsing log files I keep at work. I have to complete a weekly report, and this forms the basis of that; as I don't keep regular office hours (I flexi-work around child care), it also helps me keep track of the hours I've worked.
The basic principle of operation is as follows:
1. Run parse_work_log <file name> > summary.out on the file to produce a log of what I've done.
2. Paste the summary.out file into an email I send round.
The format of the file is deliberately simple to make it easy to maintain; there are a handful of formatting rules. An example is shown below.
**********
2009-12-21
09:30-12:00 Researching this and that #research
12:00-13:00 -Lunch
13:00-14:00 More research
14:00-18:00 Writing some application #coding
**********
2009-12-22
09:30-10:00 Admin #Admin
10:00-10:15 -Break
10:00-12:00 Found out something rather marvellous #Very important research
12:00-12:45 -Lunch
12:45-18:00 Writing another application #coding
**********
+NEXT Have lots of fun banging my head on a brick wall
+NEXT Reinstall my operating system
Which, when parsed, produces this on stderr:
*************************************
Worked 15.25 hours
And this on stdout:
*************************************
This week:
Admin:
* Admin
Very important research:
* Found out something rather marvellous
coding:
* Writing some application
* Writing another application
research:
* Researching this and that
*************************************
Next:
* Have lots of fun banging my head on a brick wall
* Reinstall my operating system
Notes on formatting:
- Each entry has the form HH:MM-HH:MM <...whitespace...> <text> <#optional tag>\n
- #<tag> creates a tagged entry which gets included in the report; it appears as a bullet point with the tag as its heading; any entries with the same tag get aggregated under a single heading; tags and entries appear in the order they occur in the file.
- An entry whose text starts with - (such as -Lunch) is a break: it is excluded from the worked-hours total and from the report.
- +NEXT <text> gets listed in the Next section at the end; tags don't work on this (they could, but don't at the moment).
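The parsing rules above can be sketched roughly as follows. This is a minimal illustration, not the attached script; the regex and the structure of the returned hash are my own assumptions:

```ruby
require 'time'

# Matches "HH:MM-HH:MM  text  #optional tag"
ENTRY = /\A(\d{2}:\d{2})-(\d{2}:\d{2})\s+(.+?)(?:\s+#(.+))?\z/

def parse_log(lines)
  hours = 0.0
  tags  = {}   # tag => list of entry texts, in file order
  nexts = []
  lines.each do |raw|
    line = raw.strip
    if line.start_with?('+NEXT')
      nexts << line.sub('+NEXT', '').strip
    elsif (m = ENTRY.match(line))
      from, to, text, tag = m.captures
      next if text.start_with?('-')   # breaks don't count towards hours
      hours += (Time.parse(to) - Time.parse(from)) / 3600.0
      (tags[tag] ||= []) << text if tag
    end
  end
  { hours: hours, tags: tags, next: nexts }
end
```

Run against the example file above, this yields 15.25 worked hours, the four tag headings, and the two Next items.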
It is pretty primitive, but it does the job for me.
Installation: there isn't any, really. It works from the command line and needs Ruby. The licence? BSD, I guess.
I threw together some code for querying and parsing the BBC iPlayer search pages and emailing the results to you. You configure it by putting the names of the programmes you want to look out for into config.yaml, along with your email details, e.g.
search_terms:
  - mighty boosh
  - lead balloon
  - never better
email:
  email_to: firstname.lastname@example.org
  email_from: email@example.com
  server: mail.example.com
  user: firstname.lastname@example.org
  pass: password
  auth_type: login
Copy the sample config.yaml.dist to config.yaml in the same directory and edit.
I run the command line script via cron once a day by calling the cli.rb script with an --email switch, e.g. with the following line in crontab:
0 21 * * * /usr/bin/ruby /home/ell/dev/iplayer/cli.rb --email
You could as easily run it from a Windows scheduled task.
What it does is request the iPlayer search page with each search term, one after the other. If there are multiple pages of results, it fetches each of those too, aggregating the results. It will then email you a list of links to the programmes on the iPlayer site. One thing it does which the iPlayer search page doesn't do is sort the matching results by how long is left for you to watch them: the ones with the least amount of time left are at the top.
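The sort described above amounts to ordering by expiry time. A hypothetical sketch follows; the Programme struct and its field names are illustrative assumptions, not the script's actual data structure:

```ruby
# A parsed search result: title, link, and when it stops being available.
Programme = Struct.new(:title, :url, :expires_at)

# Programmes with the least time left come first.
def sort_by_time_left(programmes, now = Time.now)
  programmes.sort_by { |p| p.expires_at - now }
end
```

So a programme expiring in an hour is listed above one expiring next week, which is the opposite of burying the urgent ones.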
You can also run it as a local web server on port 3334, which then becomes accessible at http://localhost:3334/.
Nothing fancy: just an HTML page with the search results in it, using the same config. as the command-line client. You can also call the page with extra search parameters to perform custom one-off searches.
It's not a serious project, just a convenience for me. GPL licence.
Another update: I noticed today that this module had disabled my site. I'm not sure why, but I think it may be because Last.fm wasn't responding. I'm still using it, but if you have it on your site and everything goes bad, it might be because Last.fm isn't available. I manually disabled it in the system table for my Drupal install and my site came back to life, so I know it was this module that caused the problem. I need to do some more debugging before it's suitable for production, so use at your own risk, or exclude the block from the admin page so at least you can disable the module if it causes problems.
Update: I've just updated this slightly, so that the cached track listing is used if your recent-tracks feed from Last.fm comes back blank. This avoids you getting an empty listing if you haven't been listening to music for a while. I also updated it a second time to prevent errors occurring if you get broken XML back from Last.fm (which happened to me once, and crashed my whole site).
I've had a couple of requests for my Last.fm Drupal module, in response to a previous blog post. This was also a good opportunity to learn how to write modules for Drupal 5 and learn the new forms API, which I've now done.
The new version is attached to the bottom of this post. It requires Drupal 5 and PHP 5 with the DOM extension enabled (this is used for parsing the XML feed from Last.fm). It won't check whether the DOM extension is enabled (it's not that clever yet), so it could cause issues (like your site breaking) if you try to install it on a system without PHP 5 and/or the DOM extension.
Install it as a standard module (drop into the modules directory, enable in Administer > Site building > Modules). I've integrated it with the admin. system (under Administer > Site configuration > Last.fm module settings), so you'll need to configure it there before it works. The settings required are:
Then turn on the block with the title Recent tracks in the block admin. pages. And away you go! As proof it actually works, see the left-hand side of this site, where it's in action.
My code for exporting Mozilla/Firefox bookmarks to del.icio.us is now hosted on SourceForge at http://sourceforge.net/projects/bkmrk2dlcs/.
This is a relatively simple Python script for managing a mass of .ogg and .mp3 files. I developed it using Python 2.4.2, on Ubuntu "Breezy". You will need the python-pyvorbis, python-id3 and python2.4-id3lib packages (that's what they're called on Ubuntu, anyway; the Python libraries you need are
For usage instructions, run:
python music_manager.py --help
The script is released under the GNU General Public License (GPL). See below for a copy of this licence.