elliot's blog

Ubuntu Feisty on Dell Latitude D820

I recently got a Dell Latitude D820 at work, and spent about 10 days pulling my hair out using Windows XP on it. Once I'd run out of hair, and after clearing it with the technical support people and my bosses, I persuaded them to let me install Linux (though they insisted on me keeping a small Windows boot partition).

Anyway, I've been using it successfully for the last 6 weeks or so, and it's been working great. Today, I worked from home for the first time, and wanted to get a few other things working:

  • I wanted to be able to move the machine around the house (wireless).
  • I wanted to get the touchpad working properly (i.e. scroll when I slide my finger down the side, and turn off taps being translated to mouse clicks).
  • It would be nice to get suspend to RAM working, so I can close the lid when I'm having my lunch and save some electricity.

In summary, I managed to get this lot working fine, and would say this machine is a nice one for running Linux. I don't like it quite as much as my old Thinkpad Z61t (keyboard isn't quite so nice), though I've never managed to get the wireless working on that.


Wireless

The wireless hardware worked fine, but for some reason the network applet refused to let me connect. Eventually I got it working by hard-coding the IP address into the network config. and setting up the router to assign a static IP to the wireless card. There was no need to install any drivers: all of that just worked out of the box.


Touchpad

The touchpad was another matter, and took me about half an hour to figure out. In the end, after reading about 6 different articles on installing various flavours of Linux on this model of laptop, I came across an extremely useful article which enabled me to fix it definitively.

Eventually, I had to make two edits to /etc/X11/xorg.conf (as root). First, I added a line to the ServerLayout configuration element:

Section "ServerLayout"
        Identifier      "Default Layout"
        Screen          "Default Screen"
        InputDevice     "Generic Keyboard"
        InputDevice     "Configured Mouse"
        # HERE'S MY EDIT
        InputDevice     "Touchpad"      "AlwaysCore"
        InputDevice     "stylus"        "SendCoreEvents"
        InputDevice     "cursor"        "SendCoreEvents"
        InputDevice     "eraser"        "SendCoreEvents"
EndSection

Then I added a new section for the touchpad itself:

Section "InputDevice"
         Driver "synaptics"
         Identifier "Touchpad"
         Option "Device" "/dev/input/event3"
         Option "Protocol" "event"
         Option "LeftEdge" "130"
         Option "RightEdge" "840"
         Option "TopEdge" "130"
         Option "BottomEdge" "640"
         Option "FingerLow" "7"
         Option "FingerHigh" "8"
         Option "MaxTapTime" "0"
         Option "MaxTapMove" "0"
         Option "EmulateMidButtonTime" "75"
         Option "VertScrollDelta" "20"
         Option "HorizScrollDelta" "0"
         Option "UpDownScrolling" "1"
         Option "MinSpeed" "0.70"
         Option "MaxSpeed" "1.20"
         Option "AccelFactor" "0.080"
         Option "EdgeMotionMinSpeed" "200"
         Option "EdgeMotionMaxSpeed" "200"
         Option "SHMConfig" "on"
         Option "Emulate3Buttons" "on"
EndSection

The important bit is where I specify the Device as /dev/input/event3. Also note I set MaxTapTime to 0 to turn off taps acting as mouse clicks. You may have to muck around with the *Speed and AccelFactor settings to suit you: there is a GUI client called gsynaptics which can configure scroll speed and taps; and there is a more comprehensive command line application synclient which helps if you want to fiddle around with the settings in real time.
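To give a flavour of the real-time fiddling mentioned above, synclient talks to the running driver via the SHMConfig shared-memory hook enabled in the config above. A few illustrative commands (the values here are just examples, not recommendations):

```shell
# List every current setting and its value
synclient -l

# Turn off tap-to-click on the fly (same effect as the MaxTapTime edit above)
synclient MaxTapTime=0

# Slow the pointer down a touch
synclient MinSpeed=0.5 MaxSpeed=1.0
```

Changes made this way only last until X restarts; once you're happy with a value, copy it back into xorg.conf.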

Suspend to RAM

I found this article which provided the necessary settings. After I'd made the changes suggested, everything worked brilliantly: even the wireless came back from suspend. (Note that I use the proprietary NVidia drivers, so the instructions are specific to that.)

First off, I edited /etc/X11/xorg.conf, adding a couple of settings to the NVidia driver setup:

Section "Device"
        Identifier      "nVidia Corporation G72M [GeForce Go 7400]"
        Driver          "nvidia"
        BusID           "PCI:1:0:0"
        Option          "UseEDIDDpi"            "False"
        Option          "AddARGBVisuals"        "True"
        Option          "AddARGBGLXVisuals"     "True"
        Option          "NoLogo"                "True"
        Option          "NvAGP"                 "1"
EndSection

Then I edited /etc/default/acpi-support (in all cases, the default setting was true):


REST semantics

As part of my work at Talis, I'm currently working on a RESTful application (for library data). I've read the RESTful Web Services book, I know about RESTful Rails, I'm aware of the Atom Publishing Protocol, and I've done some work with the Amazon S3 interface. But so far I can't find a complete agreement on the exact semantics of POST vs. PUT. Which one maps onto resource creation, and which one onto update? The frameworks I mentioned above aren't in complete agreement:

  • In the RESTful Web Services book, it's complicated. POST is used to append new resources to an existing resource (creating child resources, effectively), and PUT is used to create new resources where the client knows the URI of the new resource (see S3 below): "The difference between PUT and POST is this: the client uses PUT when it's in charge of deciding which URI the new resource should have. The client uses POST when the server is in charge of deciding which URI the new resource should have."
  • APP specifies that POST = create, and PUT = update.
  • In Rails, POST = create, PUT = update.
  • In S3, PUT = create (there are no updates, just overwrites).

I think it boils down to:

  • Create
    • If you are placing a new resource into a known location identified by URI, use PUT (e.g. you are creating a new web page and know where you want to serve it once it is in place on the server).
    • If you are creating a new resource and don't know where it will end up (its URI) (e.g. creating a new blog entry), use POST.
  • Update
    • If you are modifying a resource by overwriting existing data (e.g. replacing a blog entry with a new one), use PUT.
    • If you are appending new data to an existing resource (e.g. adding a comment to a blog entry), use POST.
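The mapping above can be sketched with curl against a hypothetical blog API (the host and paths here are made up purely for illustration):

```shell
# PUT: the client chooses the URI -- create (or overwrite) the entry
# at a known location
curl -X PUT http://example.com/blog/entries/rest-semantics \
     -H "Content-Type: text/html" -d @entry.html

# POST: the server chooses the URI -- append a new entry under the
# collection; the server replies 201 Created with a Location header
# naming the new resource
curl -X POST http://example.com/blog/entries \
     -H "Content-Type: text/html" -d @entry.html

# POST to an existing resource: append subordinate data (a comment)
curl -X POST http://example.com/blog/entries/rest-semantics/comments \
     -d "Nice post"
```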

I went back to RFC 2616 for the formal distinction between POST and PUT, which is defined as follows:

"The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line." (trans.: if you create a new resource "inside" an existing URI, use POST: this applies if you are doing something like creating a new resource and you don't know what its URI will be).

"The PUT method requests that the enclosed entity be stored under the supplied Request-URI." (trans.: use PUT if you know the URI of the resource you are creating or modifying).

This ties in with the definition given in one of the comments by Sergio (thanks). What's interesting about this is that creating a resource whose URI you know in advance should use PUT; a modification will generally use PUT; and creating a resource whose URI you don't yet know should use POST. The RESTful Web Services book follows this line.

Any other thoughts?

Rake command completion in the Bash shell

I find the command completion tools in a shell indispensable (i.e. pressing Tab to complete a path or show the options for a command; this works for svn, for example), and I get quite annoyed when they don't work for a command I'm using. I found out today that it's pretty easy to add your own completions using the complete command (at least when using Bash under Linux).

So here, for example, is how to get Rake to auto-complete with the names of the tasks in your Rake file:

complete -W "$(rake -T | awk 'NR != 1 {print $2}')" rake

It's not flawless, but it's a damn sight better than having to run rake -T and scroll through a wordy list. NB this only works if you run the complete command in a directory where you have a Rake file to start with. There's probably some switch to complete which dynamically generates the completions when you try to use them: I need to investigate.
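Following up that "some switch to complete" hunch: Bash's complete -F runs a shell function at completion time, so the task list can be regenerated on every Tab press instead of being frozen at the moment you ran complete -W. A sketch, reusing the same rake -T extraction as above (the function name _rake_complete is my own):

```shell
# Regenerate the task list each time Tab is pressed, so it always
# matches the Rake file in the current directory
_rake_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # "rake -T" prints a header line, then "rake task_name  # description";
    # skip the header and take field 2 (the task name) from each line
    COMPREPLY=( $(compgen -W "$(rake -T 2>/dev/null | awk 'NR != 1 {print $2}')" -- "$cur") )
}
complete -F _rake_complete rake
```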

You could also try something like this to put up a list of names of hosts in your /etc/hosts file when you're using SSH:

complete -W "$(awk '$1 !~ /^#/ {print $2}' /etc/hosts)" ssh

You can just add these to your Bash profile to get them activated when starting a new shell, I imagine.

I am a winner!

Today I got home to find a cheque from the Premium Bonds people for £50! I'm a winner! I invested £1000 nearly a year ago, rather than keeping it in savings. I reckon so far I'm ahead of the interest I would have got if I'd put it in a bank.

Rails book - published at last

My (co-authored) Rails book, Ruby on Rails Enterprise Application Development: Plan, Program, Extend, is out. Finally. Feels like quite a relief for it to be out there, after about a year's work.

REST - get 'em started young

I was fiddling around with some designs for Spring REST applications this weekend, writing in a notebook. While I was writing, my daughter, Madeleine, came over to see what I was doing. She returned to the table, wrote on a piece of paper, then brought it back, proudly emblazoned with the word "PUT". Then she went backwards and forwards to the table, copying bits of what I was writing. This was the end result:

In case you can't read it, it says:


If I ever do a presentation on REST, I will definitely include this picture. Cute.

Koha - open source library management system

I spent a couple of hours last night and this morning installing Koha, an open source library management system. Unlike systems such as VuFind and Scriblio, which are just OPACs (i.e. web-based front-ends for bibliographic catalogue data), Koha is a full-fledged LMS (or ILS, as they are called in the US). It's written in Perl, and runs as two sets of Perl CGI scripts under a web server like Apache, with MySQL as the back-end.

Installation was, how shall I say it diplomatically, not straightforward. The points that tripped me up were:

  1. I recommend creating all the directories you need before you start. I installed everything into /opt/koha, creating an opac, intranet and log directory underneath. You'll be prompted to enter these locations during installation.
  2. The permissions on the installation directory have to be set so that Apache can write into them. Without this, you get cryptic HTTP 500 errors when you try to access the web interfaces.
  3. It requires a load of Perl CPAN libraries to be installed. I don't know much about Perl, so this was fairly new to me. CPAN doesn't have the simplest interface, and it took me quite a while to work out whether the modules were properly installed or not. Plus Koha needs a version of one Perl library (ZOOM, which implements the ZOOM API for information retrieval) which wouldn't install on my machine, as my Perl installation is too old (I think). I had to manually visit CPAN, download an older version of the module as a tarball, and install it manually, which was a pain. The old version still seems to work fine with all the other Perl modules used by Koha.
  4. As I use XAMPP, I have a non-standard MySQL socket location, which meant the installation crashed out at the point where the database gets set up. I tried to work out how to fix this, twiddling with SQL code inside the Perl scripts, but I couldn't get it to work. In the end I gave up and made a symlink at the location Koha expects (/var/run/mysql/mysql.sock) pointing to my XAMPP MySQL socket (/opt/lampp/var/mysql/mysql.sock). This did the trick.
  5. You have to remember to include the generated koha-httpd.conf file in your main Apache config.. In my case, I also needed to uncomment the two Listen directives to make Apache listen on the ports I had configured Koha to run on.
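The socket workaround in step 4 boils down to two commands (run as root; both paths are from my setup, so adjust them to match yours):

```shell
# Create the directory Koha looks in, then point the socket path it
# expects at the real XAMPP socket
mkdir -p /var/run/mysql
ln -s /opt/lampp/var/mysql/mysql.sock /var/run/mysql/mysql.sock
```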

The end result: a running LMS, with separate OPAC and admin. front ends. Rather confusingly, the admin. front end asks you for a card number to log in, when what you actually need to enter is the MySQL username used to access the database.

I have to say that I thought the installation process was over-complicated. Ironically, by creating an installer which generates the database and copies the files for you, they make it harder to fix errors when one step in the process fails. For example, I had to attempt the whole installation about 8 times, rolling back all the changes to directories and the MySQL database each time it failed (near the end). I'd much rather the steps of an installation process were discrete: create directories, copy the files, create the database and user, pipe in the SQL commands to set up the database, generate the httpd config., etc. That way, if step 6 of 6 goes wrong, I don't need to roll back the previous 5 steps.

I haven't got time to play with it now, but will post my thoughts when I do. So far, I'd say the admin. area is pretty slick:

And the OPAC simple but functional:

I need to do some config. to make it do interesting things, but at least I finally got it running.

Magic tricks revealed

I feel cheated. I was using StumbleUpon to find stuff and came across this article on aerogel. It's virtually transparent, and can support a fair bit of weight. It occurred to me that maybe David Blaine uses it in his performances to do his levitation, which has always befuddled me. I then started looking around for evidence of this, and discovered this PDF which explains his performances. As is always the case, reading about how mundane some of the tricks he performs really are is quite depressing.

Even more depressing was the explanation for how he does his levitation. A clue: have a look at Balducci levitation; here's a video of it being performed. When I looked again at David Blaine's levitation, it was obvious that he was doing a different trick, not the Balducci levitation; but the PDF explains exactly what was going on. I'll leave it up to you to read it if you want the illusion utterly shattered.

Drupal wins the Packt overall open source CMS award

Read about it here.

You knows it.

(I was on the judging panel, by the way.)

Post-scarcity economics

I came across the concept of agalmics the other day, while reading the rather good zenbullets. Agalmics is an approach to (or more properly, perhaps, an alternative to) economics which acknowledges that non-scarce goods will always be copied, whether legally or illegally: "With our information technologies copying data is the easiest thing in the world, so it would be fool-hardy to try to fight it." The answer, according to agalmics, is to acknowledge that rather than paying for and protecting data, we form a new economics based on "reputations, kudos, and respect, all of which are earned in a variety of non-Capitalist ways." We're already seeing this on the web, with blogs forming reputations and earning money. I have personally benefitted from this, as this blog has definitely enabled me to present myself effectively when looking for work, and exposed me to more opportunities than would otherwise have come my way. The end result of agalmics? "Information will be abundant and free."

In particular, I liked this analysis of recent trends in music, such as Radiohead offering up their album for what the listener thinks it's worth, and Prince giving his album away with a newspaper. Elsewhere he talks about Madonna signing a deal with a concert promotion company.

The corporate approach to this [the ease of digital copying] seems to be DRM (Digital Rights Management) which is perhaps the stupidest thing I have ever heard of. The idea is that the mp3s they sell are locked to a certain player. For example anything bought through the iTunes store is locked to only play through iTunes, limiting what a consumer can do with their purchase. What they are effectively doing is making sure that the mp3s they are charging money for are actually of less value to the consumer than the illegally downloaded, free equivalent. This can only encourage file sharing, rather than combat it.

(My emphasis). Hear, hear.

As an aside, this entry intrigued me (I've had a passing interest in the Situationist International and Guy DeBord), and led me to Kriegspiel, and then onto an online Kriegspiel. That was a happy hour or so.

Syndicate content