I first thought about learning LISP when I was still an undergrad, but I was stymied by Real Life and a lack of material to learn from. Shortly before I submitted my MSc thesis I picked up two LISP books - LISP and On LISP: Advanced Techniques for Common LISP - but my MSc was taking up all my time, so I put them on a shelf and forgot about them. About a month ago, I read Recursive Functions of Symbolic Expressions and their Computation by Machine (Part I), the original paper about LISP. It's very clearly written, and explains the design of LISP so well (in only 34 pages) that someone could make a reasonable attempt at implementing LISP based solely on reading it. Inspired by the paper, I dug out my books and started learning LISP; I've now reached a point where the solutions to some exercises are interesting enough to post.

Problem 5-3: Now write a pair of procedures KEEP-FIRST-N-CLEVERLY and KEEP-FIRST-N-CLEVERLY-AUX, that together make a list of the first n elements in a list. Be sure that KEEP-FIRST-N-CLEVERLY-AUX is tail recursive.

My solution:

(defun keep-first-n-cleverly (n alist)
  (keep-first-n-cleverly-aux n alist nil))

(defun keep-first-n-cleverly-aux (n alist newlist)
  (if (zerop n)
      newlist
      (keep-first-n-cleverly-aux (- n 1)
                                 (rest alist)
                                 (append newlist (list (first alist))))))

I like tail recursion: lots of problems are simpler to solve recursively, and knowing that a tail recursive call will be optimised to a goto satisfies the part of my mind that thinks "What if my function is run on a list with 1000 elements? Would I be better writing it iteratively, so that it doesn't run out of stack space?".

Problem 5-9: Define SQUASH, a procedure that takes an expression as its argument and returns a non-nested list of all atoms found in the expression. Here is an example:

* (squash '(a (a (a (a b))) (((a b) b) b) b))
(A A A A B A B B B B)

Essentially, this procedure explores the fringe of the tree represented by the list given as its argument, and returns a list of all the leaves.

My solution:

(defun squash (alist)
  (cond ((null alist) nil)
        ((atom alist) (list alist))
        (t (append
             (squash (first alist))
             (squash (rest alist))))))

Problem 5-12: The version of FIBONACCI we have already exhibited is inefficient beyond comparison. Many computations are repeated. Write a version with optional parameters that does not have this flaw. Think of working forward from the first month rather than backward from the nth month.

My solution:

(defun fib (n &optional (count 2) (fibn-2 0) (fibn-1 1))
  (case n
    (0 0)
    (1 1)
    (otherwise
      (if (equal n count)
          (+ fibn-2 fibn-1)
          (fib n (+ count 1) fibn-1 (+ fibn-2 fibn-1))))))

The point of this exercise was to use optional parameters; if I were writing fib() for real, I would use an auxiliary procedure, like this:

(defun fib (n)
  (case n
    (0 0)
    (1 1)
    (otherwise (fib-aux n 2 0 1))))

(defun fib-aux (n num-calculated fibn-2 fibn-1)
  (if (equal n num-calculated)
      (+ fibn-2 fibn-1)
      (fib-aux n (+ num-calculated 1) fibn-1 (+ fibn-2 fibn-1))))

My first inclination when writing a Fibonacci function is to use memoization; if I were writing it in Perl I would use the standard module Memoize, whose documentation presents fibonacci as an example. I don't know yet how hard memoization would be in LISP, but I expect that closures should make it easy enough.
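Out of curiosity I sketched what that might look like. This is my own guess at closure-based memoization in LISP - it's not from the book or the Memoize documentation:

```lisp
;; A guess at closure-based memoization: MEMO-FIB closes over a private
;; hash table that caches every result it computes.
(let ((cache (make-hash-table)))
  (defun memo-fib (n)
    (or (gethash n cache)
        (setf (gethash n cache)
              (if (< n 2)
                  n
                  (+ (memo-fib (- n 1)) (memo-fib (- n 2))))))))
```

Each value of n is computed at most once, so (memo-fib 100) is fast even though the recursion looks exponential.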

If your website is available under more than one FQDN, standard SEO advice is to pick a canonical FQDN and redirect the others to it. You can see that in action on this website: clicking on http://johntobin.ie/blog/2010/03/Smarter_HTTP_redirects will redirect you to https://www.johntobin.ie/blog/2010/03/Smarter_HTTP_redirects/ (and won't interrupt you reading this article). The simplest way to do this in Apache is to configure a VirtualHost for johntobin.ie, and use a single RewriteRule:

RewriteRule ^(.*)$ https://www.johntobin.ie$1

You can improve this in two easy ways. Firstly, heed the SEO advice and turn that temporary redirect (302) into a permanent redirect (301), which browsers and (more importantly) search engines' crawlers are supposed to cache.

RewriteRule ^(.*)$ https://www.johntobin.ie$1 [L,R=301]

See http://en.wikipedia.org/wiki/HTTP_response_codes for a list of HTTP response codes.

The second change won't have any SEO benefits, but it will reduce the load on your web server slightly, and will also slightly speed up your readers' browsing experience. You may have noticed that when you click on a URL like http://www.example.org/directory, your browser will display http://www.example.org/directory/ (note the trailing / on the second URL). When your browser makes an HTTP request for a directory, but the request doesn't end with a /, the web server will redirect your browser to the same URL with a / appended. When you combine that with a redirection from example.org to www.example.org, your web browser will have to make three requests:
  1. http://example.org/directory, which is redirected to http://www.example.org/directory

  2. http://www.example.org/directory, which is redirected to http://www.example.org/directory/

  3. http://www.example.org/directory/, which finally returns the page


We can eliminate the second request and the second redirection by appending a / whenever a request is missing one.

Here's the Apache config snippet:

# Add a trailing / if a request for a directory is missing one.
# This avoids an extra redirection: instead of
#   http://johntobin.ie/blog -> https://www.johntobin.ie/blog ->
#   https://www.johntobin.ie/blog/
# we get
#   http://johntobin.ie/blog -> https://www.johntobin.ie/blog/

# If the request is for a directory . . .
RewriteCond %{REQUEST_FILENAME} -d
# . . . and the URL doesn't end with a / . . .
RewriteCond %{REQUEST_URI} !/$
# append a /, and fall through to the next RewriteRule.
RewriteRule ^(.*)$ $1/
# Redirect as before.
RewriteRule ^(.*)$ https://www.johntobin.ie$1 [L,R=301]
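Putting the pieces together, the non-canonical VirtualHost might look something like this (a sketch, not my exact configuration; note that the directory check uses %{REQUEST_FILENAME}, so this VirtualHost's DocumentRoot needs to mirror the canonical site's directory layout):

```apache
<VirtualHost *:80>
    ServerName johntobin.ie
    RewriteEngine on
    # If the request is for a directory without a trailing /, append one,
    # then fall through to the next RewriteRule.
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteCond %{REQUEST_URI} !/$
    RewriteRule ^(.*)$ $1/
    # Permanently redirect everything to the canonical FQDN.
    RewriteRule ^(.*)$ https://www.johntobin.ie$1 [L,R=301]
</VirtualHost>
```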

I needed to write a static web page in work recently, so I decided to use Markdown, because writing HTML is time-consuming and unproductive. I was writing a reasonably large page, so I wanted folding, which the syntax highlighting I've been using for years didn't support. I wrote some simple folding support to create nested folds at headers, and also reconfigured vim to recognise bulleted lists so that reformatting with gq doesn't destroy lists.

Save https://github.com/tobinjt/dotfiles/blob/master/.vim/plugin/markdown-folding.vim as ~/.vim/plugin/markdown-folding.vim - it will be automatically loaded every time you start vim, but it won't do anything by itself.

Add these lines to ~/.vimrc:

" Associate *.mdwn with markdown syntax.
autocmd BufRead,BufNewFile *.mdwn setlocal filetype=markdown
" Recognise bulleted lists starting with ^\*
autocmd FileType markdown setlocal formatoptions+=n formatlistpat=^\\*\\s*
" Interpret blockquotes as comments.
autocmd FileType markdown setlocal comments=n:>
" Configure folding to use the function defined earlier.
autocmd FileType markdown setlocal foldmethod=expr

I'm migrating all my source code repositories from Subversion to Git. I tried git-svnimport, but it only works if your repository has the recommended layout of trunk, tags, and branches; unfortunately, a lot of mine don't. git-svn initially looked like overkill, but it worked quite well. Below is the simple shell script I used to import my repositories and push them to Github; I manually created each repository using Github's web interface, but it may be possible to script that too.


#!/bin/bash
set -e

for repo in $( < "$HOME/repo-list" ); do
    echo "$repo"
    cd "$HOME/src"
    git svn clone svn+ssh://subversion.scss.tcd.ie/users/staff/tobinjt/src/svnroot/"$repo"
    cd "$repo"
    git remote add origin git@github.com:tobinjt/"$repo".git
    git push origin master
done

I've been working on my wife's website recently, and I wanted to check that all the internal links and resources worked properly. I wasn't going to do this by hand, so I wrote a simple wrapper around wget. It deliberately downloads everything and saves it to make finding the location of broken links easier. Any request that wasn't answered with HTTP status 200 is displayed, e.g.:

--2014-11-17 22:07:14--  http://example.com/bar/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
--2014-11-17 22:07:16--  http://example.com/baz/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
--2014-11-17 22:07:18--  http://example.com/qwerty/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
See /tmp/check-links-R4ZxQqw1Ak/wget.log and the contents of /tmp/check-links-R4ZxQqw1Ak for further investigation

That tells you which links are broken, and with that knowledge you're a simple grep -r /qwerty/ /tmp/check-links-R4ZxQqw1Ak away from finding the page containing the broken link.

It's not amazingly advanced, but it has been useful. I found a couple of 404s, and a large number of 301s that I could easily fix to avoid one more round trip for people viewing the site.
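The filtering is simple enough to sketch. This is my reconstruction, not the actual wrapper: the real tool first mirrors the site with something like wget --recursive --page-requisites -o wget.log, and a function like the one below then picks the failures out of the log (the sample log and URLs here are fabricated):

```shell
#!/bin/bash
# Reconstruction of the log-filtering half of the wrapper, not the real tool.
# Given a wget log, print the URL of every request whose response wasn't 200.
non_200() {
  awk '/^--[0-9]/ { url = $NF }
       /awaiting response/ && !/ 200 / { print url }' "$1"
}

# Demonstrate on a fabricated log fragment shaped like wget's output.
log="$(mktemp)"
cat > "$log" <<'EOF'
--2014-11-17 22:07:12--  http://example.com/foo/
HTTP request sent, awaiting response... 200 OK
--2014-11-17 22:07:14--  http://example.com/bar/
HTTP request sent, awaiting response... 404 Not Found
EOF
non_200 "$log"
# Prints: http://example.com/bar/
```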

For several years I had a Subversion repository named cs_misc, where I accumulated various pieces of code that didn't need a repository of their own. A year ago, I decided to switch to Git, and created a repository named cs-misc. As described in Importing Subversion repositories to Git, I migrated cs_misc from Subversion to Git, and now I wanted to merge the two repositories. Having used git remote and git push with Github, I figured I'd try a similar approach, and this worked:

cd ~/src/cs_misc
git remote add integrate ~/src/cs-misc
git pull integrate master
git remote rm integrate

Two things struck me about this:

  1. It was so easy and intuitive. OK, it wouldn't have been intuitive if I hadn't used git remote before, but that's a fairly basic Git operation.

  2. This didn't just import the current version of each file from cs-misc, it imported the entire history. I have no idea how to do that in Subversion, except for the obvious: check out each revision from repository A, and commit it to repository B. I'm not saying it can't be done with Subversion, and it may actually be easy; I'm saying that, after five years of using Subversion, I have no idea where to begin.

The more I use Git, the happier I am to be using it instead of Subversion.

Last year I migrated the School of Computer Science and Statistics mail server from Solaris to Debian Linux. I made a lot of changes and improvements during the migration; one of the simplest was to keep /etc under version control. I assume most people are familiar with version control from writing code - if you're not, please spend a couple of hours reading about and experimenting with any modern VCS; you'll be thankful you did. I first set up a version-controlled /etc almost 10 years ago when I was Netsoc's sysadmin, but back then I was using CVS, and it was complicated by Solaris putting binaries and named pipes in /etc for backwards (and I really mean backwards) compatibility. This time I used etckeeper and git. One of the reasons for using git is that it's distributed: if we added a second mail server, I wanted to make synchronising /etc as simple as possible. It has proven to be very useful:

  • Being able to see the changes I made in previous days, especially during the initial setup, when a lot of services needed a lot of configuration.

  • Finding out when files last changed, so we can assure ourselves and users that we haven't changed anything that would cause the problems they're having, or find out that someone else made a change unbeknownst to us that could be responsible.

  • Avoiding directory listings like this:

    main.cf  main.cf.orig  main.cf.bak  main.cf.bak2  main.cf.2008-11-05  main.cf.works
Setup is explained in /usr/share/doc/etckeeper/README.gz but I'll summarise here:

cd /etc
etckeeper init
git status
# review the list of files to be added; files can be removed with
#   git rm --cached FILE
# files can be ignored by adding them to /etc/.gitignore
git commit -m "Initial import of /etc"

That's it - you now have a version-controlled /etc. Chances are that you'll need to ignore some files because they're generated from other files or modified by daemons, but that's easy to do. If you intend to clone the repository, please read the security advice in /usr/share/doc/etckeeper/README.gz to avoid any nasty surprises.
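To make the payoff concrete, here's a toy demonstration of the kind of question a version-controlled /etc answers. It runs in a throwaway directory rather than /etc, and the file name, setting, and commit messages are invented for illustration:

```shell
#!/bin/bash
# Toy demonstration in a throwaway directory (not /etc); the file name,
# setting, and commit messages are invented.
set -e
dir="$(mktemp -d)"
cd "$dir"
git init --quiet
git config user.email "etc-demo@example.com"
git config user.name "etckeeper demo"

echo "relayhost = smtp.example.com" > main.cf
git add main.cf
git commit --quiet -m "Initial import of /etc"
echo "relayhost = newsmtp.example.com" > main.cf
git commit --quiet -am "Point Postfix at the new relay"

# "When did main.cf last change, and what changed?"
git log -1 --format=%s -- main.cf
git diff 'HEAD^' -- main.cf | grep '^[+-]relayhost'
```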

I've been annoyed recently by MacOS not automatically reconnecting to WiFi when our router is restarted. This is slightly annoying when it's my laptop, but it's really annoying when it's the Mac Mini, because I have to grab a keyboard and manually reconnect. I decided that I should automate this away, and after some searching I found networksetup. I wrapped that with a small tool that checks connectivity using ping and runs networksetup if ping fails; I run the tool from cron every 10 minutes, so even if something fails it'll be retried pretty soon (though if it failed once it's likely to fail again if nothing has changed). You need to figure out the network interface to consider by running networksetup -listnetworkserviceorder and looking for the WiFi interface.

The code has grown more complex over time as I've worked around temporary blips in network connectivity; see the full code at https://github.com/tobinjt/bin/blob/master/reconnect-wifi
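The core of the idea is small enough to sketch. This is an illustration of the approach rather than the real tool; the service name "Wi-Fi" and the choice of ping target are assumptions you'd adjust for your machine:

```shell
#!/bin/bash
# Illustration of the approach only - not the real tool.  The service name
# "Wi-Fi" and the ping target are assumptions; find your service name with
# "networksetup -listnetworkserviceorder".
reconnect_if_down() {
  local host="$1"
  # If we can reach the host, there's nothing to do.
  if ping -c 3 "$host" > /dev/null 2>&1; then
    return 0
  fi
  # Bounce the WiFi service; macOS reconnects to known networks on power-up.
  networksetup -setnetworkserviceenabled "Wi-Fi" off
  sleep 5
  networksetup -setnetworkserviceenabled "Wi-Fi" on
}

# Run from cron every 10 minutes, e.g.:
#   */10 * * * * reconnect-wifi
# where reconnect-wifi would call: reconnect_if_down www.google.com
```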

I've been learning LISP from LISP, and solving problems from Project Euler in LISP. Once you solve a problem, you gain access to the forum thread about that problem. After I solved Project Euler 10, I read someone else's LISP solution; it was quite different to mine, and contained some LISP constructs I hadn't seen before, e.g.

(defun seq-list (min max)
  (loop for i from min to max
        collect i))

I'd have written that like so:

(defun seq-list (lower-bound upper-bound)
  (let
    (
      (current-number lower-bound)
      (result '())
    )

    (loop
      (when (> current-number upper-bound)
        (return result)
      )
      (setf result (append result (list current-number)))
      (setf current-number (1+ current-number))
    )
  )
)

That's 16 lines of code versus 3 lines of code. OK, I could knock at least 6 lines off mine by squishing closing parentheses onto earlier lines, but that's ignoring the real problem: his code is simple and clear, whereas my code is all tangled up in the mechanics of declaring local variables, looping, and updating the list. A programmer who didn't know LISP would probably understand his code, but wouldn't have a clue what mine is doing.

I didn't remember seeing syntax like that when reading the section on (loop) in my book, so I checked it out: it has nothing like that. There's also nothing about (collect) in the index. I need to learn from a book that covers all of LISP, so that I can reasonably expect to understand other people's code. I know that I'm writing baby-LISP (cute and helpless) at the moment, but I want to progress on to child-LISP (enthusiastic and energetic), teenage-LISP (angsty and rebellious), and finally adult-LISP (uh, serious and . . . my analogy has run out of steam). I don't think there's any point in learning from an incomplete textbook, because later I'll need to start at the beginning of another textbook anyway. I'm putting Project Euler on hold until I find a better book; I might even redo some of the problems I've already solved.

I spend a lot of my time using Vim, Bash, and various CLI tools. Over the past 15 years I've spent a lot of time configuring these tools, and I've gotten so used to my configuration that it's really weird when I don't have it. I use 6 machines on a regular basis (some with multiple accounts), so I need a way of managing those configuration files (typically known as dotfiles) and keeping them in sync between machines.

Configuration files aren't much different from code, so the obvious way to maintain them is a Version Control System. I originally used CVS back in 2002 or so, then migrated to Subversion around 2007 (I think), and I've been using Git since 2010. The big difference between dotfiles and code is that dotfiles need to be in your home directory, not a subdirectory somewhere. One approach is to make your home directory into a VCS repository and configure the VCS to ignore everything you don't want checked in, but that requires more maintenance than I'm happy with, and it possibly leaks information (e.g. if .gitignore contains bank-details.txt). The other approach is to keep the checked-out repository somewhere else and link all the files into your home directory - this is the approach I've taken.

Start by creating a Git repository on a hosting service somewhere; I use https://github.com, but others have recommended https://bitbucket.org. Why use a hosted service? Because you want the repository to be easily available and you want someone else taking care of backups for you. I was very imaginative and named mine dotfiles :) Check it out somewhere; the tools I wrote assume it will be under ~/src and match *dotfiles*.

Next, you need a tool to link the files in ~/src/dotfiles into your home directory. I couldn't find one with a quick search back in 2010 (though now there appear to be many available), and I needed a project to learn Python after starting work in Google, so I wrote one: linkdirs. I'm not happy with that code, but it's good enough for now - the ugly Perl code it replaced was much worse. linkdirs is generic: it ignores various files associated with VCS systems, plus Vim swap files, but you can use it for linking directories for other reasons. It links from multiple source directories, creates destination directories as necessary, and hard links files from source to destination. If a destination file exists but isn't a hard link to the source file, it checks whether the contents are the same; if they are, it deletes the destination and creates the hard link; otherwise it displays the diffs. If anything fails or there are diffs, it exits unsuccessfully.
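The core linking behaviour is simple enough to sketch in Python. This is my illustration of the idea described above, not the real linkdirs (which handles multiple source directories, ignore lists, and diff output); to keep the sketch short it returns the differing files rather than displaying diffs:

```python
"""Sketch of the linking behaviour described above - not the real linkdirs."""
import filecmp
import os


def linkdirs(source, dest):
    """Hard-link every file under source into place under dest.

    Returns a list of destination files whose contents differ from the
    source; the caller should inspect those by hand.
    """
    differing = []
    for dirpath, _, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        destdir = os.path.normpath(os.path.join(dest, rel))
        os.makedirs(destdir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dest_file = os.path.join(destdir, name)
            if os.path.exists(dest_file):
                if os.path.samefile(src_file, dest_file):
                    continue  # Already a hard link to the source.
                if filecmp.cmp(src_file, dest_file, shallow=False):
                    # Same contents: replace the copy with a hard link.
                    os.unlink(dest_file)
                else:
                    differing.append(dest_file)
                    continue
            os.link(src_file, dest_file)
    return differing
```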

linkdirs is pretty low level, so I wrote a wrapper: dotfiles. It finds all directories matching *dotfiles* directly under ~/src (so I can have a standard repository on every computer plus a work repository on work computers), runs linkdirs with the right arguments, and does two more things:

  1. cat "${HOME}"/.ssh/config-??-* > "${HOME}/.ssh/config"

    ssh doesn't support multiple config files or includes, but I have standard configs and work configs in different repositories, so I keep the config snippets in separate files and combine them. This is done every time dotfiles runs - there's nothing clever to check if an update is necessary.

  2. vim help tags from different plugins (see below) need to be updated, and spell files need to be compiled. I wrote a simple vim function for each update (UpdateBundleHelptags and UpdateSpellFiles) and they're both run every time by dotfiles.
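As an illustration of the snippet scheme (the file names, hosts, and options here are invented), the pieces being concatenated might look like:

```
# ~/.ssh/config-10-standard, from the standard dotfiles repository:
Host home
    HostName home.example.org
    User johntobin

# ~/.ssh/config-50-work, from the work dotfiles repository:
Host build
    HostName build.corp.example.com
    ForwardAgent yes
```

The shell expands config-??-* in sorted order, so the two-digit prefix controls where each snippet lands in the combined file.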

Both linkdirs and dotfiles support reporting unexpected files in the destination directory, making it relatively easy to find leftover copies of files that I've deleted from the repository.

I use about 20 Vim plugins, and I manage each plugin as a git submodule, allowing me to easily update each plugin over time. Because I add and update plugins quite infrequently I've written instructions for myself in my .vimrc. I use Vundle to manage Vim's runtimepath, but I add the repositories manually because Vundle doesn't support submodules.

When I push a change to Github I later need to sync those changes to every machine (normally the next time I use the machine, or when I notice that something is missing). This is simple but tedious, so I wrapped up the per-machine work in update-dotfiles-and-bin, which also pushes any local changes and reports any unexpected files.

A relatively rare action is setting up a new machine or a new user, but I also made that really simple: clone-dotfiles. Originally dotfiles was a shell function rather than a standalone tool, so clone-dotfiles was more complicated back then. When I use a new machine I clone my bin repository, run clone-dotfiles, and I'm ready to go.

All of these tools are generic except for clone-dotfiles and can be reused by anyone.