I've been annoyed recently by MacOS not automatically reconnecting to WiFi when our router is restarted. This is slightly annoying when it's my laptop, but it's really annoying when it's the Mac Mini, because I have to grab a keyboard and reconnect manually. I decided to automate this away, and after some searching I found networksetup. I wrapped it with a small tool that checks connectivity using ping and runs networksetup if ping fails. I run the tool from cron every 10 minutes, so even if a run fails it will be retried soon (though if it failed once it's likely to fail again if nothing has changed). To figure out which network interface to check, run networksetup -listnetworkserviceorder and look for the WiFi interface.
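The core of that approach can be sketched like this (a sketch under my own assumptions: the en0 interface name, the ping target, and using networksetup -setairportpower to bounce the interface may all differ from what reconnect-wifi actually does):

```shell
#!/bin/bash
# Sketch of a WiFi reconnection check.  "en0" and the ping target are
# assumptions; find your interface with:
#   networksetup -listnetworkserviceorder

# Return success if the network is reachable.
check_connectivity() {
  ping -c 3 -q "${PING_TARGET:-8.8.8.8}" > /dev/null 2>&1
}

# Bounce the WiFi interface to force a reconnection.
reconnect() {
  local interface="$1"
  networksetup -setairportpower "${interface}" off
  sleep 5
  networksetup -setairportpower "${interface}" on
}

main() {
  local interface="${1:-en0}"
  if ! check_connectivity; then
    reconnect "${interface}"
  fi
}
```

To use it as a script, call main "$@" at the end and run it from cron every 10 minutes, as described above.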

The code has grown more complex as I've worked around temporary blips in network connectivity; see the full code at https://github.com/tobinjt/bin/blob/master/reconnect-wifi

Posted Mon Apr 25 23:23:53 2016

I spend a lot of my time using Vim, Bash, and various CLI tools. Over the past 15 years I've spent a lot of time configuring these tools, and I've gotten so used to my configuration that it's really weird when I don't have it. I use 6 machines on a regular basis (some with multiple accounts), so I need a way of managing those configuration files (typically known as dotfiles) and keeping them in sync between machines.

Configuration files aren't much different to code, so the obvious way to maintain them is a Version Control System. I originally used CVS back in 2002 or so, then migrated to Subversion around 2007 (I think), and I've been using Git since 2010. The big difference between dotfiles and code is that dotfiles need to be in your home directory, not a subdirectory somewhere. One approach is to make your home directory into a VCS repository and configure the VCS to ignore everything you don't want checked in, but that requires more maintenance than I'm happy with, and it possibly leaks information (e.g. if .gitignore contains bank-details.txt). The other approach is keep the checked out repository somewhere else and link all the files into your home directory - this is the approach I've taken.

Start by creating a Git repository on a hosting service somewhere; I use https://github.com, but others have recommended https://bitbucket.org. Why use a hosted service? Because you want the repository to be easily available and you want someone else taking care of backups for you. I was very imaginative and named mine dotfiles :) Check it out somewhere; the tools I wrote assume it will be under ~/src and match *dotfiles*.

Next I needed a tool to link the files in ~/src/dotfiles into my home directory. I couldn't find one with a quick search back in 2010 (though now there appear to be many available), and I needed a project to learn Python after starting work at Google, so I wrote one: linkdirs. I'm not happy with that code, but it's good enough for now - the ugly Perl code it replaced was much worse. linkdirs is generic: it ignores various files associated with VCS systems, and Vim swap files, but you can use it to link directories for other reasons. It links from multiple source directories, creates destination directories as necessary, and hard links files from source to destination. If a destination file exists but isn't a hard link to the source file, it checks whether the contents are the same; if they are, it deletes the destination and creates the hard link, otherwise it displays the diffs. If anything fails or there are diffs it exits unsuccessfully.
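The per-file logic can be sketched in shell (a simplified illustration of the behaviour described above, not the real linkdirs; the function name is mine):

```shell
#!/bin/bash
# Sketch of linkdirs' per-file behaviour: hard link src into dest,
# tolerating an existing identical file and showing diffs otherwise.
link_file() {
  local src="$1" dest="$2"
  mkdir -p "$(dirname "${dest}")"
  if [ ! -e "${dest}" ]; then
    ln "${src}" "${dest}"
  elif [ "${src}" -ef "${dest}" ]; then
    :  # Already a hard link to the source; nothing to do.
  elif cmp -s "${src}" "${dest}"; then
    # Same contents but a different file: replace with a hard link.
    rm "${dest}" && ln "${src}" "${dest}"
  else
    # Different contents: show the diffs and fail.
    diff -u "${dest}" "${src}"
    return 1
  fi
}
```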

linkdirs is pretty low level, so I wrote a wrapper: dotfiles. It finds all directories matching *dotfiles* directly under ~/src (so I can have a standard repository on every computer plus a work repository on work computers), runs linkdirs with the right arguments, and does two more things:

  1. cat "${HOME}"/.ssh/config-??-* > "${HOME}/.ssh/config"

    ssh doesn't support multiple config files or includes, but I have standard configs and work configs in different repositories, so I keep the config snippets in separate files and combine them. This is done every time dotfiles runs - there's nothing clever to check if an update is necessary.

  2. vim help tags from different plugins (see below) need to be updated, and spell files need to be compiled. I wrote a simple vim function for each update (UpdateBundleHelptags and UpdateSpellFiles) and they're both run every time by dotfiles.

Both linkdirs and dotfiles support reporting unexpected files in the destination directory, making it relatively easy to find leftover files that I've deleted from the repository.
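Stripped of error handling, the wrapper amounts to something like this (a sketch; the linkdirs argument order and the *dotfiles* glob are my assumptions about the real tool's interface):

```shell
#!/bin/bash
# Sketch of the dotfiles wrapper: link every *dotfiles* repository
# under ~/src into $HOME, then rebuild the combined ssh config.
dotfiles() {
  local dir
  for dir in "${HOME}"/src/*dotfiles*; do
    linkdirs "${dir}" "${HOME}"
  done
  cat "${HOME}"/.ssh/config-??-* > "${HOME}/.ssh/config"
  # The real tool also runs vim to update helptags and spell files.
}
```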

I use about 20 Vim plugins, and I manage each plugin as a git submodule, allowing me to easily update each plugin over time. Because I add and update plugins quite infrequently I've written instructions for myself in my .vimrc. I use Vundle to manage Vim's runtimepath, but I add the repositories manually because Vundle doesn't support submodules.

When I push a change to Github I later need to sync those changes to every machine (normally the next time I use the machine, or when I notice that something is missing). This is simple but tedious, so I wrapped up the per-machine work in update-dotfiles-and-bin, which also pushes any local changes and reports any unexpected files.
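Per machine, that boils down to roughly this (a sketch under my own assumptions about repository locations, with error handling omitted):

```shell
#!/bin/bash
# Sketch of update-dotfiles-and-bin: pull and push each repository,
# then relink everything.  Assumes a "dotfiles" command is available.
update_dotfiles_and_bin() {
  local dir
  for dir in "${HOME}/bin" "${HOME}"/src/*dotfiles*; do
    ( cd "${dir}" && git pull && git push )
  done
  dotfiles
}
```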

A relatively rare action is setting up a new machine or a new user, but I also made that really simple: clone-dotfiles. Originally dotfiles was a shell function rather than a standalone tool, so clone-dotfiles was more complicated back then. When I use a new machine I clone my bin repository, run clone-dotfiles, and I'm ready to go.

All of these tools are generic except for clone-dotfiles and can be reused by anyone.

Posted Tue Dec 22 08:42:56 2015

I've been working on my wife's website recently, and I wanted to check that all the internal links and resources worked properly. I wasn't going to do this by hand, so I wrote a simple wrapper around wget. It deliberately downloads everything and saves it to make finding the location of broken links easier. Any request that wasn't answered with HTTP status 200 is displayed, e.g.:

--2014-11-17 22:07:14--  http://example.com/bar/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
--2014-11-17 22:07:16--  http://example.com/baz/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
--2014-11-17 22:07:18--  http://example.com/qwerty/
Reusing existing connection to example.com:80.
HTTP request sent, awaiting response... 404 Not Found
See /tmp/check-links-R4ZxQqw1Ak/wget.log and the contents of /tmp/check-links-R4ZxQqw1Ak for further investigation

That tells you which links are broken, and with that knowledge you're a simple grep -r /qwerty/ /tmp/check-links-R4ZxQqw1Ak away from finding the page containing the broken link.
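The log parsing at the heart of such a wrapper can be sketched like this (the awk approach and function name are mine; the real check-links script may work differently):

```shell
#!/bin/bash
# Read a wget log on stdin; print the request URL and status line for
# every response that wasn't HTTP 200.  A sketch of the idea only.
failed_requests() {
  awk '/^--/ { url = $0 }
       /awaiting response/ && !/ 200 / { print url; print $0 }'
}
```

Feed it the log from a recursive download, e.g. wget --recursive --page-requisites --output-file=wget.log http://example.com/ followed by failed_requests &lt; wget.log.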

It's not amazingly advanced, but it has been useful. I found a couple of 404s, and a large number of 301s that I could easily fix to avoid one more round trip for people viewing the site.

Posted Mon Nov 17 22:18:15 2014

I needed to write a static web page in work recently, so I decided to use Markdown, because writing HTML is time-consuming and unproductive. I was writing a reasonably large page, so I wanted folding, which the syntax highlighting I've been using for years didn't support. I wrote some simple folding support to create nested folds at headers, and also reconfigured vim to recognise bulleted lists so that reformatting with gq doesn't destroy lists.

Save https://github.com/tobinjt/dotfiles/blob/master/.vim/plugin/markdown-folding.vim as ~/.vim/plugin/markdown-folding.vim - it will be automatically loaded every time you start vim, but it won't do anything by itself.

Add these lines to ~/.vimrc:

" Associate *.mdwn with markdown syntax.
autocmd BufRead,BufNewFile *.mdwn setlocal filetype=markdown
" Recognise bulleted lists starting with ^\*
autocmd FileType markdown setlocal formatoptions+=n formatlistpat=^\\*\\s*
" Interpret blockquotes as comments.
autocmd FileType markdown setlocal comments=n:>
" Configure folding to use the function defined earlier.
autocmd FileType markdown setlocal foldmethod=expr

Posted Sat Apr 6 01:43:39 2013

Note: this was originally a lot longer and more complex, but a later version of tmux show-environment supports formatting the output as shell commands to eval, so this is much easier now.

tmux is a tty multiplexer similar to screen, but with some really nice features. One of those features is updating environment variables when you reconnect to a session - the client sends the current values to the tmux server, and they can be retrieved with:

$ tmux show-environment -s
SSH_AUTH_SOCK=/tmp/ssh-lXpzMY3205/agent.3205; export SSH_AUTH_SOCK

Of course, tmux can't force other processes to update their environment. bash has a hook you can use to do it: PROMPT_COMMAND. If this variable is set to the name of a function, bash will run that function before displaying your prompt. Here's a function and supporting settings to update your environment:

function prompt_command() {
    if [ -n "${TMUX}" ]; then
        eval "$(tmux show-environment -s)"
    fi
}
PROMPT_COMMAND="prompt_command"

Posted Wed Jun 15 20:11:02 2011

For several years I had a Subversion repository named cs_misc, where I accumulated various pieces of code that didn't need a repository of their own. A year ago, I decided to switch to Git, and created a repository named cs-misc. As described in Importing Subversion repositories to Git, I migrated cs_misc from Subversion to Git, and now I wanted to merge the two repositories. Having used git remote and git push with Github, I figured I'd try a similar approach, and this worked:

cd ~/src/cs_misc
git remote add integrate ~/src/cs-misc
git pull integrate master
git remote rm integrate

Two things struck me about this:

  1. It was so easy and intuitive. OK, it wouldn't have been intuitive if I hadn't used git remote before, but that's a fairly basic Git operation.

  2. This didn't just import the current version of each file from cs-misc, it imported the entire history. I have no idea how to do that in Subversion, except for the obvious: check out each revision from repository A, and commit it to repository B. I'm not saying it can't be done with Subversion, and it may actually be easy; I'm saying that, after five years of using Subversion, I have no idea where to begin.

The more I use Git, the happier I am to be using it instead of Subversion.

Posted Thu Jun 17 18:00:23 2010

I'm migrating all my source code repositories from Subversion to Git. I tried git-svnimport, but it only works if your repository has the recommended layout of trunk, tags, and branches; unfortunately, a lot of mine don't. git-svn initially looked like overkill, but it worked quite well. Below is the simple shell script I used to import my repositories and push them to Github; I manually created each repository using Github's web interface, but it may be possible to script that too.


set -e

for repo in $( < "$HOME/repo-list" ); do
    echo "$repo"
    cd "$HOME/src"
    git svn clone svn+ssh://subversion.scss.tcd.ie/users/staff/tobinjt/src/svnroot/"$repo"
    cd "$repo"
    git remote add origin git@github.com:tobinjt/"$repo".git
    git push origin master
done

Posted Wed Jun 16 10:02:03 2010

About a month ago I wrote that I need a better LISP book. I gave up on the LISP book I had been reading, and started reading On LISP: Advanced Techniques for Common LISP by Paul Graham. I've read about one third of it, and understood most of it - I had trouble with some of the more difficult code, but I understood his explanations of LISP features and what the code was doing. I was impressed enough to get a copy of ANSI Common LISP, and I've read about one third of it in the last week. It's excellent - clear, concise, well structured; I highly recommend it. I've started solving Project Euler problems again, and I'm much happier with my code.

Posted Sun Mar 14 14:05:00 2010

If your website is available under more than one FQDN, standard SEO advice is to pick a canonical FQDN and redirect the others to it. You can see that in action on this website: clicking on http://johntobin.ie/blog/2010/03/Smarter_HTTP_redirects will redirect you to https://www.johntobin.ie/blog/2010/03/Smarter_HTTP_redirects/ (and won't interrupt you reading this article). The simplest way to do this in Apache is to configure a VirtualHost for johntobin.ie, and use a single RewriteRule:

RewriteRule ^(.*)$ https://www.johntobin.ie$1

You can improve this in two easy ways. Firstly, heed the SEO advice and turn that temporary redirect (302) into a permanent redirect (301), which browsers and (more importantly) search engines' crawlers are supposed to cache.

RewriteRule ^(.*)$ https://www.johntobin.ie$1 [L,R=301]

See http://en.wikipedia.org/wiki/HTTP_response_codes for a list of HTTP response codes.

The second change won't have any SEO benefits, but it will reduce the load on your web server slightly, and will also slightly speed up your readers' browsing experience. You may have noticed that when you click on a URL like http://www.example.org/directory, your browser will display http://www.example.org/directory/ (note the trailing / on the second URL). When your browser makes a HTTP request for a directory, but the request doesn't end with a /, the web server will redirect your browser to the same URL with a / appended. When you combine that with a redirection from example.org to www.example.org, your web browser will have to make three requests:

  1. http://example.org/directory, which redirects to http://www.example.org/directory

  2. http://www.example.org/directory, which redirects to http://www.example.org/directory/

  3. http://www.example.org/directory/, which succeeds

We can eliminate the second request and the second redirection by appending a / whenever a request is missing one.

Here's the Apache config snippet:

# Add a trailing / if a request for a directory is missing one.
# This avoids an extra redirection: instead of
#   http://johntobin.ie/blog -> https://www.johntobin.ie/blog ->
#   https://www.johntobin.ie/blog/
# we get
#   http://johntobin.ie/blog -> https://www.johntobin.ie/blog/

# If the request is for a directory . . .
RewriteCond %{REQUEST_FILENAME} -d
# . . . and the URL doesn't end with a / . . .
RewriteCond %{REQUEST_URI} !/$
# append a /, and fall through to the next RewriteRule.
RewriteRule ^(.*)$ $1/
# Redirect as before.
RewriteRule ^(.*)$ https://www.johntobin.ie$1 [L,R=301]

Posted Wed Mar 10 21:31:05 2010

I've been learning LISP from LISP, and solving problems from Project Euler in LISP. Once you solve a problem, you gain access to the forum thread about that problem. After I solved Project Euler 10, I read someone else's LISP solution; it was quite different to mine, and contained some LISP constructs I hadn't seen before, e.g.

(defun seq-list (min max)
  (loop for i from min to max collect i))

I'd have written that like so:

(defun seq-list (lower-bound upper-bound)
  (let
    (
      (current-number lower-bound)
      (result '())
    )
    (loop
      (when (> current-number upper-bound)
        (return result)
      )
      (setf result (append result (list current-number)))
      (setf current-number (1+ current-number))
    )
  )
)

That's 16 lines of code versus 3 lines of code. OK, I could knock at least 6 lines off mine by squishing closing parentheses onto earlier lines, but that's ignoring the real problem: his code is simple and clear, whereas my code is all tangled up in the mechanics of declaring local variables, looping, and updating the list. A programmer who didn't know LISP would probably understand his code, but wouldn't have a clue what mine is doing.

I didn't remember seeing syntax like that when reading the section on (loop) in my book, so I checked it out: it has nothing like that. There's also nothing about (collect) in the index. I need to learn from a book that covers all of LISP, so that I can reasonably expect to understand other people's code. I know that I'm writing baby-LISP (cute and helpless) at the moment, but I want to progress on to child-LISP (enthusiastic and energetic), teenage-LISP (angsty and rebellious), and finally adult-LISP (uh, serious and . . . my analogy has run out of steam). I don't think there's any point in learning from an incomplete textbook, because later I'll need to start at the beginning of another textbook anyway. I'm putting Project Euler on hold until I find a better book; I might even redo some of the problems I've already solved.

Posted Tue Feb 16 19:20:31 2010