Category Archives: Other

Tarsnap offsite backups

Look at that.

root@halleck:~# tarsnap --list-archives -v | sort -k2 | head -n1
M2009-09        2009-09-01 05:24:01

Having been using the Tarsnap backup service for five years kind of deserves a blog post? Let me share a few of the reasons why I keep using Tarsnap for offsite backups, and why you too might want to.

  • Encryption happens locally, allowing one to worry less about the files being included.
  • Data deduplication on a block level, allowing for every kept backup to be a full snapshot.
  • Client uses basically the same command line options as regular tar, making Tarsnap familiar as well as scriptable.
  • While not open source, the Tarsnap client source code is fully available, coupled with a serious bug bounty program.
  • Server side, Tarsnap uses Amazon S3 for storage, which has a proven record for durability.
  • IPv6 support.

Then there are these potential issues, which may or may not matter to you.

  • Restoring files tends to be a bit slow. That limitation can partially be worked around by restoring separate folders in parallel.
  • You can only use the Tarsnap backup client together with the Tarsnap backup service. Backing up to your own infrastructure is not supported.
  • Payment can only be made ahead of time, and if your account runs out of money there is a relatively short window before your backups are deleted. Personally I can keep track of that, but it makes me hesitant to recommend Tarsnap in any work environment.
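Since the client mirrors tar's interface, basic usage can be sketched as follows. The archive name and paths are my own examples; the M2009-09 archive above hints at a date-based monthly naming scheme.

```shell
# Build a date-based archive name, mirroring the M2009-09 style above.
archive="M$(date +%Y-%m)"
echo "$archive"

# A backup run would then look like this (paths are hypothetical):
#   tarsnap -c -f "$archive" /etc /home
# ...and listing/restoring mirrors tar as well:
#   tarsnap --list-archives
#   tarsnap -x -f "$archive" etc/fstab
```

Thanks to the block-level deduplication, running the same create command next month only uploads the blocks that actually changed.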

For more info, see the pages Tarsnap design principles and Tarsnap general usage.

In case you start using Tarsnap, feel free to take a look at my tarsnap-helpers git repository. It contains a couple of monitoring scripts as well as some bash completions.

Easy IPv4+IPv6 Nagios monitoring using check_v46

Not feeling ready to give up on IPv4 quite yet? In that case you most likely want your Nagios to probe your services on their IPv4 as well as their IPv6 addresses.

Looking into how to handle that duplication in a sane manner, I stumbled over the rather convenient check_v46 plugin wrapper. Assuming the actual check being run provides the -4/-6 options, check_v46 can automatically, based on a hostname lookup, test using IPv4 and/or IPv6, and then return the worst result. See below for a trivial example.

define command{
       command_name    dual_check_http
       command_line    /usr/local/nagios/check_v46 -H '$HOSTNAME$' /usr/lib/nagios/plugins/check_http
       }

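For completeness, a service definition using that command could look something like the following. The host_name and the generic-service template are made-up examples, not taken from my actual configuration.

```
define service{
       use                  generic-service
       host_name            www
       service_description  HTTP v4+v6
       check_command        dual_check_http
       }
```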
Do note that there is also the option of manually feeding check_v46 IPv4 and IPv6 addresses. See the plugin --help output for the actual details. Also note that the check_v46 wrapper does not appear to work with the Nagios embedded Perl.

Of course, a more perfect solution probably requires Nagios itself to be more IPv4 vs IPv6 aware. For example, in the case that a host (or a datacenter) temporarily becomes unavailable over IPv6, it might then be more helpful if the service checks focused primarily on the IPv4 results, instead of either going full ballistic or completely silent. Yet, as long as good enough is good enough, the check_v46 wrapper is definitely an easy win.

Fully using apt-get download

Occasionally I need to download a Debian package or two. While I could find a download link on the web, I really do prefer using apt-get download. In addition to the general pleasantness of using a command line tool, the main benefit really is that apt will automatically verify checksums and gpg signatures.

For me the most typical usage scenario is that I want to download a Debian package from a different release than the one I happen to run on my workstation. Instead of putting additional entries in /etc/apt/sources.list, and hence having to deal with apt pinning as well as making my regular apt-get update runs slower, I find it much more convenient to set up a separate apt environment.

First there is the basic directory structure.

$ mkdir -p ~/.cache/apt/{cache,lists}
$ mkdir -p ~/.config/apt/{apt.conf.d,preferences.d,trusted.gpg.d}
$ touch ~/.cache/apt/status
$ ln -s /usr/share/keyrings/debian-archive-keyring.gpg ~/.config/apt/trusted.gpg.d/
$ ln -s /usr/share/keyrings/ubuntu-archive-keyring.gpg ~/.config/apt/trusted.gpg.d/

(For an Ubuntu system the /usr/share/keyrings/debian-archive-keyring.gpg keyring is provided by the debian-archive-keyring package.)

Then there is the creation of the files ~/.config/apt/downloader.conf and ~/.config/apt/sources.list. They should contain something like the following.

## ~/.config/apt/downloader.conf
Dir::Cache "/home/USERNAME/.cache/apt/cache";
Dir::Etc "/home/USERNAME/.config/apt";
Dir::State::Lists "/home/USERNAME/.cache/apt/lists";
Dir::State::status "/home/USERNAME/.cache/apt/status";
## ~/.config/apt/sources.list
# Debian 6.0 (Squeeze)
deb squeeze main contrib non-free
deb squeeze-updates main non-free
deb squeeze/updates main contrib non-free

# Debian 6.0 (Squeeze) Backports
deb squeeze-backports main contrib non-free

# Debian 7.0 (Wheezy)
deb wheezy main
deb wheezy/updates main

# Debian Unstable (Sid)
deb sid main

# Ubuntu 12.04 (Precise)
deb precise main restricted universe multiverse
deb precise-updates main restricted universe multiverse
deb precise-security main restricted universe multiverse

# Ubuntu 12.10 (Quantal)
deb quantal main restricted universe multiverse
deb quantal-updates main restricted universe multiverse
deb quantal-security main restricted universe multiverse
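Rather than hard-coding /home/USERNAME, the downloader.conf can also be generated for the current user. This is just a convenience sketch of mine, not part of the original setup.

```shell
# Write ~/.config/apt/downloader.conf using $HOME instead of a
# hard-coded /home/USERNAME path.
mkdir -p ~/.config/apt
cat > ~/.config/apt/downloader.conf <<EOF
Dir::Cache "$HOME/.cache/apt/cache";
Dir::Etc "$HOME/.config/apt";
Dir::State::Lists "$HOME/.cache/apt/lists";
Dir::State::status "$HOME/.cache/apt/status";
EOF
grep Dir::Etc ~/.config/apt/downloader.conf
```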

Given the setup just described, apt-get download can now fetch packages from any release/codename defined in ~/.config/apt/sources.list.

$ APT_CONFIG=~/.config/apt/downloader.conf apt-get update
$ APT_CONFIG=~/.config/apt/downloader.conf apt-get download git/squeeze-backports
Get:1 Downloading git 1: [6557 kB]
Fetched 6557 kB in 2s (2512 kB/s)
$ APT_CONFIG=~/.config/apt/downloader.conf apt-get download git/precise
Get:1 Downloading git 1: [6087 kB]
Fetched 6087 kB in 3s (1525 kB/s)

Do note that apt-get download was introduced in apt 0.8.11. For Debian that translates into Wheezy (7.0), and for Ubuntu that would be as of Natty (11.04). The main difference between apt-get download and apt-get --download-only install is that the latter also does dependency resolution.
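To avoid retyping the APT_CONFIG prefix, a small wrapper function can be defined. The aptdl name is my own invention.

```shell
# Run any apt-get subcommand against the separate download-only
# apt environment configured above.
aptdl() {
    APT_CONFIG="$HOME/.config/apt/downloader.conf" apt-get "$@"
}

# Usage:
#   aptdl update
#   aptdl download git/squeeze-backports
```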

My bastardized Masterless Puppet

I am currently using Puppet to control my laptop as well as my two VPS nodes. That is not exactly the scale where I feel the need to have a puppet master running. Especially not since I am not overly keen on the idea of giving an external machine control over my laptop.

That being said, I still want some central location from where my nodes can fetch the latest recipes, allowing me the freedom to push updated recipes even if a node doesn't happen to be online at the time. I just don't want to spend any actual resources on this central location, nor trust it more than necessary.

At first my recipes didn’t contain any secrets and I got away with pulling updated recipes from a (public) github repository. The only overhead was the need to have my puppet cron script verify that HEAD contained a valid gpg signed tag.

Now my puppet recipes do depend on secrets. These should be available neither to the central location nor to the wrong node. That brings us to my current homegrown, slightly bastardized, solution.

The current central location for my puppet recipes is a cheap web host. To it I am uploading gpg encrypted tarballs. These tarballs are individually generated and encrypted with each node's own gpg key. For further details, see the included Makefile below.

default:
	apt-get moo

locally: manifests/$(shell facter hostname).pp
	puppet apply --confdir . --ssldir /etc/puppet/ssl ./manifests/$(shell facter hostname).pp

backup:
	tarsnap --configfile ./.tarsnaprc -c -f "$(shell date +%s)" .

manifests/%.pp: manifests/ manifests/
	cat $^ > $@

validate:
	find -regextype posix-egrep -regex ".+\.(pp|inc)" | xargs puppet parser validate
	find -name "*.erb" | xargs ./tools/

exported/%.tar: manifests/%.pp validate
	tar cf $@ manifests/$*.pp modules/ secrets/common/ secrets/$*/

exported/%.tar.gpg: exported/%.tar
	gpg --batch --yes --recipient puppet@$* --encrypt $<

exported/%.tar.gpg.sig: exported/%.tar.gpg
	gpg --batch --yes --detach-sign $<

upload-%: exported/%.tar.gpg exported/%.tar.gpg.sig
	scp -o BatchMode=yes exported/$*.tar.gpg.sig
	scp -o BatchMode=yes exported/$*.tar.gpg

hosts := halleck hawat leto
deploy: $(addprefix upload-, $(hosts))

.PHONY: default locally backup deploy validate

…and here is the download script running on the nodes. In addition to doing the gpg stuff the script also handles ETags for the http download.


#!/bin/sh
# Note: $etagfile, $netrcfile, $gpghead, $sigfile and $gpgball are
# expected to be defined before this point.

tarball="$(facter hostname).tar"

bailout () {
    rm -rf "$workdir"
    [ -n "$2" ] && echo "$2"
    exit $1
}

umask 0027

if [ -f "$etagfile" ]; then
    curretag=$(head -n1 "$etagfile" | grep -Ei "^[0-9a-f-]+$")
fi

workdir=$(mktemp --directory)
cd "$workdir" || exit 1

curl --silent --show-error \
    --netrc-file "$netrcfile" \
    --header "If-None-Match: \"$curretag\"" \
    --dump-header "$gpghead" --remote-name 

if grep -Eq "^HTTP/1.1 304" "$gpghead"; then
    bailout 0
elif grep -Eq "^HTTP/1.1 200" "$gpghead"; then
    newetag=$(sed -nre "s/^ETag: \"([0-9a-f-]+)\"\s*$/\1/pi" "$gpghead")
    [ -n "$newetag" ] && echo "$newetag" > "$etagfile"
else
    bailout 0 "Failed to get expected HTTP response."
fi

curl --silent --show-error \
    --netrc-file "$netrcfile" --remote-name 

gpgv --keyring /usr/local/etc/puppet/gnupg/trustedkeys.gpg "$sigfile" 2> /dev/null
if [ $? -ne 0 ]; then
    bailout 0 "Signature verification failed."
fi

export GNUPGHOME=/usr/local/etc/puppet/gnupg
gpg --quiet --batch "$gpgball" 2> /dev/null
if [ $? -ne 0 ]; then
    bailout 0 "Decryption failed."
fi

tar --no-same-owner --no-same-permissions -xf "$tarball"
if [ $? -ne 0 ]; then
    bailout 0 "tar extract failed."
fi

rsync --archive --delete --chmod=o-rxw,g-w \
    manifests modules secrets /usr/local/etc/puppet/

if [ $? -ne 0 ]; then
    echo beef > "$etagfile"
    bailout 1 "rsync update failed."
fi

rm -rf "$workdir"
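On the nodes, a script like the one above would then be driven from cron, followed by a local puppet apply run. The file name, script path and schedule below are made up for the example.

```
# /etc/cron.d/puppet-pull (hypothetical name and schedule)
17 * * * * root /usr/local/sbin/puppet-fetch && puppet apply /usr/local/etc/puppet/manifests/$(hostname).pp
```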

Of course, this approach involves a bit more work when setting up Puppet on a new node. So while I feel that it is a good fit for my current situation, it isn't anything I would use in a larger environment. Also, with a larger number of nodes, puppet master features such as reporting and storeconfigs become potentially more valuable.

For some reason I decided it was a good idea to register the domain. Currently it is only used to serve a static html page, proclaiming the following message.

There’s no place like ::1

Any ideas on other, possibly slightly more creative, ways to use the domain name?

OpenSSH 5.7, SFTP and hard links

OpenSSH 5.7 just got released. You can read the full release announcement on the OpenSSH website. Personally I especially appreciate the following improvement to their SFTP stack.

sftp(1)/sftp-server(8): add a protocol extension to support a hard link operation. It is available through the “ln” command in the client. The old “ln” behaviour of creating a symlink is available using its “-s” option or through the preexisting “symlink” command

Being able to handle hard links definitely makes SFTP even more useful as a remote filesystem.
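In an interactive session (with 5.7 on both client and server) the new behaviour looks something like this; the file names are invented for the example.

```
sftp> ln file.txt hardlink.txt       # new: creates a hard link
sftp> ln -s file.txt symlink.txt     # old behaviour, now behind -s
sftp> symlink file.txt symlink2.txt  # preexisting command, still works
```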

Server configuration and version control

One of the (few?) good habits I managed to pick up during 2010 was that I became serious about keeping server configuration under version control. While it might primarily have been something I was taught at work, it is definitely a practice I have adopted privately as well.

The most obvious benefit, and potentially the most valuable one, is the historic record version control provides. Yet, the part I appreciate most is how easy it becomes to compare new configuration against the current one; to verify that you made just those changes you intended to make. There is a certain comfort in being able to run a git diff before restarting a local service or before pushing new cluster configuration.

(Not that I do not appreciate having access to the configuration history. When being asked about something which happened a few months ago, those commit messages and diffs become awfully handy.)

For your local /etc, this is as good a time as any to take a peek at etckeeper.
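The diff-before-restart habit boils down to something like the following, here sketched against a throwaway directory rather than a real /etc; etckeeper automates the commit part for the real thing. The nginx.conf content is just an invented example.

```shell
# Simulate a config directory under git, edit it, and review the
# pending change before acting on it.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you

echo "worker_processes 2;" > nginx.conf
git add -A && git commit -qm "Initial import"

echo "worker_processes 4;" > nginx.conf
git diff --stat            # the comforting look before a restart
git add -A && git commit -qm "Raise nginx worker count"
git log --oneline
```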

Returning from FSCONS 2010

Back in Linköping, after enjoying yet another FSCONS conference. In case you wonder whether there is something you might want to ask me about, these are the talks I attended:

  • Kaizendo: Customizable schoolbooks
  • A Labour Process Perspective on the Development of Free Software
  • Are you weak in the middle?
  • The Inanna Project
  • Scalable application layer transfers
  • The Future of RepRap and Free and Open Hardware
  • Women in FLOSS
  • Future Transports
  • GNU Parallel
  • Ethics of Intellectual Monopolies
  • Who are the Free Users?
  • Bits and bytes: the importance of free software in the industry

That diversity in topics is by the way one of the things I really appreciate about going to FSCONS. Another nice thing is the people you get to meet. This year I had, among others, the pleasure of meeting up with a few members of the Danish Ubuntu LoCo.

Now on Skype

Against all previous principles, I have now begun using Skype. If you know me, feel free to add me to your contact list. Just do not expect me to be online all the time.

Skype Name:

…and no, that principle I mentioned has nothing to do with free vs proprietary software. It is more about me not necessarily being a big fan of telephones.

Vacation summary, by flickr and twitter

Now back in Sweden, after my vacation to New York, Philadelphia and Washington DC. For starters I have put a few photos online, in my flickr collection USA Vacation ’10.

Then there are the tweets I wrote (@andol). While incredibly incomplete, they do provide some kind of summary.

  1. Now in New York City.
  2. “My exit music, please.”
  3. Highlight of the day: Eating lobster roll in the shadow of the Brooklyn bridge, while admiring the Manhattan skyline.
  4. Definitely think someone ought to open a Korean restaurant in Linköping.
  5. Best positive surprise so far: The Bitter End, in Greenwich Village –
  6. Feels a bit odd that I only have to pay about ten dollars to have someone else do my laundry. No, not complaining.
  7. Breathtaking beauty: New York City, by night, from Top of the Rock.
  8. Seven bagels later; leaving New York for Philadelphia.
  9. Walking the streets of Philadelphia, appreciating the directional maps in every other street corner.
  10. Also, pretty sure that the Free Library of Philadelphia, at Logan Square, is the nicest library I have had the pleasure to visit so far.
  11. Philadelphia South Street, by night, almost feels kind of mediterranean.
  12. Leaving historic Philadelphia for present Washington DC.
  13. First night in DC: Evening walk in the National Mall, followed by an interesting Ethiopian meal in the Shaw neighborhood.
  14. Enjoyed the DC Ducks just as much as I enjoyed the Boston Ducks.
  15. Today turned into Smithsonian day. Visited the Museum of the American Indian as well as the Air and Space Museum.
  16. Today’s excursion to Theodore Roosevelt Island was a nice break from the city. The shadow provided by all trees wasn’t half bad either.
  17. Chafed feet -> silly walks -> loads of fun.
  18. DC beauty: The Lincoln Memorial, and its reflecting pool, during sunrise.
  19. Goodbye Washington DC. Hello eight hour flight.
  20. Back home in Linköping. Would like to thank my traveling companions @parwieslander and

(Anyone who wants the full story will have to buy me and/or Pär a suitable cold beverage.)