Docker and Go Hello World

One of the most bandied about buzzwords at the moment has got to be Docker. So many people I’ve spoken to claim to be moving their application environments to it and some even have it running in production with varying degrees of success.

Fine in theory

The theory of isolating groups of related processes into containers which can then themselves be further orchestrated to build out an environment is very appealing. From my understanding you can package up everything an application needs, including its code and related software, into an image and then run any number of containers based on that image without necessarily having to consider where or how that happens. Very powerful, and broadly in line with how everything is run inside Google now.

In practice

Until recently, all I’d done apart from reading documentation and attending meetups was spin up and poke around with a few basic containers; I didn’t yet appreciate the practicalities of containerisation.

One thing I had noticed, though, was that some open-source Go projects I’d contributed minor changes to had a Dockerfile at the top level of the repository, like this one. My flatmate here in London had also managed to deploy some applications for his client using Docker, so I was determined to get something of my own running that I could refer back to later.

The Dockerfile

The basis of Docker is the Dockerfile, which defines how to build an image and gives you a small number of commands you can use to copy files onto said image, install packages, run applications, expose network ports and the like. Whatever is written to the filesystem as a consequence of these commands being run becomes part of your image.
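
To give a flavour, a minimal Dockerfile for a Go app might look something like this; treat it as a sketch rather than the exact file from any particular repo, and note it assumes the official golang base image:

FROM golang:1.4

# copy the source into the image and compile it
COPY . /go/src/app
WORKDIR /go/src/app
RUN go build -o app .

# the port the app will listen on
EXPOSE 8080

# run the compiled binary when a container starts
CMD ["./app"]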

Building an image

Once you’ve written your Dockerfile you can build an image from it by running this from the directory it’s contained in:

docker build -t hello-world .

This will create an image which you can then refer to as “hello-world” and which you should be able to see listed with this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
hello-world         latest              55f36149c050        2 hours ago         453.9 MB

Running a container

Now that we have an image we can start a container based on it with the following:

docker run -p 8080:8080 --rm hello-world

This will spin up a container, run the command specified with CMD in the Dockerfile and expose port 8080 from the container to the machine running Docker, which in my case is a VM managed by boot2docker. You can see the running application within the container like this:

$ docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                    NAMES
c37584b7a2cc        hello-world:latest   "./app"             13 seconds ago      Up 11 seconds       0.0.0.0:8080->8080/tcp   jolly_hopper

Boom! Containerised!

The end result

I’ve put together a Dockerfile which compiles and runs a simple web app written in Go; it can be found on GitHub. When run, the application can be interacted with as follows:

$ curl $(boot2docker ip 2>/dev/null):8080
Hello Go!
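
For reference, a web app of this shape needs very little Go. A minimal sketch, not necessarily the exact code in the repo:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// respond to every request with a simple greeting
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello Go!")
	})
	http.ListenAndServe(":8080", nil)
}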

Simple, but I think it illustrates the principles well.

Blogging on Ansible

A good friend of mine recently voiced the opinion that the day will soon be upon us when the traditional SysAdmin will no longer be relevant and all operations activities will be some variety of DevOps.

Now, I’ve been managing my own infrastructure for some years, but every package has been manually installed and every configuration file modified by hand, resulting in snowflake servers which are nigh-on impossible to recreate exactly and which have occasionally cost me sleep. I’ve known for a long time that I could be doing better.

Configuration Mismanagement

In recent years all the clients I’ve contracted with have used either Chef or Puppet to automate the management of their infrastructure.

The funny thing has been that the last couple have both used Puppet in conjunction with some bespoke, cobbled-together system to upload the modules, manifests etc. to the target machines and run Puppet from there, bypassing the usual master/agent setup. Seeing this once was unusual, but seeing it twice in a row struck me as very odd indeed and made me think that there was perhaps something overcomplicated about the “usual” approach.

Ansible Simplicity

With Ansible there is no client/server model: all you need is a repository of playbooks and SSH access to an inventory of machines, and the rest is as simple as running ansible-playbook. Very straightforward.
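
For example, a playbook as minimal as this (the host group and task here are hypothetical) is enough to get going:

# site.yml
- hosts: webservers
  tasks:
    - name: ensure nginx is installed
      apt: name=nginx state=present

Given an inventory file listing the machines in the webservers group, applying it is a one-liner:

$ ansible-playbook -i hosts site.yml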

Something I’d wanted to do for some time was to migrate my own modest infrastructure away from a single overloaded, handcrafted VPS and to get everything under configuration management in the process.

I’d seen a few examples of Ansible and it was one of the buzzwords being bandied about at the time, so it seemed like a good opportunity to get my hands dirty and approach my own environment as I would a client’s, albeit with cheap OpenVZ containers instead of EC2 instances.

Migrate All The Things

One of the things to be migrated was this blog. It had been running on a woefully out-of-date version of WordPress and using the same theme for the past half decade or more, so getting both of those updated along with the infrastructure was a welcome bonus.

Whilst I was shaking things up I also took the opportunity to move the blog from its own domain, sickbiscuit.com, to a subdomain of my personal site, blog.stevenwilkin.com, in an effort to simplify my web presence.

In terms of servers, the blog now runs on one I use to host PHP apps; another handles redirects from the old domain along with other static sites; and a third stores the nightly backups produced. All three have been provisioned with Ansible.

I found using YAML for the playbooks to be a bit unusual to begin with but on the whole Ansible has been easy to get up and running with and so far seems simpler than the alternatives.

Next Steps

Before I can decommission the old VPS completely I still have to migrate my mail server along with a few other apps, which will require a bit of effort. When that’s out of the way I may, for the sake of completeness, set up monitoring with Nagios, and I’ve a strange hankering to start running my own DNS servers. I know.

Time will tell exactly what I’ll do, but with Ansible, provisioning new machines will be a breeze.

Using Vagrant and Chef to setup a Ruby 1.9 development environment including RVM and Bundler

It has become common practice these days to use tools like RVM and Bundler to manage a project’s dependencies. When these packages are in place, getting up to speed with another project is a breeze.

The Pain

But how about installing these tools themselves? How about other dependent pieces of software such as databases and the like? What if they have to be compiled from source? What if they have to be installed with a package manager? Which package manager do you then use: Fink, MacPorts, Homebrew?

Not always so easy!

It’s a UNIX system! I know this!

I love UNIX. I’ve worked in the investment banks, telecoms companies and startups of the world building and supporting software on FreeBSD, Solaris, RHEL, Debian and more. For just over half a decade now I’ve been using a Mac to do this. I was reluctant to begin with and I couldn’t care less for the fanbois, but I now strongly believe that a Mac running OS X is the best UNIX workstation I can get my hands on.

Despite all I like about it, OS X can be pretty hostile towards developers; just ask anyone who has set up a Ruby dev environment on a fresh install of Lion recently or who uses Xcode regularly.

Enter Vagrant and Chef

Vagrant can be used to provide an isolated environment running inside a virtual machine. When combined with a provisioning tool like Chef it’s possible to concisely specify the necessary environment for an application and then have it available in a dependable manner on an on-demand basis.

This combination is to systems what RVM and Bundler are to Ruby interpreters and libraries.

I’d been hearing good things about Vagrant and Chef for some time but what prompted me to delve deeper was the upcoming addition of a junior dev and an intern at Converser. Having all dependencies scripted, including operating systems, programming languages, document and key/value datastores seemed like a good way to get the new starts up and running with the minimum of time and headaches.

An Example

I have an example of what I’ve learned on GitHub. It’s a simple Sinatra app but demonstrates all the moving parts.
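
The heart of it is the Vagrantfile. A trimmed-down sketch of the sort of thing involved, using Vagrant 1.0-era syntax with a placeholder box name and recipe, not necessarily what’s in the repo:

# Vagrantfile
Vagrant::Config.run do |config|
  config.vm.box = "lucid32"

  # make the app inside the VM reachable from the host
  config.vm.forward_port 8080, 8080

  # provision with chef-solo using cookbooks kept in the repo
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "ruby_env" # hypothetical recipe name
  end
end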

Standalone installers for Vagrant, Chef and Git are to be found here, here and here, which removes the need for any form of package manager on the host OS X system.

Once everything’s installed and the repo cloned, the following commands will start up the example app within a fresh virtual machine:

vagrant up
vagrant ssh
cd /vagrant
bundle exec rackup

Browse to http://0.0.0.0:8080 to view the output.

For the curious, the versions of Ruby and Bundler can be checked thus:

$ vagrant up
$ vagrant ssh
$ cd /vagrant
$ ruby -v
ruby 1.9.2p320 (2012-04-20 revision 35421) [x86_64-linux]
$ bundle -v
Bundler version 1.1.4

The virtual machine can be powered down and disposed of with this:

vagrant destroy

Should Things Be This Convoluted?

Perhaps the world would be a better place if all this was simpler. Perhaps returning to a Linux workstation would remove some of these headaches. Maybe I’ve just grown accustomed to the pain.

For the time being OS X appears to have the correct balance of desktop software and UNIX-y stuff for my needs. Until something appears that surpasses the form-factor, power and utility of my current setup I’ll continue to pay the Apple-tax.

Installing RMagick on Debian Lenny

Just a quick reminder to myself on how I installed RMagick on a Debian 5.0.4 “lenny” VPS. All commands to be run as root.

Build ImageMagick from source

curl -O ftp://ftp.imagemagick.org/pub/ImageMagick/ImageMagick.tar.gz
tar xvzf ImageMagick.tar.gz -C /usr/src/
cd /usr/src/ImageMagick-6.6.6-6/
./configure
make
make install

Install the RMagick Gem

gem install rmagick

Bingo!

As an interesting aside, ImageMagick handles a lot of image formats via delegate libraries. I previously installed RMagick on a CentOS box and had to separately install TrueType fonts which were necessary for the project in question. These had to be installed before ImageMagick and were accessible through yum and the xorg-x11-fonts-truetype package.
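
You can check which delegates a given ImageMagick build was configured with like so:

$ convert -list configure | grep DELEGATES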

Minder – a simple web-app monitoring tool written in Ruby

Background

A few months ago I was tasked with migrating and maintaining a bunch of legacy Rails apps, and a couple of them were misbehaving. Requests were hanging, which was tying up the front-end Apache child processes and resulting in all the web-apps on the server becoming unresponsive.

At the time I didn’t know what was causing the hanging requests: it wasn’t happening in a predictable manner and, on the surface, I had all the apps’ dependencies in place.

Seeing as this was a job I was doing on the side, I had very limited time to investigate the underlying issue and decided I’d have to make do with just knowing when the apps went down, so I could ssh in and manually kill off the offending processes.

My initial thought was to use Nagios for monitoring but it seemed excessively heavy-weight for the task at hand so I seized the opportunity to quickly develop a tool perfectly tailored to my needs.

The resulting script, minder, checks a list of domains and if the root of each can’t be read within 10 seconds a notification is sent via XMPP. Nothing more, nothing less. Simple.
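
The core of it amounts to something like the following simplified sketch; the real script reads its accounts and list of domains from a YAML file, and the addresses here are placeholders:

#!/usr/bin/env ruby
require 'net/http'
require 'timeout'
require 'xmpp4r-simple'

domains = %w[example.com example.org] # hypothetical list

# log in with the sending account
im = Jabber::Simple.new('minder@example.com', 'PASSWORD')

domains.each do |domain|
  begin
    # fetch the root of the site, allowing 10 seconds
    Timeout.timeout(10) do
      Net::HTTP.get(URI.parse("http://#{domain}/"))
    end
  rescue Timeout::Error, StandardError
    # couldn't read the root page in time, send a notification
    im.deliver('me@example.com', "#{domain} is down")
  end
end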

Minder proved effective and I was bombarded with instant messages whenever any of the apps went under. Thankfully I later found a bit more time to investigate more thoroughly and tracked the issue down to a missing TrueType font used by ImageMagick.

The first iteration of minder was very simple, but I thought I’d put a bit of extra effort in to make it easier to customise and share it with the internet at large; hopefully someone will get some use out of it.

Prerequisites

  • Two XMPP accounts:

    • one to send notifications from, and
    • a second to receive notifications

    Google Talk or any regular Jabber/XMPP accounts will suffice

  • Blaine’s xmpp4r-simple gem must be installed:

    sudo gem install xmpp4r-simple

Usage

  • copy minder.yaml.sample to minder.yaml
  • edit minder.yaml to specify the username and password of the sending XMPP account
  • specify your own personal XMPP account (xmpp_to)
  • modify the list of domains to monitor
  • run the script
    $ ./minder.rb
  • optionally, run the script from a cron job; the following will run it every hour:

    0 * * * * /path/to/minder.rb > /dev/null 2>&1

Get the code

Clone the repo from GitHub. Fire any queries to @stevebiscuit.

MySQL database backup with remote storage

Prevent a disaster

After reading about Jeff Atwood’s backup failure last month I decided to finally get around to doing something I’d been intending to do “one of these days” but had in actual fact been putting off for years.

Here are the steps I took to ensure the databases on my webserver were backed up every night and copies of the dumps stored remotely.

On the remote storage machine

Generate an ssh key pair with an empty password and put the public key on the remote server. This will give our script access to the server without requiring you to enter a password each time:

$ ssh-keygen -t rsa -f /home/steve/code/db_backup/id_rsa
$ scp /home/steve/code/db_backup/id_rsa.pub REMOTEHOST:

This script will fetch all the backups, logging in as the rsync user and using the private key just generated. It’s located at /home/steve/code/db_backup/sync_backups.sh:

#!/usr/bin/env bash
rsync -e "ssh -l rsync -i /home/steve/code/db_backup/id_rsa" -avz REMOTEHOST:mysql/ /data/primary/backup/mysql/

Have this happen automatically daily at 12:20am:

$ crontab -l
# m h  dom mon dow   command
20	0	*	*	*	/home/steve/code/db_backup/sync_backups.sh
$

On the machine to be backed up

Create a new user and allow ssh access with the previously generated key:

# adduser rsync
# mkdir ~rsync/.ssh
# mv ~steve/id_rsa.pub ~rsync/.ssh/authorized_keys
# chown rsync:rsync ~rsync/.ssh/authorized_keys
# chmod 400 ~rsync/.ssh/authorized_keys

This script will dump all available databases and is located at /root/bin/backup_databases.sh:

#!/usr/bin/env bash

# dump all available databases
# SJW

AUTH='-uroot -pROOTPASSWORD'
DBS=$(mysql $AUTH --skip-column-names -e 'SHOW DATABASES;')
BACKUPS='/home/rsync/mysql/'

for DB in $DBS
do
  mysqldump $AUTH "$DB" > "$BACKUPS$(date +%Y%m%d%H%M)_$DB.sql"
done

# delete backups older than 5 days
find "$BACKUPS" -mtime +5 -type f -delete

Have the script run nightly at 12:10am via cron:

# crontab -l
# m h  dom mon dow   command
10  0 * * * /root/bin/backup_databases.sh
#

Closing thoughts

This approach is relatively straightforward: everything happens automatically, and it could easily be extended to cover mailboxes, source code repositories, uploaded content etc. However, for mission-critical databases master-slave replication may be more appropriate. For further reading you may enjoy JWZ’s thoughts on backups.

# shutdown -h now

I’ve just shut down the beige box that was home to this blog for just over a year and a half until I started renting a VPS.

I had intended to shut down this machine since the start of the year but never got around to it; after a techie chat on IM with Dave Dripps earlier in the day I decided to just “pull the finger out” and do the needful.

I archived the contents of /var/www/htdocs and my home directory for safe keeping, copying them over to my file server, and dumped all the MySQL databases I still hadn’t migrated.

Self-hosting a site was a great learning opportunity but not something a business could be built upon, and comparing the energy cost of running a machine 24×7 against renting a VPS priced in American dollars, the choice wasn’t hard to make. It’s just a shame it took me half a year to finally flick the switch.

Goodnight substance!

command line history – me too

I’ve seen this on a couple of blogs recently so I thought I’d give it a go on the VPS this site is hosted on:

steve@decaf:~$ history | awk '{a[$2]++} END {for(i in a)print a[i] " " i}' | sort -rn | head -10
190 ls
80 cd
24 cp
22 sudo
19 rm
14 svn
12 history
11 tar
10 wget
10 vi

And as root:

172 ls
59 vi
56 cd
22 apt-get
20 less
20 apache2ctl
17 apt-cache
15 cp
10 pwd
10 ps

I’m kinda surprised that ls comes out on top each time ;)

I wonder which commands Matt, Aidan and Philip have been using most?

Updating WordPress via Subversion: it works!

The last time a new version was released I decided to update my WordPress installation via Subversion, the idea being that this would make future updates easier.

Well, the good news is that this technique works :)

All it took was 3 simple steps:

  1. $ cd /var/www/sickbiscuit.com/blog
  2. $ svn switch http://svn.automattic.com/wordpress/tags/2.5/ .
  3. launch wp-admin/upgrade.php via web-browser

To be safe I backed up the database prior to the update, and so far everything seems good. Job’s a good ’un!