Docker and Go Hello World

One of the most bandied about buzzwords at the moment has got to be Docker. So many people I’ve spoken to claim to be moving their application environments to it and some even have it running in production with varying degrees of success.

Fine in theory

The theory of isolating groups of related processes into containers, which can themselves be orchestrated to build out an environment, is very appealing. From my understanding you can package up everything an application needs, including its code and related software, into an image and then run any number of containers based on that image without necessarily having to consider where or how that happens. Very powerful, and broadly in line with how everything is run inside Google now.

In practice

Until recently, all I’d done apart from reading documentation and attending meetups was spin up and poke around with a few basic containers; I hadn’t yet appreciated the practicalities of containerisation.

One thing I had noticed, though, was that some of the open-source Go projects I’d contributed minor changes to had a Dockerfile at the top level of the repository, like this one. My flatmate here in London had also managed to deploy some applications for his client using Docker, so I was determined to get something of my own running that I could refer back to later.

The Dockerfile

The basis of Docker is the Dockerfile which defines how to build an image and gives you a small number of commands which you can use to copy files onto said image, install packages, run applications, expose network ports and the like. Whatever is written to the filesystem as a consequence of these commands being run becomes part of your image.
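To give a flavour, here’s a minimal sketch of a Dockerfile for a Go hello-world app. The base image, paths and port are my own assumptions for illustration, not necessarily what the actual repository uses:

```dockerfile
# Start from an image that has the Go toolchain preinstalled
FROM golang:1.3

# Copy the application source into the image
COPY . /go/src/app
WORKDIR /go/src/app

# Compile the application to a standalone binary
RUN go build -o app

# Document the port the app listens on and set the default command
EXPOSE 8080
CMD ["./app"]
```

Everything written to the filesystem by these steps, the compiled binary included, ends up baked into the resulting image.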

Building an image

Once you’ve written your Dockerfile you can build an image from it by running this from the directory it’s contained in:

docker build -t hello-world .

This will create an image which you can then refer to as “hello-world” and which you should be able to see listed with this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
hello-world         latest              55f36149c050        2 hours ago         453.9 MB

Running a container

Now we have an image we can start a container based on it with the following:

docker run -p 8080:8080 --rm hello-world

This will spin up a container, run the command specified with CMD in the Dockerfile and expose port 8080 from the container to the machine running Docker, which in my case is a VM managed by boot2docker. You can see the container running the application like this:

$ docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                    NAMES
c37584b7a2cc        hello-world:latest   "./app"             13 seconds ago      Up 11 seconds       0.0.0.0:8080->8080/tcp   jolly_hopper

Boom! Containerised!

The end result

I’ve put together a Dockerfile which compiles and runs a simple web app written in Go which can be found on GitHub. When run, the application can be interacted with as follows:

$ curl $(boot2docker ip 2>/dev/null):8080
Hello Go!

Simple but I think it illustrates the principles well.

Blogging on Ansible

A good friend of mine recently voiced the opinion that the day will soon be upon us when the traditional SysAdmin will no longer be relevant and all operations activities will be some variety of DevOps.

Now, I’ve been managing my own infrastructure for some years, but every package has been manually installed and every configuration file modified by hand, resulting in snowflake servers which are nigh-on impossible to recreate exactly and which have occasionally cost me sleep. I’ve known for a long time that I could be doing better.

Configuration Mismanagement

In recent years all the clients I’ve contracted with have used either Chef or Puppet to automate the management of their infrastructure.

The funny thing was that the last couple had both used Puppet in conjunction with a bespoke, cobbled-together system to upload the modules, manifests etc. to the target machines and run Puppet from there, bypassing the usual master/agent setup. Seeing this once was unusual, but seeing it twice in a row struck me as very odd indeed and made me think that there was perhaps something overcomplicated about the “usual” approach.

Ansible Simplicity

With Ansible there is no client/server model: all you need is a repository of playbooks, SSH access to an inventory of machines and the rest is as simple as running ansible-playbook. Very straightforward.
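A playbook is just YAML describing the desired state of a group of hosts. As an illustration, a hypothetical playbook for putting nginx on some web servers (the host group and task names are mine) might look something like this:

```yaml
---
- hosts: webservers
  sudo: yes
  tasks:
    - name: ensure nginx is installed
      apt: name=nginx state=present

    - name: ensure nginx is running and starts on boot
      service: name=nginx state=started enabled=yes
```

Running it against an inventory file is then just `ansible-playbook -i hosts site.yml`.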

Something I’d wanted to do for some time is migrate my own modest infrastructure away from a single overloaded, handcrafted VPS and to get everything under configuration management in the process.

I’d seen a few examples of Ansible and it was one of the buzzwords being bandied about at the time, so it seemed like a good opportunity to get my hands dirty and approach my own environment as I would a client’s, albeit with cheap OpenVZ containers instead of EC2 instances.

Migrate All The Things

One of the things to be migrated was this blog. It had been running on a woefully out-of-date version of WordPress and using the same theme for the past half decade or more, so getting both of those updated along with the infrastructure was a welcome bonus.

Whilst I was shaking things up I also took the opportunity to move the blog from its own domain, sickbiscuit.com, to a subdomain of my personal site, blog.stevenwilkin.com, in an effort to simplify my web presence.

In terms of servers, the blog now runs on one I use to host PHP apps; another handles redirects from the old domain along with other static sites; and a third stores the nightly backups produced. All three have been provisioned with Ansible.

I found using YAML for the playbooks to be a bit unusual to begin with but on the whole Ansible has been easy to get up and running with and so far seems simpler than the alternatives.

Next Steps

Before I can decommission the old VPS completely I still have to migrate my mail server along with a few other apps, which will require a bit of effort. When that’s out of the way I may, for the sake of completeness, set up monitoring with Nagios, and I’ve a strange hankering to start running my own DNS servers. I know.

Time will tell what exactly I’ll do but with Ansible provisioning new machines will be a breeze.

A simple service oriented architecture using Ruby and Go

Some common pain points seem to keep cropping up time and time again as applications grow in size and complexity:

  • tests take longer to run
  • deployments become a much more involved process
  • making a small fix to one area of code can often break something in another
  • it can be much harder to reason about the system as a whole

These large, monolithic applications are affectionately known as monorails in the Rails world and the common wisdom is to start splitting up the codebase into services: to extract areas of functionality out into self-contained, independent applications which work in unison to deliver the original capability.

Here’s how Songkick did it, and how SoundCloud did something similar recently.

Close, but no cigar

On a previous contract I worked on a platform which claimed 20 million users and the only way it was possible to handle that sort of capacity was to use services, lots of services.

The platform was centred around a legacy ColdFusion application hooked up to a MySQL database and the other applications had a variety of ways of accessing the central data:

  • using ActiveResource or libraries derived from it to consume a RESTful API provided by the ColdFusion app
  • connecting directly to the database and defining their own models and schemas
  • using one of the two semi-aborted attempts at packaging up a collection of ActiveRecord models into a shared library
  • mixing and matching a selection of the above in different areas of their codebases

Unfortunately there was little consistency and the organisational culture encouraged quick fixes and neglect of old code. Both new development and maintenance work provided numerous challenges as you can well imagine.

The ideal

As I progressed through the term of the contract and had to deal with the peculiarities of the technical ecosystem at hand I discussed with the other engineers what the ideal situation would look like, everything else being equal. The only sane solution seemed to be to develop both an API and the client used to access it.

Since moving on from that contract I’ve wanted to implement a proof of concept of that idea and I’m glad I’ve now codified the basics of it.

A worked example

The API is written in Go and allows manipulation of resources in a standard RESTful manner. No surprises there, other than that everything is stored in memory and so will be lost when execution stops.
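As a rough sketch of the idea (not the actual code from the repository — the resource and handler names here are my own), an in-memory store behind a REST endpoint needs little more than a map guarded by a mutex:

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// Item is a single resource exposed by the API
type Item struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// store holds every item in memory; all data is lost when the process exits
type store struct {
	sync.Mutex
	items  map[int]Item
	nextID int
}

// create allocates an ID and records a new item
func (s *store) create(name string) Item {
	s.Lock()
	defer s.Unlock()
	s.nextID++
	item := Item{ID: s.nextID, Name: name}
	s.items[item.ID] = item
	return item
}

// handleItems serves POST (create) and GET (list) on /items
func (s *store) handleItems(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case "POST":
		var in struct {
			Name string `json:"name"`
		}
		json.NewDecoder(r.Body).Decode(&in)
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(s.create(in.Name))
	default:
		s.Lock()
		defer s.Unlock()
		items := make([]Item, 0, len(s.items))
		for _, item := range s.items {
			items = append(items, item)
		}
		json.NewEncoder(w).Encode(items)
	}
}

func main() {
	s := &store{items: map[int]Item{}}
	http.HandleFunc("/items", s.handleItems)
	http.ListenAndServe(":8080", nil)
}
```

A Ruby client then only has to translate method calls into HTTP requests against endpoints like this one.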

The client library is in Ruby and basically just wraps an HTTP client with an ActiveRecord-like interface. The final piece is a demo script which uses the library to add and then manipulate a handful of items of data. All very straightforward, but I believe it illustrates the concept.

This shameless display of hipster polyglotism can all be found on GitHub. Fill your boots!

Serving up a static site using Go

Like a lot of people in technology a portion of my time is spent wondering what will be the next programming language, framework or whatever to take off and become popular. Of greater personal importance is the matter of which one of them will be added to my toolbox next and press-ganged into productive use.

For years now Ruby has been my go-to for a lot of things, and I’ve been doing more and more OS X and iOS development, but it had been some time since I last picked up something new and I had a hankering for some shininess.

I started tinkering with Node a few years back but it didn’t have enough appeal to invest much of myself in it and Scala looked a likely contender for a while but I didn’t get any further than solving a few Project Euler problems with it, mostly while spending my evenings in hotel rooms in Dublin.

Enter Go

Earlier this year I spent a month in Melbourne and thanks to having some time away from paid work I could see that my preference seemed to be leaning towards developing web services and then consuming those services with native client apps. I wanted to keep my skills sharp so started tinkering with a personal project and for the API piece the relatively new Go seemed ideal for the job at hand.

Once I got started, what struck me was that I was producing a single, standalone binary which didn’t need any external dependencies in order to run on the target architecture. No need for a particular version of a language or a slew of packages to be installed, just a command to be executed. It’s almost as if I’d forgotten that that was how things used to be once upon a time, after all these years of using scripting languages, and it was a liberating experience.

For that project I wanted to do everything test-first but unfortunately the depth of my knowledge of Go in general, and of web application development with it in particular, was not what I wanted it to be, so when I hit a stumbling block the project was abandoned. Perhaps I’ll come back to it in the future.

A Need Emerges

So I still wanted to get some Go out into the wild and when I decided to put together a new website for my limited company I thought I could use the situation as a learning experience.

Serving up a static site can be done any number of ways and I certainly didn’t need to go re-inventing the wheel, but with a bit of tinkering I had put together a simple app that could look at a requested path and, if it corresponded to a file under public/, send the file’s contents to the client.

What pleased me about the solution was that it was achieved with only what’s provided out of the box with Go, and without too much code. Simple and to the point, and getting it running on Heroku was trivial with the Go buildpack.

The End Result

The site, as simple as it may be, can be found at nulltheory.com and the code is on GitHub. Bon appetit!

Contracting in London

Since I last wrote about my experience working outside of Northern Ireland I’ve returned to the UK, turned down prospects of contract work in Belfast and have set up shop in London.

Serendipity

A good while back Rob and myself had discussed the possibility of London. He’d been working remotely with a company based here and due to changes in his circumstances leaving N. Ireland was a valid option.

Lines of communication between the two of us dropped off for a while as a result of travel, work and such like and when I finally got caught up with him I discovered he’d already bitten the bullet and was moving over. Further conversation uncovered that the house he was going to be sharing in Ealing had a box room going spare which I was welcome to make use of for a month. Wheels were set in motion.

The Search For Work

With my mind made up about London I started keeping an eye out for contract work. Apart from the usual job sites I focussed my efforts on the LRUG mailing list, as many companies post job ads there directly and going direct to the client is very often the preference when contracting.

A week or so before I was due to fly back to Ireland, positions with two companies were posted to the list; I responded and eventually interviews via Skype were arranged. These both went well and next steps were organised.

More than 30 hours of travelling had me back in my home county of Fermanagh. I rested, caught up with my family and returned to Belfast long enough to pack a carry-on case and head to London for the joys of face-to-face interviews.

Song and Dance

The first interview was on a Thursday, the next on the Friday and the first offer was in my inbox on the Monday. Not bad at all and very indicative of the state of supply and demand for technical talent here in London specifically and across the industry in general.

Unfortunately the first offer turned out to be unable to match the rate I was on during my last contract. Adding to the displeasure was me having effectively done a morning of unbilled work as part of the process along with having to suffer the song and dance of a competency based interview. Such is life, the world doesn’t owe anyone a living.

The other offer when it arrived didn’t involve a drop in rate and looked to have the opportunity to get my hands dirty with a bunch of interesting technology. From a business point of view this seemed to be the one to take so I went for it.

The Road Ahead

As has happened before in my career, the job I was sold and the job I ended up getting didn’t quite match up. I did my three months, put myself forward for a handful of contracts, did a couple of technical tests and one face-to-face interview and I’m now with another client and things are better.

Before coming to London I only knew a little about the place; one area was more or less interchangeable with any other. I soon discovered that the majority of the contract Ruby work going is focussed in Central and East London, and being out West in Ealing for a lot longer than originally planned has meant a great deal of commuting and a general feeling of being disconnected from the hustle and bustle of the technology world here.

The solution to this has been to move closer to the trendy part of town and within a few weeks we’ll have transplanted everyone across to the other side of town and into a converted warehouse near Whitechapel.

I still have my flat in Belfast, have started investigating parts of the world with even lower costs of living and have only the vaguest idea of what the future might hold but can’t see myself working anywhere other than in London over the next few years.

No choice but to continue to making it up as I go along then :D

Decisions, decisions, decisions

I can be an awfully indecisive person at times. Often I’ve been in the grips of analysis paralysis unable to pick a course of action to take, crippled with choice.

When I was a student I read The Dice Man and was sorely tempted to make all my decisions by rolling a die or tossing a coin. Needless to say this is maybe not the best way for a person to navigate through life.

Recently I had to contemplate a specific yes/no situation and joked with the idea of tossing a coin to decide. It then occurred to me that I could go one better, and after a little bit of tinkering with Objective-C this is what I made:

Decisions App - Yes

Upon touching the screen the text alternates between “Yes” and “No” and after a random period of time between 1 and 3 seconds it stops. Simple.

This app scratches an itch and it’ll likely never see any additional work though a possible future addition would be to allow a user defined number of free-text choices, maybe even with weighting towards one choice or another.

The source is on GitHub if you think it could be useful for anything.

Two years of working away from home

I’m currently sitting in Melbourne enjoying the Australian summer and a well deserved break from work. I can barely remember the northern hemisphere winter I left behind a few weeks ago.

I’ve only briefly mentioned working away from home before, but January past marked two years since I last worked in Belfast. The pace has been hectic at times and I’ve spent more nights in hotels than I care to recall, but from my first contract in Dublin to my last in the south of England I’ve found a consistent theme: being treated with more respect, working on more interesting problems and for higher pay. Not a bad combination!

RubyConf Australia

Coinciding with my trip to Melbourne was the first ever RubyConf Australia where I was able to shake hands and speak with some of the known names in the Ruby world. I’ve also been able to attend a Ruby Australia meetup and a Travis CI coffee morning so there’s been plenty of opportunities to geek it up along with soaking up the sun.

Adventure

In recent times I’ve done more and more travelling with the last 12 months seeing me in Dublin, London, Amsterdam, Barcelona and now Melbourne. My taste for adventure has grown, I’ve a hunger to be where the action is and a part of the world where people are arguing over a flag is just not where it’s happening.

There’s no contract market for Ruby in Belfast though, so in order to continue contracting, working with Ruby and living in Belfast, the first flight out of town on a Monday morning seems unavoidable.

Where to next?

There’s no two ways about it, living out of a suitcase sucks. The alternatives seem to be to leave Belfast, start up my own product company or take on a permanent remote-working position. Needless to say there are drawbacks to all of these, and most of the time it seems best to just suck up the drawbacks of working away from home.

Considering the strength of the contract market in London I’ve often thought I’d end up there for a stint and after a recent conversation with Tim I’m also quite interested about contracting in Berlin.

The day I booked my flight to Oz was the final day of my last contract and I’d chosen not to put any effort into lining up the next piece of work. Within a fortnight I’ll be back in the UK and dealing with jetlag. Will something turn up by then? If it does, where will it be, what will I be doing and how much will it pay? The usual uncertainties associated with contracting.

Whatever happens, the next few weeks should be interesting!

An Introduction to WebSockets

Last week I gave a short talk at BelfastJS outlining WebSockets: what they are, how you use them, examples, warnings and alternatives. That should cover the basics I think.

A few months ago I spoke at BelfastRuby and it was good again to be sharing some knowledge with a bunch of people enthusiastic about technology and the local community. Hopefully we can keep things going and help make Belfast a supportive environment for those involved in the knowledge economy.

The slides are online along with the code for the demo app.

Using Vagrant and Chef to setup a Ruby 1.9 development environment including RVM and Bundler

It has become common practice these days to use tools like RVM and Bundler to manage a project’s dependencies. When these packages are in place, getting up to speed with another project is a breeze.

The Pain

But how about installing these tools themselves? How about other dependent pieces of software such as databases and the like? What if they have to be compiled from source? What if they have to be installed with a package manager? Which package manager do you then use: Fink, MacPorts, Homebrew?

Not always so easy!

It’s a UNIX system! I know this!

I love UNIX. I’ve worked in the investment banks, telecoms companies and startups of the world building and supporting software on FreeBSD, Solaris, RHEL, Debian and more. For just over a half decade now I’ve been using a Mac to do this. I was reluctant to begin with, and I couldn’t care less for the fanbois, but I now strongly believe that a Mac running OS X is the best UNIX workstation I can get my hands on.

Despite all I like about it, OS X can be pretty hostile towards developers; just ask anyone who has set up a Ruby dev environment on a fresh install of Lion recently, or who uses Xcode regularly.

Enter Vagrant and Chef

Vagrant can be used to provide an isolated environment running inside a virtual machine. When combined with a provisioning tool like Chef it’s possible to concisely specify the necessary environment for an application and then have it available in a dependable manner on an on-demand basis.

This combination is to systems what RVM and Bundler are to Ruby interpreters and libraries.

I’d been hearing good things about Vagrant and Chef for some time, but what prompted me to delve deeper was the upcoming addition of a junior dev and an intern at Converser. Having all dependencies scripted, including operating systems, programming languages, and document and key/value datastores, seemed like a good way to get the new starts up and running with the minimum of time and headaches.
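For a flavour of how the pieces fit together, a Vagrantfile of that era wiring up Chef Solo might look something like this. The box name, forwarded port, cookbook path and run list below are my own illustrative assumptions, not the actual project files:

```ruby
Vagrant.configure("2") do |config|
  # Base box to build the virtual machine from
  config.vm.box = "precise64"

  # Make the app inside the VM reachable from the host browser
  config.vm.network :forwarded_port, guest: 8080, host: 8080

  # Provision with Chef Solo using cookbooks held in the repo
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "rvm::user"
    chef.add_recipe "bundler"
  end
end
```

With something like this checked into the repo, a new start’s setup collapses to cloning and running `vagrant up`.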

An Example

I have an example of what I’ve learned on GitHub. It’s a simple Sinatra app but demonstrates all the moving parts.

Standalone installers for Vagrant, Chef and Git are to be found here, here and here which removes the need for any form of package manager on the host OS X system.

Once everything’s installed and the repo cloned, the following commands will start up the example app within a fresh virtual machine:

vagrant up
vagrant ssh
cd /vagrant
bundle exec rackup

Browse to http://0.0.0.0:8080 to view the output.

For the curious, the versions of Ruby and Bundler can be checked thus:

$ vagrant up
$ vagrant ssh
$ cd /vagrant
$ ruby -v
ruby 1.9.2p320 (2012-04-20 revision 35421) [x86_64-linux]
$ bundle -v
Bundler version 1.1.4

The virtual machine can be powered down and disposed of with this:

vagrant destroy

Should Things Be This Convoluted?

Perhaps the world would be a better place if all this was simpler. Perhaps returning to a Linux workstation would remove some of these headaches. Maybe I’ve just grown accustomed to the pain.

For the time being OS X appears to have the correct balance of desktop software and UNIX-y stuff for my needs. Until something appears that surpasses the form-factor, power and utility of my current setup I’ll continue to pay the Apple-tax.

An iOS client for my Coffee Tracker API

About 9 hours ago I dandered down the road to see what was happening at the FlackNite event being hosted in Farset Labs.

When I finally got myself settled down with network access and a cup of coffee and said hello to everyone it seemed I was the only one without a project to work on. Nightmare.

Decisions, Decisions

I couldn’t think of what to focus on but Rob and Pete were sitting next to me and tinkering with some Objective-C and Cocoa. Considering that I’m now on the iOS developer program, doing something similar seemed like a good idea.

I needed to pick something that could be accomplished within a relatively short period of time. I’ve had a few small projects I’ve wanted to get out of the way for a while, but most of them would have required some research effort, taking me into uncharted territory.

This constraint left me with a single task: the relatively straightforward job of porting the OS X client of my Coffee Tracker API to iOS.

Double Jalapenos

Coffee was drunk, pizza was eaten, many laughs were had and some Objective-C was cranked out.

Beyond creating a new iOS project and user interface, the existing code didn’t require much change. Turning off ARC for AFNetworking was the most involved thing I had to do.

After the basic port I added a few finer details like turning on the network activity indicator when the web service was being accessed and making sure the count was refreshed whenever the app became active. Simple.

Dive In

It was good again to sit down with a specific task, open Xcode and eventually come to a solution. Maybe some day I’ll produce something more involved but I’m pleased with how things are progressing.

Full source code is available on GitHub so fill your boots.