RPC between Go and Ruby using Protocol Buffers

I’ve previously written some thoughts on service oriented architectures and since then I’ve wanted to explore beyond the currently accepted standard of sending and receiving JSON data over web APIs.

Protocol Buffers

With JSON there’s an overhead of transmitting plain text over the wire, coupled with the cost of parsing that text in your application once it’s received. For many cases this is probably acceptable, but for a large-scale distributed system it can represent a significant cost in bandwidth and computing resources.

In response to this situation Google came up with Protocol Buffers, which give you a mechanism to define ahead of time the data structures your services will exchange, and to convert those structures into a binary format that is more efficient to transmit and deserialise.
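
As a sketch, a .proto definition for such a data structure might look like this (the message and field names here are illustrative, not taken from any particular service):

```protobuf
syntax = "proto3";

package demo;

// A hypothetical record exchanged between services. Each field is
// tagged with a number that identifies it in the binary wire format.
message User {
  int64 id = 1;
  string name = 2;
  repeated string emails = 3;
}
```

Running protoc over a file like this generates serialisation code in each target language, so a Go service and a Ruby client can exchange the same binary payloads.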

The supplied compiler generates the code to handle this for you and supports the majority of popular languages which is essential for an ecosystem comprised of heterogeneous technologies.


gRPC

The concept of being able to invoke a piece of code on another machine is far from a new one. Many languages provide a way of doing so, yet the fashion in the web world seems to be to create REST-like services on top of HTTP, even if the end result could almost be described as a JSON RPC service.

With gRPC you can describe your services just as you describe the data they exchange, and the generated code handles all the interaction for you, allowing you to concentrate on the actual function of the service rather than the logistics of how it is delivered.
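
Service definitions live in the same .proto files as the messages. A minimal sketch (the names here are the canonical gRPC hello-world example, not from the linked project):

```protobuf
syntax = "proto3";

package demo;

// The generated code provides a server interface to implement in Go
// and a ready-made client stub to call from Ruby.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```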

As an added bonus it uses HTTP/2 under the hood so you automatically get the benefit of things like header compression and request multiplexing.

Same but different

To demonstrate gRPC in action I’ve simply forked and updated the code from the previous post. There’s still a simple Go service being accessed by a Ruby client though now my application code doesn’t have to be concerned about HTTP and JSON. Accepting the format of the generated code seems to be the price to pay for this but I think it’s been a worthwhile exercise.

Check it out on GitHub.

Docker and Go Hello World

One of the most bandied-about buzzwords at the moment has got to be Docker. So many people I’ve spoken to claim to be moving their application environments to it, and some even have it running in production with varying degrees of success.

Fine in theory

The theory of isolating groups of related processes into containers, which can themselves be further orchestrated to build out an environment, is very appealing. From my understanding you can package up everything an application needs, including its code and related software, into an image and then run any number of containers based on that image without necessarily having to consider where or how that happens. Very powerful, and broadly in line with how everything is run inside Google now.

In practice

Until recently, all I’d done apart from reading documentation and attending meetups was spin up and poke around with a few basic containers; I didn’t yet appreciate the practicalities of containerisation.

One thing I’d noticed though was with some open-source Go projects I’d contributed some minor changes to. They had a Dockerfile at the top-level of the repository, like this one. My flatmate here in London had also managed to deploy some applications for his client using Docker so I was determined to get something of my own running I could refer back to later.

The Dockerfile

The basis of Docker is the Dockerfile which defines how to build an image and gives you a small number of commands which you can use to copy files onto said image, install packages, run applications, expose network ports and the like. Whatever is written to the filesystem as a consequence of these commands being run becomes part of your image.
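
For illustration, a minimal Dockerfile for a Go web app might look something like this (the base image, paths and port are assumptions, not taken from the repository):

```dockerfile
# Start from an image that has the Go toolchain installed.
FROM golang:1.4

# Copy the application source onto the image and build it.
COPY . /go/src/app
WORKDIR /go/src/app
RUN go build -o app

# Declare the port the app listens on and the command to run.
EXPOSE 8080
CMD ["./app"]
```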

Building an image

Once you’ve written your Dockerfile you can build an image from it by running this from the directory it’s contained in:

docker build -t hello-world .

This will create an image which you can then refer to as “hello-world” and which you should be able to see listed with this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
hello-world         latest              55f36149c050        2 hours ago         453.9 MB

Running a container

Now we have an image we can start a container based on it with the following:

docker run -p 8080:8080 --rm hello-world

This will spin up a container, run the command specified with CMD in the Dockerfile and expose port 8080 from the container to the machine running Docker which in my case is a VM managed by boot2docker. You can see the running application within the container like this:

$ docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                    NAMES
c37584b7a2cc        hello-world:latest   "./app"             13 seconds ago      Up 11 seconds       0.0.0.0:8080->8080/tcp   jolly_hopper

Boom! Containerised!

The end result

I’ve put together a Dockerfile which compiles and runs a simple web app written in Go which can be found on GitHub. When run, the application can be interacted with as follows:

$ curl $(boot2docker ip 2>/dev/null):8080
Hello Go!

Simple but I think it illustrates the principles well.

A simple service oriented architecture using Ruby and Go

Some common pain points seem to keep cropping up time and time again as applications grow in size and complexity:

  • tests get longer to run
  • deployments become a much more involved process
  • making a small fix to one area of code can often break something in another
  • it can be much harder to reason about the system as a whole

These large, monolithic applications are affectionately known as monorails in the Rails world and the common wisdom is to start splitting up the codebase into services: to extract areas of functionality out into self-contained, independent applications which work in unison to deliver the original capability.

Here’s how Songkick did it and how SoundCloud recently did something similar.

Close, but no cigar

On a previous contract I worked on a platform which claimed 20 million users and the only way it was possible to handle that sort of capacity was to use services, lots of services.

The platform was centred around a legacy ColdFusion application hooked up to a MySQL database and the other applications had a variety of ways of accessing the central data:

  • using ActiveResource or libraries derived from it to consume a RESTful API provided by the ColdFusion app
  • connecting directly to the database and defining their own models and schemas
  • using one of the two semi-aborted attempts at packaging up a collection of ActiveRecord models into a shared library
  • mixing and matching a selection of the above in different areas of their codebases

Unfortunately there was little consistency and the organisational culture encouraged quick fixes and neglect of old code. Both new development and maintenance work provided numerous challenges as you can well imagine.

The ideal

As I progressed through the term of the contract and had to deal with the peculiarities of the technical ecosystem at hand I discussed with the other engineers what the ideal situation would look like, everything else being equal. The only sane solution seemed to be to develop both an API and the client used to access it.

Since moving on from that contract I’ve wanted to implement a proof of concept of that idea and I’m glad I’ve now codified the basics of it.

A worked example

The API is written in Go and allows manipulation of resources in a standard RESTful manner. No surprises there, other than that everything is stored in memory and so will be lost when execution stops.

The client library is in Ruby and essentially wraps an HTTP client in an ActiveRecord-like interface. The final piece is a demo script which uses the library to add and then manipulate a handful of items of data. All very straightforward, but I believe it illustrates the concept.

This shameless display of hipster polyglotism can all be found on GitHub. Fill your boots!

Serving up a static site using Go

Like a lot of people in technology, a portion of my time is spent wondering what will be the next programming language, framework or whatever to take off and become popular. Of greater personal importance is the matter of which one of them will get added to my toolbox next and press-ganged into productive use.

For years now Ruby has been my go-to for a lot of things, and I’ve been doing more and more OS X and iOS development, but it had been some time since I last picked up something new and I had a hankering for some shininess.

I started tinkering with Node a few years back but it didn’t have enough appeal to invest much of myself in it and Scala looked a likely contender for a while but I didn’t get any further than solving a few Project Euler problems with it, mostly while spending my evenings in hotel rooms in Dublin.

Enter Go

Earlier this year I spent a month in Melbourne and, thanks to having some time away from paid work, I could see that my preferences were leaning towards developing web services and then consuming those services with native client apps. I wanted to keep my skills sharp, so I started tinkering with a personal project, and for the API piece the relatively new Go seemed ideal for the job at hand.

Once I got started, what struck me was that I was producing a single, standalone binary which didn’t need any external dependencies in order to run on the target architecture. No need for a particular version of a language or a slew of packages to be installed, just a command to be executed. After all these years of using scripting languages I’d almost forgotten that that was how things used to be, and it was a liberating experience.

For that project I wanted to do everything test-first, but unfortunately my knowledge of Go in general, and of web application development with it in particular, wasn’t what I wanted it to be, so when I hit a stumbling block the project was abandoned. Perhaps I’ll come back to it in the future.

A Need Emerges

So I still wanted to get some Go out into the wild and when I decided to put together a new website for my limited company I thought I could use the situation as a learning experience.

Serving up a static site can be done any number of ways, and I certainly didn’t need to go re-inventing the wheel, but with a bit of tinkering I had put together a simple app that could look at a requested path and, if it corresponded to a file under public/, send the file’s contents to the client.

What pleased me with the solution was that it was achieved with only what’s provided out-of-box with Go and without too much code. Simple and to the point and getting it running on Heroku was trivial with the Go buildpack.

The End Result

The site, as simple as it may be, can be found at nulltheory.com and the code is on GitHub. Bon appetit!