A match made in heaven

If you are a regular reader you will know just how much I have fallen for Golang recently. If not, see Fun with WebSockets in Golang for why I think it’s such a great language for writing backend services.

As explained in that blog post, my motivation for learning Golang originated with my experimentation with Docker. Golang programs are (usually) statically compiled to machine code, not bytecode, so no runtime interpreter like a JVM or Python is required to run them. This means that you can fit those programs into the smallest Docker containers possible for maximum density and reduced attack surface. Pair that with Golang’s performance (which is comparable to C++) and you have a match made in heaven.

Full circle

This blog post is going to take the simple client and server programs I wrote as part of Fun with WebSockets in Golang, containerize them and deploy them with Docker Compose. Whereas that original blog post focused on the features of the Golang language, this one is going to focus on its strengths in application delivery, bringing the whole exercise full circle.


Tool                  Version used in this post   Link
Docker for Windows*   17.03.1-ce                  https://www.docker.com/docker-windows
Golang                1.8.1                       https://golang.org/dl/

* I’m using Docker for Windows (with Linux containers) for this blog post because Windows 10 is my bare-metal OS, but this will work equally well with Docker for Linux or Docker for Mac. The only difference should be what kind of build script you develop.

Source code for this blog post is available on GitHub here.

Building containers

Our Golang programs are ready, having been built as part of Fun with WebSockets in Golang. The next step is to get the executables output by go build into Docker containers.

The container build process is defined by a Dockerfile. Let’s construct some simple Dockerfiles for our client and server programs, starting with the server.

Server Dockerfile

FROM scratch
ADD server /
EXPOSE 80
CMD ["/server"]

Our server image is built from the scratch image; this is declared with the FROM directive. Scratch is the root of all Docker images: it represents the empty container, the minimal possible execution context.

Next, we add our executable, called server, to the image’s root directory (/) with the ADD directive.

We allow the container to listen on port 80 for HTTP requests with the EXPOSE directive.

Finally, we specify our runtime command with the CMD directive; this is where we tell the container what to execute when it starts. The CMD directive has multiple forms that define how the command gets executed. The one above is the exec form, which directly invokes an executable. Another common one is the shell form, which invokes the command via a shell, but we cannot use that here because scratch does not come with a shell. Further details of the forms are available here. In our Dockerfile, we just execute the server executable from the root directory that we added earlier.
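To make the distinction concrete, here is a hypothetical Dockerfile (not part of this project) showing both forms side by side. The shell form only works here because busybox provides /bin/sh:

```dockerfile
FROM busybox

# Exec form: a JSON array, invoked directly with no shell involved.
# Environment variables are NOT expanded in this form.
CMD ["/bin/echo", "hello"]

# Shell form: run as /bin/sh -c "...", so variable expansion and
# other shell syntax work. Only one CMD takes effect per image
# (the last one), so this is shown commented out.
# CMD /bin/echo "hello from $HOSTNAME"
```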

Client Dockerfile

FROM busybox
ADD client /
ENV SERVER localhost
CMD /client -server=${SERVER}

The client image definition is similarly concise. This time we build from the busybox image, which, like scratch, is very small but does include some minimal UNIX/GNU utilities, including a shell. We need the shell to resolve an environment variable (ENV) and pass it to the client executable as a command-line argument, telling the client which server to connect to. For that reason, we’re using the shell form of the CMD directive here.

The localhost value above is just a default value to assign to the SERVER environment variable if one isn’t provided at runtime. We won’t be using it.
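For context, here is a minimal sketch of how the client side of this arrangement might read that value. The real client.go from the earlier post may differ; the serverURL helper is my own name for illustration:

```go
package main

import (
	"flag"
	"fmt"
)

// serverURL builds the WebSocket endpoint from a server hostname.
func serverURL(host string) string {
	return fmt.Sprintf("ws://%s/pingpong", host)
}

func main() {
	// The SERVER environment variable is expanded by the shell in the
	// Dockerfile's CMD and arrives here as a plain command-line flag.
	server := flag.String("server", "localhost", "hostname of the pingpong server")
	flag.Parse()
	fmt.Println("Making connection to:", serverURL(*server))
}
```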

Testing the build

You can test build the images by setting the working directory to be the directory containing the Dockerfile and executing:

docker build --tag <ContainerName> .

Writing a build script

The next step is to bring the construction of the Docker images into our normal build process. For a very simple demo program like this one, this is unlikely to be anything more than a short script. Here is one I’ve thrown together:

# Set env params for Linux cross-compilation.
$env:GOOS = 'linux'
$env:GOARCH = 'amd64'

Push-Location .\client
go build client.go
docker build --tag kmacphee/golang-samples/websockets/pingpong/client .
Pop-Location

Push-Location .\server
go build server.go
docker build --tag kmacphee/golang-samples/websockets/pingpong/server .
Pop-Location

Because I’m on Windows 10, mine is a simple PowerShell script that sets the Golang compiler up for cross-compilation to Linux binaries and performs go build and docker build on the client and server directories. go build produces the binary executables and docker build adds them to new Docker containers, as defined by our Dockerfiles in each directory.

If you’re on Linux you probably want to do the equivalent in a Makefile instead.
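For example, a minimal Makefile along these lines should do the same job on Linux. This is an untested sketch that assumes the same directory layout and image names as the PowerShell script above:

```makefile
# Cross-compilation is unnecessary when building on Linux, but setting
# the variables explicitly keeps the build reproducible.
export GOOS := linux
export GOARCH := amd64

.PHONY: all client server

all: client server

client:
	cd client && go build client.go && \
	docker build --tag kmacphee/golang-samples/websockets/pingpong/client .

server:
	cd server && go build server.go && \
	docker build --tag kmacphee/golang-samples/websockets/pingpong/server .
```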

Tiny, tiny containers

After running the script and building the images, you can ask Docker to list them by running: docker image ls. If you do this, you’ll be able to see how big or small they are.

At the time of writing, my built server image (the one based on scratch) is 6.05 MB in size. My client image (based on busybox) weighs in at 6.77 MB. That’s one of the strengths of Golang for writing microservices, they’re as lean as they come and have the smallest possible attack surface. The equivalent server program in Python+Flask would probably be built on top of the python:2.7-slim image, which weighs in at 182 MB by itself. It would be slower too.

Coordinating the containers

So now we have two lean and mean Docker containers for our Golang client and server. With the following Docker commands we can fire them up:

# Start server container
docker run --detach kmacphee/golang-samples/websockets/pingpong/server

# Get the server container's ID (this lists the IDs of all running containers)
docker ps -q

# Get the container's IP address from the output of docker inspect
docker inspect <ContainerId>

# Start client container with SERVER variable set to IP of server container
docker run --detach --env SERVER=<IpAddress> kmacphee/golang-samples/websockets/pingpong/client

It’s useful to know how to use those commands, but using them is a little cumbersome. Thankfully Docker provides a tool that is much better suited to coordinating and linking multiple containers that need knowledge of each other. Its name is Docker Compose.

Composing a masterpiece

Similar to how Docker images are built from Dockerfiles, Docker Compose operates with a YAML file, the default name for which is docker-compose.yml. This file defines the environment that the containers operate in and how they can be linked together.

Here is a simple docker-compose.yml for orchestrating the Golang client and server containers:

version: '3'
services:
  server:
    image: kmacphee/golang-samples/websockets/pingpong/server
  client:
    image: kmacphee/golang-samples/websockets/pingpong/client
    links:
      - server:pongserver
    environment:
      - SERVER=pongserver

I think of YAML as a format similar to JSON, but a little more suited to hand-editing as there are fewer brackets to worry about.

  • version is just the version number of the Docker Compose file format used. At the time of writing, the latest version is 3.
  • services contains the services to be orchestrated. There are two in this file, server for our server container and client for our client container.
  • Both the server and client services have an image key, which defines which Docker image should be used for the container. The values are what we named the images in our build script above.
  • The client service has a links key. The value of this is a list of other services and their aliases (in the form: service:alias) that your service can connect to.
    • The linked services are available to the current service via a hostname that is their alias. So here, server is available to client with the hostname pongserver. If an alias is not provided the hostname is the service name.
  • We specify the SERVER environment variable that the client container needs with the environment key. We set this to pongserver, as that is the hostname that will resolve to the server container.

With that file in our current working directory, all we need to execute is:

# The -d switch detaches the docker-compose process from the current shell.
docker-compose up -d

That’s much easier to use, and it also lets us check the environment definition into source control.

Checking the composition worked

To check if the composition worked, we can use docker exec on the client container to view the client program’s log file.

docker exec <ContainerId> cat /ping.log

Note that you won’t be able to read the server log this way, as that container doesn’t have a shell. The above command outputs something similar to the following:

08:23:01.689331 Making connection to: ws://pongserver/pingpong
08:23:01.691291 Sending: ping.
08:23:06.691934 Received: pong.
08:23:11.693729 Sending: ping.
08:23:16.696384 Received: pong.
08:23:21.698114 Sending: ping.
08:23:26.699432 Received: pong.
08:23:31.701311 Sending: ping.
08:23:36.703755 Received: pong.
08:23:41.705455 Sending: ping.
08:23:46.708088 Received: pong.

That shows the connection string we expected and we can see that the client is sending and receiving messages over its WebSocket.

About the Author Kirk MacPhee

An experienced software developer and technical lead, specializing in automation technologies and their application.
