Many of the tools commonly used in a Continuous Delivery pipeline weren’t originally developed for Windows. Although support and adoption are growing all the time, there can still be some pain points and gotchas in configuring some of them to work as you would expect on a Windows OS.
In this post, we’re going to combine two of the big hitters in this space
At the day job, one of my team’s current projects is a bespoke “serverless” script execution service for internal use, not unlike AWS Lambda or similar offerings. I’m not the main guy on this, but I’ve been involved in some interesting discussions about how we should control the execution environments. Ideally, they would be sandboxed and completely disposable, possibly only alive for the lifetime of the script they are executing. The obvious solution to this is to use containers.
The dominant scripting language amongst our user base is PowerShell, so we need to try
A match made in heaven
If you are a regular reader, you will know just how much I have fallen for Golang recently. If not, see Fun with WebSockets in Golang for why I think it’s such a great language for writing backend services.
As explained in that blog post, my motivation for learning Golang originated with my experimentation with Docker. Golang programs are (usually) statically compiled to machine code, not bytecode, so no runtime interpreter such as the JVM or the Python interpreter is required to run them. This means that you can fit those programs into the smallest Docker containers possible for maximum density and reduced attack surface. Pair that with Golang’s performance (which is comparable to C++) and you have a match made in heaven.
Today’s post isn’t exactly automation-related, but I’ve been having a lot of fun learning Golang over the last week or two and felt the need to share some of the things that I really like about the language and what I think its strengths are.
You won’t need me to tell you that Docker has been a dominating force in automated infrastructure for the last couple of years. For the uninitiated, a container is an isolated, lightweight execution context for an app/service (and its dependencies) that shares a kernel with other containers. Because an app can be delivered in its deployed state and run consistently by any Docker host, using containers greatly reduces the scope for environmental issues, e.g. incompatible versions of libraries on the host machine, interfering external processes, etc. Containers are the biggest advancement in application delivery since the birth of server virtualization, so they’re worth learning how to use.
Recently, I’ve been on a run of trying out different combinations of well-known automation tools to dynamically provision and tear down some dev test infrastructure that I need for a side project. I put the results together into a two-part tutorial: Provisioning Windows Server 2016 with Vagrant, Hyper-V and Chef (Part 1 – Part 2).
I chose Chef to do the configuration management piece because Chef is known for being quite forward in its support for Windows platforms. If you’re involved in automation I think it’s important to be comfortable with all mainstream server operating systems, so
Part 1 in this series can be found here.
Those readers who followed on from Part 1 in this series will know that I’m working on a side project that requires testing against a CIFS/SMB storage server and that the only hardware I have access to is my normal, everyday home workstation, lest I get banished to the sofa when it’s time to sleep.
A quick summary of my other goals:
I’m working on a side project at the moment that involves storage servers, specifically CIFS/SMB shares. As it’s a side project, I’m working on it at home and I don’t have access to the resources at my day job; all I have to work with is my home workstation and network. Considering those constraints, what I really need is: