Back in February, I was given the chance to deliver a presentation for the BCS, the chartering body for IT and computing in the UK, on the evolution of the software development lifecycle as we race into the Cloud era.
Well, I say that. I was originally approached to do a talk about test automation, but as I thought about what I could add to that arena, it occurred to me that the testing phase of the classical SDLC receives far more coverage, in automation terms, than any other phase. Much of the modern thinking on how to deliver software efficiently automates much more of the process than just the testing. I began researching how the most progressive teams use automation to drive some of the lesser-covered phases, and a talk on how automation technologies are taking these over became much more compelling to me. Hopefully, the audience agreed!
Recently, I’ve been on a run of trying out different combinations of well-known automation tools to dynamically provision and tear down some dev/test infrastructure that I need for a side project. I put the results together into a two-part tutorial called Provisioning Windows Server 2016 with Vagrant, Hyper-V and Chef (Part 1 – Part 2).
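To give a flavour of how these tools fit together, here is a minimal Vagrantfile sketch along the lines of what the tutorial builds up. The box name and cookbook are placeholders, not the exact ones from the tutorial:

```ruby
# Minimal sketch: a Hyper-V guest provisioned by Chef Solo.
# "example/windows_2016" and "my_cookbook" are hypothetical placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "example/windows_2016"

  # Use the Hyper-V provider and give the guest a reasonable footprint.
  config.vm.provider "hyperv" do |hv|
    hv.memory = 4096
    hv.cpus   = 2
  end

  # Hand configuration management over to Chef once the VM is up.
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "my_cookbook::default"
  end
end
```

A single `vagrant up` then creates the VM, and `vagrant destroy` tears it down again, which is exactly the dynamic provision/tear-down cycle described above.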
I chose Chef to do the configuration management piece because they’re known for being quite forward in their support for Windows platforms. If you’re involved in automation, I think it’s important to be comfortable with all mainstream server operating systems.
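As an illustration of that Windows support, a Chef recipe can drive PowerShell directly. The sketch below, with a hypothetical share name and path, shows the kind of resource involved in standing up an SMB share for testing:

```ruby
# Hypothetical recipe sketch: create an SMB share on a Windows guest.
# The share name and path are placeholders for illustration only.
powershell_script 'create_test_share' do
  code 'New-SmbShare -Name "TestShare" -Path "C:\\Shares\\Test"'
  # Guard clause keeps the resource idempotent: skip if the share exists.
  not_if '(Get-SmbShare -Name "TestShare" -ErrorAction SilentlyContinue) -ne $null'
end
```

The `not_if` guard is the idiomatic Chef way to keep a `powershell_script` resource from re-running on every converge.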
Like most software developers, my first taste of automation was in the field of testing. When I was barely out of my teenage years, my first internship involved developing a system based on STAF (an open source test automation framework) for a server/workstation production line. That was quite a long time ago, but I’ve been involved in both writing automated tests for software products and frameworks to orchestrate them in a variety of roles ever since. Here are the ten most important things I have learned in test automation since then.
Part 1 in this series can be found here.
Those readers who followed on from Part 1 in this series will know that I’m working on a side project that requires testing against a CIFS/SMB storage server and that the only hardware I have access to is my normal, everyday home workstation, lest I get banished to the sofa when it’s time to sleep.
A quick summary of my other goals:
I’m working on a side project at the moment that involves storage servers, specifically CIFS/SMB shares. As it’s a side project, I’m working on it at home without access to the resources at my day job; all I have to work with is my home workstation and network. Considering those constraints, what I really need is:
The outage of the Amazon S3 service in Northern Virginia on the 28th of February 2017 was a warning shot to all of us involved in the delivery of software services of any description. It made front-page news, highlighting the scrutiny that public cloud providers are under and just how much they are relied upon.
The two-sentence summary
Welcome to AnchorLoop! AnchorLoop is a new technology blog, focused on my interests in software development and the industry at large.
Who am I exactly?
I am Kirk MacPhee, a software developer based in the university city of Cambridge in the UK. I specialize in automation technologies and their application to the software development process.