Back in February, I was given the chance to deliver a presentation for the BCS, the chartered professional body for IT and computing in the UK, on the evolution of the software development lifecycle as we race into the Cloud era.

Well, I say that. I was originally approached to do a talk about test automation, but as I was thinking about what I might be able to add to that arena, it occurred to me that the testing phase of the classical SDLC gets far more coverage, in automation terms, than any other. Much of the modern thinking on how to deliver software efficiently automates far more of the process than just the testing. I began researching how the most progressive teams used automation to drive some of the lesser-covered phases, and a talk on how automation technologies are taking these over became much more compelling to me. Hopefully, the audience agreed!

The event ended up being called The Legacy and Futures of Automation in Software Development. I blame my colleague and co-host Ben for the grandiose title. I think because we’re based in Cambridge, and these events always attract a good number of academics, he felt like it had to sound like a thesis. I was pitching for a cheesy sci-fi title like Rise of the Machines, which funnily enough he stole to use as a subheading on one of his slides. Maybe I dared him to put it in? I can’t remember.

Anyway, this blog post is a summary of the SDLC-focused parts of that presentation. I hope you find it interesting.

The Classical SDLC

[Slide: The Classical SDLC]

I’m aware it’s not the 1980s and that absolutely nobody wants to read a walkthrough of the waterfall model; this diagram is just here to remind us where we came from. Here are a few brief points before we go back to the 21st century.

  • At most enterprise software vendors, an iteration of this cycle was measured in years. A company that released annually was probably considered quite fast, back in the day.
  • Each transition between phases was probably accompanied by a handoff to a completely different team. It used to be common for testers to sit in a separate part of the org chart from developers. Maybe developers were sometimes involved in the design stage, but Software Architects were much more common back then than they are now. Every handover between teams was an opportunity for conflicting priorities, miscommunications and late deliveries to creep in and slow the project down.
  • The customer took on the burden of operating the software, often needing skilled IT staff, at their own cost, to keep it running.
  • A particular customer might or might not take a new release. More likely than not, you wouldn’t get any feedback on what was delivered unless it was terrible or unusable. Most customers don’t really care about helping you be excellent, so long as you are tolerable.

Agile and DevOps

The Agile movement recognized some of the problems with the classical SDLC and the methodologies that rose to prominence make some effort to mitigate them. Scrum recognized that feedback in the classical model was infrequent, subjective and at the whim of the customer. To increase the frequency of potential feedback, it tries to condense that whole lifecycle into smaller increments that take a couple of weeks rather than months or years. It also introduced the Product Owner role to act as a sort of cache for customer feedback, representing their interests whilst also being completely committed to the success of the development team.

The DevOps principles try to persuade teams to tear down the organizational divides that slow down the delivery of projects. They have risen to prominence as vendors have started to take the operational burden of their products back from customers and deliver them as Cloud services. Doing so addresses a lot of the issues around getting valuable feedback from customers.

Have you ever been asked, when installing some new software, to participate in a customer experience improvement program? Have you ever opted in to it? No, me neither. I nearly always click No, even though I know how valuable the telemetry is to the developers of the product. For most users, the need to feel like you’re not being spied on nearly always trumps any charitable feelings you might have towards the developer. However, if you own the kit that your software is running on (or it’s “yours” in the Public Cloud sense) and the customer is consuming it as a service, it becomes much easier to get the kind of telemetry and objective feedback that can be used to keep your products moving in the right direction. You can get a lot of feedback without requiring your customer to go out of their way.
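To make that concrete, here is a minimal sketch of the kind of server-side usage telemetry you can gather once you operate the service yourself, with no opt-in dialog on the customer’s machine. The event fields, the logging destination and the record_feature_usage helper are illustrative assumptions, not any particular product’s API.

```python
# A minimal sketch of server-side usage telemetry, assuming you run the
# service yourself. The event fields and the logging destination are
# illustrative; a real system would feed a metrics/analytics pipeline.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("telemetry")
logging.basicConfig(level=logging.INFO)

def record_feature_usage(feature: str, tenant_id: str) -> None:
    """Emit a structured usage event from a request handler."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "feature": feature,
        "tenant": tenant_id,  # aggregate per customer, not per individual user
    }
    logger.info(json.dumps(event))

# Hypothetical call site, e.g. inside the handler for an export endpoint:
record_feature_usage("report-export", tenant_id="acme-corp")
```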

Automating the SDLC

[Slide: Automating the SDLC]

The DevOps principles also promote the closely related practices of Continuous Integration and Continuous Delivery, successful implementations of which require heavy use of automation technologies from the code commit onwards (the red box on my slide above). Being an automation nerd, I’m unsurprisingly a big fan of this.
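To illustrate what “automation from the code commit onwards” might look like, here is a minimal, hypothetical pipeline sketch. The stage names and the make targets are placeholder assumptions for whatever build, test and deployment automation a team actually uses; they are not tied to any particular CI product.

```python
# A minimal, hypothetical sketch of a pipeline triggered on every commit.
# The stages and the `make` targets are illustrative placeholders.
import subprocess
import sys

PIPELINE = [
    ("build",             ["make", "build"]),
    ("unit tests",        ["make", "unit-test"]),
    ("deploy to staging", ["make", "deploy-staging"]),
    ("system tests",      ["make", "system-test"]),
]

def run_pipeline(commit_sha: str) -> bool:
    """Run each stage in order, stopping at the first failure."""
    print(f"Pipeline for commit {commit_sha}")
    for name, command in PIPELINE:
        print(f"--> {name}")
        if subprocess.run(command).returncode != 0:
            # The failing stage tells the team roughly where in the
            # lifecycle the problem lies: fast, automatic feedback.
            print(f"FAILED at stage: {name}")
            return False
    print("All stages passed; the build is a candidate for canary release.")
    return True

if __name__ == "__main__":
    commit = sys.argv[1] if len(sys.argv) > 1 else "HEAD"
    sys.exit(0 if run_pipeline(commit) else 1)
```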

Some observations from the diagram:

  • The units of change flowing through the lifecycle are much smaller than in the classical SDLC, in line with what Continuous Integration practices recommend. Small units of change make your automated monitoring much more effective at establishing cause and effect.
  • I’ve merged Design, Fabrication and Test Development into one box, as these should really be serviced by one multi-disciplinary team covering development and operations. Minimal inter-team handovers are needed to deliver value to the customer.
  • You’ll have noticed Testing has become Test Development, Unit Tests and User Acceptance Tests (also known as System Tests) in this lifecycle. That’s mainly to highlight the prominence of automated testing in this lifecycle, not to say that these three boxes replace everything that used to belong under the umbrella of Testing.
  • Most stages of the lifecycle come with a feedback loop to the development team (represented by the dotted arrows). These range from the fastest loop, unit tests that can run in minutes, to slower ones like automated system tests that might take a couple of hours to run in a production-like environment.
  • Monitoring and telemetry are central to everything. Staging environments are monitored as stringently as production environments so that you can better understand the effects of the changes passing through the system.
  • New features don’t get big-bang releases; canaries are put in front of a control group and monitored carefully before they’re released to all users, ideally in a way that proves the original requirement or hypothesis has been fulfilled. A minimal sketch of this kind of canary gating follows this list.
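Here is that canary-gating sketch. It compares a single health metric (error rate) for the canary group against the control group before promoting the release; the metric, the threshold and the example figures are illustrative assumptions, and in practice they would come from the monitoring and telemetry described above.

```python
# A minimal sketch of canary gating: promote only if the canary group's
# error rate is not meaningfully worse than the control group's.
# The metric, threshold and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GroupMetrics:
    requests: int
    errors: int

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

def canary_is_healthy(canary: GroupMetrics, control: GroupMetrics,
                      max_relative_increase: float = 0.10) -> bool:
    """Allow the rollout only if the canary's error rate is within 10%
    (relative) of the control group's."""
    return canary.error_rate <= control.error_rate * (1 + max_relative_increase)

# In reality these numbers would be pulled from your monitoring system.
control = GroupMetrics(requests=100_000, errors=120)
canary = GroupMetrics(requests=5_000, errors=7)

if canary_is_healthy(canary, control):
    print("Promote the canary to all users.")
else:
    print("Roll back and feed the findings back to the development team.")
```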

So there you have it, my interpretation of a software development lifecycle for the Cloud (or software-as-a-service) era. Let me know what you think.

About the Author: Kirk MacPhee

An experienced software developer and technical lead, specializing in automation technologies and their application.
