Like most software developers, my first taste of automation was in the field of testing. When I was barely out of my teenage years, my first internship involved developing a system based on STAF (an open-source test automation framework) for a server/workstation production line. That was quite a long time ago, but in a variety of roles ever since, I have been involved both in writing automated tests for software products and in building frameworks to orchestrate them. Here are the ten most important things I have learned about test automation since then.

1. Unit testing and system testing prove different things to different stakeholders.

Many software professionals, including many I would consider quite experienced, don’t understand the different tiers of test automation: how to combine them, when each is useful, and to whom. The most important difference to understand is between unit testing and system testing.

Unit testing proves that a unit of code does what the development team expects it to do. That unit can be an algorithm, a class, a function, etc. The code-under-test may not provide much user value by itself because it’s very fine-grained. Unit testing may use mocks to substitute real dependencies on other subsystems because the goal is to prove that an individual team’s code works free of all other interfering factors. The value is primarily to the development team.
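As a minimal sketch of what that looks like in practice (the OrderService and payment-gateway names here are hypothetical, invented purely for illustration), a unit test might mock out a dependency like this:

```python
# A sketch of a unit test that mocks a dependency.
# OrderService and the payment gateway are hypothetical names.
from unittest.mock import Mock

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # The unit under test: our team's logic, not the gateway's.
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

def test_place_order_charges_gateway():
    gateway = Mock()                     # stands in for the real payment system
    gateway.charge.return_value = "ok"
    service = OrderService(gateway)

    assert service.place_order(50) == "ok"
    gateway.charge.assert_called_once_with(50)  # we called the dependency correctly
```

The mock keeps the test focused: if it fails, the problem is in our code, not in the payment system, the network, or anyone else’s team.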

System (or functional) testing proves that the system (or a component of it) delivers the value that the customer expects. These are end-to-end automated tests: they run in as realistic a customer environment, and with as realistic data and input, as practical. The value is primarily to the Product Owner and the customer.
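To make the contrast concrete, here is a minimal sketch of a system test against a hypothetical order service; the staging URL, endpoint, and expected payload are placeholder assumptions, not a real API:

```python
# A sketch of a system-level test: it exercises a deployed service end to end.
# The URL, endpoint, and expected fields are placeholders for your own system.
import requests

BASE_URL = "https://staging.example.com"  # a realistic, customer-like environment

def test_customer_can_retrieve_their_order():
    # Real HTTP, real deployment, realistic data: no mocks anywhere.
    response = requests.get(f"{BASE_URL}/orders/12345", timeout=10)
    assert response.status_code == 200
    assert response.json()["status"] == "shipped"
```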

Don’t be confused by system tests that look like unit tests, i.e., tests that run in a real environment with real input but are written in the style of a unit test suite. The difference between the two isn’t coding style; it’s scope and target audience.

2. You almost certainly need both unit and system testing.

In the past, I have seen a lot of friction between developers biased towards unit testing and more customer-facing stakeholders biased towards system testing. These arguments usually stem from a misunderstanding of rule 1. There may be some code coverage overlap between your unit and system tests, but neither is interchangeable with the other.

Are you trying to prove that your code works or that user value is delivered? They’re not the same thing, so choose your weapon accordingly.

3. No, really. You do need both.

Unless your team is exclusively responsible for a UI that contains no units of functionality worth testing in isolation, or your user deliverables map one-to-one onto units of your code base, you do need both.

I’ve seen so many teams over-commit to one or the other that it’s worth two rules.

4. You can’t completely automate quality.

Some people see test automation as a way of completely removing the overhead of manual testing from producing software. That isn’t true at all. It can remove a lot of overhead you don’t need, like everybody wasting an hour a day installing the daily build, but you still need engineers to go hands-on with the product as part of your sprints. Automation can prove that something works within specified parameters, but there’s more to quality than that.

5. Automation-assisted manual testing is a good place to start.

If you have no automation whatsoever and rely entirely on manual testing, consider what automation you can invest in to accelerate your existing process. This is a much less intimidating goal than something radically different, and the benefits are easier to measure on known territory. The team probably already knows where its bottlenecks are and has an itch to scratch. You might want to automate a self-service deployable test environment, or a simple build verifier like the one sketched below.
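As a hedged sketch, a build verifier can be as small as a script that installs the daily build and runs a smoke check before anyone spends manual time on it; the installer arguments and the myproduct command below are placeholders, not a real CLI:

```python
# A sketch of a simple build verifier: install the latest build, then run a
# minimal smoke check. Paths, flags, and commands are placeholders for
# whatever your own pipeline uses.
import subprocess
import sys

def verify_build(installer: str) -> bool:
    # Install silently; a non-zero exit code means the build is dead on arrival.
    install = subprocess.run([installer, "/quiet"], timeout=600)
    if install.returncode != 0:
        return False
    # Smoke check: does the product at least start and report a version?
    smoke = subprocess.run(["myproduct", "--version"],
                           capture_output=True, timeout=60)
    return smoke.returncode == 0

if __name__ == "__main__":
    ok = verify_build(sys.argv[1])
    print("build OK" if ok else "build BROKEN")
    sys.exit(0 if ok else 1)
```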

Once the team feels automation-enhanced, you will be in a better position to take the next step.

6. Write the tests first.

The real cost of software is not developing it the first time around; it’s maintaining it and changing it to meet evolving use cases over time. You can’t change non-trivial software safely (never mind quickly) without a comprehensive suite of tests asserting the code’s intended behavior (unit tests) and the customer value it delivers (system tests). For that reason, you can’t afford to ship a feature without automated testing. If you don’t deliver on automation, the work isn’t really done, and you’ll end up having to fix the roof while it’s raining!

When test development is back-loaded towards the end of a sprint, the team can be tempted to drop it and ship the other deliverables anyway, sometimes under management or commercial pressure. Test-driven development (TDD) promotes the idea of writing your tests before you write any feature code; with the tests acting as a form of specification, you then write the code that makes them pass. Following this development loop means tests never get cut due to time constraints, and your future self will thank you for it.
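A minimal sketch of that loop, using a hypothetical slugify function as the feature being developed:

```python
# Step 1: write the test first. It fails initially, acting as a specification
# for a slugify feature that does not exist yet.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough code to make the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# Step 3: run the tests again, then refactor freely with the safety net in place.
```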

7. Value metrics, but value transparency more.

Test automation is highly quantifiable. The results it produces satisfy every manager’s love of data, dashboards, and colored lights. It’s understandable that managers sometimes get attached to the numbers and start incentivizing you to keep the lights green at all costs.

The problem with obsessing over pass/fail metrics is that you forget that you actually wrote all that automation to fail, not to pass. The tests are there to catch your mistakes as soon as possible after you make them and keep the distance between cause and effect short. Fundamentally, writing automated tests is a pessimistic activity, but also an honest and pragmatic one. It’s an acceptance that you and your team are human, will make mistakes and that you need a safety net.

Make sure that your attitude to metrics and dashboards matches that spirit, and don’t be afraid of failing tests. Don’t be tempted to cover up genuine product issues just because they add red lights to your dashboard; when a test fails, it’s adding value. A failing test that catches a product bug should be celebrated: that’s a bug you won’t have to spend an order of magnitude more time fixing later, after you’ve forgotten what you did. Buy that test a beer and make sure it doesn’t stay red for long.

8. Identify your scaling factor and design for it.

Every software system has some factor, often but not always environmental, that it needs to scale over. Web apps have to scale over multiple browsers, configurations, and even hardware profiles (PC, tablet, phone, etc.). Successful test automation for web apps has to be able to scale over many, if not all, of those variants with a single test.

If automation can’t get us an order of magnitude more coverage than a manual test, in less time, then why are we even doing it? Automating a test case is a cost, so we need to maximize the return on that investment by making sure we can scale over the variants we need to cover. That can mean choosing the right tools to automate with or, where necessary, designing and implementing the right abstractions in your own framework or plugins.
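As one sketch of designing for a scaling factor, a single test can be parameterized over browsers, for example with pytest and Selenium; the staging URL is a placeholder, and the driver table would grow to cover whatever variants matter to you:

```python
# A sketch of one test scaling over browser variants via parameterization.
import pytest
from selenium import webdriver

# A single factory table abstracts the per-browser setup; extend it (or point
# it at a remote grid) to widen coverage without rewriting the test.
DRIVERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.mark.parametrize("browser", sorted(DRIVERS))
def test_login_page_loads(browser):
    driver = DRIVERS[browser]()          # one test, many variants
    try:
        driver.get("https://staging.example.com/login")  # placeholder URL
        assert "Login" in driver.title
    finally:
        driver.quit()
```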

9. Different products require different approaches.

Different products have completely different needs and the automation strategy that worked well for you on one project could fail miserably in the next.

Does your project live quite high up in the software stack, with hard dependencies that are unlikely to change? It might make sense to invest more in automated system testing. If you’re dealing with primitives from a variety of sources and you have lots of algorithms, then unit testing is going to be very valuable to you. Your approach should reflect the nature of what you’re working on.

10. Who you work with is as important as what you work with.

If your team is not responsible for delivering the full product in which your project operates, you should definitely have some tests that validate your code without interference from those other components.

Automation enables us to operate at speed, but there’s nothing fast about a bug report getting passed between teams like a hot potato, sitting in each developer’s bucket for a couple of days while everybody tries to blame everybody else. If you have unit tests validating that your code works without any of the dependencies you don’t control, then you have evidence that the root cause of the bug is likely to be somewhere else. If every team did this, bugs would find their way into the correct bucket much more quickly, which makes for a better-performing organization overall.

In other words, your automated testing approach should reflect the “organizational structure” involved and defend you at those time-expensive touchpoints between different teams.

About the Author: Kirk MacPhee

An experienced software developer and technical lead, specializing in automation technologies and their application.
