To improve any process, you need at least two things: an honest review of the current situation, and a willingness to change. So why do I get the feeling that we're happy with a status quo that is failing us when it comes to automation practice?
In theory, an automation suite should act as a tool to shorten development cycles, reduce manual test effort and provide confidence in every build, release after release. In practice, despite our best intentions, a lot of suites become just another expensive overhead, or an ongoing piece of development in their own right. So where are so many of us going wrong?
In putting together this post, I asked myself what I believe makes up the absolute fundamentals of a solid framework. Here's what I came up with:
- It must be quick
- It must be reliable
- It must provide effective feedback
If an automation suite fails to fulfil any one of these characteristics, I would argue that it’s not fit for purpose.
Tests driven through the user interface (UI) by tools such as Selenium and Appium struggle to fulfil all of these criteria at once, and yet, perhaps because they’re the most visible to anyone outside of the codebase, a lot of teams rely on extensive frameworks built around these tools. Another reason for this, in my opinion, is a hangover from a time when Development and QA were seen as two entirely separate entities; where developers developed to a given point and testers tested in larger chunks through the UI. This gap has been addressed through cross-functional teams and Agile ways of working, but I believe there’s further to go.
In this post, Melissa Marshall argues that unit tests should form the foundations of any reliable framework, adhering to Mike Cohn’s Test Automation Pyramid, and I support this argument. In my opinion, unit- and component-level test coverage is far too often overlooked when formulating a test suite, as it has traditionally been viewed as a Developer’s concern, and given little consideration by a Tester. I’m not suggesting that these types of tests can entirely replace end-to-end automated tests driven through the UI, but to go to the other extreme and ignore them while duplicating steps repeatedly in more time-consuming, fragile tests quickly becomes wasteful and expensive.
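To make the contrast concrete, here is a minimal sketch of what a component-level check at the base of the pyramid can look like. All of the names here (`validate_login` and its rules) are hypothetical stand-ins for whatever logic your application actually contains; the point is only that these assertions run in milliseconds, with no browser, driver or environment to set up:

```python
import unittest

def validate_login(username, password):
    """Hypothetical component under test: returns an error message, or None if valid."""
    if not username:
        return "Username is required"
    if len(password) < 8:
        return "Password must be at least 8 characters"
    return None

class LoginValidationTest(unittest.TestCase):
    """Component-level checks: fast, deterministic, no UI involved."""

    def test_missing_username(self):
        self.assertEqual(validate_login("", "secret123"), "Username is required")

    def test_short_password(self):
        self.assertEqual(validate_login("alice", "short"),
                         "Password must be at least 8 characters")

    def test_valid_credentials(self):
        self.assertIsNone(validate_login("alice", "secret123"))

# Run the tests without terminating the interpreter.
unittest.main(argv=["login_validation_tests"], exit=False)
```

A UI test asserting the same three rules would spend most of its runtime launching a browser and navigating to the form; here, that cost is paid once, in a small number of end-to-end tests, rather than in every validation case.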
When failure becomes the norm...
Very few tests will pass 100% of the time over the course of their lifetime, but it’s when a failure occurs that I think you can really gauge the health of an automation framework. Very rarely throughout my career have I seen a failing test treated as an abnormality, and that’s a worry. Instead, the first port of call always seems to be to blame the flakiness of the test, or anything other than a new bug in the codebase. This, to me, is the first symptom that you’re heading down a path of anchoring yourself to a cumbersome automation framework. If a manual tester, running the same test, reported different results depending on which way the wind was blowing, I wouldn’t envisage a long career for them, however quick they were at running that test. So why should we put up with this from our automation suites?
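Much of the flakiness that gets blamed in these situations comes from tests racing against the application, typically via fixed sleeps. As a sketch of the alternative (the helper and its names are mine, not from any particular library, though Selenium’s explicit waits follow the same idea), a generic poll-until helper waits only as long as it actually needs to, and fails loudly with a timeout rather than intermittently:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or raises TimeoutError. Unlike a fixed
    time.sleep(), this neither waits longer than necessary nor gives up
    before the deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)

# Example: wait for a (simulated) asynchronous state change.
state = {"ready": False}
state["ready"] = True  # in a real test, the application flips this
print(wait_until(lambda: state["ready"]))  # True
```

A test built on deterministic waits either passes or surfaces a real problem; there is no third outcome to shrug off as "just flaky".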
A streamlined, robust automation suite, comprising fewer, well-designed UI tests and more unit, component and integration tests, can also be a huge enabler in allowing a team to adopt continuous delivery practices.
There’s a lot to be said for continuous delivery; primarily that it takes the event of releasing to production and turns it into business as usual, and also puts products into customers’ hands at the earliest possible opportunity. An understated benefit, however, is that it can drastically reduce the impact of making mistakes.
Bugs are a fact of life in software development, despite everyone’s best intentions; it’s how we respond to them that gives us an insight into the health of a team’s release pipeline. If a bug is found just prior to releasing, is it a minor inconvenience, or is it going to push the release out by hours or even days? Your automation suite can determine the answer, depending on how quickly it runs and how reliable its results are. The cost of releasing a bug into production can be greatly reduced as well, as a patch can be shipped faster, with minimal impact to the customer. With the damage of mistakes lessened by an effective response plan, teams are less constrained by the fear of failure, and in such an environment can often achieve their best work.
In short, in the current environment, responsiveness and maximising efficiencies in release cycles are important success factors, both in planned software releases and in handling the unexpected. With the best intentions of preventing the unexpected by implementing exhaustive automation suites, I believe that we have lost sight of what the team is trying to achieve in the first place. Yes, we want to ensure a quality solution, but we also operate in an environment where speed to market matters, so more time needs to be dedicated to discussing test coverage, starting at the component level.
The sunk cost conundrum
It is human nature to feel an attachment to something that we have invested time, effort and money into, so it is no surprise that we are sometimes guilty of ploughing on regardless when an investment isn’t paying off. We should, in reality, enact countermeasures like scaling back the end-to-end tests and solidifying what remains.
So how can we ensure that we’re building and maintaining a useful automation suite? The key is to consistently assess its value over time and how it aligns to your business needs. If your company is risk averse, in a highly regulated industry, or sees no real benefit in continuous deployment, a larger suite of tests may be justified. Even in these situations though, adhering to the Test Automation Pyramid is vital in producing a sustainable suite. Conversely, if speed to market plays a large factor in your company’s success, and patching the occasional bug in production isn’t the end of the world, then tailor your solution to those drivers. In both cases, it is vital that we trust what is in place, otherwise the implementation becomes self-defeating, as genuine bugs are dismissed in the automation version of “The Boy Who Cried Wolf”. In my opinion, trust is best achieved with a manageable solution based on effective, multi-layered test design.
With anything in life, if you don’t trust something - stop using it.
This point is an important one, and is a scary proposition to anyone tied to the investment. Like with an old car though, there comes a time where the cost of repairs becomes more than what the car is worth, and selling it off for parts is the only option. An investment in automation should be no different. If there’s value in re-use, then by all means re-use components, but don’t fall into the trap of investing in something that is not going to be adding value in the future.
In summary, I believe that when creating, or maintaining, an automation suite, we should always be reflecting on what the overall development goal was, and how the suite is enabling us to meet that goal. This isn’t revolutionary thinking but, in my experience, a surprising number of companies continue to add lead time to their release cycles in the pursuit of automating through the UI to a greater degree than is necessary. Automating 10 login tests might sound like an achievement, but taking the time to implement a single, robust, well-designed login test that validates multiple conditions in one execution adds real value to a framework.
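As a sketch of that consolidation (the `login` function and its credentials are hypothetical stand-ins for the system under test), a single parameterised test can cover several login conditions in one execution, with each case still reported individually on failure:

```python
import unittest

def login(username, password):
    """Hypothetical system under test: returns a status string."""
    if not username or not password:
        return "missing-credentials"
    if (username, password) == ("alice", "secret123"):
        return "ok"
    return "invalid-credentials"

class LoginScenariosTest(unittest.TestCase):
    def test_login_scenarios(self):
        # One test method, many conditions: expensive set-up (in a UI
        # suite, launching the browser and reaching the form) happens
        # once, while subTest keeps each case's result distinct.
        cases = [
            ("alice", "secret123", "ok"),
            ("alice", "wrong-pass", "invalid-credentials"),
            ("", "secret123", "missing-credentials"),
            ("alice", "", "missing-credentials"),
        ]
        for username, password, expected in cases:
            with self.subTest(username=username, password=password):
                self.assertEqual(login(username, password), expected)

# Run the tests without terminating the interpreter.
unittest.main(argv=["login_scenario_tests"], exit=False)
```

The same shape applies to a UI-driven login test: one navigation, one well-designed flow, several assertions, rather than ten near-identical scripts each paying the full browser start-up cost.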
Develop your automation framework with the same principles as the application under test. Plan and refactor as appropriate, and don't be fooled into thinking that simplicity equates to poor coverage. Duplication is waste, whatever the purpose of the code. Sometimes, less is more, and with UI driven tests, this has never been truer.