Setting business goals, gathering requirements and providing specifications are fundamental activities of product development. However, these activities are only partially covered by traditional Agile methodologies, which tell us how to build software the right way but not necessarily how to build the right software. Beautiful working software that does not reach the business goals it was built for is still a failure. Iterative development, deferring commitment, regular reviews and quick feedback mitigate potential problems in that regard more than they tackle them. To address what can be seen as the second constraint of software development, a second generation of Agile practices is currently developing, putting even more emphasis on collaboration with the customer and the business, while keeping, if not reinforcing, the ability to adapt to change.

I recently went to an interesting workshop run by Gojko Adzic on ‘Specification by example’. Gojko Adzic has identified seven process patterns followed by companies that continuously deliver valuable software. These seven process patterns are the following:
- Deriving scope from goals
- Specifying collaboratively
- Illustrating specifications using examples
- Refining the specifications
- Automating validation without changing the specifications
- Validating the system frequently
- Evolving living documentation
In this blog I summarize each pattern and relate ‘specification by example’ to other practices to hopefully provide a clear presentation of what it is.
Deriving scope from goals
There are two main causes for the wrong software being built: either the requirements are misunderstood or poorly communicated, or the software has little business value because it is not the most effective way to achieve the expected outcome. Companies that follow the ‘deriving scope from goals’ pattern address this second cause. Instead of presenting the development team with solutions to implement, business representatives introduce the business goals and the tangible, measurable expected outcomes of a project. This way, all stakeholders can work out together elegant and valuable solutions within the software constraints. When the delivery team knows the business goals behind a project or a feature, it is obviously more likely to be able to help reach them. This pattern overlaps with the following one: ‘Specifying collaboratively’.
Effect mapping is a collaborative mind-mapping technique aimed at deriving high-level scope from business goals. The first step is to clearly communicate the business goals and a way to measure them: why are we starting this project, what are we trying to achieve, and how will we know we have achieved it? This last question is almost as important as the business goals themselves. If the business goal is to increase the number of customers, it is probably not the same product, feature or action that will increase it by 5% or by 50%. The second stage is to identify all the stakeholders: who can help us achieve these goals? The third stage is to identify how each stakeholder can help the project reach its goals; these are the business activities. The last stage is to ask how the delivery team can support or help the stakeholders with each of these business activities. Typically this fourth level consists of epics that can be broken down further into minimal marketable features or user stories.
The effect map brings several benefits:
- Prioritization can be done at the third level: what are the most important business activities to support?
- It prevents the user stories from going in multiple directions and growing in number to the point where they become unmanageable: relating every feature to a business goal prevents scope creep.
- The effect map can be used as a road map.
- Relating all stakeholders, business activities and features to a business goal makes all the underlying assumptions visible.
- This last point is particularly useful for delivery teams who are presented directly with solutions to develop and wish to challenge them, as it is often possible to test those assumptions.
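As a sketch, the four levels described above can be captured as nested data. Everything in this example (the goal, the stakeholders, their activities and the candidate features) is made up for illustration:

```python
# A hypothetical effect map sketched as nested data.
# Level 1: the goal and its measure; level 2: stakeholders ("who");
# level 3: business activities ("how"); level 4: candidate features ("what").
effect_map = {
    "goal": "Increase active customers by 10% in six months",
    "stakeholders": {
        "Existing customers": {
            "recommend us to friends": [
                "referral links",
                "share purchase on social media",
            ],
        },
        "Support team": {
            "resolve issues faster": [
                "searchable FAQ",
                "in-app chat",
            ],
        },
    },
}

# Every feature can be traced back through an activity and a stakeholder
# to the goal, which is what keeps scope creep in check.
for who, activities in effect_map["stakeholders"].items():
    for how, features in activities.items():
        for what in features:
            print(f"{what} -> {who} / {how} -> {effect_map['goal']}")
```

Walking the map from any leaf back to the root makes each assumption explicit and therefore testable.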
Specifying collaboratively
Leveraging the wisdom of crowds in heterogeneous groups is only one benefit of ‘Specifying collaboratively’. It also ensures the team shares a common understanding of the problems and solutions: the further away from the delivery team the requirements are specified, the more transcription steps there are likely to be, and the more room there is for misinterpretation. Finally, it paves the way for the development of a common language across all stakeholders, which is necessary to illustrate specifications using examples.
Diverge and merge: to specify collaboratively, split the attendees into heterogeneous groups of four or five before merging all the ideas. This prevents meetings from being hijacked by a dominant attendee, and other attendees may feel more comfortable expressing themselves in a smaller group.
Illustrating specifications using examples
We use examples every day in our lives to clarify abstract matters. By definition, examples are concrete. They leave little room for interpretation and are easily testable if well chosen. This makes them a great means to communicate expected behaviours. ‘Illustrating specifications using examples’ allows exploring edge cases, identifying functional gaps and, crucially, providing a clear definition of ‘done’. All examples are welcome, the aim being to have the feature fully covered and understood. A good example is self-explanatory, focuses on one functionality only, is expressed in the domain language, is measurable, and is not a script (a sequence of activities). These examples next need to be refined.
Refining the specifications
‘Refining the specifications’ is done both at the level of individual examples and at the level of the set of examples. Each example must be self-explanatory and testable. The set of examples should describe the feature unambiguously but does not necessarily need to cover all cases. Our key examples are in fact what is more commonly called acceptance criteria, acceptance tests or functional tests. Too many examples would compromise their use as living documentation and their automation.
The most popular format to express acceptance criteria is Gherkin (Given, When, Then). Originally created for Cucumber, it is now used by other tools. The syntax is:
- Given some precondition
- And some other precondition
- When some action by the actor
- And some other action
- And yet another action
- Then some testable outcome is achieved
- And something else we can check happens too
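For instance, a key example for a hypothetical cash-withdrawal feature (all the details here are invented for illustration) might read:

```gherkin
Feature: Cash withdrawal

  Scenario: Successful withdrawal within balance
    Given the account balance is 100
    And the card is valid
    When the account holder requests 30
    Then the ATM dispenses 30
    And the account balance is 70
```

Note how the scenario is concrete, expressed in the domain language, and directly testable.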
Automating validation without changing the specifications
The set of key examples needs to be automated as tests so they can be run as often as needed and provide feedback as quickly as possible; running these tests manually would be neither realistic nor effective. However, the examples must not be modified when they are automated, as we do not want to lose important information or introduce wrong information during the transcription. It is therefore not possible to use traditional test automation tools such as the xUnit family. To ‘automate validation without changing the specifications’, other tools exist that allow describing behaviours in plain English; Cucumber, Concordion and FitNesse are popular ones. The choice of tool depends mainly on the programming language used in the project and on the domain of the project.
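To illustrate the principle rather than any particular tool, here is a minimal Python sketch in which plain-language steps are bound to code through patterns, so the specification text itself is never rewritten. The scenario and step wording are invented for this example:

```python
import re

# The specification stays in plain language and is never modified.
SCENARIO = """\
Given the account balance is 100
When the user withdraws 30
Then the account balance is 70
"""

# Step definitions map sentence patterns to code.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Given the account balance is (\d+)")
def set_balance(ctx, amount):
    ctx["balance"] = int(amount)

@step(r"When the user withdraws (\d+)")
def withdraw(ctx, amount):
    ctx["balance"] -= int(amount)

@step(r"Then the account balance is (\d+)")
def check_balance(ctx, amount):
    assert ctx["balance"] == int(amount), ctx["balance"]

def run(scenario):
    """Execute each line of the scenario against the registered steps."""
    ctx = {}
    for line in scenario.strip().splitlines():
        for pattern, fn in STEPS:
            match = pattern.match(line.strip())
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError(f"No step matches: {line!r}")
    return ctx

run(SCENARIO)  # a failing Then raises AssertionError
```

Real tools such as Cucumber work on the same idea at scale: the business-readable specification is the test, and only the bindings underneath it are code.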
Validating the system frequently
Once the validations are automated, the system must be validated as often as possible. By ‘validating the system frequently’, developers and testers get quick feedback, and defects are identified early, when they are cheapest to fix. This is reminiscent of the Toyota/Lean principle of zero quality control: quality is an inherent part of the whole process, built in rather than a separate stage.
Evolving living documentation
Unlike tangible products, software keeps evolving and changing even after its initial release, which makes it rather difficult to document. Typically, for software, and especially enterprise software, the only source of truth is the code. This results in a significant bottleneck in knowledge acquisition and difficulties in implementing changes. By ‘evolving living documentation’, teams practicing specification by example have a source of truth that evolves with the code and is readable and understandable by all team members.
Evolving living documentation is one of the most difficult aspects of specification by example but also one of its most valuable outcomes. Matt Wynne, author of the Cucumber book, is currently developing Relish, which helps teams publish, browse, search and organize their Cucumber features on the web.
Whilst aimed at developing the right software and therefore reducing rework, the seven patterns of specification by example bring valuable side benefits: living documentation that facilitates changes and knowledge transfer, a clear definition of ‘done’, and a set of tests that can validate the system at any time. There are also technical benefits that I will briefly highlight below, but a little bit of semantics first will help.
What are the differences between Specification by example, Behaviour-Driven development (BDD) and Acceptance Testing-Driven Development (ATDD)?
Behaviour-Driven Development is the most widely used of these terms. It was coined by Dan North around 2004. While practicing Test-Driven Development, he started naming his tests with sentences describing the next behaviour he was interested in. He found this so beneficial, notably in helping define what to test, that he started using ‘Behaviour-Driven Development’ instead of ‘Test-Driven Development’ while coaching on the subject. This narrow definition of BDD, as an improvement of TDD that uses meaningful sentences to name tests and objects and tests at a higher level, is still in use. However, BDD has gained a broader meaning for most practitioners, including Dan North: “Over time, BDD has grown to encompass the wider picture of agile analysis and automated acceptance testing”. With this second definition, BDD is almost identical to Specification by example, or can be seen as its technical perspective.
The RSpec book, co-authored by Dan North, provides an even broader definition: a combination of three practices, Acceptance Test-Driven Planning (ATDP), Test-Driven Development (TDD) and Domain-Driven Design (DDD). Acceptance Test-Driven Planning is an extension of Acceptance Test-Driven Development (ATDD). ATDD advocates defining acceptance tests collaboratively before starting to code. ATDP adds that this must be done during iteration planning, so the acceptance tests are taken into consideration while estimating the user stories. Domain-Driven Design is a set of practices and techniques created and/or assembled by Eric Evans, aiming at keeping the code, the business logic and the business reality in line.
This third definition does not present BDD as an improvement on TDD or a replacement for it. However, it increases the scope of BDD considerably by including DDD. Domain-Driven Design has a lot in common with the practices described in this blog, especially in terms of collaboration between stakeholders and the development of a domain language, but it relies heavily on modelling, which is not the case for Specification by Example. On the basis of this definition, BDD encompasses Specification by example.
The different definitions of BDD make the comparison with Specification by example difficult, and these semantic incongruities are certainly not a major concern for practitioners in their daily activities anyway, despite the irony of this domain not having a ubiquitous language. Nonetheless, there are differences, due to their different initial purposes, that can be significant. Gojko Adzic observed patterns in companies delivering the right products; Dan North wanted to improve TDD. As a result, BDD has a more technical perspective, and Specification by example is not directly linked to TDD the way BDD is. For Specification by example, TDD is a very valuable XP practice addressing another issue: quality. In fact, Gojko Adzic advises in his book to start with TDD before implementing Specification by example, not to replace it. This can be a fundamental difference between BDD and Specification by example, as explained below.
and Test-Driven Development (TDD)?
In its broad sense, TDD means writing a test first, watching it fail, coding to pass the test and refactoring. However, TDD is mostly used with a narrower meaning, as in the definitions above, where the tests are unit tests, i.e. they only validate an individual method or class. Therefore, under what could appear to be trivial semantic concerns lies the fundamental question of the level of testing and validation: unit or behaviour? In ‘Growing Object-Oriented Software, Guided by Tests’, Steve Freeman and Nat Pryce explain that both acceptance tests and unit tests are important. Acceptance tests tell us about the external quality of a system (how it meets user expectations) while unit tests tell us about its internal quality (readability, how easy it is to change…).
They provide a figure describing the inner and outer feedback loops in TDD (in its broad meaning). For instance, in the case of an acceptance test described using the Gherkin syntax, there should be a unit test at each step: given, when and then. By adding acceptance tests on top of unit tests to drive development, teams focus on features instead of objects and design from the user's perspective, without compromising quality or the ability to refactor safely at a granular level.
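To make the two loops concrete, here is a small Python sketch; the shopping-basket feature and the test names are invented for illustration. The unit tests exercise a single class in isolation (the inner loop), while the acceptance-style test reads as Given/When/Then and checks the behaviour the user cares about (the outer loop):

```python
# A hypothetical shopping-basket feature, used to contrast the two loops.
class Basket:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Inner loop: unit tests validate one object in isolation (internal quality).
def test_total_of_empty_basket_is_zero():
    assert Basket().total() == 0

def test_total_sums_item_prices():
    basket = Basket()
    basket.add("book", 12)
    basket.add("pen", 3)
    assert basket.total() == 15

# Outer loop: an acceptance test follows a Given/When/Then shape and
# validates the behaviour from the user's perspective (external quality).
def test_customer_sees_basket_total():
    basket = Basket()            # Given an empty basket
    basket.add("book", 12)       # When the customer adds a book at 12
    assert basket.total() == 12  # Then the displayed total is 12

test_total_of_empty_basket_is_zero()
test_total_sums_item_prices()
test_customer_sees_basket_total()
```

Both kinds of test fail first, then drive the code that makes them pass; they differ only in the level at which they give feedback.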
Specification by example and Testing
It is hopefully clear by now that specification by example is not a ‘tester’ activity, at least no more than it is a developer or business activity. Nonetheless, Specification by example validates the developed feature, and the set of examples can also be used for regression testing. In ‘Agile Testing: A Practical Guide for Testers and Agile Teams’, Lisa Crispin and Janet Gregory classify the different kinds of testing or validation depending on whether they are technology facing or business facing, whether they support the team or critique the product, and whether they are automated or manual. Unit tests are technology facing, support the team and are fully automated. The acceptance criteria that ensue from Specification by example are business facing, support the team and are fully automated. However, once the development of the corresponding features is completed, the set of automated acceptance criteria can become, in conjunction with the unit tests, a set of regression tests that is technology facing and critiques the product.
According to Gojko Adzic, the number of different names for what are very similar practices reflects the amount of research and work currently put into this field. Paradoxically, it also shows how difficult it is to develop a common ubiquitous domain language. ‘Specification by example’ has the advantage, because of its focus on developing the right product, of not conflicting with other Agile practices. In particular, it does not position itself as a replacement for ‘(unit) Test-Driven Development’ or traditional testing activities.