Agile testing is such a huge topic that we can probably only scratch the surface in this post. What I would like to do is talk about some really basic concepts and practices, without which we would not even be able to pick the low-hanging fruit from the tree of Agile Product Development. So, let’s start the journey!
As you know, in Agile/Scrum, value is delivered to customers through the completion of Product Backlog Items (PBIs). A PBI could be:
- a user story, which brings direct and immediate business value to customers,
- or a description of one or more bugs, whose resolution will help customers gain business value,
- or a technical story (e.g. refactoring some source code to improve performance, installing a new Continuous Integration server, carrying out some architectural work to sustain new business features, etc.), which brings business value to customers indirectly,
- or a spike (technical or analysis feasibility; take a look at one of my previous posts here), whose value lies in gaining the knowledge the team needs to be able to satisfy a user need,
- or any documentation needed (user manuals, technical diagrams, etc.),
- or any other kind of work or activity which is worth doing for the product!
Any PBI must pass through the Plan, Design, Build and Test phases before being presented to the Product Owner for acceptance.
Well, one of the most challenging phases is actually Testing.
According to the last letter (T > Testable) of the INVEST acronym, a story must be testable, and in order to achieve that we must pass through some specific steps.
First of all, the Product Owner provides clear and sufficient information on what the system should do to deliver specific value, describing it through a user story like this example:
As a user of the home banking portal, I want to print the receipt of a money transfer operation I submitted, so that I can review its correctness and store it in my private area on the portal.
Then, the whole Scrum team reasons about how the story will be tested and demoed, specifying relevant acceptance criteria (see my previous post here). This is usually done in the first part of the sprint planning meeting.
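To give a feel for where those acceptance criteria end up, here is a minimal sketch of one criterion for the receipt story above expressed as an executable test. The `HomeBankingPortal` class and its methods are invented stand-ins for illustration, not a real API:

```python
# Hypothetical, in-memory stand-in for the portal under test.
class HomeBankingPortal:
    def __init__(self):
        self.private_area = []

    def submit_transfer(self, amount, beneficiary):
        return {"amount": amount, "beneficiary": beneficiary, "status": "SUBMITTED"}

    def print_receipt(self, transfer):
        receipt = f"Receipt: {transfer['amount']} EUR to {transfer['beneficiary']}"
        self.private_area.append(receipt)
        return receipt


def test_user_can_print_and_store_transfer_receipt():
    # Given a submitted money transfer
    portal = HomeBankingPortal()
    transfer = portal.submit_transfer(100, "ACME Corp")
    # When the user prints the receipt
    receipt = portal.print_receipt(transfer)
    # Then the receipt shows the operation details...
    assert "100" in receipt and "ACME Corp" in receipt
    # ...and it is stored in the user's private area
    assert receipt in portal.private_area


test_user_can_print_and_store_transfer_receipt()
```

The Given/When/Then comments mirror how the team would phrase the criterion during sprint planning, before any code exists.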
Moving forward, it’s time for the development team, during the second part of the sprint planning meeting, to reason about the “how”: a first high-level design of each committed PBI is produced, ending with its decomposition into technical tasks.
Oh, c’mon Emi… so far no news at all!!! Ok, ok, let’s push a bit on the accelerator then.
Testing Tasks on your Kanban Board
Regarding the testing activities, the team should have created, for each user story, tasks similar to these: Test Scenario & Cases Design, Test Scenario & Cases Development, Test Cases Execution, Test Cases Automation, Final Validation.
Test Scenario & Cases Design
Firstly, let’s clear up any misunderstanding about Test Scenario vs Test Case: a Test Scenario is WHAT behavior we want to test; a Test Case specifies HOW we are gonna test it (input data, expected output, variables, steps, etc.).
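To make the distinction concrete, here is a minimal sketch of one scenario with two cases for the receipt story; all IDs, steps and values are invented for illustration:

```python
# A Test Scenario captures WHAT to test...
test_scenario = {
    "id": "TS-01",
    "what": "User prints the receipt of a submitted money transfer",
}

# ...and each Test Case captures HOW: input data, steps, expected output.
test_cases = [
    {
        "id": "TC-01",
        "scenario": "TS-01",
        "path": "happy path",
        "input": {"amount": 100, "beneficiary": "ACME Corp"},
        "steps": ["log in", "submit transfer", "print receipt"],
        "expected": "receipt printed and stored in the private area",
    },
    {
        "id": "TC-02",
        "scenario": "TS-01",
        "path": "alternate path: transfer not yet executed",
        "input": {"amount": 100, "beneficiary": "ACME Corp"},
        "steps": ["log in", "submit transfer", "print receipt before execution"],
        "expected": "user is warned that the receipt is not yet available",
    },
]
```

One scenario typically fans out into several cases, one per relevant path discovered during the design discussion.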
Well, this first design phase is key to the success of the whole story.
The team member(s) who will develop the story (let’s call her the developer) and the team member(s) who will test it (let’s call him the tester) meet and discuss how the story will be implemented (UI, any business rules, happy and alternate paths, data, performance, etc.).
This helps the tester start clarifying all the possible (known) business workflows a user could walk through and, on top of them, perform a first risk analysis to discover which paths are more error-prone, cross-referencing them with any business-critical points (e.g. money transfer submission, credit card data confirmation, etc.).
The result of all this is the identification of the test scenarios (and the most critical ones) and their related cases, ensuring end-to-end functional testing. In this phase it is of great help if the end user can also join the discussion, which greatly increases the effectiveness of the final results.
Test Scenario & Cases Development
At this point the developer and the tester go back to their workstations. The former starts developing the application and the latter starts writing the test cases, according to the design mentioned above.
This testing activity can be significantly time-consuming, because preconditions, data, conditions, expected outcomes and the like are finally transformed into the detailed steps of the test cases. In case of any doubt, the tester and the developer can meet to clarify and finally review them together.
Two important aspects must be taken into account.
The first regards the fact that product functionalities and features are built incrementally, and the same actually goes for test artifacts: they grow incrementally, therefore maintainability is paramount and must be designed for accordingly.
The second is traceability.
Traceability between features, PBIs, test scenarios and test cases must be assured, again according to the incremental approach mentioned above. Additionally, whenever a new bug pops up, it shall be addressed and the related test case must be clearly identified and fixed.
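As a sketch of one lightweight way to keep that traceability (the decorator name and the PBI/scenario IDs are invented, not from any real tool), each test can be tagged with the backlog item and scenario it covers, so a failing test immediately points back to its PBI:

```python
# Hypothetical traceability decorator: tags a test function with the
# PBI and scenario it covers.
def traces_to(pbi_id, scenario_id):
    def decorator(test_fn):
        test_fn.pbi = pbi_id
        test_fn.scenario = scenario_id
        return test_fn
    return decorator


@traces_to("PBI-42", "TS-01")
def test_receipt_contains_transfer_amount():
    receipt = "Receipt: 100 EUR to ACME Corp"  # stand-in for the real call
    assert "100" in receipt


# A simple report can then group results by backlog item:
# test_receipt_contains_transfer_amount.pbi  -> "PBI-42"
```

Test frameworks such as pytest offer custom markers for the same purpose; the principle is identical: the link from test back to PBI lives in the code, so it grows incrementally with the product.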
Test Cases Execution
When the developer finishes her development tasks and moves each and every related task to the Done column of the Kanban board, and the tester has already finished creating the test cases, it’s time to execute those cases against the new functionality.
Each step of every test case belonging to that scenario is then executed, following the workflows previously written. Any bug must be reported to the developer, who should fix it on the fly (if possible); otherwise it will be tracked in the defect management system.
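That execution loop can be sketched in a few lines; all names and the case data below are invented for illustration: run each case’s steps, and record any failure as a defect to be triaged:

```python
# Invented sketch: execute test cases step by step, collecting failures.
def run_case(case):
    """Returns None on success, or an error message on failure."""
    try:
        for step in case["steps"]:
            step()  # each step is a callable check
        return None
    except AssertionError as exc:
        return str(exc)


def execute(cases):
    defects = []
    for case in cases:
        error = run_case(case)
        if error is not None:
            # In real life this would be filed in the defect management system.
            defects.append({"case": case["id"], "error": error})
    return defects


def failing_step():
    assert False, "receipt missing"


# Example: one passing and one failing case.
cases = [
    {"id": "TC-01", "steps": [lambda: None]},
    {"id": "TC-02", "steps": [failing_step]},
]
defects = execute(cases)  # -> [{"case": "TC-02", "error": "receipt missing"}]
```

The point is not the code itself but the discipline: every failure is either fixed on the fly or turned into a tracked defect, never silently dropped.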
Test Cases Automation
Well, when a story is finally successfully tested, it’s time to automate its tests. Why?!
Because of the incremental product development approach, of course! Starting from the first iteration the team develops some stories, then in the following sprints they build other stories on top of the previous ones, and so forth incrementally (like building with LEGO blocks).
If the team does not automate tests, regression testing becomes more and more complicated and, moreover, far too time-consuming: as time passes, the number of developed stories increases and manually testing everything becomes impossible. This has two different potential side effects:
- In order to deliver the promised functionality, the team decreases the attention and effort devoted to testing (poor quality).
- The team remains fully disciplined and committed to quality/testing, delivering fewer functionalities (poor performance).
Now, both must be avoided. It is very important for team members to find a good compromise on how much effort they should spend on test automation. This can be done only by distinguishing between the different types of test automation and the relative effort it is reasonable to spend on each (the design phase above is key here).
Some examples of test automation practices are Unit Testing, API/Service or Behind-the-GUI automation testing, and UI automation testing. Each has its own specifics that deserve more space and time to be covered. I’ll leave this for future posts.
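Just to give a flavor of the first one, here is a minimal unit test sketch; the function under test is invented for illustration (such tests would typically run under a runner like pytest):

```python
# Invented function under test: formats an amount in cents for the receipt.
def format_amount(cents):
    if cents < 0:
        raise ValueError("amount cannot be negative")
    return f"{cents // 100}.{cents % 100:02d} EUR"


# Unit tests: small, fast checks of a single unit of behavior,
# cheap enough to re-run on every change as a regression safety net.
def test_whole_euros():
    assert format_amount(10000) == "100.00 EUR"


def test_cents_are_zero_padded():
    assert format_amount(105) == "1.05 EUR"


def test_negative_amount_is_rejected():
    try:
        format_amount(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Unit tests sit at the cheap end of the automation spectrum, which is exactly why it is reasonable to have many of them and fewer, more expensive UI tests.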
Final Validation
The last task a tester should execute before moving the entire story to the so-called “TO ACCEPT” column for the Product Owner is the final validation, where some exploratory or edge testing activities or, if requested, some performance or load testing happen.
Now, please, turn around. Look at your kanban board. How many testing tasks do you have on it? What?
Just a single, poor and generic task called “Testing”?