When coaching teams coming from waterfall or any phase-gate software development model, one of the biggest challenges is bringing the testers into the teams. Neither the testers nor the developers are accustomed to working together.
As any agilist knows, every user story must satisfy the INVEST acronym:
- **I**ndependent
- **N**egotiable
- **V**aluable
- **E**stimable
- **S**mall
- **T**estable
Actually, the Product Owner is in charge of the first five letters. Being able to negotiate and then produce independent, small and valuable user stories is, in most of the cases I’ve encountered, not an easy job.
What is indeed easier for the team, given that type of story, is to estimate it.
But then comes the capital T, the last phase a user story goes through before it is released: the testing phase.
As you probably know, within an iteration you will have several user stories, depending on the length of the iteration itself. Some of them are developed just a few days after the planning meeting, while others are finished just a few hours before the demo.
How is it possible, then, for every user story to be tested before it is released, even internally?
I would make three separate considerations:
- You won’t be able to deliver all the user stories completely tested if you do not bring testers into the team and give them an agile testing strategy to follow.
- You won’t be able to deliver all the user stories completely tested if you do not automate your tests.
- Even if you stick to the two points above, it can happen that some user stories are not completely tested. What to do then?
The first problem, I think, is the trickiest, because it has to do with people, behaviors, working habits and leaving the comfort zone: in a few words, a real mess.
But we, as agile coaches, never give up, so let’s see how to define a first-draft testing strategy to rely on, letting developers and testers work together, even with a rough initial process. For sure they will amend it during the first retrospective, trying to improve it; but that is exactly what we want: do you remember the words ‘inspect’ and ‘adapt’?
Viewing the problem from the tester’s side, a possible agile testing approach for each story could be:
- Write a high-level testing scenario
- Write detailed test cases for the scenario
- Create the automated tests
- Execute the automated and manual tests
- Final validation
But let’s take a closer look at such a process.
The picture above depicts the five macro steps, in terms of macro-activities, that a tester should carry out in order to complete the testing phase of a user story. As you probably noticed, some activities are labeled in brackets with PO (Product Owner) and DEV (developer), which means those items must be executed collaboratively with them.
In general, an agile tester has a lot of (amazing) things to do: participate, like any other team member, in the planning meeting, engaging the PO to get more details if needed, such as the user stories’ acceptance criteria, any UI mock-ups, examples of business rule behaviors, and so on.
Probably, the day after the planning meeting, she starts to reason about the first user stories with the developer and the Product Owner, and she starts to write the first high-level scenarios, which should be validated, even informally, by the PO.
It’s then time to collaborate (pairing) with the developer, understanding how he wants to implement the requirement, giving him any insight, suggestion or advice, and gathering all the information the tester needs to start testing. Once she has a clear idea of what the user story must do, she starts to write the detailed test cases.
Now it’s time to create some automated tests.
Test automation is not within the scope of this first post; a second one on the topic will arrive soon.
For the moment, what we should know is that, as Mike Cohn said in his books and posts, test automation can generally be thought of as a pyramid with different layers: the unit test level (developers are in charge of it), the API level (both developers and testers can be responsible for it) and finally the UI level (mainly in the hands of the testers).
Every level depends on the robustness and reliability of the previous one: API tests depend on unit tests, and UI tests depend on API tests.
Much has been said and written about unit testing and UI testing, so let me explain what API tests actually are.
These tests are also called ‘behind the GUI’ tests. They avoid any GUI interaction by directly addressing the API methods, subroutines or functions that are called by the GUI itself. Usually these routines are contained in libraries, DLLs or web services that are invoked by the GUI via a remote call: SOAP, REST, whatever.
Automating this level is quite simple and has, in my opinion, a high ROI in terms of effort spent versus software quality returned.
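As an illustration, here is a minimal sketch in Python of what such a ‘behind the GUI’ test could look like. The `calculate_discount` function and its business rule are purely hypothetical stand-ins for a routine that a GUI would normally invoke:

```python
import unittest

# Hypothetical business rule that would normally sit behind a GUI,
# e.g. in a library or web service invoked when the user clicks "Apply".
def calculate_discount(order_total, is_loyal_customer):
    """Return the discounted total: 10% off orders over 100,
    plus an extra 5% for loyal customers (hypothetical rule)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    discount = 0.10 if order_total > 100 else 0.0
    if is_loyal_customer:
        discount += 0.05
    return round(order_total * (1 - discount), 2)

class BehindTheGuiTest(unittest.TestCase):
    """API-level tests: exercise the routine directly, no GUI involved."""

    def test_large_order_gets_discount(self):
        self.assertEqual(calculate_discount(200, False), 180.0)

    def test_loyal_customer_gets_extra_discount(self):
        self.assertEqual(calculate_discount(200, True), 170.0)

    def test_small_order_gets_no_discount(self):
        self.assertEqual(calculate_discount(50, False), 50.0)

    def test_negative_total_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1, False)
```

Because the test talks to the routine directly, it runs in milliseconds and can be repeated on every build, which is exactly what makes this level cheap to automate.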
By the way, coming back to the strategy: once the tester finishes writing the automated tests, she executes them and adds them to the continuous build process, in order to catch any regression.
Then she is ready for the final manual and performance testing, before executing the User Acceptance Tests with the PO.
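To make the continuous-build step concrete, here is a toy, self-contained shell sketch (all file names and paths are hypothetical): it scaffolds a one-test suite and then runs it the way a CI server would on every commit, where any non-zero exit marks the build as broken.

```shell
#!/bin/sh
set -e

# Scaffold a tiny, hypothetical test suite (a real project
# would already have one under version control).
mkdir -p tests
cat > tests/test_smoke.py <<'EOF'
import unittest

class SmokeTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)
EOF

# The CI step proper: run the whole suite on every commit.
# Any failing test makes the command exit non-zero, which
# breaks the build and surfaces the regression immediately.
python3 -m unittest discover -s tests -p "test_*.py"
echo "build ok"
```

The same command can be dropped into whatever build server the team uses; the point is simply that the automated tests run unattended, on every change.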
This last step, finally, ends the process.