August 31, 2015
Testing in an Agile context
In traditional, waterfall-style work there are often watertight barriers separating requirements, development, and testing.
Someone writes the requirements, someone else develops, and a third person tests. The requirements are the basis for development. Test cases are often written too late to be of any use to the developers. Acceptance tests are carried out late, by roles not involved in the daily development, which takes time and means that too many unnecessary errors are detected unnecessarily late.
Combining requirements and documentation with testing
Agile work combines requirements and test documentation. Both are needed to write tests, and everyone is involved to a greater extent in the requirements work. An optimal approach is to create tests iteratively, in step with the requirements as they are developed.
For example, you write the requirements, write some test cases, and then refine the requirements based on what you discover when trying to write test cases for them. Both requirements and test cases thus become the basis for the developers. It is common to have one or more testers attached to the team. Even testers from operations, who otherwise come in very late, often participate in the ongoing check-ins.
Traditional phased development divides testing into levels: component testing, integration testing, system testing, and acceptance testing. Acceptance testing is always the customer’s responsibility.
In agile work the first three test levels are exercised more regularly, and acceptance testing is carried out continuously, in connection with each sprint’s end. Acceptance testing is often done in the form of a sprint demo, which spreads the acceptance tests out over the development period and lets them start at a much earlier stage than in traditional work.
In agile work it is common to automate some of the tests. Since agile work usually means more releases, perhaps 10-20 per year, the amount of regression testing becomes so extensive that it is very difficult to find time to run it manually. The purpose of regression testing is to ensure that existing functionality still works. This needs to be done all the time, at every deployment.
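A minimal sketch of what such an automated regression test can look like. The `apply_discount` function is a made-up stand-in for existing functionality; in a real project a test framework would discover and run the test functions on every build:

```python
# Sketch of an automated regression test. `apply_discount` is a
# hypothetical stand-in for existing functionality that must keep
# working at every deployment.

def apply_discount(price, percent):
    """Existing functionality: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    # Pins down current behaviour so a future change cannot
    # break it unnoticed.
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount():
    # Edge case: no discount leaves the price unchanged.
    assert apply_discount(59.99, 0) == 59.99

# Run the regression suite (a CI server would do this automatically):
test_basic_discount()
test_zero_discount()
print("regression suite passed")
```

Because the suite is automated, it costs almost nothing to rerun it at every deployment, which is exactly what manual regression testing cannot keep up with.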
It is common to use continuous build and continuous integration, which means that the system is built continuously and constantly integration tested. This gives developers greater confidence when making changes.
One way to improve quality in these cases is pair programming. It ensures that the code is constantly being reviewed and teaches the developers involved more about each other’s work, which spreads expertise and reduces the vulnerability that arises when a specialist quits or is absent.
In agile work it is common for tests to belong to the sprint. Each sprint then consists of design, development, and testing. Certain tests, however, may be difficult to pin down to a particular sprint, for example performance and usability tests.
Some tests may not fit within a particular sprint, for example performance testing, system testing, or long acceptance tests. The disadvantage of moving such tests to another team is that feedback takes longer to come back. Responsibility within the team decreases, which can lead to poorer ownership of the developed code and thus poorer quality. The delivered increment will still contain bugs after the end of the sprint.
With each increment, regression tests take up an increasing share of the total number of tests. Their purpose is to ensure that existing functionality still works and has not been accidentally broken by other changes.
An important concept in agile development is technical debt. If you make a change to the system without a sufficiently good technical solution, you build up a debt that will eventually have to be repaid. The system architecture deteriorates over time if you do not spend time maintaining it.
For product owners it is important to manage technical debt systematically. The debt affects schedule and budget, yet it must be dealt with or the system degenerates. Decisions on such changes should not be made without the product owner’s awareness, so it is critical to plan accordingly in order to manage technical debt.
There are different testing types used to test a system. The most common is the test case, with step-by-step descriptions of test steps and expected results. In addition, you can use checklists: shorter testing instructions without expected results. They demand more prior knowledge and creativity from the testers, but are also quicker to write and easier to maintain.
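The difference between the two can be sketched as simple data structures. The content below is made up; a real test-management tool would store much more:

```python
# Sketch of a step-by-step test case versus a checklist.
# All titles, actions, and expected results are illustrative.
test_case = {
    "title": "Log in with valid credentials",
    "steps": [
        {"action": "Open the login page", "expected": "Login form is shown"},
        {"action": "Enter a valid user name and password", "expected": "Fields accept input"},
        {"action": "Press Log in", "expected": "Start page is shown"},
    ],
}

# A checklist is just the instructions, with no expected results;
# the tester fills in the judgment from prior knowledge.
checklist = [step["action"] for step in test_case["steps"]]
print(checklist)
```

Dropping the `expected` column is exactly what makes checklists quicker to write but more demanding on the tester.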
Ad hoc tests are quite unstructured and undocumented. They do lead to errors being found, but a major drawback is that no record remains after test execution.
Exploratory testing means that the tester documents the tests while learning and testing the system.
“Eating your own dog food” involves using the system internally during the development period in order to try it yourself and find many of the bugs before the launch of the system to the market.
Strategies for handling known bugs
All systems have bugs, so it is important to have a strategy for them. Bugs that occur within a sprint are usually handled in that same sprint. Define how many open bugs are acceptable.
Example: Maximum 150 open bugs. Take action if the number of errors exceeds the criterion.
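The criterion above can be expressed as a trivial check; the limit of 150 comes from the example, while the counts below are made up:

```python
# Sketch: enforce an open-bug criterion. 150 is the example limit
# from the text; the sample counts are illustrative.
MAX_OPEN_BUGS = 150

def needs_action(open_bug_count, limit=MAX_OPEN_BUGS):
    """Return True when the open-bug count exceeds the agreed limit."""
    return open_bug_count > limit

print(needs_action(120))  # within the criterion
print(needs_action(173))  # exceeds it: take action
```

A check like this is easy to wire into a build dashboard so the team notices the moment the criterion is breached.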
If bugs are not prioritized, there is a risk that the developers correct the wrong bugs. There are different ways to prioritize bugs, for example with a high/medium/low scale or a 5-point scale. A more modern way is to look at how often the error occurs, i.e. its frequency. It can be expressed as often for all users, often for some, or sometimes for all.
Another important aspect is the impact that occurs when the bug is exposed. It can for instance be that data is lost, the usability is poor, extra work for administrators, etc.
Bug priority is a combination of frequency and consequence. Bugs that occur frequently and lead to serious consequences should be addressed early, and vice versa.
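One way to combine the two dimensions is to put each on a small numeric scale and multiply. The scales, weights, and example bugs below are assumptions for illustration, not a standard:

```python
# Sketch: bug priority as frequency x consequence.
# Scales and example data are illustrative assumptions.
FREQUENCY = {"often for all": 3, "often for some": 2, "sometimes": 1}
CONSEQUENCE = {"data loss": 3, "extra admin work": 2, "cosmetic": 1}

def priority(freq, impact):
    """Higher score means the bug should be fixed earlier."""
    return FREQUENCY[freq] * CONSEQUENCE[impact]

bugs = [
    ("typo in footer", "sometimes", "cosmetic"),
    ("login crash", "often for all", "data loss"),
]
# Frequent bugs with serious consequences sort to the top.
for name, f, c in sorted(bugs, key=lambda b: -priority(b[1], b[2])):
    print(name, priority(f, c))
```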
Go through the bugs continuously with the developers and decide which defects need to be remedied.
Errors of lower priority that are unlikely to be addressed should be closed. If an error occurs again, it can be reopened.
One way to prioritize is the cross visualization technique. Draw a cross on the whiteboard and go through each bug. Let the developers assess the effort required to fix the bug, while the Product Owner determines the business value of fixing it. The bugs that provide the most benefit at the lowest cost should always be addressed first.
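The whiteboard exercise above amounts to ordering bugs by their value-to-effort ratio. A minimal sketch, with made-up estimates on an assumed 1-10 scale:

```python
# Sketch of value/effort prioritisation: developers estimate effort,
# the Product Owner estimates business value (both 1-10 here, an
# illustrative assumption), and the best ratio goes first.
bugs = [
    {"id": "BUG-12", "value": 8, "effort": 2},
    {"id": "BUG-7",  "value": 5, "effort": 5},
    {"id": "BUG-31", "value": 2, "effort": 8},
]

ordered = sorted(bugs, key=lambda b: b["value"] / b["effort"], reverse=True)
for bug in ordered:
    print(bug["id"], round(bug["value"] / bug["effort"], 2))
```

High-value, low-effort bugs land in the top-left quadrant of the whiteboard cross, which is precisely what the ratio captures numerically.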