All testers have asked themselves this question at some point in their career. When do I stop testing a product and consider the job done?
Young professionals probably ask that question every time they’re working on a new project. Veterans, on the other hand, seem to acquire the uncanny ability to just tell when it’s the right time to stop.
When the time is right
This ability of instinctively knowing when to stop testing reminds me a lot of the ancient Japanese art of ikebana, or flower arrangement.
In this artistic discipline, practitioners learn how to arrange flowers, twigs, small rocks and other ornaments into an aesthetically pleasing whole that simply ‘feels right’ to the eye.
You might have felt something similar if you ever tried to arrange some flowers in a vase, tugging at them this way and that until all seems to fall in place.
Before this all starts sounding too wishy-washy for a blog about software testing, allow me to point out the little-known and poorly understood psychological concept of yedasentience.
Researchers of abnormal psychology define yedasentience as:
A subjective feeling of knowing. An intuitive signal that you have thought enough, cleaned enough, or in other ways done what you should do to prevent chaos and danger.
Unsurprisingly, researchers have linked poorly developed yedasentience in humans with obsessive-compulsive disorder, which is typically characterised by repetitive behaviour fuelled by a sense of ‘unfinishedness’.
Testing. Testing. Tes-
As it turns out, testers do actually have to deal with a lot of ‘unfinished business’ in their work, which perhaps is why they tend to be so anxious about when to stop testing.
Testing software is a complex process because interdependent systems give rise to a multitude of variables that cannot realistically all be tested individually under every possible condition in which they could exist.
The truth is that testing is never complete.
Like Achilles chasing the tortoise in Zeno’s famous paradox, we only get closer to our target without ever actually reaching it.
So, should we wrap up this blog post here and jump back into the rabbit hole that is testing?
Of course not!
The science of how to stop testing
Whilst in theory you’re never quite done testing, in practice testers utilise a multiplicity of methods to best determine when to stop testing.
The most common factors that are taken into account when deciding when to stop testing are:
- Deadlines, e.g. release deadlines, testing deadlines;
- Percentage of test cases passed;
- Test budget and rate of spending (your burn rate and the amount of runway it affords you);
- Amount of code, functionality, or requirements to be covered;
- Minimum accepted bug rate;
- Duration of beta or alpha testing periods.
The optimal targets for these factors should ideally be outlined in the test plan early on in the project, after discussing them with the customer and agreed upon by the whole team. The tester will then keep an eye on certain key testing metrics in order to establish when those targets are met.
Which testing metrics should I measure?
Testing metrics help testers keep track of their work progress and make more effective decisions about when to advance a project as well as when to end it.
The most helpful metrics which testers ought to keep an eye on include:
- Percentage completion: Number of executed test cases / Total number of test cases;
- Percentage of test cases passed: Number of passed test cases / Number of executed test cases;
- Percentage of test cases failed: Number of failed test cases / Number of executed test cases.
Ideally, a tester sets aside a limited number of test cases to be dealt with in a single test run instead of testing them all in one go. This makes it easier to keep track of the numbers and to assess not just the quality of the software product, but also one’s own performance in writing and executing test cases.
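As a minimal sketch, the three metrics above can be computed from simple counts of a test run. The function name and output format here are illustrative, not taken from any particular test framework:

```python
def run_metrics(executed, passed, failed, total):
    """Compute the three basic progress metrics for a test run.

    executed, passed, failed: counts from the current run
    total: total number of test cases planned for the project
    """
    return {
        # Percentage completion: executed / total
        "completion_pct": 100 * executed / total,
        # Percentage passed: passed / executed
        "pass_pct": 100 * passed / executed,
        # Percentage failed: failed / executed
        "fail_pct": 100 * failed / executed,
    }

# Example run: 80 of 100 planned cases executed, 72 passed, 8 failed
metrics = run_metrics(executed=80, passed=72, failed=8, total=100)
print(metrics)  # completion 80%, pass 90%, fail 10%
```

Tracking these per test run, rather than over the whole project at once, makes trends between runs easier to spot.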
Interpreting testing metrics
The decision to stop testing can be made empirically by using the data obtained from the metrics suggested above. A tester can decide to stop testing:
- Based on the ratio of passed to failed test cases. There are three ways to interpret this:
  - Stop when all test cases have passed;
  - Stop when the minimum proportion of test cases that need to pass is reached;
  - Stop when the maximum proportion of test cases allowed to fail is reached.
- Exhausting all test cases available for execution during the test run.
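The stopping rules above can be sketched as a single decision function. The threshold values here are hypothetical; in practice they would come from the test plan agreed with the customer and the team:

```python
def should_stop(passed, failed, executed, total,
                min_pass_ratio=0.95, max_fail_ratio=0.05):
    """Decide whether to stop testing, based on the pass/fail ratio.

    Thresholds are illustrative defaults, not recommendations.
    Returns (stop?, reason).
    """
    # Rule 1: every planned test case has been run and passed
    if executed == total and failed == 0:
        return True, "all test cases passed"
    # Rule 2: the minimum required proportion of passes is reached
    if passed / executed >= min_pass_ratio:
        return True, "minimum pass ratio reached"
    # Rule 3: the maximum allowed proportion of failures is reached
    if failed / executed >= max_fail_ratio:
        return True, "maximum fail ratio reached"
    # Rule 4: no test cases left to execute in this run
    if executed == total:
        return True, "all test cases exhausted"
    return False, "keep testing"

print(should_stop(passed=100, failed=0, executed=100, total=100))
```

Note that rule 3 stops testing for the opposite reason to rules 1 and 2: too many failures means the build needs fixing before further testing is worthwhile.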
Other more advanced metrics that can be considered in your decision to stop testing are:
- Mean Time Between Failures (MTBF): the average operational time between system failures;
- Defect density: the number of defects found relative to the size of the software, e.g. defects per thousand lines of code;
- Coverage metrics: calculated by recording the percentage of instructions executed during tests;
- Number and perceived severity of open bugs. The latter is usually a subjective evaluation along a scale that ranges from ‘Very Low’ to ‘Very High’.
A tester can decide to stop testing when the MTBF is sufficiently long, the defect density is acceptable, code coverage is deemed optimal in accordance with the test plan, and the number and severity of open bugs are both low. The general aim is to reduce the risk of catastrophic errors happening when the product is released.
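Combining these advanced metrics into a release gate might look like the following sketch. Every threshold here is a made-up placeholder; real targets belong in the test plan:

```python
SEVERITY_SCALE = ("Very Low", "Low", "Medium", "High", "Very High")

def release_ready(mtbf_hours, defect_density, coverage_pct,
                  open_bugs, worst_severity,
                  min_mtbf=100.0, max_density=0.5,
                  min_coverage=80.0, max_open_bugs=5):
    """Hypothetical release gate over the advanced metrics.

    defect_density is in defects per thousand lines of code;
    all threshold defaults are illustrative only.
    """
    # Severity of the worst open bug must be 'Low' or below
    severity_ok = SEVERITY_SCALE.index(worst_severity) <= SEVERITY_SCALE.index("Low")
    return (mtbf_hours >= min_mtbf          # system runs long enough between failures
            and defect_density <= max_density
            and coverage_pct >= min_coverage
            and open_bugs <= max_open_bugs
            and severity_ok)

print(release_ready(mtbf_hours=150, defect_density=0.3,
                    coverage_pct=85, open_bugs=3, worst_severity="Low"))
```

A real gate would weight these factors against the risk profile of the product rather than treating them as a flat checklist.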
Software testing is potentially an endless activity that could be carried on and on to the point of absurdity. Knowing how to avoid falling into this recursive pattern is important not only to your client and team, but also for your own sanity!
As scientific as we tried to make it sound, the decision to stop testing is still largely guided by a sense of intuition gained over years of experience.
While young testers may find themselves obsessing over the testing metrics mentioned above and depending on them to determine when to stop, real testing experts have internalised this process such that it happens unconsciously.
For them, testing becomes like ikebana, and knowing when to end a test is as effortless as snipping away at a stray leaf poking out of a vase of flowers.