June 18, 2014
How We Test at ReQtest – “Defense in Depth”
We take testing seriously at ReQtest. Our approach to testing can be summarized as “defense in depth”: we defend ourselves against bugs by layering different kinds of tests on top of each other. None of the layers provides 100% coverage on its own – at every level we weigh the costs of adding more tests against the benefits, and stop when we think we’ve reached a good balance. Taken together, these layers, which individually may look a bit porous, are enough to give us confidence that we don’t and won’t break things.
Unit tests
Starting from the bottom of the defense in depth approach, the first layer in our testing strategy is unit tests. Whenever we write new code or modify existing code, we write automated unit tests for our changes. Often we work test-first, beginning with unit tests and then writing the code to satisfy those tests.
NUnit is our preferred unit testing framework. All unit tests are run every time code is committed to the code repository.
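To give a flavor of what these tests look like, here is a minimal NUnit sketch. The BugReport class and its rules are made up for the example – our real domain code is, of course, more involved.

using System;
using NUnit.Framework;

// Minimal made-up domain code so the example is self-contained;
// it stands in for our real server-side classes
public enum BugStatus { New, InProgress, Closed }

public class BugReport
{
    public string Title { get; private set; }
    public BugStatus Status { get; private set; }

    public BugReport(string title)
    {
        if (string.IsNullOrWhiteSpace(title))
            throw new ArgumentException("A bug report must have a title");
        Title = title;
        Status = BugStatus.New;
    }
}

[TestFixture]
public class BugReportTests
{
    [Test]
    public void NewBugReport_StartsInStatusNew()
    {
        var bug = new BugReport("Crash when saving a report");

        Assert.AreEqual(BugStatus.New, bug.Status);
    }

    [Test]
    public void BugReportWithoutTitle_IsRejected()
    {
        Assert.Throws<ArgumentException>(() => new BugReport(""));
    }
}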
We take a pragmatic view of code coverage, and we don’t aim to cover 100% of our code with unit tests. Coverage of our server-side code generally hovers around 70%. We do keep an eye on coverage, though, and when we find that it is decreasing in some part of the system, we check which code is not covered.
Some code is trivial and therefore not worth testing – the risk of errors is close to zero. Some components may be one step up from trivial, but particularly hard to write unit tests for. In those cases we just make sure that the other levels of testing cover these parts well.
Determining whether a given piece of code is trivial is a judgment call, and sometimes we get it wrong. If a bug ever turns up in a piece of so-called trivial code, we know our judgment failed there, and we add tests as part of fixing the bug.
When it comes to client-side code (JavaScript that runs in the browser) the industry as a whole is less mature and there are fewer tools available. We write unit tests for our JavaScript code as well, but have yet to incorporate code coverage measurements into our development process.
Code review/developer test
When code has been written and committed to our code repository, together with its accompanying unit tests, the next step in defense in depth is to have that code approved by another developer. Depending on the component this approval may mean a code review, or some brief manual testing and sanity checking, or both.
The reviewer also checks that unit tests have been written, in essence confirming the developer’s judgment call about whether the code is trivial or not.
Because the person approving each piece of work is a developer who can read and understand the code, they can often pinpoint higher-risk areas of the code and focus their testing on those.
Structured manual testing
Developer testing and approval happen in small increments, as chunks of code get done. When an entire feature has been fully coded and approved, it is handed over to our test leader. By this time all the most obvious bugs have been found and fixed, and it’s time to take a more structured approach to finding the less obvious ones.
We start by writing a test plan (which we do in ReQtest of course). This test plan is prepared by the test leader and developers in collaboration to make sure that we cover all relevant scenarios: all the edge cases, valid and invalid inputs, etc.
Often the test plan also includes cross-browser testing to make sure that the user interface looks good and functions correctly in all relevant browsers. It might also include regression testing of other parts of the system affected by our changes, even though they were not covered by the requirement or user story.
The test plan provides structure, but the tester also often goes outside the test plan and uses their own judgment and experience to find issues and problem areas that we had not foreseen. This is where having an experienced, well-trained tester can really make a difference! A great tester can uncover hidden dependencies and non-obvious interactions between components and find bugs in areas we didn’t even think about. (That’s when we go back and write unit tests for those areas.)
Automated end-to-end tests
Finally, when a feature is complete and has been tested, and all the bugs have been fixed, we write automated end-to-end regression tests. This test suite is automatically run daily.
“End-to-end” means we test each feature as a whole, from the user interface all the way down to the database and back up again. This may mean, for example, logging in, creating a bug report, editing it, and then checking the list of bugs to see that the bug is present with the correct field values – all done automatically.
We use Selenium to write automated tests for the user interface. Selenium can automate a web browser and perform almost any action that a user can: navigate to a URL, type text, click links and buttons, etc.
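Here is a stripped-down sketch of the bug-reporting scenario above, using Selenium’s C# WebDriver bindings. The URL and element ids are invented for the example and do not match our real markup.

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class CreateBugEndToEndTests
{
    private IWebDriver driver;

    [SetUp]
    public void StartBrowser()
    {
        driver = new FirefoxDriver();
    }

    [TearDown]
    public void StopBrowser()
    {
        driver.Quit();
    }

    [Test]
    public void CreatedBug_ShowsUpInBugList()
    {
        // Log in (URL and element ids are invented for this sketch)
        driver.Navigate().GoToUrl("https://app.example.com/login");
        driver.FindElement(By.Id("username")).SendKeys("e2e-test-user");
        driver.FindElement(By.Id("password")).SendKeys("secret");
        driver.FindElement(By.Id("login-button")).Click();

        // Create a bug report
        driver.Navigate().GoToUrl("https://app.example.com/bugs/new");
        driver.FindElement(By.Id("bug-title")).SendKeys("End-to-end test bug");
        driver.FindElement(By.Id("save-button")).Click();

        // Check that the bug appears in the list with the right title
        driver.Navigate().GoToUrl("https://app.example.com/bugs");
        var bugList = driver.FindElement(By.Id("bug-list"));
        StringAssert.Contains("End-to-end test bug", bugList.Text);
    }
}

In a real suite, recurring steps like logging in would be factored into helpers shared by all the UI tests, so that each test reads as a clean scenario.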
We also write end-to-end tests for the parts of the system that do not have a user interface, for example our public API, ReQtest Connect. Here we simply use NUnit, just as we do for our unit tests.
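As a rough illustration, such an API test might look something like this. The endpoint, URL and authentication details are invented for the example; the real ReQtest Connect API differs.

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using NUnit.Framework;

[TestFixture]
public class ConnectApiTests
{
    [Test]
    public void GettingBugReports_ReturnsHttpOk()
    {
        // The endpoint and authentication scheme below are invented
        // for this sketch and do not describe the real API.
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "test-api-key");

            var response = client.GetAsync("https://api.example.com/v1/bugreports").Result;

            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}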
As with all the other layers, we don’t aim for these tests to cover all features and all cases. The tests in this layer are there to ensure that core features always keep working, and that we don’t accidentally break something without noticing. These tests cover all the key components that must never break, but focus only on the “happy cases” – here we ignore edge cases and rare scenarios. We pay extra attention to features that are very important for specific customer groups but that we ourselves use rarely or not at all, such as new account sign-up and special features for trial accounts.
So that’s our defense in depth approach. What’s yours? Share the knowledge in the comments below!