I was once involved in a project at an insurance company. The system we built was an insurance system for one of the largest insurance companies in the country. We had several hundred users spread across locations all over the country. There were 10 or so developers on the project, the system was deployed roughly 5 times per year, and since the project had been going on for a few years, testing was nothing new to the project.
Acceptance tests were conducted by an acceptance test group drawn from the business side of the company, mostly insurance salesmen, customer service agents, resellers, telemarketing staff and the like. Before each release hundreds of test cases were run, all according to an ambitious test plan that described in detail how the tests should be done every day for several weeks.
Acceptance testing was extremely well-planned. In practice, acceptance testing was the only test level present. Performance tests also took place, but the performance test team had no knowledge of the system being developed and the developers from the project never attended the performance tests, so I think most of it cost money without leading to any tangible result.
Thanks to talented developers, the quality was still good on the whole. The purpose of our acceptance testing was to check that the system was ready for deployment, that all the major flows were working and that the actual tasks of the daily business were supported by the system. Can you calculate the insurance premium when a customer calls and wants a quote on their car insurance?
When I came into the picture the acceptance tests were already completed. I was assigned the task of testing around a bit at the end to see if I could find any errors in addition to the errors the testers had already found.
Yes, my remit was ‘testing around a bit at the end to see if I could find any errors’. This idea in itself is actually really bad. Any tester can find bugs in any given system, because they come in with fresh eyes. Finding errors may look like a win, but the problem is that bugs found this late will rarely be fixed before deployment, since fixing them at that stage carries a high risk of introducing errors in other parts of the system.
So, what did I find?
What kinds of bugs did I find, then? Oh, nothing too major, you might hope. Dream on. I soon discovered that the system was completely inconsistent. Each window was lovely on its own, but when you worked on an insurance policy you worked in a flow that spanned several windows. The interface was inconsistent, so pretty soon you felt hindered by the system.
For example, in one window one of the terms was “social security number”, but the same term in another part of the system was “social sec no” and in a third it was “soc sec #”. Buttons were placed differently in different windows. In one dialogue the OK button was on the left, elsewhere it was on the right. Sometimes the buttons were aligned horizontally, sometimes vertically. The tab order was wrong in many places. The system is aimed at users who enter information all day long and therefore use the keyboard constantly. If the tab order is broken, the users press tab, tab, tab, and suddenly the cursor ends up in the wrong field. Arrgh! Cue frustration.
How could this happen if we had run so many test cases?
My lesson here was that yes, test cases are good, but they do not cover everything. It is simply impossible to write test cases for every situation.
What would a test case for tab order even look like? Test step 1: press Tab. Expected result: the cursor moves to the next field. Test step 2: press Tab again. Expected result: the cursor moves again. And so on. Absolutely ridiculous and redundant. Something simpler is needed.
It was then that I invented checklists. A checklist consists of a series of one-line entries, such as “test the tab order”, “check button locations” and “terms are used consistently.” Checklists are now also an important function in ReQtest, and guess what, I use them all the time.
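To make the idea concrete, here is a minimal sketch of how such a checklist could be represented and worked through in code. This is purely illustrative, not how ReQtest implements checklists; the `run_checklist` helper and its pass/fail bookkeeping are my own invention for this example.

```python
# Illustrative sketch only: a checklist is just a series of one-line entries
# that a tester works through, marking each one as passed or not.

checklist = [
    "Test the tab order",
    "Check button locations",
    "Terms are used consistently",
]

def run_checklist(items, passed):
    """Pair each one-line entry with the tester's verdict.

    `passed` maps an entry to True if the tester signed it off;
    anything missing or False is reported as FAIL.
    """
    return [(item, "PASS" if passed.get(item, False) else "FAIL")
            for item in items]

# Example session: the tester found the tab order broken.
verdicts = run_checklist(checklist, {
    "Check button locations": True,
    "Terms are used consistently": True,
})
for item, verdict in verdicts:
    print(f"[{verdict}] {item}")
```

The point of keeping entries to one line is exactly what made checklists work where detailed test cases did not: each entry names a property to inspect everywhere, instead of scripting every keystroke.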
Confessions of a Tester is written by Johnny, a test leader by profession and friend of ReQtest’s. Johnny loves clean code and is suspicious of anything that seems to be completely bug free. Apart from bug tracking at his day job, Johnny plays guitar, watches action and gangster movies and is a great fan of comic books and superheroes. This blog is a chronicle of Johnny’s Software Testing Nightmares.