September 21, 2016
13 Common Misconceptions About Defect Management
Myths and misconceptions are part and parcel of any subject – be it science, religion, or Steve Jobs.
Just for fun, I looked up some common misconceptions and was bowled over myself. Bananas don’t grow on a tree? What? Our propensity to believe untruths goes so far and deep, somebody actually coined the word myth conception – probably because they got tired of saying ‘myths and misconceptions’! #True story.
As with any other subject, software development in general – and Testing specifically – is no exception to the rule. In fact, there are few things in the software development world more misunderstood than Testing – and Defect Management in particular.
In this post, we’ll look at common myths and misconceptions – okay, myth conceptions if you will – about Defect Management in Testing. And how you can skate over them. You’ll get to gape at the popular ones such as ‘more defects equal better testing’ and express suitable consternation (or shake your head knowingly) at others such as ‘No defects should surface in production’.
And, I dare say, hopefully benefit by correcting some of your own. So, shall we?
“Test teams should (rightly) focus on testing to meet requirements – not testing to make the product 100% bug-free.”
Myth #1: 100% of all defects have to be identified – and fixed
Testing, and by extension defect management, is intended to identify as many bugs as possible, to help the software/project meet requirements.
Identifying and fixing 100% of all defects is well nigh impossible. And shouldn’t be pursued as a goal.
I’ve been part of major IT programmes rolling out complex products to multiple markets, and they have chosen to NOT fix hundreds, sometimes even thousands of defects – with reason. Not for fun was the term ‘working software’ coined.
The Requirements Traceability Matrix plays a central role in governing and guiding the Testing team in its efforts to uncover critical bugs that hamper the user experience and utility of the software. Test teams should (rightly) focus on testing to meet requirements – not testing to make the product 100% bug-free.
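To make that concrete, here's a minimal sketch of a traceability matrix held as plain data and queried for coverage gaps. The requirement and test-case IDs are hypothetical, invented purely for illustration – real RTMs live in test-management tools, but the idea is the same.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM).
# Requirement and test-case IDs are made up for illustration.
requirements = {"REQ-1": "User can log in", "REQ-2": "User can reset password"}
coverage = {
    "REQ-1": ["TC-101", "TC-102"],  # test cases covering REQ-1
    "REQ-2": [],                    # no coverage yet -- a gap to flag
}

def uncovered(requirements, coverage):
    """Return requirement IDs with no test cases mapped to them."""
    return [req for req in requirements if not coverage.get(req)]

print(uncovered(requirements, coverage))  # -> ['REQ-2']
```

The point of the matrix is exactly this query: every requirement should map to at least one test, and effort goes where the gaps are – not into chasing every conceivable bug.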
“Not for fun was the term ‘working software’ coined.”
Myth #2: 100% of all defects have to be fixed in the current release/delivery
This one goes hand-in-hand with myth #1. Striving to fix every single defect identified within the current release is, well, a wild-goose chase. Here again, your Traceability matrix and Defects Triage (if you are leading a fairly large-scale implementation, you’ll need one, trust me) should help direct your team’s efforts at the most critical defects to pave the way for release.
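As a sketch of the triage idea – assuming made-up severity labels and an `in_scope` flag rather than any particular tool's fields – a triage pass might separate release-blockers from deferrable defects like this:

```python
# Toy defect triage: block the release only on critical defects that
# trace to in-scope requirements; defer the rest to a later release.
# Field names and values are illustrative, not from any real tool.
defects = [
    {"id": "D-1", "severity": "critical", "in_scope": True},
    {"id": "D-2", "severity": "low",      "in_scope": True},
    {"id": "D-3", "severity": "critical", "in_scope": False},
]

def triage(defects):
    """Split defect IDs into release-blockers and deferrable defects."""
    fix_now = [d["id"] for d in defects
               if d["severity"] == "critical" and d["in_scope"]]
    defer = [d["id"] for d in defects if d["id"] not in fix_now]
    return fix_now, defer

fix_now, defer = triage(defects)
print(fix_now)  # ['D-1']
print(defer)    # ['D-2', 'D-3']
```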
It is (unfortunately) common for product owners and business sponsors to hold releases to ransom so they can get their ‘favourite’ features fixed and embellished to their satisfaction.
You should challenge them. Many of the recent successes in software development are due to pragmatic product owners making pragmatic decisions about deferring non-critical defects to later releases, or to goal-driven test managers, project managers and Business Analysts challenging the not-so-pragmatic product owners to make better defects triage decisions.
Follow-up bug-fix releases are a common practice to help fix the defects that impact user experience but don’t block the release – more on this later.
Myth #3: Testers are responsible for any bugs found in production
True – to some extent. If a tester doesn’t ensure testing covers all the requirements, or doesn’t employ Traceability matrices to help direct their efforts, some bugs will inevitably filter through to production.
Then again, it’s the project team’s responsibility to ensure everyone is pulling towards Requirements Traceability.
Did the Project Sponsor or the Product Owner accept watered-down Testing scope in favour of delivery by a certain date, or within a certain cost?
Well, there you have it!
Such deadline-driven decisions are common in projects with poor planning.
Product Quality is the whole team’s responsibility. Accepting inadequate testing or not fixing critical bugs before release is always a collective decision – and hence a collective responsibility.
Myth #4: All defects will/should be caught before release
Myth #5: No defects should surface in production or live system
I’ve combined 4 and 5 because, essentially, they refer to the same thing; yet people get all muddled up about these two mythconceptions.
Ever heard of Alpha, Beta releases? Hello? iOS, Android developer and beta releases anyone? Or Service Pack releases by Microsoft? Or iOS/Android bug fixes? Or bug fixes for any app?
Severity of a bug, and the industry, service, function etc. should influence your release approach. Internal or Staff pilots are common practice because they help you put the product live and test it for defects without hurting your reputation. And defects – often critical ones – are identified, and fixed, during pilots and alpha, beta phases.
Obviously, you can’t release a product with glaring defects. And just as obviously, you can’t expect to write the perfect software solution every single time before you release it.
“Spend eons cleaning up code, and what will you do if the software is out of date by the time you release?”
Myth #6: Defects surfacing in production is bad
Following from the previous point, if all defects were indeed caught prior to release, Production Support wouldn’t exist as a stream of IT – would it? They’d be out of a job in a heartbeat.
Production support teams are there so you don’t have to build the perfect software – not because it isn’t possible but, well, because it isn’t worth pursuing.
“You can very well design a good product that delivers to a bad requirement – and it will still fail.”
Spend eons cleaning up code, and what will you do if the software is out of date by the time you release?
Myth #7: Automated testing identifies (al)most all defects
I don’t know about you, but I don’t think the machines are ready to take over the world yet.
Automated testing has its utility – in (you guessed it) automating repetitive testing tasks, just as you try to automate any repetitive task in real life. Automatic Climate Control has helped us sleep peacefully without worrying about overheating or overcooling our homes and offices.
So, yeah, go for test automation. You can free your testers up to do more meaningful work. But remember, automation suits simple regression-type testing (at least until the technology evolves). It cannot replace the value manual testers provide by running complex, exploratory test cases.
By the way, automation is only as good as the individual who created it. And yes, the machine only performs the test tasks it has been programmed to do. It won’t (yet) replace your manual tester.
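For illustration, a regression-type automated check really is this mechanical. The `apply_discount` function and its expected values below are invented for the example, but the point stands: the suite only ever catches what someone thought to program it to check.

```python
# Hypothetical function under test; in practice this would be your app code.
def apply_discount(price, percent):
    """Return the price after deducting a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Automated regression checks: fast and repeatable, but exactly as good
# as the cases a human wrote down. Nothing here will "explore" the
# function for surprises the way a manual tester might.
def run_regression():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0
    return "all regression checks passed"

print(run_regression())
```

In a real project these checks would live in a framework such as pytest and run on every build – the value is the repetition, not any cleverness in the checks themselves.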
Myth #8: Defects are bad
This one’s a shocker. Especially when mouthed by individuals and teams claiming to be Agile or Scrum compliant.
“Defects are not bad – they are good, if you catch most of them as early as possible.”
There’s this stigma associated with defects being raised against your product. Developers vehemently fight defects, Project Managers and Scrum Masters alike seem to think defects reflect poorly on the team.
In fact, Agile works on the basis of identifying defects earlier – and Scrum supports just that. Logging bugs early in your Sprints means you can identify and fix most major issues well before the product goes through any level of release preparation.
Agile practices like Test-Driven Development (TDD) depend entirely on constantly testing and fixing software as you code.
So defects are not bad – they are good, if you catch most of them as early as possible.
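Here's the TDD loop in miniature, with a hypothetical `slugify` function invented for the example: the test is written first, fails (the "defect" is caught immediately), and then just enough code is written to make it pass.

```python
# TDD in miniature: write the failing test first, then just enough code
# to pass it. The function and its behaviour are invented for illustration.

# Step 1: the test, written before any implementation exists.
# Running it now would raise NameError -- the earliest possible "defect".
def test_slugify():
    assert slugify("Defect Management 101") == "defect-management-101"

# Step 2: the simplest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # passes silently; a failure would raise AssertionError
```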
Myth #9: Only testers uncover defects
Yet another zinger!
The aforementioned TDD is basically a developer testing as they develop, so they can deliver higher-quality code, faster.
Everyone on the team can (and should) test – BA, developer, customer, product owner, end user. And everyone should be able to log defects. Enough said.
Myth #10: Case Management doesn’t benefit Defect Management
Case in point:
When we were working on a large global programme delivering a product to multiple markets over a period of 2 years, having a Case management repository helped make testing effective by cutting down on repetitive testing.
The repository also benefitted the programme in another way; reducing the number of times a defect was fixed. See?
With good case management practices, we were able to identify when the particular test case had failed previously, and reuse the resolution where applicable. We were also able to measure the severity of the defect by the number of times it was reported across releases – which helped a bunch when triaging defects.
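That recurrence-counting idea can be sketched in a few lines – the defect names and release labels below are made up, and a real case-management repository would of course be a proper tool, not a list:

```python
# Counting how often the same defect recurs across releases -- a simple
# proxy for triage priority, as described above. Data is illustrative.
from collections import Counter

reports = [
    ("login-timeout", "R1"), ("login-timeout", "R2"),
    ("login-timeout", "R3"), ("csv-export-crash", "R3"),
]

recurrence = Counter(defect for defect, release in reports)

# Most-reported defects float to the top of the triage queue.
print(recurrence.most_common())  # [('login-timeout', 3), ('csv-export-crash', 1)]
```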
Myth #11: Exploratory Tests don’t uncover many defects – so aren’t efficient, productive, beneficial
Exploratory Tests uncover complex, critical, difficult-to-catch bugs that you won’t catch through Traditional and Automated Testing.
Why? Because some defects are just too complex, critical and difficult to catch to be uncovered by robots (read: test automation tools) or, worse, humans trundling robotically through mundane test cases. The efficiency, productivity and benefits of Exploratory Testing and related techniques should be measured in terms of impact – not the number of bugs caught.
Myth #12: The higher the number of defects, the better the Testing
Really? A high number of defects does not signify anything except that your project needs to spend a lot more time fixing bugs.
At times, the number of defects can also be the result of poor unit testing or dev practices.
So, if necessary, fix the source of the defects – such as poor coding or unit testing – and applaud your testers for doing their job diligently.
If you manage to eliminate the excess fat around bug discovery, you will be able to free up your talented testers for more challenging bugs, exploratory tests and so on.
In some cases, a high defect count is simply the result of the scale and complexity of the project or programme. I’ve helped IT programmes costing in excess of $10 million transform products away from legacy systems and adopt modern technologies. Defect counts usually ran into the thousands for such endeavours. And that was okay, because it was expected, given there were so many unknowns.
So the defect count depends entirely on the situation, the technology, code quality and a myriad of other factors.
Myth #13: A higher defect count makes your product better
This one cracks me up all the time.
Testing is called Quality Control with reason. Testing helps you control your product based on established quality standards. It won’t magically make your code better.
Testing keeps code quality in check; doesn’t necessarily ensure it.
“Testing helps you control your product based on established quality standards. It won’t magically make your code better.”
Don’t rely on testing to make your code better. Get your developers to write better, leaner code. Sometimes, excess code becomes one of the biggest contributors to defects.
Sure, use testing to identify areas where code doesn’t meet certain quality standards. But then, ensure the developer is given time to go back and make the code better, instead of just fixing the bug.
Don’t just treat the symptom; fix the root cause.
Bringing it all together
You will have known – consciously or otherwise – almost every myth I’ve tried to dispel in this blog. I’ve simply tried to create a ready reckoner that you can use to refresh your memory when you need it.
Ultimately, remember that you can treat all the symptoms and make them better; you can make the product bug-free (well, almost) if you want. Yes, you can – given time and money.
It’s all down to whether that is your real goal.
Defect management helps you deliver an idea to the market faster. It doesn’t necessarily make the idea better. You can very well design a good product that delivers to a bad requirement – and it will still fail – not because you haven’t uncovered enough defects or written enough lines of code, but because you didn’t have a good idea or requirement in the first place.
You can fix all the bugs identified in a product, release, project or system; but it takes diligent coding practices to build software to a quality standard that makes Testing lean and useful rather than expensive and slow. Think about it.
I hope you found this list of myths and misconceptions about Defect management useful. Do you have any other mythconceptions to share? Add your thoughts in the comments section below.