Defect evaluation is driven by the need for defect prevention. Understanding defects and the metrics surrounding them helps eliminate defect injection and rework.
Links to helpful defect-understanding techniques:
- Pareto Analysis – http://erc.msh.org/quality/pstools/pspareto.cfm
- The 5 Whys
- Root Cause Analysis – http://www.mindtools.com/pages/article/newTMC_80.htm
Most metrics won't mean anything at first until a baseline is established to compare against; some metrics may be useful out of the gate, depending on project acceptance.
Useful metrics to gather:
Root Cause Analysis
This means digging deeper into the available data and the resources involved to find out why a defect entered the system. Example causes include Developer Error, Missing Tests/Requirements, Unclear Requirements, or Environmental (Client/Server) issues. It is important to classify the cause correctly so you can look for patterns or repeats of specific cause types. This helps identify problem areas and propose solutions.
- Define the Problem
- What does the defect describe?
- Why is it a defect?
- Collect Data
- Reproducible steps (a.k.a. proof of defect)
- How long has it been in the system?
- Severity and impact of the defect
- Identify Possible Causal Factors
- Sequence of Events
- Conditions of the System
- Cascading issues (do other problems arise?)
- Techniques to use
- 5 Whys
- Drill Down
- Cause and Effect
- So What?
- Identify the Root Causes
- Why does this problem exist?
- How did this problem occur?
- More importantly, how was it allowed to occur?
- The Real Underlying Issue
- Recommend and Implement Solutions
- What can we do to prevent future occurrences?
- Who will implement the fix?
- Who owns the solution to the root problem?
- What are the risks of fixing the root cause?
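The checklist above can be captured as a simple record structure, so every analyzed defect carries its 5 Whys chain, root cause, and ownership. This is a minimal sketch; the class, field names, and example values are illustrative, not from any specific tool:

```python
from dataclasses import dataclass, field

@dataclass
class RootCauseRecord:
    """One defect's root-cause analysis, following the checklist above."""
    defect_id: str
    problem: str                 # Define the Problem
    severity: str                # Collect Data: severity/impact
    repro_steps: list = field(default_factory=list)  # proof of defect
    whys: list = field(default_factory=list)         # 5 Whys chain
    root_cause: str = ""         # Identify the Root Cause
    solution: str = ""           # Recommend and Implement Solutions
    owner: str = ""              # Who owns the fix

# Hypothetical example record.
record = RootCauseRecord(
    defect_id="D-101",
    problem="Order total miscalculated for discounted items",
    severity="major",
    repro_steps=["Add discounted item", "Apply coupon", "Check total"],
    whys=[
        "Why is the total wrong? The discount is applied twice.",
        "Why twice? Coupon logic re-applies the line discount.",
        "Why was that allowed? The requirement never said discounts are exclusive.",
    ],
    root_cause="Unclear Requirements",
    solution="Clarify discount exclusivity in the spec; add a regression test",
    owner="requirements analyst",
)
print(record.root_cause)  # Unclear Requirements
```

Storing the cause type as a field is what makes the pattern-spotting described above possible later.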
Once a baseline is established, projects will still vary in size and scope; however, if a full life-cycle application is used to track tasks, you can use ratios to spot trends, such as features released per defect, or defect ratio per developer, analyst, etc. Note that these numbers are still meaningless by themselves without context; defect metrics should be a guide that points you toward where deeper root-cause analysis should be performed.
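As a sketch, the trend ratios described above can be computed from raw counts per release or sprint; the function name and sample numbers are hypothetical:

```python
def defect_ratios(features_released, defects, contributors):
    """Size-normalized trend ratios from raw counts for one period."""
    return {
        "defects_per_feature": defects / features_released,
        "defects_per_contributor": defects / contributors,
    }

# Hypothetical sprint: 40 features shipped, 10 defects, 5 contributors.
ratios = defect_ratios(features_released=40, defects=10, contributors=5)
print(ratios)  # {'defects_per_feature': 0.25, 'defects_per_contributor': 2.0}
```

Because these are ratios rather than raw counts, periods of different sizes can be trended against each other.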
If defects seem to attach at a higher rate to one developer, investigate what types of defects they are; break down who found them, their severity, and their type. From there you can see whether the cause was poor coding standards, a lack of simple developer testing before shipping, a recurring misunderstanding of requirements, or another factor.
Another important tester-related metric is defect discovery. This is measured after a project or sprint has been released to the customer: you track defects found internally (the tester's job) against defects found externally. If the tester finds 95% of the defects and the customer finds only 5%, you can equate that to a cost and establish acceptable defect capture rates. It may be that those 5% are all edge cases and a 5–10% external find rate is acceptable, given project costs and rework. However, if the customer is finding true defects at a higher rate than the tester, there is most likely a flaw in the process, people, technology, or understanding.
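The internal-versus-external capture rate can be expressed as one small calculation; the 95/5 split below mirrors the example above, and the function name is just illustrative:

```python
def defect_detection_percentage(internal, external):
    """Share of all known defects caught before release (the tester's job)."""
    return 100.0 * internal / (internal + external)

# Example from the text: 95 defects caught internally, 5 by the customer.
ddp = defect_detection_percentage(internal=95, external=5)
print(ddp)  # 95.0
```

Tracking this percentage per release makes it possible to set and monitor an acceptable capture-rate threshold.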
Defects per Function Point
There are also ways, via unit tests and code reviews, to show code coverage; code with appropriately high coverage should be less susceptible to tester-found defects. You can therefore compare developer code to tester defect attachment, which shows the amount of code (in function points) relative to defects discovered. It is safe to say that a few defects for every X function points is acceptable; if the defect rate per function point is high, however, it suggests that too little time is being spent reading and understanding the requirements, or testing before delivery.
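A minimal sketch of that defect-density check, with an illustrative acceptable rate standing in for the "X" in the text:

```python
def defect_density(defects, function_points, acceptable_per_fp=0.05):
    """Defects per function point, flagged when above an acceptable rate.
    The 0.05 default is an illustrative stand-in for the 'X' in the text."""
    rate = defects / function_points
    return rate, rate > acceptable_per_fp

# Hypothetical module: 12 defects attached to 400 function points of work.
rate, too_high = defect_density(defects=12, function_points=400)
print(rate, too_high)  # 0.03 False
```

The actual acceptable rate would come from your own baseline, not from this default.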
You can also do defect-to-requirement attachment: if defects are registered during the design phase, you can perform gap analysis and find where requirements may be missing or deficient.
Test Coverage Deficiency
You can also compare test coverage to defect coverage. If test coverage on a feature is at X% and the defect rate on that feature is at Y%, this can serve as a long-term metric for finding the right amount of coverage for the return on test value. Over-testing can be costly to an organization, and these metrics are valuable when trying to determine when over-testing happens and what it costs the company.
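One rough way to flag candidates for over- or under-testing from per-feature numbers; the feature names, percentages, and thresholds below are all hypothetical:

```python
# Hypothetical per-feature metrics: test coverage vs. the share of
# reported defects attached to that feature.
features = {
    "checkout": {"coverage": 90, "defect_share": 5},
    "search":   {"coverage": 40, "defect_share": 35},
    "reports":  {"coverage": 95, "defect_share": 2},
}

over_tested, under_tested = [], []
for name, m in features.items():
    # Very high coverage with almost no defects may signal diminishing returns.
    if m["coverage"] > 85 and m["defect_share"] < 5:
        over_tested.append(name)
    # Low coverage with many defects suggests a testing gap.
    elif m["coverage"] < 50 and m["defect_share"] > 25:
        under_tested.append(name)

print(over_tested)   # ['reports']
print(under_tested)  # ['search']
```

Flags like these are a prompt for root-cause analysis, not a verdict; a low-defect, high-coverage feature may simply be mature.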
Defect Categorization and Types
There is also usable data in defects (internal or external) from simple categorization metrics, such as environment, operating system, web browser type, defect type, expected vs. actual results, performance issues, and usability issues. Through review, you can identify areas of high risk for defect accrual, which may prompt process, technology, or tool changes. An example would be introducing prototyping to reduce requirement-related defects, or adding a usability survey test session to reduce UI-related defects.
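These categorization counts lend themselves to a Pareto-style tally showing which categories accrue the most defects; the records below are made up for illustration:

```python
from collections import Counter

# Made-up defect records, each tagged with a category label.
defects = [
    {"id": 1, "category": "Unclear Requirements"},
    {"id": 2, "category": "Environment"},
    {"id": 3, "category": "Unclear Requirements"},
    {"id": 4, "category": "Usability"},
    {"id": 5, "category": "Unclear Requirements"},
]

counts = Counter(d["category"] for d in defects)
# Pareto-style view: most frequent categories first.
for category, n in counts.most_common():
    print(category, n)
```

The top of this list is where a process change (e.g. prototyping for requirement-related defects) would pay off first.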
The key with defects is to understand where in the stream they occur and what each defect costs. Defects caught earlier cost less than defects caught downstream, as the cost of rework rises and the perception of quality falls. Reducing the number of defects at one point in the stream does not carry the same value as reducing the same number of defects downstream.
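As an illustration of that cost escalation (the multipliers and base cost below are illustrative assumptions, not empirical data):

```python
# Illustrative cost multipliers (assumptions, not measured data) for
# fixing the same defect depending on where in the stream it is caught.
phase_cost_multiplier = {
    "requirements": 1,
    "design": 3,
    "implementation": 5,
    "testing": 10,
    "production": 30,
}

base_fix_cost = 200  # hypothetical cost of a fix at requirements time
costs = {phase: base_fix_cost * m for phase, m in phase_cost_multiplier.items()}
print(costs["production"])  # 6000 — 30x the requirements-phase cost
```

Even with rough multipliers, this makes the value of catching defects upstream concrete when arguing for prevention work.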
Root Cause Analysis is the fundamental tool underlying all of the defect understanding above. Once metrics have been gathered (with scientific rigor applied), they must be researched to make sure they are accurate. Asking questions and keeping communication open are the best tools for this.
Retrospective discussions based on defect analysis, held in an open forum, are also a positive way to make meaningful use of this data.