Defect Evaluation and Metrics

Defect evaluation is driven by the need for defect prevention. Understanding defects and the metrics surrounding them helps reduce defect injection and rework.

Links to helpful defect-understanding techniques:
Pareto Analysis – http://erc.msh.org/quality/pstools/pspareto.cfm
The 5 Whys
Root Cause Analysis – http://www.mindtools.com/pages/article/newTMC_80.htm

References:
http://www.ijcaonline.org/volume8/number7/pxc3871759.pdf
Most metrics won't mean much at first; a baseline is needed to compare against, though some metrics may be useful out of the gate depending on project acceptance.

Useful metrics to gather:
Root Cause Analysis

Root cause analysis means digging deeper into the available data and talking to the people involved to find out why a defect entered the system. Example causes include developer error, missing tests or requirements, unclear requirements, or environmental issues (client/server). It is important to identify the cause correctly so you can look for patterns and repeats of specific cause types; this helps identify problem areas and propose solutions. A typical flow is below, with a sketch of a root-cause record after the checklist.

  1. Define the Problem
    1. What does the defect describe?
    2. Why is it a defect?
  2. Collect Data
    1. Reproducible steps (a.k.a. proof of defect)
    2. How long has it been in the system?
    3. Severity and impact of the defect
  3. Identify Possible Causal Factors
    1. Sequence of Events
    2. Conditions of the System
    3. Cascading Issues (do other problems arise?)
    4. Techniques to use
      1. 5 Whys
      2. Drill Down
      3. Cause and Effect
      4. So What?
  4. Identify the Root Causes
    1. Why does this problem exist?
    2. How did this problem occur?
      1. More importantly, how was it allowed to occur?
      2. The Real Underlying Issue
  5. Recommend and Implement Solutions
    1. What can we do to prevent future occurrences?
    2. Who will implement the fix?
    3. Who owns the solution to the root problem?
    4. What are the risks of fixing the root cause?
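
A minimal sketch of how a root-cause record like the checklist above might be captured alongside each defect; the field names and cause categories are illustrative assumptions, not taken from any particular tracking tool:

    # Illustrative sketch of a root-cause record kept with each defect.
    # Field names and cause categories are assumptions, not from a specific tool.
    from dataclasses import dataclass

    CAUSE_TYPES = {"developer error", "missing tests", "missing requirements",
                   "unclear requirements", "environmental"}

    @dataclass
    class RootCauseRecord:
        defect_id: str
        problem: str              # 1. what the defect describes and why it is a defect
        repro_steps: list         # 2. proof of defect
        severity: str
        causal_factors: list      # 3. sequence of events, system conditions, cascades
        root_cause: str           # 4. the real underlying issue
        cause_type: str           # used later to look for repeats of specific cause types
        corrective_action: str = ""   # 5. what prevents future occurrences
        owner: str = ""               # who owns the solution to the root problem

        def __post_init__(self):
            if self.cause_type not in CAUSE_TYPES:
                raise ValueError(f"unknown cause type: {self.cause_type}")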
Defect Trends

Once a baseline is established, projects will always vary in size and scope; however, if a full life cycle application is used to track tasks, you can trend with ratios such as features released to defects, or defect ratio per developer or analyst. It is important to note that the numbers by themselves are meaningless without context; defect metrics should be a guide that points you toward where deeper root-cause analysis should be performed.

If defects seem to have a higher attachment rate to one developer, you should investigate what types of defects they are; break down who found them, their severity, and their type. From there you can see whether the cause was poor coding standards, a lack of basic developer testing before shipping, a recurring misunderstanding of requirements, or another factor.
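
A rough sketch of this kind of ratio trending and breakdown, assuming defects and features are exported from the life cycle tool as simple dictionaries (the key names are assumptions about the export format):

    # Sketch: trend simple ratios from tracker exports; key names are assumed.
    from collections import Counter, defaultdict

    def defect_ratios(defects, features_released):
        """Return overall defects-per-feature and defect counts per developer."""
        per_developer = Counter(d["developer"] for d in defects)
        overall = len(defects) / features_released if features_released else float("nan")
        return overall, per_developer

    def breakdown_for(developer, defects):
        """Break one developer's defects down by finder, severity and type
        before drawing any conclusions (context first, numbers second)."""
        rows = defaultdict(int)
        for d in defects:
            if d["developer"] == developer:
                rows[(d["found_by"], d["severity"], d["type"])] += 1
        return dict(rows)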

Defect Discovery

Another important tester-related metric is defect discovery. This is measured after a project or sprint has been released to the customer: you track defects found internally (the tester's job) against defects found externally. If the tester finds 95% of the defects and the customer only finds 5%, you can equate that to a cost and form acceptable defect capture rates. It may be that those 5% are all edge cases and a 10% external find rate is acceptable given project costs and rework. However, if the customer is finding true defects at a higher rate than the tester is, there is most likely a flaw in the process, people, technology, or understanding.
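
As a worked example, the internal capture rate is simply the internal defect count divided by the total found inside and outside (a minimal sketch):

    # Sketch: internal defect capture rate (internal vs. external finds).
    def capture_rate(internal_defects, external_defects):
        total = internal_defects + external_defects
        return internal_defects / total if total else float("nan")

    # 95 defects found by testers, 5 by the customer -> 0.95 (95% captured internally)
    print(capture_rate(95, 5))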

Defects per Function Point

There are also ways, with unit tests and code reviews, to show code coverage, and code with higher appropriate coverage should be less susceptible to tester-found defects. You can compare developer code to tester defect attachment, which shows the amount of code (in function points) against the defects discovered. It would be safe to say a few defects for every X function points is acceptable; however, if the defect rate per function point is high, it suggests too little time is being spent reading and understanding the requirements, or testing before delivery.
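
A small sketch of the defect density calculation, assuming function point counts are available per module; the threshold is a placeholder for illustration, not a recommended value:

    # Sketch: defect density per function point for each module.
    def high_density_modules(defects_by_module, function_points_by_module, threshold=0.05):
        flagged = {}
        for module, fp in function_points_by_module.items():
            density = defects_by_module.get(module, 0) / fp if fp else float("nan")
            if density > threshold:
                flagged[module] = density   # candidate for deeper root-cause analysis
        return flagged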

Defect Attachment

You can do defect-to-requirement attachment as well; if defects are registered in the design phase, you can do gap analysis to find where requirements may be missing or deficient.
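
A sketch of what that gap analysis could look like, assuming each design-phase defect carries an optional requirement id (the field names are assumptions):

    # Sketch: defect-to-requirement attachment for gap analysis.
    def requirement_gaps(defects, requirement_ids):
        """Return requirements with no attached defects and defects with no requirement.
        A defect with no requirement id often points at a missing or deficient requirement."""
        attached = {d["requirement_id"] for d in defects if d.get("requirement_id")}
        unattached_defects = [d for d in defects if not d.get("requirement_id")]
        uncovered_requirements = set(requirement_ids) - attached
        return uncovered_requirements, unattached_defects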

Test Coverage Deficiency

You can also compare test coverage to defect coverage. If test coverage on a feature is at X% and the defect rate on that feature is at Y%, comparing the two over the long term can help find the level of coverage that gives the best return on test value. Over-testing can be costly to an organization, and these metrics are valuable when trying to determine when over-testing happens and what it costs the company.
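
One possible way to line the two percentages up per feature, assuming both inputs come from the coverage tool and the defect tracker as percent values keyed by feature:

    # Sketch: compare test coverage to each feature's share of reported defects.
    def coverage_vs_defects(test_coverage_pct, defect_share_pct):
        """High coverage with a low defect share may hint at over-testing;
        low coverage with a high defect share may hint at under-testing."""
        report = {}
        for feature, coverage in test_coverage_pct.items():
            defects = defect_share_pct.get(feature, 0.0)
            report[feature] = {"coverage": coverage, "defect_share": defects,
                               "gap": coverage - defects}
        return report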

Defect Categorization and Types

There is also usable data in defects (internal and external), such as simple categorization metrics: environment, operating system, web browser, defect type, expected vs. actual results, performance related, and usability issues. Reviewing these can show which areas carry a high risk of defect accrual, which may prompt process, technology, or tool changes. An example would be prototyping to reduce requirement-related defects, or adding a usability survey test session to reduce defects related to the UI.
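
A minimal sketch of pulling those categorization counts out of a defect export (the field names are assumptions):

    # Sketch: simple categorization counts over a list of defect records.
    from collections import Counter

    def categorize(defects, fields=("environment", "os", "browser", "type")):
        """Count defects along each categorization axis to spot high-risk areas."""
        return {f: Counter(d.get(f, "unknown") for d in defects) for f in fields}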

Defect Investigation

The big thing with defects is to understand where in the value stream they are happening and what a defect costs. Defects caught earlier cost less than defects caught downstream, as the cost of rework and the perception of quality rise or fall accordingly. Reducing the number of defects at one point in the stream is not the same value as reducing the same number of defects downstream.
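
A toy illustration of that point, using made-up relative cost multipliers per phase (the multipliers and unit cost are assumptions for illustration only):

    # Sketch: relative cost of defects by the phase in which they are caught.
    # The multipliers below are illustrative assumptions, not measured values.
    PHASE_COST = {"requirements": 1, "design": 2, "coding": 5, "testing": 10, "production": 50}

    def total_defect_cost(defect_counts_by_phase, unit_cost=100):
        return sum(PHASE_COST[phase] * count * unit_cost
                   for phase, count in defect_counts_by_phase.items())

    # Fixing 10 defects caught in coding vs. the same 10 caught in production:
    print(total_defect_cost({"coding": 10}))      # 5000
    print(total_defect_cost({"production": 10}))  # 50000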

Root cause analysis is the fundamental tool underlying all of the defect understanding above. Once metrics have been gathered (with a scientific approach applied), they must be investigated to make sure they are accurate. Asking questions and keeping communication open are the best tools for this.

Retrospective discussions based on defect analysis, held in an open forum, are also a positive way to make meaningful use of this data.

2010 in review

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Minty-Fresh™.

Crunchy numbers


A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 2,300 times in 2010. That’s about 6 full 747s.

 

In 2010, there were 4 new posts, growing the total archive of this blog to 23 posts. There were 6 pictures uploaded, taking up a total of 2mb.

The busiest day of the year was March 22nd with 34 views. The most popular post that day was LeanKit: Kanban Board Experience Report.

Where did they come from?

The top referring sites in 2010 were stickyminds.com, google.com, linkedin.com, agile-tester.com, and infoq.com.

Some visitors came searching, mostly for kanban board, leankit kanban, kanban, retrospective scrum, and scrum board.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

1. LeanKit: Kanban Board Experience Report – March 2010
2. Retrospective, Scrum Gold Mine or Fool’s Gold? – August 2008 (1 comment)
3. Team Dynamics, Don’t be afraid of Change – October 2008 (1 comment)
4. TamperData a free Firefox Plugin to test Server Side Validation – October 2010 (2 comments)
5. An Introduction to SWAT 2.0 – August 2008 (2 comments)

A Cool new tool for Distributed Teams (Currently Free)

Just wanted to give a little free advertisement for a distributed-team tool we use called Sococo. It is basically a multi-person voice and text chat system with a virtual office overlaid on top. This allows for some organization of teams and easy switching between teams and individuals for communication. It also supports multiple screen shares, so up to four people can share their screens at once in a single chat room.

There are most likely plans to make this paid software as they add new features (Skype integration and file sharing are on the way). If they keep up this quality and innovation, it is definitely a tool to keep an eye on.

TamperData a free Firefox Plugin to test Server Side Validation

A few of my co-workers were asking how I keep finding bugs in server-side validation when they can't reproduce the errors at all through the GUI layer. What I told them about was a plugin I use (there are many others like it) called TamperData.

TamperData is a free plugin that allows you to monitor, intercept, and transform HTTP POSTs after they have been submitted by your browser. This lets you see how your data is being sent and override any client-side validation that was being imposed.

Many untrained or newer developers put a lot of validation into a product at the UI layer, via JavaScript or simple HTML/CSS restrictions. What they forget to do is put the same validation on the server side to provide safety from things such as XSS injection, SQL injection, method overloads, and other kinds of malicious or simply incorrect data that the system is not ready to handle properly.
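
The same idea can be shown without the plugin: a request built outside the browser skips every client-side check, so only server-side validation stands between bad data and the system. A minimal sketch in Python follows; the endpoint and field names are hypothetical.

    # Sketch: submitting a form POST directly, bypassing any JavaScript/HTML validation.
    # The endpoint and field names are hypothetical; this only illustrates why the
    # server must re-validate everything the client claims to have validated.
    import requests

    payload = {
        "username": "x" * 10000,                     # ignores any maxlength on the input
        "age": "not-a-number",                       # ignores client-side numeric checks
        "comment": "<script>alert('xss')</script>",  # probes how stored input is handled
    }

    response = requests.post("https://example.test/profile/update", data=payload)
    print(response.status_code, response.text[:200])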

Below are a few screenshots of the free plugin. You can download it here.

Main Window

Confirmation

Tamper window with context menu.

LeanKit: Kanban Board Experience Report


This is an example of a Kanban board from http://www.leankitkanban.com

 

As a new team that has transitioned to Kanban and also has distributed team members, we decided to implement a virtual Kanban board. After googling around for possible virtual Kanban boards, we were unable to find any that we felt suited our needs. This led me to ask fellow teams inside development who also deal with distributed teams and a Kanban implementation, which led to the recommendation of the LeanKit Kanban board (link above).


So we started implementing the new Kanban board just over two weeks ago, with pretty good success. The board is web based, and you get access to one board with five users for free. Because we have a team of seven, I created three accounts for the team's PO, dev lead, and team lead, plus one generic developer account and one generic tester account. We then created our Kanban columns for the process/development flow with simple drag-and-drop editing. We were even able to create feature swim lanes with their own sub-WIP limits to enforce one story at a time for internal development projects (keeping the main focus on deliverables).

We still use an internal system (JIRA/GreenHopper), so we titled our stories with the JIRA number and a quick description. In the comment field we put the team members working on the story, the effort, the type, and any additional information we might have needed. The process lead engineer (that happening to be myself) would then manage syncing up JIRA and the board (or at least verify that team members moving stories were keeping it in sync).

So far we have been able to use the board to point out the bottlenecks in our development process; for us, the main one was having builds available to test and to demonstrate our "delivered" product to our product owners.

So far I can say I highly recommend this digital Kanban board for people with distributed teams. We keep it on display at all times, like a physical board, using the television pictured above. There are still many new features in development, and you can vote on the website for what you want to see next; they have a very agile development cycle as well and deliver frequent updates.

I will report back later in the coming months to let you know how the continued and extensive use of this board fares.

South Florida Code Camp 2010

I just want to thank everyone who attended Mike Longin's and my presentations on SWAT and on applying modern software development techniques to UI testing. We got a lot of amazing feedback, and it was nice to meet everyone and hear your thoughts afterwards. As promised, here are the two presentations and some links.

An Introduction to UI web testing using SWAT

Applying modern software development techniques to UI testing

Also you can download SWAT @

http://sourceforge.net/projects/swat/

 

Please, if you have any feedback, do not hesitate to email Mike Longin or myself.

Michael_Longin@ultimatesoftware.com

Christopher_Taylor@ultimatesoftware.com

World of Goo – User Experience Testing

The talk by 2D Boy developer Ron Carmel, creator of World of Goo, was probably one of the most enjoyable presentations of the Agile conference for me; however, it was probably the least focused on actual "Agile" and the most focused on user experience testing.

The main focus of their discussion was really user experience testing. What that means is observing and reporting on users' interaction with the software, in this case the "World of Goo" video game, and how the user responds to that interaction. This is something that I think often happens only after a product has already shipped. We may beta test it, or get a few "UI specialists" to run through experience testing, but do we ever really pilot play-test during the software development process?

The answer is usually no. We do not, and we probably should. The reason, I believe, is that we would be able to find more than just "bugs": we would find defects in the user's actual experience. The code may work fine, but if the user experience is difficult, confusing, frustrating, or anything other than positive, we can witness these issues as they occur and apply adjustments to the code during the development process rather than after shipment.

Unfortunately, there is not always a way to have people off the street come and test your user experience during development, mostly for legal and security reasons. However, there are probably many people in the company who could fill the role of an untrained user to help test the experience. Having someone go through an iteration's worth of code when it is near complete (at the story level) to see whether they understand it and whether the deliverables are clear can be a very valuable feedback tool.

I am very curious, though, about successful user experience testing in Agile development shops that focus on web services, and how they achieve great usability. Any comments?

Agile 2009 Presentation – Slides

Here is the link to the slides from Mike Longin's and my 2009 Agile presentation, "Applying Modern Software Development Techniques to UI Testing".

http://ulti-swat.wikispaces.com/file/view/Agile+2009+-+Applying+modern+software+development+techniques+to+automating+the+web+UI.pptx


Agile 2009 – Experience Report from Yahoo

I attended the Agile @ Yahoo: Experiences from the Trenches presentation this morning, and it has been one of my most enlightening and enjoyable experiences so far. The Yahoo presenters walked through their pre-agile, agile adoption, and agile maturity stages and shared some of the pain and success points they endured.

I am not going to focus much on the pre-agile pain points, as I believe those are very similar to most waterfall environments and have been heavily discussed in many agile sessions, books, and trainings.

First I want to talk about some of the risks they experienced that needed to be overcome to improve the Agile transition and practice.

  • External Teams outside of the Agile practice
    • This can include other parts of the company that one must rely on or interact with and that do not follow or believe in the agile implementation your team may be following. This can cause delays, miscommunication, and lack of productivity.
    • UI Design teams need to be part of the Agile practice as UI design is critical to the iteration cycle of release.
  • Team Inter-dependency.
    • In large organizations, many people rely on deliverables from other teams to move forward in their iterations. The communication and planning between these teams need to be aligned to make sure the goals are the same, or the work needs to be modular enough to be delivered independently of the other teams' progress.
  • Lack of Coaches
    • Without coaches it's hard to introduce teams to successful practices and help people learn the positive aspects of agile.
    • Coaches can help communicate and begin dialogue with the non adopters to find where the fear lies in change.
  • Fragmentation
    • No centralized record of successful scrum practices. If people are not communicating what makes them successful, how can anyone learn to succeed?
    • Non-adopters need to be educated and, if that is not possible, removed. Having the team not believe in the process will always keep them from achieving true success in Agile.
  • “Agile done wrong is worse than no Agile”
    • Scrummerfall – bad or mishmashed agile practices are usually more damaging than the old ways.
  • Over reliance on Tools
    • The tools used to implement agile are only as strong as the users' understanding of how to use them.
    • Don’t rely on a tool to solve your problems, rely on the people.

 

Now, some of the things we want to do to make Agile "stick":

  • Trust
    • Trust within the organization that everyone is backing the change and the practices allows people to do what needs to be done without fear of failure.
  • Roles
    • Education on the roles in agile, and on how to properly work each role, will contribute to the success of the people in them.
  • Team “Field Trip”
    • Team members need to move around to other teams (preferably successful ones) and absorb the patterns and practices that make those teams successful, so they can implement them on their own team.
  • Internal Agile Community
    • Sharing ideas is never negative. An internal community dedicated to improving practices and education will give people rich experience in how to manage Agile on a day-to-day basis.
  • Agile Principles. Education.
    • Many people don’t truly understand the principles and need to be educated and talked to about their benefits. This is a role of an Agile Coach. If you understand why we do it, you will be more inclined to do it right.

This was a highly interesting 45-minute session, and I will follow up with more once I read the white paper they submitted.
