Links to helpful defect-understanding techniques:
Pareto Analysis – http://erc.msh.org/quality/pstools/pspareto.cfm
The 5 Whys
Root Cause Analysis – http://www.mindtools.com/pages/article/newTMC_80.htm
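As a minimal sketch of Pareto analysis applied to defect data (the category labels and counts here are hypothetical), you can tally root causes and find the "vital few" that explain roughly 80% of all defects:

```python
from collections import Counter

# Hypothetical root-cause labels pulled from a defect tracker.
defects = [
    "Unclear Requirements", "Developer Error", "Developer Error",
    "Missing Tests", "Developer Error", "Environmental",
    "Unclear Requirements", "Developer Error", "Missing Tests",
    "Developer Error",
]

counts = Counter(defects)
total = sum(counts.values())

# Walk categories from most to least frequent, accumulating their share,
# and stop once roughly 80% of all defects are explained (the Pareto cut).
cumulative = 0.0
vital_few = []
for cause, n in counts.most_common():
    cumulative += n / total
    vital_few.append(cause)
    if cumulative >= 0.8:
        break

print(vital_few)  # the small set of causes behind most defects
```

Fixing just the categories in `vital_few` gives the biggest return on improvement effort.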
Most metrics won’t mean anything at first; until a baseline is established there is nothing to compare against. Some metrics may be useful out of the gate, depending on project acceptance.
This means digging deeper into the available data, and working with the people involved, to find out why a defect entered the system. Example causes include Developer Error, Missing Tests/Requirements, Unclear Requirements, or Environmental (Client/Server) issues. It’s important to classify the cause correctly so you can look for patterns or repeats of specific reason types, which helps identify problem areas and propose solutions.
Once a baseline is started, projects will always vary in size and scope. However, if a full-lifecycle application is used to track tasks, you can use ratios to trend: features released to defects, defect ratio per developer or analyst, and so on. It is important to note that the numbers by themselves are still meaningless without context; defect metrics should be a guide that points you in a direction where deeper root-cause analysis should be performed.
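Normalizing by a ratio is what makes releases of different sizes comparable. A small sketch, with made-up release numbers standing in for data from a tracking tool:

```python
# Hypothetical per-release counts exported from a lifecycle tracking tool.
releases = [
    {"name": "R1", "features": 12, "defects": 30},
    {"name": "R2", "features": 8,  "defects": 12},
    {"name": "R3", "features": 15, "defects": 21},
]

# A ratio normalizes for project size, so releases of different scope
# can be trended against each other over time.
ratios = {r["name"]: r["defects"] / r["features"] for r in releases}

for name, ratio in ratios.items():
    print(f"{name}: {ratio:.2f} defects per feature released")
```

The same shape works for per-developer or per-analyst ratios; the point is that the trend, not any single number, is what warrants root-cause analysis.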
If defects seem to attach to one developer at a higher rate, you should investigate what types of defects they are: break down who found them, their severity, and their type. From there you can see whether the cause was poor coding standards, a lack of basic developer testing before shipping, a recurring misunderstanding of requirements, or another factor.
Another important tester-related metric is Defect Discovery, measured after a project or sprint has been released to the customer. You track defects found internally (the tester’s job) against defects found externally. If the tester finds 95% of the defects and the customer only finds 5%, you can then equate that to a cost and form acceptable defect-capture rates. It may be that those 5% are all edge cases, and up to a 10% external find rate is acceptable given project costs and rework. However, if the customer is finding true defects at a higher rate than the tester, there is most likely a flaw in the process, people, technology, or understanding.
With unit tests and code reviews there are also ways to show code coverage, and code with higher appropriate coverage should be less susceptible to tester-found defects. So you can compare developer code to tester defect attachment, which shows the amount of code (function points) against defects discovered. It would be safe to say a few defects for every X function points is acceptable; however, if the defect rate per function point is high, you can infer that far less time is being spent reading and understanding the requirements, or testing before delivery.
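A sketch of that defects-per-function-point density check, with hypothetical story data and an assumed threshold (the acceptable "X" will differ per team):

```python
# Hypothetical mapping of delivered size (function points) to tester-found
# defects, per story. A high density suggests requirements were skimmed or
# developer testing was skipped before delivery.
stories = {
    "STORY-101": {"function_points": 8,  "defects": 1},
    "STORY-102": {"function_points": 5,  "defects": 4},
    "STORY-103": {"function_points": 12, "defects": 2},
}

THRESHOLD = 0.5  # assumed acceptable defects per function point; tune per team

densities = {s: d["defects"] / d["function_points"] for s, d in stories.items()}
flagged = [s for s, v in densities.items() if v > THRESHOLD]

for story, density in densities.items():
    status = "INVESTIGATE" if story in flagged else "ok"
    print(f"{story}: {density:.2f} defects/FP [{status}]")
```

As with the other ratios, a flagged story is a prompt for root-cause analysis, not a verdict by itself.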
You can do defect-to-requirement attachment as well: if defects are registered during the design phase, you can do gap analysis to find where requirements may be missing or deficient.
You can also compare test coverage to defect coverage. If test coverage is at X% and the defect rate on the feature is at Y%, this can serve as a long-term metric for finding the appropriate coverage for return on test value. Over-testing can be costly to an organization, and these metrics are valuable when trying to determine when over-testing happens and what it costs the company.
There is also usable data in defects (internal and external), such as simple categorization metrics: environments, operating systems, web browser types, defect types, expected vs. actual results, performance issues, and usability issues. Through review, you can show which areas carry a high risk of accruing defects, which may drive process, technology, or tool changes. An example would be introducing prototyping to reduce requirement-related defects, or adding a usability survey test session to reduce UI-related defects.
The big thing with defects is to understand where in the stream they are happening, and the cost of a defect. Defects caught earlier cost less than defects caught downstream, as the cost of rework and the perception of quality rise or fall accordingly. Reducing a number of defects at one point in the stream is not the same value as reducing the same number of defects downstream.
Root Cause Analysis is the fundamental tool for all of the defect understanding above. Once metrics have been gathered (with scientific method applied), they must be researched to make sure they are accurate. Asking questions and open communication are the best tools for this.
Retrospective discussions based on defect analysis in an open forum are also a positive way to make meaningful use of this data.
The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:
The Blog-Health-o-Meter™ reads Minty-Fresh™.
A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 2,300 times in 2010. That’s about 6 full 747s.
In 2010, there were 4 new posts, growing the total archive of this blog to 23 posts. There were 6 pictures uploaded, taking up a total of 2MB.
The busiest day of the year was March 22nd with 34 views. The most popular post that day was LeanKit: Kanban Board Experience Report.
The top referring sites in 2010 were stickyminds.com, google.com, linkedin.com, agile-tester.com, and infoq.com.
Some visitors came searching, mostly for kanban board, leankit kanban, kanban, retrospective scrum, and scrum board.
These are the posts and pages that got the most views in 2010.
LeanKit: Kanban Board Experience Report March 2010
Retrospective, Scrum Gold Mine or Fool’s Gold? August 2008
Team Dynamics, Don’t be afraid of Change October 2008
TamperData a free Firefox Plugin to test Server Side Validation October 2010
An Introduction to SWAT 2.0 August 2008
Just wanted to give a little free advertising for a distributed-team tool we use called Sococo. It is basically a multi-person voice and text chat system with a virtual office overlaid on top. This allows for some organization of teams and easy switching between teams and individuals for communication. It also has multi-way screen sharing, so up to 4 people can share their screens at once in a single chat room.
There are most likely plans to make this paid software as they add new features (Skype integration and file sharing are on the way). So if they keep up this quality and innovation, it is definitely a tool to keep an eye on.
A few of my co-workers were asking me how I keep finding bugs in server-side validation when they can’t reproduce the error at all via the GUI layer. What I told them about was a plugin I use (there are many similar to it) called TamperData.
TamperData is a free plugin that allows you to monitor, intercept, and modify HTTP POSTs after they have been submitted by your browser. This lets you see how your data is being sent, and override any client-side validations that were imposed.
Below are a few screenshots of the free plugin. You can download it here.
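The same idea can be sketched in a few lines of Python: build the form POST yourself so whatever JavaScript validation the page enforces never runs. The URL and field names below are hypothetical placeholders, not a real endpoint:

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint and form fields; the point is that the server
# receives exactly what we send, with no client-side checks in between.
url = "http://localhost:8000/signup"
fields = {
    "age": "-5",              # client-side JS might forbid negative ages...
    "email": "not-an-email",  # ...or malformed addresses; the server must too.
}

data = urllib.parse.urlencode(fields).encode("utf-8")
request = urllib.request.Request(url, data=data, method="POST")
# response = urllib.request.urlopen(request)  # uncomment against a real test server
```

If the server accepts values like these, the validation only ever existed in the browser.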
This is an Example of a Kanban board from http://www.leankitkanban.com
As a new team that has transitioned to Kanban and also deals with distributed team members, we decided to implement a virtual Kanban board. After googling around for possible virtual Kanban boards, we were unable to find any we felt suited our needs. This led me to ask fellow teams inside development who also deal with distributed teams and Kanban, which led to the recommendation of the LeanKit Kanban board (link above).
So we started implementing the new Kanban board just over two weeks ago, with pretty good success. The board is web based, and you get access to 1 board with 5 users for free. Because we have a team of 7, I created 3 accounts for the team’s PO, dev lead, and team lead, plus one generic developer account and one generic tester account. We then created our Kanban columns for process/development flow with simple drag-and-drop editing. We were even able to create feature swim lanes with their own sub-WIP limits, to enforce one story at a time for internal development projects (keeping the main focus on deliverables).
We still use an internal system (JIRA/GreenHopper), so we titled our stories with the JIRA number and a quick description. In the comment field we put the team members working on the story, the effort, the type, and any additional information we might have needed. Then the process lead engineer (that happens to be me) would manage syncing up JIRA and the board, or at least observe that this was being followed as team members moved the stories.
So far we have been able to use the board to point out the bottlenecks in our development process; for us it was mainly having builds available to test and to demonstrate our “delivered” product to our Product Owners.
So far I can highly recommend this digital Kanban board for people with distributed teams. We keep it always on display, like a real board, using the television pictured above. There are still many new features in development, and you can vote right on the website for what you want to see next; they have a very agile development cycle themselves and deliver frequent updates.
I will report back later in the coming months to let you know how the continued and extensive use of this board fares.
I just want to thank everyone who attended Mike Longin’s and my talks on SWAT and Applying Modern Software Development Techniques to UI Testing. We had a lot of amazing feedback, and it was nice to meet everyone and hear their thoughts afterwards. As promised, here are the two presentations, and some links.
Also you can download SWAT @
If you have any feedback, please do not hesitate to email Mike Longin or myself.
The talk by 2D Boy developer Ron Carmel, creator of World of Goo, was probably one of my most enjoyable presentations of the Agile conference; however, it was also the least focused on actual “Agile” and more on user experience testing.
The main focus of their discussion was user experience testing: observing and reporting on the user’s interaction with the software, in this case the “World of Goo” video game, and how the user responds to that interaction. This is something that often happens only after a product has already shipped. We may beta test it, or get a few “UI specialists” to run through experience testing, but do we ever really pilot playtest during the software development process?
The answer is usually no. We do not, and we probably should. The reason, I believe, is that we would find more than just “bugs”: we would find defects in the user’s actual experience. The code may work fine, but if the user experience is difficult, confusing, frustrating, or anything other than positive, we can witness these problems as they occur and fold the adjustments into the code during development, not after shipment.
Now, unfortunately there is not always a way to have people off the street come and test your user experience during development, mostly for legal and security reasons. However, there are probably many people in the company who could fill the role of an untrained user to help test the experience. Having someone run through your iteration’s worth of code when it is near complete (at the story level), to see if they understand it and whether the deliverables are clear, can be a very valuable feedback tool.
I am very curious, though, about successful user experience testing in Agile development shops focused on web services, and how they achieve great usability. Any comments?
Here is the link to the slides from Mike Longin’s and my 2009 Agile presentation on “Applying Modern Development Software Techniques to UI Testing”.
I attended the Agile @ Yahoo: Experiences from the Trenches presentation this morning, and so far it has been one of my most enlightening and enjoyable experiences. The Yahoo speakers walked through their pre-agile, agile-adoption, and agile-maturity stages, sharing some of the pain points and successes they endured along the way.
I am not going to focus much on the pre-agile pain points, as I believe those are very similar to most waterfall environments and have been heavily discussed in many agile sessions, books, and trainings.
First I want to talk about some of the risks they experienced and needed to overcome to improve the Agile transition and practice.
Now, some of the things to do to make Agile “stick”:
This was a highly interesting 45 minute session, and I will follow up some more once I read the white paper they submitted.