I probably haven’t mentioned in a while how nice it is to be back on a product team. The complexity and dynamic nature of software development are things I missed more than I knew, and I’m having fun being back in the flow of shipping a product for a million or so users. One thing that I never got a chance to think about while I was in EE was bugs. To be clear, I got to think about the concept of bugs, but not about real live bugs. Bugs are a big byproduct of the testing process, and like it or not, they help dictate some part of the flow of software development. At the very least, they give us something to think about.
So let me share what I’ve been thinking about…
Every bug is more difficult to find than the previously found bug. There will be occasional exceptions along the way, but in general, I believe this to be true. Imagine if the opposite were true – if every bug were easier to find than the previous bug, we’d find more bugs every day and eventually have a product consisting entirely of bugs (insert joke about your least favorite product here).
As bug discovery decreases, product quality improves. There are plenty of cases where this may not hold, but if few bugs are being discovered, it probably means that product quality is improving (or that all of the testers went on vacation) – but …
Tester skill and knowledge increases with each found bug. When a tester finds a bug, they also acquire knowledge. They learn something about the product, or a technique or behavior that they can use to find bugs in the future.
Tester skill can’t grow as fast as bug discovery difficulty increases. If tester skill improved faster than the difficulty of finding the next bug increased, we’d find more bugs over time until we had a product with more bugs than …yes – you see where I’m going.
<some made up line charts would do well here, but I’m feeling lazy>
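In lieu of those charts, here’s a rough sketch (Python and matplotlib) of the shape I have in mind. Every number and growth rate in it is made up purely for illustration; the only assumption baked in is the one above – the difficulty of finding the next bug grows a little faster than tester skill does, so bugs found per week tapers off.

```python
# A made-up illustration of the statements above.
# All numbers and growth rates are invented; nothing here is measured data.
import numpy as np
import matplotlib.pyplot as plt

weeks = np.arange(1, 53)

# Assumption: difficulty of finding the next bug grows faster than tester skill.
difficulty = 1.06 ** weeks     # relative cost of finding the next bug
tester_skill = 1.03 ** weeks   # relative bug-finding ability

# Bugs found per week falls off as difficulty outpaces skill.
bugs_found = 40 * tester_skill / difficulty

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(weeks, difficulty, label="Difficulty of next bug")
ax1.plot(weeks, tester_skill, label="Tester skill")
ax1.set_xlabel("Week")
ax1.set_ylabel("Relative level")
ax1.legend()

ax2.plot(weeks, bugs_found, color="tab:red")
ax2.set_xlabel("Week")
ax2.set_ylabel("Bugs found per week")

plt.tight_layout()
plt.show()
```

The exponential curves are just a convenient stand-in; any pair of curves where difficulty outpaces skill tells the same story of tapering bug discovery.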
What I’m struggling with is coming up with a better understanding of how these statements interact. In more concrete terms, if the number of bugs discovered in a product or feature area isn’t tapering off at “an expected rate”, does that mean that product quality isn’t improving as much as expected, or that tester skill is improving more than expected? Since the answer is (you guessed it) “it depends”, how do you dig deeper? How do you know the right answer with some degree of confidence?
I have answers for this that satisfy me (customer data is a big part of this), as well as correlating a bunch of other measurements, but I wonder if anyone else thinks about stuff like this and has other thoughts or ideas or experiences to share.