Thinking about bugs

I probably haven’t mentioned in a while how nice it is to be back on a product team. The complexity and dynamic nature of software development is something I missed more than I knew, and I’m having fun being back in the flow of shipping a product for a million or so users. One thing that I never got a chance to think about while I was in EE was bugs. To be clear, I got to think about the concept of bugs, but not about real live bugs. Bugs are a big byproduct of the testing process, and like it or not, they help dictate some part of the flow of software development. At the very least, they give us something to think about.

So let me share what I’ve been thinking about…

Every bug is more difficult to find than the previously found bug. There will be little hops off the axis, but in general, I believe this to be true. Imagine if the opposite were true – if every bug were easier to find than the previous bug, we’d find more bugs every day and eventually have a product consisting entirely of bugs (insert joke about your least favorite product here).

As bug discovery decreases, product quality improves. There are plenty of cases where this may not be true, but if few bugs are being discovered, it probably means that product quality is improving (or that all of the testers went on vacation) – but …

Tester skill and knowledge increases with each found bug. When a tester finds a bug, they also acquire knowledge. They learn something about the product, or a technique or behavior that they can use to find bugs in the future.

Tester skill can’t grow as fast as bug discovery difficulty increases. If tester skill improved faster than the difficulty of finding the next bug increased, we’d find more bugs over time until we had a product with more bugs than …yes – you see where I’m going.

<some made up line charts would do well here, but I’m feeling lazy>
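Since I’m not drawing them, here’s a minimal, purely illustrative sketch instead (Python/matplotlib, with made-up growth rates and no real data) of how these curves might interact – difficulty compounding a bit faster than skill, and discovery tapering off as a result:

```python
# Purely illustrative: made-up numbers, not a model of any real product.
# Bug-finding difficulty creeps up over time, tester skill grows too (but
# slightly slower), and the resulting discovery rate tapers off.
import matplotlib.pyplot as plt

weeks = range(1, 53)
difficulty = []       # effort needed to find the "next" bug
skill = []            # aggregate tester skill / product knowledge
found_per_week = []   # bugs discovered each week

d, s = 1.0, 1.0
for week in weeks:
    d *= 1.05   # assumption: difficulty compounds a little faster...
    s *= 1.03   # ...than tester skill grows
    # discovery rate driven by the skill-to-difficulty ratio
    found_per_week.append(20 * (s / d))
    difficulty.append(d)
    skill.append(s)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(weeks, difficulty, label="bug-finding difficulty")
ax1.plot(weeks, skill, label="tester skill")
ax1.set_xlabel("week")
ax1.legend()
ax2.plot(weeks, found_per_week, color="tab:red")
ax2.set_xlabel("week")
ax2.set_ylabel("bugs found per week")
ax2.set_title("discovery tapers off")
fig.tight_layout()
plt.show()
```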

What I’m struggling with is coming up with a better understanding of how these statements interact. In more concrete terms, if the number of bugs discovered in a product or feature area isn’t tapering off at “an expected rate”, does that mean that product quality isn’t improving as much as expected, or that tester skill is improving more than expected? Since the answer is (you guessed it) “it depends”, how do you dig deeper? How do you know the right answer with some degree of confidence?

I have answers for this that satisfy me (customer data is a big part of this), as well as correlating a bunch of other measurements, but I wonder if anyone else thinks about stuff like this and has other thoughts or ideas or experiences to share.

Comments

  1. Hi Alan,

I’d like to apologize upfront before criticizing.
    [AP] No need to apologize

    Every bug is more difficult to find than the previously found bug
    I think that is true only in the following context: the same build of the same application tested on the same test environment by the same test team using the same approach.

    [AP] I was thinking in aggregate rather than every bug (I know I said “every” though). In general though, as a team finds more bugs, the remaining bugs will be more difficult to find (assuming they change approaches frequently, which good test teams do). Perhaps a better phrasing would be something like “Over time, the difficulty of bug discovery increases”.

Here I can bring up a huge number of examples I’ve witnessed, starting with situations where a “last minute fix” build completely screws things up, and ending with examples of how a fresh team member can find lots of overlooked issues, or of functional defects found during performance testing activities.

    [AP] The last minute screw up falls into the “hops” category I was referring to. However, in general, the difficulty slope does increase over time.

    As bug discovery decreases, product quality improves
    If we follow Gerald Weinberg’s definition of software quality, we may find plenty of contexts where changing number of issues in a particular program didn’t change customers’ attitude towards the program’s quality.

    [AP] I agree that you can have a buggy product that provides value to some person. However, there is a relationship (indirect) between bugs and value to a person (I’ll have to write a longer post on this relationship sometime). I can’t define value for every possible customer, so I’ll just say in general (this post is about generalities, not specifics), customers value products where they run into fewer bugs (or at least they get frustrated less).

    Examples range from applications so buggy that customers have stopped worrying about quality, to programs that have no alternative anyway. Or it could be a product that already completely satisfies a customer, so fixing bugs in functionality they never use would have no impact – but forcing them to download and install an update might bother them a lot.

    [AP] Think about the cycle that led to the release of these products rather than the end result that customers see. The really buggy product that customers can’t use was probably 100x as buggy three months before release (or there was no skilled testing on the product). Think how the products got to this state – if a product fully satisfies a customer, chances are that it wouldn’t have if released six months earlier. Even if there’s no alternative product, or I hate the product but have to use it, chances are that I would hate it more if fewer bugs had been fixed before release.

    How do you know the right answer with some degree of confidence?

    I guess I’d try to do what you did: look for external feedback. Maybe, in addition to customer data, involve internal teams for evaluation sessions.

    Thank you,
    Albert Gareev

  2. I think you’re missing some variables:
    1) Bugs are directly proportional to the complexity and size of the release.
    2) Bugs come about as you add or change lines of code. The amount of code written increases the number of bugs.

    Unless you work on a trivial app, the bug supply can constantly replenish. Tester skill may only be marginally related to knowledge of the current project.

    So while testers are important, other factors also play a part:
    1) Consistency of coding paradigm. This can usually be attributed to the team buying into a framework or having a commitment to develop one. This may be related to how work is done or what the work is being done on. Framework development and Domain Centric design IMO have more to do with software quality than using bugs and tester skill as metrics.

    You’re trying to build a better mousetrap. I would instead concentrate on leaving behind fewer mice.

    [AP] I’m not trying to do anything – just exploring the relationship between known errors, team growth, and product quality.

    Churn, as I mentioned, is definitely a contributing factor – as are dozens more.
