Judgment in Testing

Last week I came across this article about Samsung fixing an SMS bug in their software. For the most part, it’s the typical “we found a bug and we’re fixing it” story that is all too common in the news these days – but I was struck by this line:

Another annoying bug is related to the fact that the device cannot maintain phone calls longer than 3 minutes without rebooting.

To me, that’s a much more interesting (and critical) bug than an SMS issue. Yes – I do realize that a lot of people use their phone far more for SMS than for phone calls, but come on…

But I’ve also been wondering lately about tester judgment (there’s probably a better word for this, but for today, it will do). How do testers determine whether a bug is one that hardly anyone would care about vs. one that directly impacts quality (or the customer’s perception of quality)? (Or something in between?) Of course, testers should report anything that may annoy a user, but learning to differentiate between an “it could be better” bug and an “oh-my-gosh-fix-this” bug is a skill that some testers seem to learn slowly.

I once saw someone post a bug on their blog and ask readers to identify it. It was interesting to see several replies pointing out minor issues that may or may not have been bugs before someone finally(?) saw the critical bug. This wasn’t an isolated incident – I sometimes see testers so anxious to find a bug that they force the issue, trying to make a bug appear where there isn’t one, and miss a bigger issue right in front of them. Other testers, however, seem to always be able to home in on the biggest, most relevant issues.

So what is it that makes some testers zero in on critical issues, while others get lost in the weeds? Domain knowledge is a small part of it, but certainly not a critical factor in my observations. Critical thinking is a bigger part of it – as is experience – but I haven’t yet figured out how to help testers consistently get out of the weeds.

Certainly not a huge issue, but if you have thoughts, please let me know.

Comments

  1. “So what is it that makes some testers zero in on critical issues, while others get lost in the weeds?”

    I think much of it is motivation. Some shops, and some testers, are very positively motivated toward a higher bug count.

    If a tester begins a testing session with the thought “my task is to find lots of bugs”, it’s only human nature to pick off the low-hanging fruit first.

    If instead the tester thinks “my task is to find the most important bugs”, she may take a different approach.

    Sometimes it doesn’t matter much what you report first, since you’ll have time later for the significant issues. Often, though, testing time is scarce, so you must prioritize importance over quantity.

    1. Joe – I think you’re spot on with this comment. I forget sometimes the value some testers put on finding *any* bug vs. the most important bugs.

      Thanks for commenting (and the reminder).

  2. A colleague and I had a similar discussion recently, and a big piece we found missing among those who failed to zero in on critical bugs was customer knowledge. Some folks focus only on testing a small set of features; they don’t understand how customers use those features in the context of the larger product and can’t identify mainline customer scenarios.

    An example:

    A tester with weak customer knowledge sees an error dialog they find slightly confusing. They file a low-priority bug that simply indicates the wording could be clearer. The bug gets pushed to the bottom of the pile, the product team starts to run out of time, and it fails to get fixed. Oh well.

    A tester with deep customer knowledge sees the exact same error dialog, but knows that for users upgrading projects written in the previous version of the product, this is the first dialog they’ll see and they’ll be blocked until they figure out what it’s telling them to do. Now the bug is going to be treated very differently and has a much greater chance of being fixed. This tester can better assess the customer impact because they understand the customer.

    Speaking from my own experience, it can be challenging to develop customer knowledge. First, I’ve found that there’s often no expectation that testers develop this. Or if there is some expectation, it’s a soft expectation. Second, it takes time and effort to build this knowledge. You won’t get it from working on the same feature for 5 years in a row. You need to actively seek it out.

    “Mark, building customer knowledge sounds great, but there’s tons of other stuff I need to do, so I think I’ll pass. As for that rebooting issue, who has a phone conversation longer than 3 minutes?”

  3. Maybe looking at the software from its user’s point of view might help? If you place yourself in their shoes, you’ll immediately feel the real impact of the issue.

  4. Perfect timing for me to stumble upon this blog post, because I’m organizing a testing dojo tomorrow. While reading this post it occurred to me that, in addition to having people find bugs in the application under test, I will also ask them to try to categorize the bugs by severity.

    Since it’s supposed to be a learning experience, why not learn more than just how to test – and for exactly the reasons mentioned by you and by Mark Berryman in an earlier comment.

  5. I think this all boils down, ultimately, to “how well does the tester in question understand realistic customer usage situations?”

    If you’re a good tester, you’re a domain expert, are well on your way to becoming one, or have a well-founded understanding of the general use cases and the instinct to explore around the fringes of what you understand. You’re also far more likely to use exploratory testing effectively rather than static, scripted testing.

    If a tester is inexperienced, fails to understand how users interact with his/her products, or just can’t be arsed to treat dynamic thinking as a key part of the job, he or she may occasionally luck out and find a critical flaw, but in general the bugs discovered this way are going to be scattershot. Particularly if this sort of tester is writing test cases, testing is going to spend more time in the rough than in the fairway.

    This isn’t to say that experienced testers don’t get out there in the tall grass sometimes, but they’re significantly less likely to spend much time there trying to figure out where to go next.

  6. Agreed that a reboot after 3 minutes sounds like a bigger issue than a texting problem, but that is my own opinion and I don’t have all the facts.
    I don’t know who the phone is marketed to, and I don’t know how many people have reported the SMS issue versus the 3-minute issue.
    According to the article the 3-minute issue is ‘annoying’, so maybe to the users of the phone it’s not that big a deal?

    As for tester judgement, is it up to me to make the judgement call? No, it’s not. I have my opinion, experience, and knowledge, and based on the facts I have available I will try to judge whether an issue is critical or not. I will also ask other people involved – BAs, devs, POs, PMs, whoever I can get to – but the final ‘judgement’ is not mine to make. I don’t have all the facts, so the best I can do is an educated guess.
