The Skeptic’s Dilemma

For testers, being skeptical is generally a good thing. When someone says, “Our application doesn’t have any reliability errors”, I, for one, am skeptical. I’ll poke and prod and (usually) find something they haven’t thought about. There’s power in skepticism. Last year, I led a team of testers in performing code reviews of production code. My hypothesis was that while developers review code thinking, “Does the code do what it’s supposed to do?”, testers think, “Under what conditions will the code not do what it’s supposed to do?” You can insert the comment about testers being pessimistic (or overly pessimistic) here, but in general, the tester mindset is to question statements that seem…well, questionable.
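To illustrate the difference in a few lines of code (a toy example of my own, not from the reviews themselves): a developer review might pass the function below because it does what it’s supposed to do, while a tester review asks when it won’t.

    # Illustrative Python sketch (hypothetical). A developer review asks,
    # "Does it do what it's supposed to do?" (yes: it averages the values).
    # A tester review asks, "When will it NOT do what it's supposed to do?"
    def average(values):
        # Fails for an empty list: sum([]) is 0 and len([]) is 0,
        # so this raises ZeroDivisionError.
        return sum(values) / len(values)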

But it’s easy to go overboard with skepticism. Time and time again, I hear good testers apply their skepticism broadly and liberally. Some (paraphrased) quotes I’ve heard recently include:

  • “Model-based testing is interesting, but it doesn’t work in a lot of places”
  • “I’m skeptical of static analysis tools – sometimes they have false positives”
  • “Metrics are evil, because someone may use them to measure people”

I agree wholeheartedly with each of these quotes. However, I worry that folks are throwing the baby out with the bathwater. Model-based testing (just one example) is a wonderful test design technique for stateful test problems. Occasionally someone will undercut the value of MBT by claiming it’s the only test approach you’ll ever need, but it’s just another technique (and the perfect technique given the proper context). Static analysis tools are also awesome, but they aren’t perfect. It’s good to measure some things too, but sure, one can screw it up.
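To show what I mean by a stateful test problem, here’s a minimal model-based testing sketch (a toy of my own, not tied to any particular MBT tool): describe the expected behavior as a state machine, generate event sequences by walking the model, and check that the system under test stays in lockstep.

    import random

    # The model: expected behavior as a (state, event) -> next-state table.
    MODEL = {
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    class Turnstile:
        """Stand-in system under test, implemented independently of the model."""
        def __init__(self):
            self.locked = True

        def handle(self, event):
            if event == "coin":
                self.locked = False
            elif event == "push":
                self.locked = True
            return "locked" if self.locked else "unlocked"

    def run_model_based_test(steps=200, seed=1):
        rng = random.Random(seed)
        sut, model_state = Turnstile(), "locked"
        for i in range(steps):
            event = rng.choice(["coin", "push"])
            model_state = MODEL[(model_state, event)]  # advance the model
            actual = sut.handle(event)                 # advance the system
            assert actual == model_state, (
                f"step {i}: after '{event}' expected {model_state}, got {actual}")
        print(f"{steps} generated steps matched the model")

    if __name__ == "__main__":
        run_model_based_test()

The payoff is that one small model can generate as many distinct test sequences as you care to run, which is exactly where hand-enumerated test cases struggle.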

I’m trying to think of anything in my life that works perfectly in every situation, but I’m coming up empty. I run into situations nearly every day where someone has a good idea that will obviously work most of the time – but not always. Given these situations, we could send them back to the drawing board with, “I’m skeptical of your approach, because it won’t work in situation z”, but it’s probably a better idea to have a conversation about the limitations, understand where and when the approach may fail, and discuss mitigations or workarounds. Instead of throwing out the idea of running static analysis tools because of the potential false positives, discuss the false positive problem. Find out what causes them. Tweak the configuration. Do whatever you need to do to preserve the value of the approach.
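As one concrete (and hypothetical) illustration of “tweak the configuration”: most static analysis tools let you suppress a specific message at a specific site once you’ve confirmed it’s a false positive, rather than turning the tool off. With pylint, for example, an inline pragma does the job:

    # Hypothetical handler in a codebase scanned by pylint. We've confirmed the
    # unused-argument warning is a false positive here (the framework dictates
    # the callback signature), so we suppress that one message at this one site
    # and keep the checker enabled everywhere else.
    def on_event(event, context):  # pylint: disable=unused-argument
        # 'context' is required by the (hypothetical) framework's callback
        # contract, even though this handler never reads it.
        return {"status": "ok", "kind": event.get("kind")}

The same idea applies project-wide: tune the ruleset in the tool’s configuration file so the signal stays high enough that people keep reading the reports.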

Over the years, I’ve found value in some pretty stupid approaches. It seems that we should be able to find even more value in some of the ideas that are so frequently discounted.

Even if we’re skeptical.


3 Comments

  1. Yes! Good ideas don’t work all the time. And bad ideas often work some of the time.

    I am most skeptical of many of the things that have provided me with the most value. (I’m even skeptical of this claim.) Skepticism is the rejection of certainty, not the rejection of everything that is uncertain.

    Thanks for the reminder to question in order to understand, rather than to dismiss.

    Ben

  2. I think what you are trying to say is that all practices are heuristic: they may have problem-solving value in context, and they may fail. Sounds good to me.

    My concern is that you seem to:

    1. confuse skepticism with scoffing.
    2. confuse concern with rejection.
    3. confuse rejection with concern.

    Skepticism is not the same as scoffing, although popular culture nearly always conflates them. Skepticism is the fear of certainty. I practice being skeptical of every one of my favorite techniques, because I think that protects me from complacency and self-deception. Skepticism always involves some form of doubt, even if that doubt is rather remote and abstract. Skepticism does not require me to scoff at proposed ideas, however.

    I am concerned about some good metrics, model-based testing, etc. for the same reason I am concerned when I am holding a sharp knife: I am aware of specific dangers and I wish to avoid them. This is not a bad thing. Expressing concern (watch out for that knife!) doesn’t mean I reject the tool or idea. Even when an idea or tool is useful, it still may not be a worthy use of time and resources. I recommend taking objections to your favorite practices seriously; respond to them; deal with them; don’t just complain about skepticism gone wild.

    On the other hand, sometimes I do mean to reject a practice, and my rejection is instead interpreted as some kind of vague concern. For instance, test case counts are a popular and I believe incredibly stupid metric. It seems to me almost impossible to use test case counts in a responsible and useful way. I reject test case metrics. For someone to come back and say something about babies and bathwater is just annoying. There is no baby there; just sewage.

    In your post, you haven’t clearly delineated these issues. But inasmuch as you are saying that ideas and tools may be sometimes good and sometimes bad, sure. I mean, I’m skeptical about that, but it seems reasonable.

    — james

    1. Thanks for the comments, James.

      You’re right – I confused the terms – there is a difference, but I think you (and hopefully other readers) will see the point.

      I like the sharp knife analogy. My fear is that there are many people who say, “I don’t use knives, because I’m afraid I’ll get cut”. We need more people who know the knife is sharp and take the time to learn how to use it safely. That’s a gap (at least from what I see in blogs, Twitter, etc.).

      As far as test case counts (and subjects like that) go, I usually dig deep and ask, “Is there any situation where these may be useful?” Many times, I can think of something, so the effort is worth it (although I continually fail to find an answer to that question with regard to test case counts).
