The Skeptic’s Dilemma

For testers, being skeptical is generally a good thing. When someone says, “Our application doesn’t have any reliability errors,” I, for one, am skeptical. I’ll poke and prod and (usually) find something they haven’t thought about. There’s power in skepticism. Last year, I led a team of testers in performing code reviews of production code. My hypothesis was that while developers perform code reviews thinking, “Does the code do what it’s supposed to do?”, testers think, “Under what conditions will the code not do what it’s supposed to do?” You can insert the comment about testers being pessimistic (or overly pessimistic) here, but in general, the tester mindset is to question statements that seem…well, questionable.

But it’s easy to go overboard with skepticism. Time and time again, I hear good testers apply their skepticism broadly and liberally. Some (paraphrased) quotes I’ve heard recently include:

  • “Model-based testing is interesting, but it doesn’t work in a lot of places”
  • “I’m skeptical of static analysis tools – sometimes they have false positives”
  • “Metrics are evil, because someone may use them to measure people”

I agree wholeheartedly with each of these quotes. However, I worry that folks are throwing the baby out with the bathwater. Model-based testing (just an example) is a wonderful test design technique for stateful test problems. Although occasionally someone will screw up the value of MBT by claiming that it’s the only test approach you’re ever going to need, it’s just another technique (and the perfect technique given the proper context). Static analysis tools are also awesome, but they aren’t perfect. It’s good to measure some things too, but sure, one can screw it up.
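
To make the “stateful test problems” point concrete, here’s a rough sketch of the idea – the names (BoundedStack, run_model_based_test) are purely illustrative and not from any particular tool: you describe the legal states and actions in a simple model, let the computer walk random paths through that model, and check that the real implementation never disagrees with it.

```python
import random


class BoundedStack:
    """Hypothetical system under test: a stack that refuses to grow past its
    capacity. In real life this would be production code, not something
    defined alongside the test."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self._items = []

    def push(self, item):
        if len(self._items) >= self.capacity:
            raise OverflowError("stack full")
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("stack empty")
        return self._items.pop()

    def size(self):
        return len(self._items)


def run_model_based_test(steps=1000, seed=42):
    """Drive the system under test with a random walk, keeping an 'obviously
    correct' model (a plain list) in lockstep and comparing after each step."""
    rng = random.Random(seed)
    sut = BoundedStack(capacity=3)
    model = []  # the model of expected state

    for step in range(steps):
        if rng.choice(["push", "pop"]) == "push":
            value = rng.randint(0, 9)
            if len(model) < sut.capacity:
                sut.push(value)          # model says push is legal
                model.append(value)
            else:
                try:                     # model says push must be rejected
                    sut.push(value)
                except OverflowError:
                    pass
                else:
                    raise AssertionError(f"step {step}: push past capacity succeeded")
        else:
            if model:
                assert sut.pop() == model.pop(), f"step {step}: wrong value popped"
            else:
                try:                     # model says pop must be rejected
                    sut.pop()
                except IndexError:
                    pass
                else:
                    raise AssertionError(f"step {step}: pop from empty stack succeeded")

        assert sut.size() == len(model), f"step {step}: sizes diverged"


if __name__ == "__main__":
    run_model_based_test()
    print("model and implementation agreed on every step")
```

A thousand-step random walk like this exercises push/pop sequences nobody would bother to write by hand – which is exactly where stateful bugs like to hide.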

I’m trying to think of anything in my life that works perfectly in every situation, but I’m coming up empty. I run into situations nearly every day where someone has a good idea that will obviously work most of the time – but not always. In these situations, we could just send them back to the drawing board, telling them, “I’m skeptical of your approach, because it won’t work in situation z,” but it’s probably a better idea to have a conversation about the limitations, understand where and when the approach may fail, and discuss mitigations or workarounds. Instead of throwing out the idea of running static analysis tools because of the potential false positives, discuss the false positive problem. Find out what causes them. Tweak the configuration. Do whatever you need to do to get the value out of the approach.
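
As one (purely illustrative) example of what “tweak the configuration” can look like: most analyzers let you suppress a reviewed false positive right at the offending line, with a note explaining why, instead of switching the tool off. The snippet below assumes pylint and a made-up framework-driven handler; the mechanics will differ for whatever analyzer you actually run.

```python
# Hypothetical handler whose signature is dictated by the calling framework.
# The analyzer flags 'context' as unused; the team reviewed it, agreed it's a
# false positive here, and suppressed it at this one site (with a reason)
# rather than disabling the check everywhere or abandoning the tool.
def handle_event(event, context):  # pylint: disable=unused-argument
    """Process an incoming event; 'context' is required by the framework."""
    return {"status": "ok", "payload": event}
```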

Over the years, I’ve found value in some pretty stupid approaches. It seems that we should be able to find even more value in some of the ideas that are frequently discounted.

Even if we’re skeptical.

3 Comments

  1. Yes! Good ideas don’t work all the time. And bad ideas often work some of the time.

    I am most skeptical of many of the things that have provided me with the most value. (I’m even skeptical of this claim.) Skepticism is the rejection of certainty, not the rejection of everything that is uncertain.

    Thanks for the reminder to question in order to understand, rather than to dismiss.

    Ben

  2. I think what you are trying to say is that all practices are heuristic: they may have problem-solving value in context, and they may fail. Sounds good to me.

    My concern is that you seem to:

    1. confuse skepticism with scoffing.
    2. confuse concern with rejection.
    3. confuse rejection with concern.

    Skepticism is not the same as scoffing, although popular culture nearly always conflates them. Skepticism is the fear of certainty. I practice being skeptical of every one of my favorite techniques, because I think that protects me from complacency and self-deception. Skepticism always involves some form of doubt, even if that doubt is rather remote and abstract. Skepticism does not require me to scoff at proposed ideas, however.

    I am concerned about some good metrics, model-based testing, etc. for the same reason I am concerned when I am holding a sharp knife: I am aware of specific dangers and I wish to avoid them. This is not a bad thing. Expressing concern (watch out for that knife!) doesn’t mean I reject the tool or idea. Even when an idea or tool is useful, it still may not be a worthy use of time and resources. I recommend taking objections to your favorite practices seriously; respond to them; deal with them; don’t just complain about skepticism gone wild.

    On the other hand, sometimes I do mean to reject a practice, and my rejection is instead interpreted as some kind of vague concern. For instance, test case counts are a popular and, I believe, incredibly stupid metric. It seems to me almost impossible to use test case counts in a responsible and useful way. I reject test case metrics. For someone to come back and say something about babies and bathwater is just annoying. There is no baby there; just sewage.

    In your post, you haven’t clearly delineated these issues. But inasmuch as you are saying that ideas and tools may be sometimes good and sometimes bad, sure. I mean, I’m skeptical about that, but it seems reasonable.

    — james

    1. Thanks for the comments, James.

      You’re right – I confused the terms – there is a difference, but I think you (and hopefully other readers) will see the point.

      I like the sharp knife analogy. My fear is that there are many people who say, “I don’t use knives, because I’m afraid I’ll get cut.” We need more people who know the knife is sharp, so they take the time to learn how to use it safely. That’s a gap (at least from what I see in blogs, Twitter, etc.).

      As for test case counts (and subjects like that), I usually dig deep and ask, “Is there any situation where these may be useful?” Many times, I can think of something, so the effort is worth it (although I continually fail to find an answer to this question with regard to test case counts).
