For testers, being skeptical is generally a good thing. When someone says, “Our application doesn’t have any reliability errors,” I, for one, am skeptical. I’ll poke and prod and (usually) find something they haven’t thought about. There’s power in skepticism. Last year, I led a team of testers in performing code reviews of production code. My hypothesis was that while developers perform code reviews thinking, “Does the code do what it’s supposed to do?”, testers think, “Under what conditions will the code not do what it’s supposed to do?” You can insert the comment about testers being pessimistic (or overly pessimistic) here, but in general, the tester mindset is to question statements that seem…well, questionable.
But it’s easy to go overboard with skepticism. Time and time again, I hear good testers apply their skepticism broadly and liberally. Some (paraphrased) quotes I’ve heard recently include:
- “Model-based testing is interesting, but it doesn’t work in a lot of places”
- “I’m skeptical of static analysis tools – sometimes they have false positives”
- “Metrics are evil, because someone may use them to measure people”
I agree wholeheartedly with each of these quotes. However, I worry that folks are throwing the baby out with the bathwater. Model-based testing (just one example) is a wonderful test design technique for stateful test problems. Although occasionally someone undercuts the value of MBT by claiming it’s the only test approach you’ll ever need, it’s really just another technique (and the perfect technique given the proper context). Static analysis tools are also awesome, but they aren’t perfect. It’s good to measure some things too, but sure, one can screw it up.
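For readers who haven’t seen model-based testing in action, here’s a minimal sketch of the idea for a stateful problem: drive a simple model and the system under test in lockstep with generated actions and compare the results at every step. The bounded-stack “system under test,” its capacity, and the random walk are all invented for illustration; they aren’t from any particular project.

```python
"""A minimal, hypothetical sketch of model-based testing for a stateful
problem. Everything here (the BoundedStack, its capacity, the random
walk) is invented for illustration."""
import random


class BoundedStack:
    """System under test: a stack that holds at most `capacity` items."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = []

    def push(self, item):
        if len(self._items) >= self.capacity:
            raise OverflowError("stack is full")
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("stack is empty")
        return self._items.pop()


def run_model_based_test(steps=1000, capacity=3, seed=0):
    """Apply random actions to the SUT and to a trivial model (a plain
    list with the same capacity rule), comparing outcomes each step."""
    rng = random.Random(seed)
    sut = BoundedStack(capacity)
    model = []  # the model's notion of the stack contents

    for _ in range(steps):
        action = rng.choice(["push", "pop"])
        if action == "push":
            value = rng.randint(0, 100)
            if len(model) < capacity:
                sut.push(value)          # model says push must succeed
                model.append(value)
            else:
                try:
                    sut.push(value)      # model says push must fail
                except OverflowError:
                    pass
                else:
                    raise AssertionError("push succeeded on a full stack")
        else:  # pop
            if model:
                assert sut.pop() == model.pop(), "pop returned wrong value"
            else:
                try:
                    sut.pop()            # model says pop must fail
                except IndexError:
                    pass
                else:
                    raise AssertionError("pop succeeded on an empty stack")


if __name__ == "__main__":
    run_model_based_test()
    print("model and system under test agreed on every step")
```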
I’m trying to think of anything in my life that works perfectly in every situation, but I’m coming up empty. Nearly every day I run into someone with a good idea that will obviously work most of the time – but not always. In those cases, we could send them back to the drawing board, telling them, “I’m skeptical of your approach, because it won’t work in situation z,” but it’s probably a better idea to have a conversation about the limitations, understand where and when the approach may fail, and discuss mitigations or workarounds. Instead of throwing out the idea of running static analysis tools because of potential false positives, discuss the false positive problem. Find out what causes them. Tweak the configuration. Do whatever you need to do to get the value out of the approach.
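As a hypothetical example of “tweak the configuration rather than abandon the tool”: most Python static analyzers let you suppress a specific message on a specific line. The event-handler shape below is invented for the sketch; the suppression comment syntax shown is pylint’s.

```python
# Hypothetical illustration of handling a static-analysis false positive
# narrowly instead of throwing out the tool. The handler signature is
# invented for this sketch; the inline suppression syntax is pylint's.


def on_click(event, context):  # pylint: disable=unused-argument
    """Click handler. Suppose the calling framework requires every handler
    to accept `context`, even though this one never uses it; the "unused
    argument" warning is then a false positive. We silence that one
    message on this one line, and the check stays on everywhere else."""
    print(f"clicked at {event}")
```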
Over the years, I’ve found value in some pretty stupid approaches. It seems we should be able to find even more value in some of the ideas we so frequently discount.
Even if we’re skeptical.