In the Middle

Should you automate everything, or nothing? Should you test everything, or nothing? How about leadership – should you dictate every detail of what your team does, or give them no guidance at all? The answer to all of these questions, as you’d expect, is “somewhere in the middle”.

In my experience, most people handle the “how much” question when dealing with a range of potential solutions by starting with a reasonable mid-point and working from there – e.g., “let’s automate half of our tests,” or “we’ll test what’s most important,” or “I’ll give my team some guidance, and then give them some freedom in how they deal with the details.”

Those options are reasonable, so the technique seems to work. In fact, it does work – but I think it can be better.

A brainstorming technique I use (someone please tell me if I’ve inadvertently stolen the concept) is to first spend a reasonable amount of time focusing on the extremes – because often, some great ideas for “the middle” come out of that brainstorming. Think, for example, about what you’d do if you tested everything (yes, I know, impossible, but think about it). You’d likely use an army of vendors, need some sort of coverage metrics, etc. Then think about what you’d do if you tested nothing (you might have developers own unit and functional tests, and rely on customer feedback for scenarios, etc.). In the end, there may be something you take from both brainstorming sessions when you figure out what “the middle” looks like.

How about we try another example? Let’s say you are testing application compatibility with version 2.0 of the “Weasel” operating system. There were 100 applications written for Weasel 1.0, and you have copies of all of them. How many of those applications do you test? If you go straight to the middle (which, again, isn’t a bad choice), you’d probably prioritize the apps by sales numbers and test the top n apps based on how much time you have. Not a bad solution, and one I’d feel comfortable bringing to the team leaders.
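As a rough sketch of that straight-to-the-middle plan (the app names, sales figures, and time budget below are all invented for illustration), the prioritization is really just a sort and a slice:

```python
# Hypothetical sketch: test the top-n Weasel 1.0 apps by sales,
# where n is whatever fits in the time budget. All numbers invented.
apps_by_sales = [
    ("TaxWizard", 250_000),
    ("PaintPro", 90_000),
    ("DoodleKid", 12_000),
    # ... 97 more apps
]

hours_available = 200
hours_per_app = 8
n = hours_available // hours_per_app  # how many apps we can afford to test

to_test = sorted(apps_by_sales, key=lambda app: app[1], reverse=True)[:n]
print(f"Testing {len(to_test)} of {len(apps_by_sales)} apps:",
      [name for name, _ in to_test])
```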

But let’s think about the extremes for a bit. What would testing all 100 applications look like? We’d definitely need to outsource the testing – but in order to do that, we’d need some clear directions on what “testing” an application entailed. We could write separate notes for each application, but we could probably come up with something generic (install, uninstall, copy/paste, print, major features, etc.) that would work for all of them. This solution is certainly going to be too expensive for Weasel management to approve, but we’re just brainstorming.
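To make that concrete, here’s a minimal sketch of what a generic charter might look like – one set of steps a vendor could run against any app (the step names and the app name are placeholders, not a real test plan):

```python
# Hypothetical sketch: one generic charter reused for every app,
# instead of separate notes per application. Steps are illustrative only.
GENERIC_CHARTER = [
    "install the app",
    "launch it and exercise the major features",
    "copy/paste between this app and another",
    "print a document",
    "uninstall cleanly",
]

def charter_for(app_name: str) -> str:
    """Render the generic charter as instructions a vendor could follow."""
    steps = "\n".join(f"  {i}. {step}"
                      for i, step in enumerate(GENERIC_CHARTER, 1))
    return f"Test charter for {app_name}:\n{steps}"

print(charter_for("PaintPro"))
```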

Now think for a while about what it would be like to test none of the apps (and not piss off customers). Well, if none of the programming interfaces used by the apps changed, they’d all probably still work. But this is Weasel 2.0, so of course we’re going to tweak the APIs. So, maybe it’s possible to profile the APIs used by the Weasel 1.0 apps, diff that against the APIs we’re changing in Weasel 2.0, and then develop an API test suite that ensures compatibility. There may be something here…
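Here’s a minimal sketch of that diffing idea, assuming we already have the app-to-API mapping (say, from profiling the 1.0 apps) and the list of APIs changing in 2.0 – every name below is invented:

```python
# Hypothetical sketch: flag apps whose profiled API usage intersects
# the set of APIs being changed in Weasel 2.0. All names invented.
apis_used_by_app = {
    "TaxWizard": {"weasel_fileio", "weasel_net"},
    "PaintPro": {"weasel_draw", "weasel_print"},
    "DoodleKid": {"weasel_draw"},
}

changed_in_2_0 = {"weasel_print", "weasel_net"}

# An app needs compatibility attention only if it calls a changed API.
at_risk = {app: used & changed_in_2_0
           for app, used in apis_used_by_app.items()
           if used & changed_in_2_0}

for app, apis in sorted(at_risk.items()):
    print(f"{app}: cover {sorted(apis)} in the API compatibility suite")
```

Any app whose API usage doesn’t intersect the changed set (DoodleKid here) drops out entirely, which is what makes the “test nothing” extreme worth mining for ideas.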

Of course, neither of these solutions is the right answer (nor are my brainstorming sessions complete), but I’ll bet that if you try this approach the next time you’re dealing with a range of possible solutions, you’ll come up with some new ideas about what you may choose to do “in the middle”.

Comments

  1. Very timely as I’m getting ready to assign work out to my team for a handful of upcoming projects. We always seem to aim for the middle, but maybe we need to redefine where the middle is.

  2. Nice post. I agree with your view. As you said, “What would testing all 100 applications look like? We’d definitely need to outsource the testing.” Now I have a question: how do you evaluate a QA testing partner, and how important is it to hire an independent software testing company? Please suggest.
