Leave it to Adam Goucher to beat me to the punch line. When I proposed that breaking down your definition of quality into a manageable set of ilities is a reasonable method for improving customer-perceived quality, the logical next step was to try and find out which of the ilities you need to care about. Adam's suggestion was:
Want to improve v2? Talk to customers of v1 and ask which of the ilities they suffer the most from. And/or talk to people who didn’t buy your software and ask them which ility chased them away. Those are the ones that count.
Perfect answer, but keep in mind that the questions you ask are critical. You can’t ask “do you want the product to be more reliable”, or even “from this list, choose the one you care about the most”. The former question will always result in a “yes” answer, and the latter will probably just result in confusion. Instead, ask questions like “tell me what you like most about the software” (or what you dislike the most). Ask open-ended questions and take notes – take lots of notes. After you’ve talked to a good sample of customers, break the notes into individual comments and start sticking them on a wall. Look for affinity – start grouping items and looking for themes. Then, see if an ility aligns with each theme. Eventually, you’ll have a bunch of big fat quality bull’s-eyes on the wall waiting for you to address.
I have one minor nit where Adam missed the mark – you don’t have to wait until v1 is out to collect this data. If your software team is worth their salt, they’ve defined the customer segments they care about well before v1 hits the street. Interview customers from those segments and ask them questions like “this product does foo – what do you expect a high-quality product that does foo to do?”, “what would make you want to use a product like this?”, or “what would make a product that does foo unusable for you?”
The fun part is that I’ve sort of done this (in a very general way), and have a list (that certainly won’t work for every piece of software in the world, but is worth discussing). I haven’t yet figured out how to push the dial on these ilities, but that’s what I’m going to try and figure out – using this blog as a sounding board while I think.
Excellent post. We have way too many leading questions when we do research like this. Open-ended is always better.