I’ve been thinking a lot less about testing activities lately, and much, much more about how to make higher-quality software in general. The theme is evident in my last several blog posts, but I’m still figuring out exactly what that means for me. What it boils down to is a few principles that reflect how I approach making great software.
- I believe in teams made up of generalizing specialists – I dislike the notion of strong walls between software disciplines (and in a perfect world, I wouldn’t have separate engineering disciplines).
- It is inefficient (and wasteful) to have a separate team conduct confirmatory testing (the “checks”, as many like to call them). This responsibility should lie with the author of the functionality.
- The (largely untapped) key to improving software quality lies in the analysis and investigation of data (usage patterns, reliability trends, error path execution, etc.).
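To make that last point a bit more concrete, here is a minimal sketch (Python, with entirely hypothetical event names and an arbitrary threshold; real telemetry pipelines are far richer) of what mining usage data for error-path signals could look like:

```python
from collections import Counter
from datetime import date

# Hypothetical telemetry events: (day, feature, outcome), where outcome is
# "ok" or "error". In a real system these would come from product logs.
events = [
    (date(2014, 3, 3), "export", "ok"),
    (date(2014, 3, 3), "export", "error"),
    (date(2014, 3, 4), "export", "error"),
    (date(2014, 3, 3), "search", "ok"),
    (date(2014, 3, 4), "search", "ok"),
]

# Tally total uses and error-path executions per feature.
totals, errors = Counter(), Counter()
for _day, feature, outcome in events:
    totals[feature] += 1
    if outcome == "error":
        errors[feature] += 1

# Flag features whose observed failure rate crosses an arbitrary threshold.
THRESHOLD = 0.10
for feature in sorted(totals):
    rate = errors[feature] / totals[feature]
    status = "INVESTIGATE" if rate > THRESHOLD else "ok"
    print(f"{feature}: {errors[feature]}/{totals[feature]} errors ({rate:.0%}) -> {status}")
```

Nothing fancy, but the signal comes from how the software actually behaves in use, not from a pass rate in a test case manager.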
I haven’t written much about point #3 – that will come soon. Software teams have wasted huge amounts of time and money equating test cases and test pass rates with software quality, while ignoring whether the software is actually useful.
We can do better.
I see your point in #2, but I am not sure about the implementation in a non-ideal world.
Many times, functional testing of a feature requires roughly the same amount of resources as developing it, unless you are referring to what others call “code unit tests”.
I expect a developer to do some sanity testing of her code and to actively participate in test reviews, but giving her the responsibility of functional testing seems like a waste.
You’ll have to elaborate on “waste” – how is it wasteful for the author of code to ensure that it works well enough to be deployed? Is it less wasteful to hand the code to someone else, have them learn how it works, then enter bugs on the parts they don’t understand, then wait for the bugs to be fixed (or behavior explained), then test again and see if they find new bugs?
I’d rather focus on large scenarios or exploratory testing, or analyze usage (and it’s a better use of my time).
But I must not get what you mean.
What I found a little confusing is the three phrases used in the post: “higher quality software”, “great software”, and “…software is actually useful”. The first two don’t necessarily imply the last. To take an example from another industry, a particular supercar may be high quality and great, yet not actually useful when all one needs to do is run errands within the town limits.
I would understand “higher quality software” more if I knew the organization’s definition of quality, which in turn depends on its business objectives.
As always, a great read, Alan. Thank you.
Businesses don’t define quality, customers do.
Looking forward to hearing more about this
Great post, as usual.
My only comment: point #3 is challenging in general because you can’t “code” or “test” your way out of it. It actually requires people and communication.
Alan,
Nice thought-provoking teaser of a blog post.
I look forward to reading more about this, particularly on your third point.
Your thinking here reminds me of a presentation you gave at a testing conference a couple years ago. (I forget which conference.) You had people guess the results of A/B tests that were done at Microsoft. In just 2 or 3 questions, the vast majority of the audience had guessed one of the results wrong. I’m a big believer that in many firms a more holistic approach to software quality would improve things. That more holistic approach would incorporate A/B testing and, in your words, “analysis and investigation of data (usage patterns, reliability trends, error path execution, etc.).”
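As an aside, the arithmetic behind judging one of those A/B results is simple enough to sketch. Here is a toy example with entirely hypothetical numbers, using a standard two-proportion z-test:

```python
import math

# Hypothetical A/B results: conversions out of total visitors per variant.
conv_a, n_a = 210, 10_000   # control
conv_b, n_b = 265, 10_000   # treatment

p_a = conv_a / n_a
p_b = conv_b / n_b

# Pooled proportion under the null hypothesis that A and B convert equally.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

# Two-proportion z-test: |z| > 1.96 is significant at the 5% level (two-sided).
z = (p_b - p_a) / se
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}")
print("significant" if abs(z) > 1.96 else "not significant")
```

The hard part, as your exercise showed, isn’t the math; it’s that our intuitions about which variant wins are so often wrong.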
Given how rarely I see QA / Testing people even talking about this concept (never mind actually trying to implement it, even with tiny non-committal “toe in the water” trials), this vision seems a looooooong way off from becoming mainstream.
– Justin
Delayed reply: That exercise (at STAR) showed that, as much as we like to say “we are customer advocates”, we are NOT the customer and we can’t guess what they want. We need to learn from customers, and we need effective ways to do so.
I’m trying to make data-driven quality mainstream – at least in my world. The funny thing is that as I go farther down this path, I see the massive inefficiency in the *old* ways of testing – and by old, I mean the stuff that’s being preached as “new” today.
I suppose that “impractical in reality” or “better use of resources, in reality” would be a better word choice.
I agree that, in an ideal world, delivering well-tested code into a build is a great thing. I have worked with numerous great developers who actually tested their code thoroughly, and with development teams that spent a considerable amount of time building tools and environments to let coders easily test their code (the best example was rewriting an embedded OS so the code under test could be compiled, run, and tested on Windows using the same APIs; the worst was a project with no unit tests at all due to a high degree of coupling to the hardware). A rough sketch of that hardware-seam idea follows at the end of this comment.
On top of the complexity of testing isolated modules (at least in my world), many of the coders I worked with (the few mentioned above were an exception) lacked proper education and experience in testing, and even worse, they didn’t have the motivation to learn.
I agree that this can and should change if companies and managers have the proper mindset and priorities, but in most places coders are measured by their delivery rate, with quality second.
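For what it’s worth, the idea behind that embedded OS rewrite comes down to putting a seam between the hardware and the logic. Here is a minimal sketch of the pattern (in Python rather than the embedded C we actually worked in, with a hypothetical thermometer interface):

```python
class FakeThermometer:
    """Stands in for the real sensor driver on a developer machine."""
    def __init__(self, value):
        self.value = value

    def read_celsius(self):
        return self.value


def overheating(sensor, limit=85.0):
    """Pure logic: only talks to the sensor interface, never the hardware."""
    return sensor.read_celsius() > limit


# The same overheating() logic ships on the device (with the real driver)
# and runs in tests on any desktop OS (with the fake).
assert overheating(FakeThermometer(90.0))
assert not overheating(FakeThermometer(40.0))
print("logic verified without touching hardware")
```

Once the logic only talks to the interface, it can run and be tested on any desktop OS; only the thin adapter implementing the interface needs real hardware.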
While I’m afraid you’re correct, I don’t think it’s right – in fact, what I’m saying is that the way many companies make software today is wasteful.
I’m going to try and change that where I work. I don’t know if I’ll succeed or fail, but I do believe there’s a better way to make software, and I don’t feel right anymore doing it the old way.
Hi Alan, what is the “new” way of testing that you consider to be old?
I’m trying to write about point #3 as well, with the main focus on forming a conceptual model of the system under test to guide the efforts of every other part of testing. Analysis and investigation feed into that. Ultimately it comes back to rate of discovery and feedback loops, and the other part that comes into this is effective communication. Even if you build a tool, it needs to communicate relevant information effectively.
I’m continually baffled that I keep encountering testers who classify all testing activities as “automated testing” and “manual testing”, as if writing automated test scripts and executing manual tests were the sum of the whole activity. Maybe I need to get out more so I won’t keep being surprised by this, but something needs to change here.