Who Owns Quality?

On request from Adam Goucher – another excerpt from How We Test Software at Microsoft.  BTW – Adam wrote a review of HWTSAM here – although Linda Wilkinson beat him to the clever title.

This is from a section on quality in chapter 16. It's something I believe in strongly, and I'd love to hear your comments.

Many years ago, when I would ask the question "Who owns quality?" the answer would nearly always be "the test team owns quality." Today, when I ask this question, the answer is customarily "everyone owns quality." While this may be a better answer to some, W. Mark Manduke of SEI has written: "When quality is declared to be everyone's responsibility, no one is truly designated to be responsible for it, and quality issues fade into the chaos of the crisis du jour." He concluded that "…when management truly commits to a quality culture, everyone will, indeed, be responsible for quality."[1] A system where everyone truly owns quality requires a culture of quality. Without such a culture, all teams will make sacrifices against quality: development teams may skip code reviews to save time, program management may cut corners on a specification or fudge the definition of "done," and test teams may change their goals on test pass or coverage rates deep in the product cycle. Despite many efforts to put quality assurance processes in place, it is common practice among engineering teams to make exceptions to quality practices to meet deadlines or other goals. While it's certainly important to be flexible in order to meet ship dates and other commitments, quality often suffers because of the lack of a true quality owner.

Entire test teams may own facets of quality assurance, but they are rarely in the best position to champion or influence the adoption of a quality culture. Senior managers could be the quality champion, but their focus is justly on managing the team, shipping the product, and running a successful business. While they may have quality goals in mind, they are rarely the champion for a culture of quality. Management leadership teams (typically the organization leaders of Development, Test, and Program Management) bear the weight of quality ownership for most teams. These leaders own and drive the engineering processes for the team and are in the prime organizational position to evaluate, assess, and implement quality-based engineering practices. Unfortunately, it seems that quality software and quality engineering practices are rarely their chief concern at any point in the product cycle.

Senior management support for a quality culture isn't enough by itself. In a quality culture, every employee can have an impact on quality. Many of the most important quality improvements in manufacturing have come from suggestions made by the workers. In the auto industry, for example, the average Japanese autoworker provides 28 suggestions per year, and 80% of those suggestions are implemented.[2]

Ideally, within Microsoft, engineers from all disciplines are making suggestions to improve quality. Where a team does not have a culture of quality, the suggestions are few, and precious few of those are implemented. Cultural apathy toward quality then leads to other challenges with passion and commitment among team members.

[1] W. Mark Manduke, STQE Magazine, Nov/Dec 2003 (Vol. 5, Issue 6).

[2] Wall, Solum, and Sobol, The Visionary Leader.

Give ‘em what they want

Last night, I was sitting in bed reading the latest issue of Tape Op (a music recording magazine). I used to be moderately involved in recording music, but these days I mostly just follow the trends and try to stay sharp. Tape Op runs a lot of interviews with recording engineers and producers, and it's great to hear what they were thinking when they made some of their more famous recordings.

I feel sort of stupid that it took me until last night to notice (yet another) interesting parallel between music and software. Recording is mostly a waterfall process: you record, then you mix, then you master. Some iteration is possible – you can record one song or a whole album before you mix – but most of the time, you finish recording, then you mix. When you're done mixing, you master. What's interesting is that there are a massive number of opinions on how to do each of these activities. Which mics are "best"? What rooms are best for recording a jazz combo? Do you record rock guitars with mics perpendicular or at an offset? When should you use multiple mics? Where do you add EQ? How loud do you make the vocals?

Then, there’s mastering – which in my opinion is awful on almost every pop or rock recording made in the last 10 years. Mastering (IMO) ruined the latest Metallica and Springsteen albums (and probably many others that I haven’t bothered listening to).

Whatever I think, the albums sold millions and were (AFAIK) critically acclaimed. You know why? Because despite the mastering – despite the fact that they may not have used the best microphones or mic placements possible – it's what the customer wanted. You can take the most well-rehearsed band in the world, use top-notch equipment and fantastic production to recreate their sound exactly. You can add just the right punch and pop, remove any harshness, and engineer the best recording ever.

But it doesn't mean it will sell. Customers want something different, and if you don't give them what they want, all you have is something that you are proud of, not something that puts dinner on the table. Along the same lines, you can't ignore the technical part of the process. Engineering quality still makes a difference, as long as you're doing the right thing.

Same thing as my current day job.

Improvement through practice

In music, the better you are at the basics, the better you are on the bandstand. Even the pro musicians I know practice almost every day. I think testers (and developers) too often forget the value of practice.

In The Passionate Programmer, Chad Fowler suggests doing the exercises on CodeKata. I checked them out, and sure enough, the katas are great – I plan to start working through them. A few years ago, I solved a bunch of problems on Project Euler as an exercise to keep myself sharp.
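
To make that concrete, here's the sort of warm-up I have in mind – the first Project Euler problem (find the sum of all the multiples of 3 or 5 below 1000), worked in C++. It's one solution of many; the value is in doing the reps, not in the answer.

    #include <iostream>

    // Project Euler, problem 1: sum of all the multiples of 3 or 5
    // below 1000. Brute force is fine at this scale; deriving a
    // closed-form version makes a nice follow-up exercise.
    int main() {
        long sum = 0;
        for (int i = 1; i < 1000; ++i) {
            if (i % 3 == 0 || i % 5 == 0) {
                sum += i;
            }
        }
        std::cout << sum << '\n'; // prints 233168
        return 0;
    }

Ten minutes, start to finish – and like scales on a guitar, the point isn't this particular exercise, it's that you did one today and you'll do another tomorrow.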

As a tester, it's sometimes hard not to practice. As I interact with software, I often ask myself "what if?" …then I try it and see what happens. But this is only "sort of" testing – it's my tester DNA seeping out into my everyday life.

I've been thinking about other ways to practice testing. I'm a member of uTest, but I haven't taken the time to test anything there. I could volunteer to test a non-profit's web site, find a product I like and seriously beta-test it, or look into volunteering a few hours a week in an MS product group.

How else do you practice testing?

GUI Schmooey

I answered a few questions this week about automating GUI tests. One question was about recommendations for GUI automation tools for non-coders, and the other was about how much time to spend on the GUI in an MVC (model-view-controller) application.

The answers were easy. In the first case, I said that they weren’t going to get ROI from the effort, and they should just test the GUI manually. In the second case, I suggested that they do all of the automation ignoring the view/GUI, and test the GUI manually.

I could expand this into an entire post on why I gave those answers, but it doesn't matter. I'm going to go out on a limb and make the following statement.

For 95% of all software applications, automating the GUI is a waste of time.

For the record, I typed 99% above first, then chickened out. I may change my mind again. The point is that I think testers, in general, spend too much time trying to automate GUIs. I'm not against automation – just write automation starting at a level below(*) the GUI and move down from there. I'm not saying that you shouldn't test the GUI at all – I just don't see why you wouldn't want to test it manually and get people knowledgeable in user experience to help. I just think that in most cases we are wasting our time when we try to automate GUIs, and I wonder if anyone has the guts to stop.


* What I mean by a "level below the GUI" is automation that works with IAccessible or an object model rather than interacting with UI elements directly.
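
To illustrate, here's a minimal sketch of what automation below the GUI can look like in an MVC application. The ShoppingCart class and its methods are invented for this example – the point is that the test drives the model's object model directly, with no window handles or screen coordinates involved.

    #include <cassert>
    #include <cmath>
    #include <string>
    #include <vector>

    // A hypothetical model class from an MVC app. The names are
    // made up for illustration, not taken from any real product.
    class ShoppingCart {
    public:
        void AddItem(const std::string& sku, int quantity, double unitPrice) {
            items_.push_back({sku, quantity, unitPrice});
        }
        double Total() const {
            double total = 0.0;
            for (const auto& item : items_) {
                total += item.quantity * item.unitPrice;
            }
            return total;
        }
    private:
        struct Item { std::string sku; int quantity; double unitPrice; };
        std::vector<Item> items_;
    };

    // Automation below the GUI: exercise the model directly.
    // No window handles, no pixel coordinates, no brittle UI map.
    int main() {
        ShoppingCart cart;
        cart.AddItem("SKU-1", 2, 9.99);
        cart.AddItem("SKU-2", 1, 5.00);
        assert(std::fabs(cart.Total() - 24.98) < 1e-9);
        return 0;
    }

Tests like this are fast and stable across UI redesigns, and they leave the view itself to manual testing by people who know user experience.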

The Test Test

I am always frustrated and somewhat sad when I hear testers whine or complain that they are not treated fairly, that they are not respected, or that their development peers look down on them. I've been sitting on this post for many months, wondering whether I should post it, when this thread popped up over on the JoS boards.

On one hand, I am always happy to offer words of encouragement and advice on how to rise out of the situation or at least to make the best of it. On the other hand, part of me sometimes wants to just say "stop your whining. If you don’t like it that much, quit and find someplace to work where you will be treated fairly and be respected!"

If you want to find a good testing job, you just need to ask a few questions. That said, with all due respect and references to Joel and The Joel Test, I give you "The Alan Test."

The Alan Test – aka "The Test Test"

  1. Are testers influential from day one of the project?
  2. Does the test team own their own schedule?
  3. Does the test manager report to a general manager (and not to development)?
  4. Are career paths for testers and developers equal?
  5. Do the developers value testers?
  6. Do testers have the same working conditions and resources as development?
  7. Do testers use good test case management and source control tools?
  8. Are both tests and product code built daily?
  9. Do testers have the same coding guidelines and rules as developers?
  10. Are automated tests and manual tests valued appropriately?
  11. Is there a culture of quality?


Are testers influential from day one of the project?

Notice that I used the word influential, not involved or even hired. From day one of the project, testers should be reviewing specs, giving feedback on the schedule, and driving testability. The full test team doesn't need to be on staff from day one, but someone should be there setting the quality bar early. If testers are not involved (or even hired) before coding begins, the organization obviously doesn't value test (or quality, for that matter).

Does the test team own their own schedule?

The test team should own their schedule and have influence on the overall product schedule. A one-week code-complete slip cannot be "absorbed" into the test schedule. If the test team determines they need n days or weeks after code complete to finish testing, they need n days or weeks. Period. If the test schedule is also known as "buffer for the dev team," the organization doesn't recognize the value of test.

Does the test manager report to a general manager (and not to development)?

Put another way, the test manager should be a peer of the development manager. If the test manager reports to the development manager, development needs drive test, and test has a lesser voice in the product.

Are career paths for testers and developers equal?

If test and development are indeed peers, they should have equal career paths. At Microsoft, we have "levels" that line up with promotions and career paths, and developers and testers have equal opportunity for promotion. Another way to ask this question could be "Would you ever pay a tester as much as you pay your top developer?" Don't fall for the paper trick on this point – as in "On paper, testers can grow as much as developers – look, we have documents." Ask for examples: "How many developers and testers are at your most senior levels in your org?" If the organization isn't willing to promote testers to senior positions, they don't value test.

Do the developers value testers?

Ask whether the developers see test as an ally in creating quality software, as a gang of hooligans making their lives hard, or as a bunch of robots pushing buttons. Testers don't exist to make developers cringe or cry. In a good organization, developers understand that the role of test can be as much about quality assurance as it is about quality control, and they know that the test team exists so that everyone can make a higher quality product.

Do testers have the same working conditions and resources as development?

Would you want to work somewhere where developers had their own offices, dual 22-inch wide-screen monitors, and comfortable chairs, while the test team worked in the hallway sitting on milk crates?

Me neither.

Do testers use good test case management and source control tools?

I once tested software on a laptop, in a car on the way to drop off the master for duplication (at least I wasn’t driving). I’m a fan of ad-hoc testing, but this was over the line (I don’t work at that company anymore).

Testing is a creative activity, but some structure around recording test cases and related test code is critical on a professional test team. A test case management tool is necessary – as is the ability to version test cases and test code. If the test team doesn’t value this, they probably don’t really care that much about testing.

Are both tests and product code built daily?

Automated tests (assuming they are compiled code and not scripts) should be built at the same time, and in the same process, as the product code. This is particularly important when the test code calls APIs or other functionality in the product code, because it provides a small amount of testing at build time: changes to function signatures and other header resources show up as build breaks.
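
Here's a single-file sketch of what that buys you. The OpenWidget function and the test are hypothetical, but the mechanic is real: because the test compiles against the product's header, a signature change breaks the daily build immediately instead of silently breaking a test run later.

    #include <cassert>

    // Hypothetical product API (normally declared in a product header
    // and defined in the product sources); invented for illustration.
    // Returns a negative error code for a null name, 0 on success.
    int OpenWidget(const char* name, int flags) {
        if (name == nullptr) return -1;
        (void)flags; // unused in this sketch
        return 0;
    }

    // Test code, built in the same daily build as the product code.
    // If a developer changes OpenWidget's signature, this file stops
    // compiling, and the break is caught at build time – the same day.
    int main() {
        assert(OpenWidget(nullptr, 0) < 0);
        assert(OpenWidget("gadget", 0) == 0);
        return 0;
    }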

Do testers have the same coding guidelines and rules as developers?

Put another way: is test code treated the same as product code? If the test team is writing unmaintainable, buggy code, you could hardly expect the development team to respect them. Test code is just as important as product code, and it deserves similar care.

Are automated tests and manual tests valued appropriately?

Does management have unrealistic goals for test automation, or do they devalue all manual testing? Either extreme indicates that management doesn't understand testing well enough for you to want to work there. Ask about the product and testing goals, then ask how automated and non-automated tests support those goals. Ask for examples of tests on the team that are automated and for examples of manual tests. If a team tries to automate too much – or too little – it's a sign that you probably don't want to work there.

Is there a culture of quality?

Finally, you need to determine whether quality is something the organization tries to test into the product, or something that drives everyone on the team. Do developers "throw code over the wall" to test, or are they embarrassed and apologetic when bugs are found? Are bugs fixed as they are found, or are they left until the end? In order to meet the schedule, are bugs punted, or are features cut?

Eleven simple questions – and I would bet that for many testers, the majority of the answers are "no." I wouldn't work in an organization that scored less than 9. Sadly, many organizations score much, much lower.

Of course, if you know of an eleven, please let me know where to send my resume. :}