I’m in Frankfurt for a few days, and tomorrow (Wednesday), I am giving a keynote at German Testing Day on Test Innovation at Microsoft.
At this point, I’ll give you a few moments to chuckle, laugh, or make snide remarks about the topic. I think I’ve heard it all before: “testers at MS just write automation all day”, “testers at Microsoft don’t care about quality”, or “testers at MS don’t get testing” (all quotes I’ve heard or read about MS testing – and all, interestingly, from people who have never worked at Microsoft). This post is, I suppose, inspired by my talk tomorrow, but it’s more of a general example than a reflection of the topics of tomorrow’s presentation.
I think our lack of openness about how we test and innovate in testing at Microsoft is one of our biggest flaws as a company – or at least one of our biggest flaws in being a good member of the worldwide testing community. We tried to make a dent with testercenter, but that project is all but dead. A few of us who test at MS share what we do, but none of us do a great job sharing our innovations. I hope we can get better at that soon. The short and honest truth is that I can easily name at least fifty testers at MS whose testing skills and ability to innovate are equal to or better than any tester you know. I suppose in a company of nearly 10k testers that’s not really a big deal, but it’s really hit home as I’ve prepared for this presentation. I have some ideas for getting some of these people and their ideas on the global radar, but for now, you’ll just have to trust me.
I’m probably not the right person to lecture (or write) about innovation, but I can tell you what kinds of innovative testing solutions tend to be successful (at MS at least). A lot of innovation starts with the “There’s got to be a better way” approach. The worst rut to get into in software development is the “we’ve always done it that way” syndrome. Innovative testers stop, look at the big picture, and ask, “Is there a better way to do this?” Some of the biggest test innovations at MS have been solutions that take something complex or difficult (or “impossible”) and make it brain-dead simple. One tester created automatic filtering of code coverage reports based on a specific change set – in other words, he made it brain-dead simple to know whether you (or any tester) tested each changed line in a specific fix or feature addition. Some of our games testers have created a system that allows the bug triage team to navigate from the bug tracking system to the exact place in the game where an error occurs (and, conversely, to create a bug from the place in the game where the tester observed the error).
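To make the change-set coverage idea concrete, here’s a minimal sketch of my own (not the actual tool that tester built – the function name and data shapes are illustrative assumptions). The core of it is just intersecting the set of changed lines with the set of covered lines, per file:

```python
# Hypothetical sketch: intersect a change set with line-coverage data to
# report which changed lines were exercised by tests. The names and data
# shapes here are illustrative, not the actual Microsoft tool.

def filter_coverage(changed_lines, covered_lines):
    """Both arguments map file path -> set of line numbers."""
    report = {}
    for path, lines in changed_lines.items():
        covered = lines & covered_lines.get(path, set())
        report[path] = {
            "covered": sorted(covered),
            "untested": sorted(lines - covered),
        }
    return report

# A fix touched four lines of parser.c; the test run covered two of them.
changes = {"parser.c": {10, 11, 12, 40}}
coverage = {"parser.c": {5, 10, 11, 30}}
print(filter_coverage(changes, coverage))
# {'parser.c': {'covered': [10, 11], 'untested': [12, 40]}}
```

The value isn’t in the set arithmetic – it’s in wiring this automatically to the change-set and coverage systems so nobody has to eyeball a full coverage report to answer “did we test this fix?”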
Other innovation happens when someone finds a way to implement a known idea in a testing process. Testing localization is a huge (and hugely expensive) task. Ross Smith (and a few of his cohorts) had the idea to optimize the task via crowdsourcing and gaming elements. The results have been phenomenal.
Eric Sevareid once said, “The chief cause of problems is solutions.”
Many years ago, test systems at MS reached a level of maturity and scale that enabled teams to easily run millions of tests. These systems were the culmination of several innovations, but eventually led to other problems. The prime example of a problem created by the test system innovation is the challenge of what to do with all of those failed tests. Think for a moment. If you run a million tests and have a 99% pass rate (I know, but this is just an example), you have ten thousand failures to investigate. Automatic failure analysis (discussed in HWTSAM) is an innovation that greatly reduces the time testers need to investigate automation failures (my stance is that automation is worthless unless every aspect of automation beyond test authoring is also automated). We’ve also made great strides in test selection (let’s say you only had time to run five thousand of those million tests – which would you run?).
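To give the test-selection question some shape, here’s a toy sketch of one common approach – greedy selection of the tests that cover the most not-yet-covered changed code. This is my own illustration under simplified assumptions, not a description of our actual system, which would also weigh factors like failure history, run cost, and flakiness:

```python
# Hypothetical sketch of coverage-based test selection: given a run budget,
# greedily pick the test that covers the most not-yet-covered changed lines.
# Illustrative only; real selection systems use many more signals.

def select_tests(test_coverage, changed_lines, budget):
    """test_coverage maps test name -> set of changed lines it exercises."""
    remaining = set(changed_lines)
    selected = []
    while remaining and len(selected) < budget:
        best = max(test_coverage, key=lambda t: len(test_coverage[t] & remaining))
        gain = test_coverage[best] & remaining
        if not gain:
            break  # no test covers anything new; stop early
        selected.append(best)
        remaining -= gain
    return selected

tests = {
    "t_parse": {1, 2, 3},
    "t_render": {3, 4},
    "t_io": {5},
}
print(select_tests(tests, changed_lines={1, 2, 3, 4, 5}, budget=2))
# ['t_parse', 't_render']  (line 5 stays uncovered within this budget)
```

Even this naive greedy version shows the trade-off: with a budget of two tests you cover four of the five changed lines, and the selection tells you exactly what risk you’re accepting by skipping the rest.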
Of course, innovation occurs in the testing itself (as well as in the systems surrounding the testing). We’ve done amazing things with test generation, test design, and run time analysis.
I’ll share some of that work with the audience tomorrow – and likely on this blog sometime in the future.