Writing About Testing

I’m leaving early tomorrow morning for the Writing About Testing conference. I wasn’t able to attend WAT last year, so I’m looking forward to the trip, the meeting, and the chance to talk with a group of great testers (who all enjoy writing about what we do).

I don’t think there was ever a point when I thought, "I want to be someone who writes about testing" – it just sort of happened. I began blogging just short of seven years ago (my first ever blog post is here), originally intending for the blog to be a place to interact with customers of the product I was working on at the time (Windows CE). Over time, I began to write a bit more – honestly, what I wrote was mostly crap – but before too long, I recognized that blogging was a great chance to practice writing, and I’ve spent most of my blogging time since then trying to get better.

Not too long after that, I submitted a talk on metrics for a STAR conference, and Lee Copeland asked if I was interested in writing an article for Better Software. Never being one to turn down an opportunity, I said sure and took another step in my attempts to write something meaningful.

So, I kept blogging, and wrote a few other Better Software articles. Then, in a story I’m sure I have told before, I sort of stumbled into writing How We Test Software at Microsoft – I had no idea what I was doing, but somehow made it through the process and remain (mostly) proud of the work. Of course, I continue to blog (which is obvious if you’re reading this – right?), and I’ve also made a recent effort to write more – mostly in an attempt to explore my thoughts to see if they go anywhere. Several months ago, I began writing at 750words.com (almost) every day, and I passed the 100,000 word mark yesterday. I continue to be challenged by the practice of writing and constantly experiment with ways to get my ideas across clearly.

As you can tell, I’m still working on that.

The point is that writing was never something I planned to do. Now that I think about it, testing was also something I never planned to do. Yet I do both every day and love both. I’m somewhat afraid to discover the next thing I’ll love that I never planned to do.

I probably won’t blog from WAT, but I will definitely share some thoughts next week. Hopefully (for all of us), I’ll learn something new.

To Automate…?

To Automate, or not to Automate – that is the question that confuses testers around the world. One tester may say, "I automate everything – I am a test automation ninja!" Another tester may say, "I do not automate; if I automate, I fail to engage my brain properly in the testing effort!" Another may say, “I automate that which is given to me to automate.” Yet another may say, "I try to automate everything, but what I cannot automate, I test manually."

All of these testers are wrong (in my opinion, of course). So wrong, that if I had the power, I’d put them in the tester penalty box (sorry, hockey playoffs on my brain) until they came to their senses.

Good testers test first – or at the very least, they think of tests first. I think great testers (or at least the testers I consider great) first think about how they’re going to approach a testing problem, then figure out what is and isn’t suitable for automation. I say that like it’s easy – it’s not. My stock automation phrase is "You should automate 100% of the tests that should be automated." Nearly every tester knows that, but finding the automation line can be tricky business.

I have my own heuristic for figuring this out – I call it the "I’m Bored" heuristic. I don’t like to be bored, so when I get bored, I automate what I’m doing. When I’m designing tests, I try to be more proactive and predict where I’ll get bored and automate those tasks.

Just today, I was fixing some errors reported by a static analysis tool and found myself repeating the following steps:

  • copy the file path from the log
  • check out the file from source control
  • load the file in an editor
  • make the fix (or fixes)
  • rebuild to ensure the error was fixed

After the third time, I was bored. I spent the next two minutes and fifty-one seconds writing a doskey macro that would pull the file name from the log, print it to the console (for easy copying), check out the file, and finally, load the file into the currently open editor. Given that I had another 40 files to go through, I consider that a good automation investment.
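
If you’re curious what that kind of boredom-driven automation looks like, here’s a minimal sketch of the same idea in Python (not my actual doskey macro – the log format, the "sd edit" checkout command, and the editor invocation below are hypothetical stand-ins for whatever your static analysis tool and source control actually use):

```python
import re
import subprocess

# Hypothetical log produced by the static analysis tool.
LOG_FILE = "static_analysis.log"

# Assumes error lines look something like: path\to\file.cpp(123): warning C6001: ...
PATH_PATTERN = re.compile(r"^(\S+\.(?:c|cpp|h))\(\d+\):", re.MULTILINE)

def files_with_errors(log_path):
    """Return the unique, sorted set of file paths mentioned in the error log."""
    with open(log_path, "r", encoding="utf-8") as log:
        return sorted(set(PATH_PATTERN.findall(log.read())))

def main():
    for path in files_with_errors(LOG_FILE):
        print(path)                           # echo the path for easy copying
        subprocess.run(["sd", "edit", path])  # check the file out (hypothetical command)
        subprocess.run(["notepad", path])     # load the file into an editor
        input("Fix applied? Press Enter for the next file...")

if __name__ == "__main__":
    main()
```

The point isn’t the specific commands – it’s that a couple of minutes of glue code can strip the boring parts out of a repetitive loop and leave you with just the interesting work.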

In HWTSAM, I think I told the story about my first week at Microsoft. My manager handed me a list of test cases (in Excel) and told me to run the tests. I glanced at the list of eighty or so tests and asked when he expected me to finish automating them. He said, "Oh no, we don’t have time to automate those tests – you just need to run them every day."

As an aside, let me say that I think scripted manual tests are one of the most wasteful efforts any tester – no, anyone in the world – can take on. I know that some readers will cite examples where manual scripted tests are valuable, but for the record, I despise them more than my daughter despises Brussels sprouts (you don’t know my daughter, but you may have heard her screams of dismay across the country the last time I tried sneaking a few onto her plate).

So anyway, I started work on a Monday, got bored running those tests by Tuesday, and automated all eighty tests on Wednesday. I used my newfound spare time to find all sorts of other issues (and to automate other scenarios). I don’t know if I ever told my manager that I automated those tests, but he was pretty dang happy with my results.

Not all automation efforts work this way (I’m talking to you, pointy-haired manager). The panacea some folks dream of is that testers will write a plethora of automation, then have time to do all kinds of additional testing while the automation runs seamlessly in the background. If all automation efforts were created equal (and by that, I mean equally simple), and if testers took the time to write reliable and maintainable code, this could be possible – but I don’t know anyone who lives in that world. Some things are difficult to automate, and we can waste our time trying to automate them. Sometimes we write fragile tests because they appear to work (the illusion of progress). Then reality sets in, and we discover we’re spending a big chunk of our time investigating and fixing failed tests (but that’s another story).

My parting advice is to remind you all that as software testers, our job is to test software. That probably sounds stupid (it sounds stupid to me as I type it), but test automation is just one tool from our tester toolbox that we can use to solve a specific set of testing problems. Like most tools, it works extremely well in some situations, and horribly in others. Use your brain and make the right choices.

More on the Coding Tester

Every day, it seems, I come across an article, a forum posting, a blog, or a tweet bemoaning the end-of-test-as-we-know-it because some company has hired / is hiring people with programming knowledge to become testers. I’ve written about coding-testers (here, and in other posts) before (as have many others that I’m too lazy to look up right now), and I think most folks recognize that, in some situations, having testers who code is an advantage.

It’s important to note that the role of a coder-tester is NOT to automate everything (everyone knows that you should automate 100% of the tests that should be automated). Writing automation is one task of a coding tester, but certainly not the primary focus (yes, I know that in some companies, automation “experts” only write automated tests. In my opinion, these people are not really testers, so they don’t count).

I, my colleagues, and many of the best testers I know are good testers first (well, at least my colleagues are), but have programming chops that they can use to solve difficult testing problems. Automating a bunch of rote tasks (assuming robustness, accuracy of the verification, maintainability, and numerous other attributes) may be a difficult testing problem, but it’s just one testing problem. Good testers simply use the tools in their toolbox to solve the problems and challenges they run into.

My head isn’t in the sand, though – I know that there are a lot of people hiring testers out there who are thinking, “I want to hire a bunch of coders to do my testing, because then they can automate everything!” That, of course, is stupid, and I’m sorry that situation exists. If you run into these folks, feel free to send them my way, and I’ll be happy to explain a thousand other ways programming knowledge can help someone be a better tester, and why attempting to blindly automate everything is pure idiocy.

To be clear, I do not think that all testers need to have a computer science degree (remember, I have a graduate degree in music composition). I also don’t think testers must be able to program. It depends completely on what you want testers in your organization to do. It’s certainly possible to do great testing without using any programming skills – you just need to think about the role you want testers to perform (or what you want to call testing), and hire the people who fit that role.

On the other hand, I find it a bit silly to think that just because someone knows how to program, they are somehow “tainted” and will be unable to look at a program from the proper “tester angle” because they have a notion of how the stuff under the hood actually works. I’ve worked with hundreds of testers who can code, and hundreds more who could not. In fifteen years of working with testers from both camps, I’ve seen no correlation between either background and greater customer empathy or testing talent. My evidence is only anecdotal, so if there are specific studies in this area, please point me to them.

A final thing to remember for anyone who thinks they’re somehow closer to the customer due to their background (or lack of one): you are not the customer. It doesn’t matter if you’re a former bank CEO testing a financial suite – you’re still not the customer. I think it’s critical to use whatever means you have to learn about the customer, or to understand the customer, but you will never be the customer. Regardless of the skills you (don’t) have.

What is Testing?

Despite the lame title, this isn’t yet another post attempting to describe what testing is – it’s just something I wanted to share inspired by a small discussion last night (although if you browse the testing blogosphere, you’ll find a lot of good discussion on this topic lately).

I went to Seth Eliot’s talk at SASQAG last night. Afterwards, Seth, Keith Stobie and I were talking (actually, Seth and Keith were talking, and I was listening), and Keith mentioned that some of what Seth talked about wasn’t actually testing. Seth said, "Sure it was," but Keith disagreed. Finally Keith said, "Not every bug-finding activity is testing." For example, Seth had talked about analyzing customer log files in order to find bugs, and called it testing. I suppose technically that’s an analysis activity and not a testing activity…or is it? Another example is code reviews – they’re a valuable bug-finding activity, but certainly not a test activity…

I think it’s easy to blur the line between what testing is and what testers do. I’m not convinced it’s correct to blur the line (or incorrect, for that matter), but I do think it’s a frequent cause of confusion among testers. If you were to ask a tester on my team, for example, what they do, they would say "I’m a tester". But if you asked them to elaborate, they’d say something like, "I review code and product designs, I analyze customer data, I write automated tests, I debug and analyze test failures, I identify risks (and investigate risk mitigation), I measure things, I explore the application, I work on cross-group initiatives, I work with developers, I try out new ideas, I work on quality initiatives, etc."

I suppose you could look at that in two different ways. One is, "Wow, the testers on your team do a lot of stuff"; the other is, "Wow, the testers on your team don’t do very much testing". Either reaction is fine, as the answer depends entirely on your definition of testing, and whether you think testers should only do testing, or whether they perform analysis and prevention tasks as well.

The question is, how much of this is testing? I think I’ve always considered all of these things to be part of the testing world, but I know others see testing as only one or two of those activities. I’m not saying anyone is right or wrong in their definition of testing; I just realize more than ever that the different views can cause communication problems (as of right now, I’m sort of leaning toward calling very little of this stuff testing, but I may change my mind tomorrow). I think the differences in view are largely contextual – on many teams, testers do just a subset of that list, and that’s perfectly fine. I suppose that’s just another reason why titles mean little, and experiences and activities mean much, much more.

I was going to ask for reader comments on what testing was, but I don’t think consensus (or verification that there are different views) is important here. What I think is important is that every tester and test team define the role of testing for their context, and do what’s feasible and practical given the product, team, schedule and other factors.

Career Tips – Media Links

Thanks to everyone who attended the presentation today. Video and slides are below. For those who asked – some of my other online talks are here:

Thoughts on test strategy – http://www.vimeo.com/15853235

Code reviews for testers – http://www.vimeo.com/16753993

 

Ride the Gravy Train (and other career tips) from Alan Page.

 

Working on Lync

I recently wrote a bit about my role on the Lync test team at Microsoft. I thought it might be interesting to share a bigger picture of what the team does, for those interested in how we do what we do.

On the surface are the "who" and the "what" of the team. We have a team of eighty or so testers (smallish on the Microsoft scale, but otherwise large/huge) who test the Microsoft Lync client application (instant messaging, presence, voice and audio, desktop / application sharing, and a few other odds and ends). Our role is like that of most test teams – we test the product, provide information, and help inform (or make) product decisions.

The "how" part of our work is (to me) much more interesting (and a big reason why I love working on this team). We strive for a high-trust culture. We have few top-down mandates (none that I can think of, but I’m sure I’m missing something). We believe that the people closest to the work are in the best position to decide what work needs to be done. We obviously give more guidance to employees new to the team, and managers are involved in coaching, but for the most part, we trust the people on the team to do what they think is best. For example, we have no requirements on test case counts, bug find rates, code coverage rates, or anything else like that. If someone decides that working on a side project for a few days will help them get their job done better, they don’t need to clear it with anyone – they just do it. Failing is OK (and expected – if you’re not failing, you’re not trying hard enough).

If you’re worried (and I know some of you are), it’s not chaos – we’re surprisingly efficient and good at what we do. However, I think that if you’re going to create a high-trust work environment, you need to provide just enough structure to keep people headed in (more or less) the same direction. On our test team, we have a set of five guiding principles / values that the team uses to help figure out priorities and guide our work.

At the top of the (non-prioritized) list is self-organizing teams. For example, we just finished a "quality milestone", where we had a few months to prepare for the upcoming release cycle. Like many teams, we had a pile of technical debt from our previous release, some big work items we had to do to get ready for the next release, plus some ideas for new tools that would make us more efficient down the road. In many organizations, test management would draw up a plan of what needed to be done, select who was working on what, then align the team on the work. On our team, we basically just told people to get to work. Because we trust them to discover the most important work, and because we value self-organizing teams, that’s what they did. They formed teams and tackled everything they would have tackled if we had issued a top-down mandate. In an extension of this principle, when we formed feature teams for the product cycle, we let people choose their own team. The idea has worked extremely well – our attrition is unheard-of low, and people are excited about what they work on. We have plenty of other opportunities for self-organizing teams, but I’ll have to save those for another post.

Another bit of structure that helps when the work isn’t generated top-down is showing value frequently. Sharing progress frequently is a great way to discover what each of us is doing (which is fantastic for learning as well as plain-old information sharing), and a way to celebrate our successes. As I mentioned above, we encourage failure, but one of the things needed for this is the ability to fail quickly. It’s one thing to hit a blocking dead-end a week into a project, but without sharing progress frequently, you could go months before discovering your idea isn’t going to work – and that’s not so good.

We also drive our progress and priorities through the value of continuous improvement and innovation. This simply means that we want everyone to think frequently about how we can be better. For example, we expect people to frequently identify practices that our team(s) follows because, “that’s the way we’ve always done it”, and to ask, “Is there a better way to do this?” We encourage (and expect) everyone on the team to contribute to the overall improvement and innovation.

We also put a big emphasis on customer focus. We do a ton of work analyzing customer data, interacting directly with customers, and on customer-focused test design (something I’ll be talking about at a few upcoming conferences).

And finally, we have fun. It may seem strange to have this as a guiding value, but we think it’s critical! Whether it’s playing on the Lync Test intramural softball team, playing Xbox in one of the conference rooms, or just taking time off to take a walk with teammates and joke about the guy who blogs about the test team, we value balancing work with play.

I don’t know of a lot of other test teams like this, but it’s certainly a fun and interesting place to work.

What do you think of our test team?

“Ensuring Software Quality”…maybe

I ranted a bit on twitter last week about this book excerpt from Capers Jones. I’ve always had respect for Jones’s work (and still do), but some of the statements in this writing grated on me a bit. This could be (and is likely) because of how I came across the article (more on that somewhere below), but I thought I’d try to see if I could share my thoughts in more than 140 characters.

Jones starts out this chapter by saying that testing isn’t enough, and that code inspections, static analysis, and other defect prevention techniques are necessary – all points I agree with completely. His comments on test organizations ("there is no standard way of testing software applications in 2009") are certainly true – although I personally think it’s fine, or even good, that testing isn’t standard, and would hope that testing organizations can adapt to the software under test, customer segment, and market opportunity as appropriate.

Jones writes, "It is an unfortunate fact that most forms of testing are not very efficient, and find only about 25% to 40% of the bugs that are actually present". If all bugs were equal, I would put more weight on this statistic – but bugs come in a variety of flavors (far more than the severity levels we typically add in our bug tracking systems). I have no reason to doubt the numbers, and they seem consistent with my own observations – which is why I am a big believer in the portfolio theory of test design (the more ideas you have, and the better you can use them where they’re needed, the better testing you will do). I believe that with a large variety of test design ideas, this statistic can be improved immensely.

As I re-read the excerpt sentence by sentence, there are few points that I can call out as completely wrong – but there are several themes that still don’t sit well with me. They include:

  • The notion that defect removal == quality. Although Jones calls out several non-functional testing roles, he seems (to me) to equate software quality solely with defect removal. Quality software is much more than being free of defects – in fact, I am sure I could write a defect free program that nobody would find any value in. Without that value, is it really quality software?
  • Jones talks about TDD as a testing activity, where I see it more as a software design activity. But more importantly, TDD primarily finds functional bugs at a very granular level. His claim that defect removal from TDD can top 85% may be true, but only for a specific class of bugs. If the design is wrong in the first place, or if the contracts aren’t understood well enough, a "defect free" TDD unit can still have plenty of bugs.
  • Jones claims that, "Testing can also be outsourced, although as of 2009 this activity is not common." I don’t have data to prove this wrong, but anecdotally, I saw a LOT of test outsourcing going on in 2009.

 

I found this article while searching for the source of this quote (attributed to Capers Jones here).

“As a result (of static analysis), when testing starts there are so few bugs present that testing schedules drop by perhaps 50%.”

I’m a huge fan (and user) of static analysis, but this quote – which I hope is out of context – bugs the crap out of me. We do (and have done) a lot of static analysis on my teams at Microsoft, and we find and fix a huge number of bugs found by our SA tools – but in no way does it drop our testing schedule by 50%, or anything close to that. I worry that there’s a pointy-haired boss somewhere who will read that quote and think he’s going to get a zillion-dollar bonus if he can get static analysis tools run on his team’s code base. Static analysis is just one tool in a big testing toolbox, and something every software team should probably use. SA will save you time by finding some types of bugs early, but don’t expect it to suck in source code on one end and spit out a "quality program" on the other.

There are plenty of things I agree with in the Jones book excerpt. The comments on productivity and metrics are on the money, and I think the article is worth reading for anyone involved in testing and quality. It’s likely that my perception of the paper was skewed by the quote I found on the CAST software site, and I hope that anyone reading the article for the first time reads it with an open mind and forms their own opinions.

Online Presentation: Career Tips for Testers

I haven’t done an online presentation in quite a while now. I’m always happy to share my thoughts on testing with anyone who wants to listen. It was nice to have the opportunity in January to present to a rather large audience on test design as part of a SoftwareTestPro webcast, but I enjoy lower-key events as well.

I’m planning a talk on career tips for testers on April 20th. This is a variation of a talk I gave internally at Microsoft a year or so ago, but I’ve discovered since then that the tips are (mostly) relevant to testers outside MS as well. As you may know if you’ve been reading my blog for a while, I’m somewhat passionate about tester career growth – in many ways because I want testers to grow in skill and experience so we can advance the testing craft much faster in the next ten years than we have in the past ten.

Here’s some more information.

Ride the Gravy Train and other Career Tips for Testers.

Abstract:

I’ve collected quite an assortment of interesting (bordering on bizarre) career advice over my eighteen years (so far) in testing, and thought it would be fun to share. I’ll share tips on leadership, finding the right work, and career growth that (I hope) will be helpful to anyone wanting to grow as a software tester. Some more information on the original presentation can be found here and here.

No RSVP is necessary. Just add the presentation to your calendar, and join the meeting. You may want to join a few minutes early to make sure everything is working. I’ll plan to start at 10:05 (PDT) to give the procrastinators a chance to join.

If you have questions you’d like to make sure I address, please post them here. I’m also open for suggestions for future talks if you have ideas or want me to repeat something from a conference.

My job as a Tester

I have a somewhat non-traditional test role at Microsoft. I’m a tester (that’s what my business card says), but I don’t own any specific components or features. I test, but mostly when I want to discover something for myself, experiment with an idea, or when I’m coaching others. The bulk of my time, in fact, is spent coaching and mentoring other testers on the team, as well as working with the test managers on the team and our test director to make sure we’re doing the "right thing" with both strategic and tactical decisions and goals.

I should mention, for those of you who don’t already know, that I don’t manage anyone. One of the things I really like about Microsoft is that there’s a clear career path (including stock and pay scale) for both non-managers and managers. On the Lync team, for example, I report directly to the test director (the test managers are my peers), and the director and I happen to be the same "level". Several times in my career, I’ve worked for managers who were a lower level than me, and it hasn’t caused a problem. In my own eyes, as well as in the eyes of the managers I’ve worked for, I am a better asset to the team in a non-management role (not, I hope, because I’m a horrible manager, but because I can accomplish more and have a wider span of influence without the "burdens" of management).

When I started on the team just a bit over a year ago, I asked my manager what he wanted me to do. He said, "Do whatever you want". My answer (possibly obvious if you know me) was, "Great – that’s exactly what I was planning to do." Doing "whatever I want" doesn’t mean I sit around all day – at least not to me. For me, it means that it’s my job to figure out what needs to be done and find a way to get it done. I try to recognize gaps in knowledge or skill anywhere on the team and find ways to address them. I’ve had success in my career by keeping the right balance of breadth of knowledge, organizational influence, and tester intuition (or plain dumb luck) to find and address areas where a team needs to improve. The code review work I presented at PNSQC last year (paper here) is one example of this, but I’ve also spearheaded efforts in exploratory testing, code coverage, test design, test automation, and test strategy on the team. Mostly, I try to keep an eye on the big picture of what I think the future of our team needs to look like, then make sure we’re making the right decisions and investments to move in the right direction. It’s fun – and since the future changes frequently, so does the job.

The coaching and mentoring parts of the role give me occasional opportunities to get my hands dirty testing, but I try to make sure I have other opportunities to keep my mind wrapped around the technical aspects of testing as well. I’ve most recently been working on improving test code quality (through a combination of guidelines, code reviews, static analysis and (mostly) culture). It’s a fun problem (for me) to solve because it involves both organizational and technical challenges, so it gives me an opportunity to make good things happen for the team while spending a bit of time grepping through code – fixing bugs and solving problems.

Probably worth mentioning is that I chair two cross-company testing communities – one a collection of other senior non-management testers (some, at least, with roles similar to mine), the other a community of the top 3-4% of all testers at Microsoft. I am a huge believer in the power of community and networking, and although the work I do to keep these communities going doesn’t impact my day job directly, I consider it a critical part of what I do.

While my role is (I think) somewhat unusual among industry testing roles, it’s not completely uncommon at Microsoft. I’m curious, however, whether this sort of role exists elsewhere – and what you would call the role if it did (other than Tester / Thinker)?

My Role in Certifications

I thought I could raise a bit of controversy by announcing that I’ll be spending a reasonable chunk of my time working on certifications. I teased the twitter-verse with a few hints, wondering if the wolves would pounce on me, but 1) I don’t have that many followers, and 2) since I’m always pissing people off, I don’t think anybody really cared that much.

I don’t talk that much about what I do in my day job on the Lync team. Among a variety of other things (that I really should get to in future blog posts), I’ve recently committed to some significant certification work.

No – not tester certifications – did anyone bite?

I recently accepted the role of chair of the Unified Communications Interoperability Forum (UCIF) working group on testing and certification. Basically, that means I’m going to work on testing tools and ideas (and eventually some sort of certification) for making sure UC software and devices from different vendors play nice with each other. I did some work with self-certification of embedded hardware when I worked on Windows CE, so although the work is a bit outside of my normal realm, at least I have an inkling of what’s in store for me. The rest of the working group members all (I assume) have a lot more experience in UC, but I’m hoping I can rely on my testing experience to help us figure out exactly what needs to be done.

If anything significant happens (including me being de-chaired for ineptitude), I’ll post it here. I’ll go back to my normal rants shortly.