Don’t like something – fix it

Perhaps it’s just the nature of the tester, but I’ve seen a lot of complaints from testers recently. “Managers do the wrong thing”, “Testers need to do more ‘x’”, “Testing isn’t taken seriously”, “These people don’t understand what I do”, gripe, mumble, etc. Of course, it’s easy for me to tell you to quit your griping and fix it (in fact, I’m sure I’ve done that in previous posts), but solving problems is much bigger than that.

Let’s say, for example, that you want to make a change in your organization (I’ll leave the exercise of how to change the world for another post). Just for fun, let’s say you would like to do a lot more Exploratory Testing in your organization (bad joke removed – should have picked a different example).

What do you do first?

The number one answer I expect (and I’m usually right) is that you need to convince management that ET is great (or that you should do more of it), and they’ll make a top down decree, and everything will be unicorns and rainbows(tm).

Bzzzt!

Management is one faction, but there are more. What if the rest of the testers on the team don’t see the value in ET? What if they don’t know how to do it? What if your customer demands that you only deliver automation results (or something else silly)? What other factions can you identify? You need to think about everyone with an interest in the results – then take time to understand where they’re coming from and how the change impacts their thinking. With any change, you have gains and losses. What do you gain by doing more ET (note: also define “more”)? What do you potentially lose by doing more ET? Top down edicts rarely work, so if you want a chance of success, don’t start there. Identify your factions, and come up with a strategy for working with each of them.

The important thing to remember is that you don’t change the process, you change the people. If you don’t think about how change impacts people, you will probably fail. Whenever I’m dealing with a performance gap, the six boxes model helps me think about how change happens.

Environmental and Team Factors

Expectations and Feedback

· Roles and performance expectations are defined; employees are given relevant feedback.

· Work is linked to business goals.

· Performance management system guides employee performance and development.

Tools and Processes

· Materials, tools and time needed to do the job are present.

· Processes and procedures are clearly defined and enhance individual performance.

· Work environment contributes to improved performance.

Consequences and Incentives

· Financial and non-financial incentives are present.

· Measurement and reward systems reinforce positive performance.

· Overall work environment is positive, employees believe they have an opportunity to succeed; career development opportunities are present.

Individual Factors

Knowledge and Skills

· Employees have the necessary knowledge, experience and skills to do desired behaviors.

· Employees are cross-trained to understand each other’s roles.

Capacity

· Employees have the ability to learn and do what is needed to perform successfully.

· Employees are recruited and selected to match the realities of the work situation.

Motivation

· Motives of employees are aligned with the work.

· Employees desire to perform the required jobs.

The six boxes model (sort of based on Maslow’s hierarchy of needs) helps you think about all of the factors that go into change.

Box 1 is where management can help. Defining the expectations and feedback loop for the change helps people understand what they need to do.

Box 2 pertains to tools (e.g. sysinternals.com tools), and resources (including computers, a quiet place to work, etc.).

Box 3 is where most change efforts fall short. This is the “what’s in it for me” category. Prizes, bonuses and other material rewards fall into this category, but it can also (and often more effectively) be some other type of reward. The points awarded on Xbox Live or on Stack Overflow are a form of box 3 reward. Done really well, box 3 can be satisfied by making the job more fun and interesting, and creating higher quality software.

Box 4 deals with the skill gap. For our example, it is the plan for how to teach or demonstrate necessary skills for ET, and may include instruction, reading material, coaching, etc.

Box 5 is about the people on the job. Are they capable of carrying out the necessary tasks? If not, you probably won’t be successful.

Box 6 is dependent on the other boxes. Binder says that if the other boxes are positive then this one is positive. My view on box 6 is that box 6 is free will – and you don’t mess with free will. Sure – keep box 6 in mind, but don’t f with it.

Now that you’ve thought of your factions, and mapped out the human element of the change, you’re just about ready to go. Before you start, remind yourself that while you have a plan, and you’ve anticipated as much as you can, things will change. You need to adjust your plan (revisit the motivations of your factions, examine your six boxes evaluation as you learn more information – hey, this sounds like “exploratory leadership”). Organizational change is often a moving target, and being ready for that will help you be successful.

If all of this sounds like too much work, there’s always plan b – quit and go shopping.

Talkin’ bout evolution

Purely random non-testing info here, but important if you ever need to pick me out of a crowd. I’ve been having hair issues recently, and it’s been pointed out that I’ve become difficult to recognize. This blog post is your clue to finding me should I ever fall off the grid.

Once upon a time (you know – college), I had long hair – long enough that I could reach behind my back and touch it. I kept it long until about 1999 or so. One day I got up, decided I was done with long hair and got it cut off (not all of it – yet at least).

As of 2005 or so, I looked sort of like this:

[photo: Alan Page, 2005]

A few years later, my job in EE, along with prolonged exposure to small children (oldest was born in 2004) turned me into this (2007 or so – this is, incidentally, the pic I used for hwtsam, as well as my STAR East bio)

[photo: Alan Page, 2007]

I had a problem last spring. I needed to get my hair cut, but couldn’t find the time. It bugged me so much that I took matters into my own hands and cut it all off. For the entire summer, I looked kind of like this.

[photo]

A few weeks ago, I determined that hair actually provides warmth during cold weather. Today, I look like this:

[photo]

I expect to have something resembling a full head of hair by the time STAR rolls around.

And now, you’re caught up. My head fashion show is over.

Some (more) writing tips

My last post contained one of the tricks I use when writing – how I use iteration when I’m writing. It’s one technique I use to make steady progress and avoid writer’s block. Although that post was about iteration in general, it reminded me that I have a few other tricks that I wanted to share.

Before you eagerly read on, please remember that I’m not an expert writer. I have some experience, but I don’t claim to have it all figured out. I think it takes much more than a handful of magazine articles and part of a few books under your belt to claim you actually know how to write or are anything remotely close to an expert, so please take my advice with a grain of salt. That said, there were a few other techniques I used when writing both hwtsam and my chapter in Beautiful Testing that helped me a ton.

First – if you want to write, dedicate some time for it. I found that I needed at least an hour to be effective, and would often block off 2-3 hour chunks of time on weekends. In one prolific stretch of writing, I took a week off of work, and wrote every day from 9-12 and again from 1-4. In my hour “off” I would go for a run and grab some food – the break was very energizing. I think I wrote 3 full chapters of hwtsam that week.

My next trick was to employ the (10+2)*5 trick (I wrote a Windows Sidebar JavaScript applet to help with this). The idea is simple. Write for 10 minutes – don’t stop. If you feel blocked, move to the next section (see my last post on iteration for more details). Refuse to be distracted by anything during those 10 minutes. When the 10 minutes are up (my applet sounded a bell and changed color), do something else. Sometimes I would stare at the ceiling, sometimes I would check email, and sometimes I would glance at my RSS feed. Even if you’re “on a roll”, stop and take a break. When the 2 minutes are up, repeat. Then repeat a few more times. Once you’ve gone through five of these 10+2 minute cycles, an hour is gone (and if your experience is like mine, you made a heck of a lot of progress). At this point you can launch straight into another hour, or you can do what I did and take an extended stretch-and-refill-the-coffee break. I have no idea if this will work for anyone else writing about testing (or any other subject), but I can tell you that I don’t recall ever having writer’s block – in fact, I sort of believe it doesn’t actually exist, but I’ll save that discussion for another post.
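If you want to try the cycle without building an applet, it’s trivial to script. Here’s a minimal sketch in Python (the function name and structure are my own invention, not the original JavaScript gadget):

```python
import time

def ten_plus_two(cycles=5, write_min=10, break_min=2, tick=time.sleep):
    """Run the (10+2)*5 pattern: write for 10 minutes, break for 2,
    five times over (one hour total). `tick` is injectable so the
    schedule can be exercised without actually waiting."""
    log = []
    for i in range(1, cycles + 1):
        # "\a" rings the terminal bell, standing in for the applet's chime.
        print(f"\a[{i}/{cycles}] Write. Don't stop for {write_min} minutes.")
        tick(write_min * 60)
        log.append(("write", write_min))
        print(f"\a[{i}/{cycles}] Break for {break_min} minutes.")
        tick(break_min * 60)
        log.append(("break", break_min))
    return log
```

Calling `ten_plus_two()` with the defaults blocks for a full hour; the `tick` parameter is there so you can dry-run the schedule first.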

Final tip (and also a sure writer’s block stopper) is something I picked up from Hemingway (not directly, but through the Paris Review). The tip is to stop writing every day before you’ve run out of ideas (I couldn’t find the actual quote where I read this, so bear with me). Think of this scenario: you’re cranking out some prose at the end of the day. You have a fantastic idea for a sample, story, or exposition – something that you’re ready to crank out and know exactly how you want to do it. My advice is to NOT write it. Instead, jot down a few notes that will help you get things going the next day and then call it a night. Otherwise, if you write your brilliant work at the end of the day, you’ll have to start from a blank page the next day, and starting to write from a blank page is hard. I find it so much easier to get into the flow of writing if I already know exactly what I’m starting with in the morning. Sometimes a night of sleep also helps to vet the idea a bit more. Again, ymmv.

Hope there’s some useful stuff in here.

Iterate, Iterate, and Iterate again

I’ve been a big fan of iterating since before I knew I was doing it. When I first read The Pragmatic Programmer nearly ten years ago, I was delighted to read about the concept of Tracer Bullets applied to programming. The concept of tracer bullets (based on guns firing an occasional phosphorous round in order to aim in the dark) is to start with a skeleton implementation and slowly add functionality rather than try to deliver the whole ball of wax at once. The concepts rang true to me before I really understood what Agile and TDD were all about, and I was happy to see people who knew what they were talking about confirm that my typical approach to software development had some merit. To this day, that’s generally the way I write code (shoot me – I don’t always use TDD for the crap utilities I write). When I write code, I start with barely more than an empty function, then I add, test, iterate and refactor until the code does what I wanted it to do. I’m not smart enough of a programmer to do it any other way, and I (usually) get the expected result in the end.
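To make the tracer-bullet idea concrete, here’s a toy Python sketch (the names are invented for illustration, not from any real project): the end-to-end path exists from day one, and each stub gets replaced with real behavior on a later iteration.

```python
def report_summary(results):
    """Tracer bullet: the whole pipeline runs end-to-end on day one,
    even though every stage below is still a stub."""
    parsed = parse(results)    # iteration 2: real parsing goes here
    stats = summarize(parsed)  # iteration 3: real analysis goes here
    return render(stats)       # iteration 4: real formatting goes here

# First pass: stubs that do just enough to keep the skeleton running.
def parse(results):
    return list(results)

def summarize(parsed):
    return {"count": len(parsed)}

def render(stats):
    return f"{stats['count']} results"
```

Even in this skeletal state, `report_summary(["pass", "fail"])` runs and returns something, which is the whole point: you can test, refactor, and flesh out one stage at a time.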

But I iterate everywhere. At work, I put together skeleton project plans. Then I slowly fix them, adding deliverables and dates until I have something that works. When I write music, I start with a basic structure – sometimes a melody, sometimes a rhythm, and sometimes a chord progression. I slowly plug stuff in, add and remove parts, and repeat until I have something I like.

I’ve found iteration most beneficial in writing. When I write seriously (as in hwtsam or my chapter in Beautiful Testing, or many of the articles I’ve written rather than this blog), I always iterate. I usually start by creating an outline, and making the outline headings the subheadings in the chapter. I don’t worry about coming up with clever names, I just make sure the order looks right. Then (either immediately, or in another “writing session”), I’ll start filling in some text below the subheadings. When I get blocked on one section, I stop and move on to the next section. Sometimes I only write something like “talk about xyz configuration testing here” – either because I don’t have the data I need yet, or more often, because I don’t feel like writing about xyz configuration testing yet. In a later session, I may make another pass, or I may focus on adding specifically to another section or two. I add sections and remove sections as needed. Eventually, I find (or at least try to find) themes I can link together. Finally, once I “think” it’s done, I close the file and come back to it in a day. Then I read it, ask myself “what the f…heck was I thinking”, make edits and rewrites, save, then close. Then I do it again. Then I do it at least one more time. Eventually, it “ships” and I’m done (but I seem to always find stuff I want to change).

I can’t imagine not iterating on any task with a semblance of complexity – but not everyone seems to be on board with my approach. I was talking with a colleague some time ago who was putting on a series of collaborative events at MS. I was eager to help, so I asked them for details – e.g. how long will it be, how will you break it down, what are the outputs and a few other similar questions. He answered, “the strategy doc is almost done, and when it is, we can start thinking about the execution”. Yuk – that just seems wrong to me. Perhaps I’m a cowboy, but this is another case where I’d rather settle on the basics, try it out, and adjust. Sure, you need a vision / strategy, but I don’t think you need a 10 page doc written before you get some people in a room to work together.

Or perhaps I just need to plan more – or set up a pre-planning meeting to discuss the preliminary plan – but not likely.

Ur doin it rong

I’d like to offer a bit of advice for everyone in the world (but especially to software testers). In just about every thing you do, every day of your life, it is possible to do something wrong. My challenge to you is to think deeply about how you can do things “right”.

Some examples:

  • If your spouse asks “do I look fat|stupid in this”, the wrong answer is almost always yes. This doesn’t mean that answering questions directed by your spouse is wrong, it just means you need to think about the right way to answer this question.
  • While driving to work|school|the mall, it is completely possible for you to swerve into oncoming traffic or drive down the sidewalk. However, just because you can do this, doesn’t mean you should, nor does it mean that driving is dangerous and that you shouldn’t do it anymore.
  • Right now, I have the ability to blow away a massive number of important documents (yes, they could be eventually restored from backup, but I could cause big problems). This doesn’t mean that I shouldn’t have write access anywhere on the corporate network, it means that I’ve been trusted to do the right thing and that I should honor that trust.

I get annoyed when I see testers dismiss things flippantly because it’s possible to do them wrong (and more annoyed when they choose other stuff to do wrong instead). It’s asinine to call something stupid because you can mess it up by not thinking, yet it seems to be common practice.

So here’s my advice for everyone. Do whatever you want until it doesn’t work for you. Find out what works and doesn’t work by hypothesizing, experimenting and thinking. Reflect on what you observe and how you interpret that observation. Use that knowledge to fuel more hypothesizing and experimenting. If you have to discard an approach based on this process, you did the right thing. If you failed to ask yourself why an approach is or isn’t appropriate to your context, it is you who have failed.

The above paragraph works for testers too.

Conflicting Results

I’m a huge soccer fan, and I’m happily following the MLS Cup even though the local team was eliminated last week. Last night’s match between Real Salt Lake (RSL) and the Chicago Fire went to penalty kicks before one team finally prevailed. After the game ended, I went to mlsnet.com to watch the highlights and check out some of the stats. When I got there, the front page had this headline and teaser:

[screenshot of the mlsnet.com front page headline and teaser]

Quick – which team won? Did the Fire edge Real Salt Lake, or did RSL outlast the Fire?

If you read a bit more, you’ll see that “RSL will face the Galaxy in the 2009 MLS Cup”, so if you go with majority rules you’ll be correct, since RSL did indeed edge the Fire last night. Headline errors aren’t all that uncommon (e.g. Dewey Defeats Truman), so I don’t fault the news site at all. Unfortunately, a very close relative of error, the false positive, has been bugging the crap out of me lately, and this headline reminded me that it’s past time to share my thoughts.

Let’s say you have 10,000 automated tests (or checks for those of you who speak Boltonese). We had a million or so on a medium sized project I was involved with once, so 10k seems like a fair enough sample size for this example. For the purpose of this example, let’s say that 98% of the tests are currently passing, and 2% (or 200 tests) are failing. This, of course, doesn’t mean you have 200 product bugs. Chances are that many of these failures are caused by the same product bug (and hopefully you have a way of discovering this automatically, because investigating even 200 failures manually is about as exciting as picking lint off of astroturf). Buried in those 200 failures are false positives – tests that fail due to bugs in the test rather than bugs in the product. I’ll be nice and say that 5% of the failures are false positives (you’re welcome to do your own math on this one). That leaves 10 failures that aren’t really failures. You may be thinking that’s not too big of a deal – it’s only 0.1% of the total tests, and looking at 10 tests a bit closer to see what’s going on is definitely worth the overall sacrifice in test code quality. Testers in this situation either just ignore these test results or quickly patch them without too much further thought.

This worries me to no end. If 5% of your failing tests aren’t really failing, I think it’s fair to say that 5% of your passing tests aren’t really passing.  I doubt that you (or the rest of the testers on your team) are capable of only making mistakes in the failing tests – you have crappy test code everywhere. A minute ago, you may have been ok with only 10 false positives out of 10k tests, but I also think that 490 of your “passing” tests are doing so even though they should be failing. Now feel free to add zeroes if you have more automated tests. I also challenge you to examine all 9800 tests to see which 490 are the “broken” tests.
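The arithmetic is worth making concrete. A quick sketch, using the same assumptions as above (10,000 tests, 2% failing, and a 5% rate of buggy tests on both sides of the pass/fail line):

```python
total = 10_000
failing = int(total * 0.02)    # 200 tests currently failing
passing = total - failing      # 9,800 tests currently passing
buggy = 0.05                   # assumption: 5% of tests have bugs in the test itself

false_positives = int(failing * buggy)  # failing, but the product is fine
false_negatives = int(passing * buggy)  # "passing", but shouldn't be

print(false_positives)  # 10
print(false_negatives)  # 490
```

The asymmetry is the whole point: the 10 false positives announce themselves in the failure list, while the 490 false negatives sit silently in the green column where nobody looks.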

Yet we (testers) continue to write fragile automation. I’ve heard quotes like, “It’s not product code, why should it be good”, or “We don’t have time to write good tests”, or “We don’t ship tests, we can’t make it as high quality as shipping code”. So, we deal with false positives, ignore the inverse problem, and bury our heads in the sand rather than write quality tests in the first place. In my opinion, it’s beyond idiotic – we’re wasting time, we’re wasting money, and we’re breeding the wrong habits in every tester who writes automation.

But I remain curious. Are my observations consistent with what you see? Please convince me that I shouldn’t be as worried (and angry) as I am about this.

What I Do

When I meet new people, they often ask, “what do you do?” The answer I give initially, and the one I hope to get away with is, “I work at Microsoft.”

It rarely works. They inevitably follow up with, “what do you do there?” – which, for better or for worse, is a much more difficult question to answer. Depending on their technical knowledge (and my mood), I’ll say something between “I work on a team that does technical training, internal consulting and cross-company projects for engineers”, “I’m the Director of Test Excellence”, and “I stop people from being stupid”. It was a much easier question to answer when I worked on a product team, but I like the job, so I’ll deal with the moments of awkwardness.

I thought I’d write down a longer answer for those who are curious (or want to help me with a better definition).

The biggest thing I’m working on this year is helping engineers across the company have a common concept of software quality. This includes working with marketing on customer perception of quality and a lot of talking with people from around the company to see which practices are common, and discover some practices that should be shared more widely. It’s a hard problem to solve (and there are a lot more pieces to it), but it’s a fun challenge.

I’m also working on a variety of small projects to increase collaboration among testers and other engineers at the company. With nearly 10,000 testers, there’s not nearly enough sharing of ideas, practices or innovation among people solving problems that are likely much more similar than people realize. Every time I see a duplication of effort or the same question asked on a distribution list for the 3rd time in a month I’m reminded of how much more work there is to do in this area.

Edit: I forgot a big chunk worth adding

A reasonably sized chunk of our organization’s work is technical training for engineers at Microsoft. My team teaches some classes, but I work with vendors to teach and design a fair number of test related courses world-wide. I also own scheduling and prioritization of technical courses for what we call MSUS (shorthand for all MS engineers in the US outside of Redmond). It’s sort of a thankless job, but it needs to be done and I don’t mind doing it.

The bulk of the rest of the time goes to what I call – “being Alan”. I organize and schedule meetings for our company-wide test leadership team and test architect group, and chair our quality and test experts community. I also function as chief-of-staff for my boss’s staff meetings (he attends the meeting alternate weeks, and I take care of the agenda and flow of the meetings every week). I participate in a few virtual teams / micro-communities (e.g. cross-company test code quality initiatives or symposiums on testability). I’m on the board for sasqag, and put in a few hours a month keeping things alive on the msdn tester center. I give talks to internal teams a few times a month and mentor half-a-dozen testers in various roles around the company. Finally (and most importantly), I manage a team of 6 people who work on similar projects, as well as teach testing courses. It helps a lot that the team is so smart and so motivated, because I’m most likely not the best manager in the world.

Beyond all that, I spend probably too much time staying connected with what’s going on in the testing world outside of MS. I blog a few times a week, speak at a few conferences a year, and tweet once in a while. It can be a balance problem sometimes, but I think it’s important enough to make a significant effort to keep up.

There’s probably more, but I think that covers most of it. Now you know.

Stuff I Wrote

I just put together a collection of my published works (it’s not a long list). I also have an article coming out in a Korean testing magazine – I’ll see if I can get a link once it’s out.

I’ve been writing less lately while I turn my attention toward my often neglected day job. I have a few projects on the horizon, and I’ll add them to the list if (or as) they come to fruition.

It’s a Beautiful Day

I may have mentioned this on the old blog, but I’m pretty sure I haven’t mentioned it here yet. O’Reilly Media recently released Beautiful Testing – a collection of essays from a variety of testing professionals (including yours truly).

[book cover of Beautiful Testing]

I received my copy over the weekend (much to the annoyance, I’m sure, of several other authors who are about to rebel against the empire if their copies don’t show up soon). I’m happy to have mine, and although I read the entire book in digital format, I’ve been flipping through it off and on for the last 3 days. I’m thrilled to be a part of it, but I have to tell you that I’m more excited after reading it again and finally holding it in my hands. The variety of information, styles, and knowledge is fascinating – each one opening up different possibilities and questions to ponder. It’s a fun read that I hope you check out. Best yet, the proceeds from the book all go to buying mosquito nets to help prevent malaria in Africa – what a great opportunity to get some practical testing advice and help out those in need!

Settling on Quality?

Oh my – another quality post. I’m afraid I’m starting a trend for myself, but I have a story to share.

As all gainfully employed workers in the tech field will tell you, we all have side jobs as tech support for all members of our immediate and extended families. This weekend, my mother-in-law opened a support ticket with me regarding her laptop – it was crashing randomly (that’s all the details you get when your m-i-l opens a support ticket).

So – I turned on her laptop, let it boot, then dealt with message after message from applications starting up and telling me stuff I didn’t care about. A backup program telling me that it needed a product key, an external hard drive utility telling me the drive wasn’t connected (duh), and an OEM replacement for windows wireless config launching to tell me I’m connected to a wireless network. The experience was annoying. But there’s a bigger problem. As I was looking at the 3 different web browsers installed and the few dozen or so other random programs and utilities installed, my first thought was “no wonder she’s having computer problems – she’s installed every app under the sun”. I always try to keep my main work machines somewhat “clean” – only installing applications I consider tried and true for fear that they’ll mess something up. Then I realized that’s wrong – I should be able to install whatever the hell I want without fear of losing overall quality (who knows – maybe I can and it’s all a mental problem on my end). The point is that we (computer users) don’t seem to expect software to work. We’re not as surprised, alarmed, or pissed off as we should be when software doesn’t work correctly. Honestly – I’ve belittled people in the past for calling things bugs when they’re 99.99% user error, but I was wrong – user error or not, that .01% matters.

Ok, so software sucks. It really doesn’t matter – it’s still a profitable industry. That’s true, but I wonder how long it will be true. I wonder if something horrible (even worse than Windows ME :}) has to happen before the world demands higher quality software. My hope is that we can start making better software long before something like that happens.

Oh – as far as my mother-in-law’s computer goes, there was a crash dump on the machine. I attached a debugger and poked around a crash in the wireless driver. I put a later rev of the driver on the machine and so far, so good. I hope it stays that way…for at least a little while.