Peering into the white box – Live!

On Thursday, November 11, at 4:00pm PST, I’ll be delivering another free web cast thing using Microsoft Lync (pre-setup instructions here).

Sign-in information here:

Join by Phone:

Find a local number 

Conference ID: 86935295

The talk will be a re-hash of my PNSQC talk (shortened a bit and turned into more of a “how to, and why” rather than a strict case study). The mini-abstract is:

Code reviews are a long-time staple of software development – but what happens when the test team takes part in code reviews? We’ll talk about how my team at Microsoft has done this, what we’ve found, and why you may want to do something similar where you work.

I’ll post sign-in instructions here about 24 hours in advance. Mark your calendar now!

Oh – if you have ideas or suggestions for future webcasts, post them in the comments below.

Producers, players, and software

Chris McMahon (who has an extensive music background) recently gave a wonderful presentation at SoftwareGR where he stated:

The work of a performing artist is exactly the same as a functioning agile team.

He goes on to explain that it isn’t a metaphor – it’s exactly the same thing. And I believe him.

Later in the talk he discussed the role (including stories you have to hear) of the Producer in music and pondered why we don’t have more of these folks in software engineering.

Where are the Producers in software development? Where are the people that can turn a (software) flop into a hit? (paraphrased – sorry Chris)

I listened to the talk last week, made a mental note to send some sort of thank you to Chris and went on about my life.

This weekend, I spent a bunch of time organizing the “music room” in my house. Several months ago, I “acquired” a room in my house for my stuff, but due to travel and family issues, I’ve only recently got the room sort of in shape. I celebrated by reacquainting myself with the “stuff” I’ve acquired over the years. I’m horribly out of practice, but I was surprised how much came back quickly.

Anyway – I was sitting on the couch thinking about how every bit of equipment – each of my saxophones, my flute, clarinet, harmonica, microphones, drums, guitars (and even my ukulele!) – has a story…then I started thinking more about what Chris had said and reflected on my own musical background.

When I was playing music more, I was much more of a generalist than an expert – and I could read (music) and learn faster than most of my peers. I was also good at breaking down difficult problems (why does this sound like shit?) into smaller, solvable pieces. This was my ticket – I got gigs because I was versatile. I played both vibraphone and saxophone in one of the best collegiate jazz bands in the country, and both timpani and clarinet in the university orchestra (none of these, btw, at the same time). I played in rock bands because I could play both saxophone and guitar – and sing (the latter two, only good enough for rock and roll). I knew when to help, when to blend, and knew when to lead. I was never the best soloist, and was rarely the best musician on stage or in the studio; I certainly made my share of mistakes, but I still found a way to make my groups better and have more fun – and people recognized that. My peers also knew that if there was something I couldn’t do, I would be able to figure it out. I wasn’t always the first call for a session or to fill in, but my name always came up.

That part of my life seems so far away, but I realized that it’s exactly what I do today, and it’s probably why I’m so happy with what I do (even though I work for “the man” now). I’m never the one who knows the most about automation or tools or test techniques or leadership, but I know a “little about a lot” – and definitely enough to be helpful and to make teams better. I’m still the “big picture” guy who figures out why shit sucks and finds the best way to improve without damaging egos and “the creative process”. I like being the session player / consultant, and I have no problem letting the better soloists step up to the mic when it’s their turn.

Maybe someday I’ll be a (software) producer – but at the very least, I hope I get a good chance to work with a great one.

Who owns quality?

I think I’m finally caught up and recovered from my brief North American tour last week. While it was fun to present at two conferences plus a customer talk in four days, I missed out on the second day of both conferences, as well as the opportunity to meet nearly as many people as I would have liked.

I thought I’d write a bit about something I heard more than once last week. I summed up my reaction in a tweet.

I forget how often some companies blame testers for escaped bugs. It’s not their fault.

My flippant follow up to the “bugs are tester’s fault” statement is typically “How can it be test’s fault? We weren’t the ones who put the bugs there in the first place”, but there’s some truth to the remark. Think of it this way – If testers are responsible for letting bugs “slip” through to customers, you have an engineering system where programmers are paid to insert bugs into the software, and where testers are penalized for not finding all of the needles in the haystack.

That doesn’t seem right. Delivering quality software isn’t a game where programmers insert bugs for testers to chase after – everyone on the team has to have some skin in the game on delivering quality (and value) to the customer.

There’s also been a lot of buzz recently about the “cost of testing”, where testers (or teams of testers) attempt to justify the investment in testing. I have to admit that this bugs me – I don’t believe in the cost of testing – I believe in the cost of quality, and that the cost (and effort) is shared among the entire software team.

In hwtsam, I wrote:

Many years ago when I would ask the question, “who owns quality,” the answer would nearly always be “The test team owns quality.” Today, when I ask this question, the answer is customarily “Everyone owns quality.” While this may be a better answer to some, W. Mark Manduke of SEI has written: “When quality is declared to be everyone’s responsibility, no one is truly designated to be responsible for it, and quality issues fade into the chaos of the crisis du jour.” He concluded that “…when management truly commits to a quality culture, everyone will, indeed, be responsible for quality.”(STQE Magazine. Nov/Dec 2003 (Vol. 5, Issue 6))

Culture is hard to change, but it’s imperative for making quality software. If your programmers “throw code over the wall” and are surprised when the test team doesn’t find bugs, you should rethink your work environment (or resign yourself to the fact that your job is to clean up the programmers’ mess and that quality is nowhere near your control).


I just finished a talk at QAI Testrek on using personas. Again, I do a horrible job evaluating how my own talks go, but people asked a lot of good questions at the end, and I didn’t swear too many times, so I’ll call it a success.

The gist of the talk is that creating user personas is a great way to empathize with the customer and get inside their head (remember – you are not the customer!). I kicked off the talk with the A/B game I played at STAR East last year – this was a smaller audience, but I stumped the room again. I’m not sure how many more times I can get away with that before people catch on.

I mentioned that we use personas internally at Microsoft to talk about tester career paths. A generic form of those personas is located here if you haven’t seen them before.

That’s it for now – time for me to go see what else is happening here.

Thoughts on PNSQC

PNSQC is still going strong (although it may be over by the time I post this). Sadly, my North American test conference tour forced me to miss the second day of the conference in order to make it to Toronto in time for the QAI Testrek conference.

My opinion of PNSQC is the same as always – it’s a fantastic conference, and an incredible value for the price. Those who have followed this conference for some time may remember a time in the not so distant past when Microsoft speakers were rare. This year’s conference, however, had Microsoft speakers in nearly every time slot (including a keynote by Harry Robinson).

I enjoyed Jim Sartain’s (Adobe) talk on pushing quality upstream – especially a system he uses to track “early bugs” (errors found in unit test or review). I would have really liked to see more depth or examples around this, as I think there’s a lot more complexity to this than he led us to believe.

The highlight of my one-day conference was Tim Lister’s keynote. First of all, I’m a huge fan of Peopleware (if you are a manager, buy a copy now. If you are not a manager, buy a copy for your manager). Then, as if I wasn’t already making lovey eyes at Tim from the front row, he talked about patterns – mentioning both the Alexander and GoF books before diving into examples of organizational patterns. I enjoyed his stories and style. It was a great start to the conference.

At lunch, I attended a discussion on solving complexity (hosted by Jon Bach). The big takeaway for me was an increased awareness that complexity is in the eye of the beholder – what seems complex to you may not seem complex to someone else. This deserves another post…but no promises.

My presentation, “Peering into the white box: A tester’s view of code reviews”, had a good turnout (better than I expected for a talk on code reviews). I can’t really evaluate my own talks very well, but it seemed to be ok. My slides (which, as usual, may not make sense on their own) are below.

Another highlight was meeting @mkltesthead (Michael Larsen) in person. I gave him a copy of hwtsam for his troubles. I also gave a copy to the top tweeter of the conference, @iesavage (Ian Savage). I’m only sorry I didn’t get a chance to meet more people…but there’s always next year.

Strategy Thoughts – Resources

Thanks to everyone who attended today’s brief web seminar. I’ll try another one in a few weeks (topic suggestions are all welcome).

Here is the video – audio tends to fade at some points (I’m investigating root cause), but hopefully it’s somewhat useful. Remember – this is my approach to (what I call) strategy – not a template that you need to follow to create your own. I hope you find some nuggets you can use.

Thoughts on Test Strategy from Alan Page on Vimeo.

And here are the slides.

Free Test Stuff

The background:

I posted some thoughts on how I approach test strategy and what the term means to me a week or so ago. The Rat Pack had some questions on the post and I answered them. But the conversation made me realize that it may be beneficial if I take some time to talk not only about what I choose to put into a test strategy, but why I approach the strategy that way. Chances are that my approach won’t work for you, but my “approach to the approach” likely will.

So I’m going to give one of those web-talk prezo things on Thursday, October 14 at 9:00am Pacific time.

Abstract: Come hear Alan Page talk about how he defines a test strategy and how he decides what to include. Ask questions, get answers, learn something, have fun.

I’ll post the login information here prior to the meeting – just add the time to your calendar, come here a bit early and sign in and we’ll see what happens. I’ll post details on twitter as well.

Important Logistics:

The discussion will end at 10:00am. I’ll plan to talk from 9:05 to 9:30. We’ll use the rest of the time to discuss your thoughts on the subject and to answer a whole bunch of questions.

I’ll be presenting from the brand new Microsoft Lync console – this gives you a few options that you can use to attend the meeting:

  • You can use the Lync Attendee Only Console. This comes in an Administrator version and a User level version. (the main difference being that you can install the non-admin version if you’re not an administrator). If you want to use this, just install it ahead of time and you’ll be ready to go next Thursday.
  • If you don’t feel like installing software, you can use the web client. I’ve tested it on IE8, IE9, Firefox, and Chrome, so hopefully it will work for you. It does, however, require Silverlight, so if you install that ahead of time, you should be in good shape.
  • We do have a Mac client, but as far as I know, it’s not available as part of the public RC (sorry).
  • If you work at MS, and you really want to hear what I have to say…you’ll figure out what to do.

As an added incentive – since you are all testers, you’re welcome to do anything you want to try to make the meeting go badly (short of heckling me too much). I may regret this, but I challenge you to make bad (software) things happen while we talk about test strategy. My goal, of course, will be to be so engaging that you forget to break stuff – we’ll see how it goes.

Dabbling in the QA business

A week or so ago, I tweeted the following:

I loathe the use of "QA" as a verb to replace test. E.g. "Facebook needs to QA their features better" – that doesn’t make any  f*ing sense.

Paul Carvalho (@can_test) didn’t agree – he stated that test is a subset of qa, and that since he considered much more than testing when performing his activities that what he did was “qa a feature” (massive paraphrase – sorry).

On one hand, I could say that good software programming is also a subset of QA, so under that logic a dev could “QA up” a feature rather than code it up. I’ve been whining about QA vs. Test for half a dozen years, and Michael Bolton more recently (and more eloquently) wrote up his thoughts on the subject early this year. I think anyone who pays attention knows that QA is widely misunderstood. I think Paul gets it – he knows the difference between testing and quality assurance, and what he does is more than just testing, so he may actually do some quality assurance activities along with his testing activities. I don’t have details on his testing activities, so I can’t guess any more than that.

In regards to Bolton’s thoughts, where I differ is that I think that testers can dabble in Quality Assurance. This statement doesn’t apply if you only test apps after they are complete or mostly complete in order to quickly find any “ship-stoppers”, but when development and test are well integrated, there is a possibility. In my world, testers get to be involved from the earliest inceptions of pre-design. Now, it’s one thing to be “in the room” during a design discussion, but many testers are capable of inserting some QA type goo into the discussion, as well as finding ways to improve the creation of software throughout the product cycle. Some examples include:

  • Testers ask things like “How are we going to test this?” and “How will we know if we’re successful?” during design reviews.
  • Testers can look at the most severe bugs found during the last product cycle and develop tools or checklists to prevent those types of bugs from occurring in the future.
  • Testers can instigate process change. I’ve never been on a team where testers weren’t involved in highlighting inefficiencies in software engineering processes and implementing changes to improve on those inefficiencies. The normal leadership gotchas of organizational change apply (i.e. it’s hard, but certainly possible).
  • A HUGE point I agree on with Michael (and others) is that test is not the gatekeeper of quality. When I hear testers talk about “signing off” on a release, I cringe. What testers can do early is say “this is the information we’re going to provide – and we think you can make a ship decision with this information”. Then, instead of signing off on a release, testers can say “only half the tests are running, and half of those fail. Customers are complaining, and perf is in the toilet. The scheduled ship date is next Tuesday – it’s your call”.
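To make the second bullet concrete – mining last cycle’s severe bugs for review-checklist candidates – here’s a minimal, purely hypothetical sketch. The bug categories, severity scale, and function name are all made up for illustration; in real life the records would come from your bug tracker.

```python
from collections import Counter

# Hypothetical bug records from the last product cycle: (category, severity),
# where severity 1 is the most severe.
last_cycle_bugs = [
    ("null-deref", 1), ("off-by-one", 2), ("null-deref", 1),
    ("race-condition", 1), ("off-by-one", 3), ("null-deref", 2),
]

def checklist_from_bugs(bugs, max_severity=2):
    """Tally severe bugs by category; the most common categories become
    candidate items for a code-review checklist."""
    counts = Counter(cat for cat, sev in bugs if sev <= max_severity)
    return [cat for cat, _ in counts.most_common()]

print(checklist_from_bugs(last_cycle_bugs))
```

The point isn’t the ten lines of code – it’s the feedback loop: let last cycle’s worst bugs tell the whole team what to look for in this cycle’s reviews.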

Good testers have a critical eye – not just for product issues, but in how the product is made – so it’s not completely out of the question for capable testers in the right environment to dabble a bit in QA activities.

The summary – in case you’ve read this far and I’ve lost you – is:

  • QA and testing are different
  • IMO, QA is not an activity you do to a feature
  • In the right circumstances, some people in test roles can perform QA activities
  • But QA activities are generally separate from test activities

Got it?

Presentation Maturity

Test conference season is upon us, and so begins the onslaught of “slides” from powerpoint / keynote / (google docs presentation app whatever-it’s-called). I have seen hundreds of presentations on a wide variety of subjects over the years and thought I’d share what I know about what a presentation tells you about the presenter.

The “freshmen” presentation

“New” presenters typically have a lot of slides with a lot of bullet points. If someone says this is their first time ever presenting, you will want to sit in the front row where you can view the 8-point type clearly. If you need to, you can scootch your chair a bit farther forward for clarity. If you can’t get a seat in the front, don’t worry – fortunately these people will read every bullet point. If you are confused about the topic, the freshmen also have that covered – their first few slides typically contain definitions from wikipedia, complete with pronunciation guides (“metric” is a very difficult word to pronounce).

Design is typically black text on a white background (aka the default powerpoint design).

The “sophomore” presentation

The sophomore presentation experience is all about design. Two main things differentiate the sophomore from the freshmen. The presenter has some experience (i.e. they’ve explored powerpoint more). In order to show their presentation maturity, their presentations now use one of the “fancier” design templates available. Most often, these use dark text on a slightly darker background – something that looks “advanced” on a laptop screen, but looks like an oil spill on a portable conference projector. There is slightly less text per slide than the freshmen, but they make up for the space by splattering bits of clip art on each slide. Sometimes the clip art has something to do with the topic, but the main rule is that it has to fill up dead space.

Speaking-wise, sophomores don’t generally read every slide. Because they are experienced in presentation, they no longer practice presenting with their slides, and because they no longer practice with their slides, they tend to forget what they’re talking about.

The “junior” presentation

Now, you’re beginning to see the cream of the crop. These people have read about presenting, and are often (self-proclaimed) “experts”. For example, they’ve read that bullet points are bad, and pictures are good. Their presentations are filled with full page photos stolen from web sites or taken on their trip around the world. The photos are very nice and give the audience something to focus on. Unfortunately, the photos rarely have anything to do with the presentation. And – since the juniors don’t practice their presentations either, they often end up talking about what’s in the photo rather than what they meant to talk about. You know when you’ve attended a presentation by one of these folks, because you’ll walk out talking about how good the slides were rather than saying anything about the content.

The “senior” presentation

These are the people you pay to see. They may use any of the techniques above – pictures are a must, as is enough text to show off their credibility. Also – and this is very important – senior presenters absolutely must dedicate at least 25% of their allotted presentation time to talking about themselves. If you are a senior presenter, it is imperative that you sound like you know your stuff, and to do that, you need to establish credibility. These people may include the definition, but the difference is that they invented the word!

Post-graduate presentations

These folks tell stories and structure their talk so that you remember the important points and why those points are important to remember. Slides don’t matter – they can be as effective with bullet points as they can with a picture of a cow farting. At a typical software conference, there are 2 of these (give or take 2). But they’re worth the search.

Trust and Testing

I threw out a few trust-based tweets this week.

I’m a big fan of #trust in the workplace – hire smart people, coach & guide them, but give them plenty of freedom to do the right things

You don’t have to (and shouldn’t) involve everyone in decisions. But build trust by sharing why and how you make decisions.

I’ve been thinking a lot about trust lately – specifically trust in the workplace and how much it benefits an organization. Organizational trust is a popular concept (assuming you know where to look). There’s the Management Innovation Exchange (I’m a huge fan), with The Management Trust and Management by Trust. Stephen Covey (no – not that Stephen Covey, his son) wrote a book on trust, and it’s in HBR frequently.

But too many leaders – no wait, they can’t be leaders if they act this way. Too many people fail at trust. I’ve probably seen fifty tweets and blogs complaining about management keeping a thumb on workers by counting test cases, bugs, and time-on-task. My simple response to these people is to just quit (or find some less dramatic way to find a new boss if possible) – life’s too short to wallow in an organization without trust (ideally, you should find a new job first, and I know that’s not as easy as it sounds these days, but still…).

Fortunately, I work in an organization where trust is a huge part of the way we operate. We expect people to work hard and do what they think is right. Doing the right thing – and having the freedom to explore – is critical in testing (and in software engineering). I’m a fan of metrics, and think when they’re used right, they are extremely valuable and help immensely in decision making (conversely, I know of people so freaked out by metrics misuse that they won’t use them at all – which is nearly as stupid as using them poorly). There is so much more to software testing than functional tests – I just don’t understand how there’s a chance in hell for testers to do what they’re supposed to do if they don’t have the freedom to change course when they need to without fear of repercussion (or missing their quota).

I know there are some people who actually prefer to have very explicit instructions on what to do – they work better if they are told what to do (and when to do it). I don’t fault these people – I hope they all find jobs with the managers who insist on giving explicit directions and hovering over their employees while they follow those directions.

Of course, “trust” doesn’t mean “do whatever you want, even if it’s useless” – you still need to show value, and you need to be able to explain what you did and why you did it – but hopefully that’s a better alternative than mindlessly running test cases all day.