Personas

I just finished a talk at QAI Testrek on using personas. Again, I do a horrible job evaluating how my own talks go, but people asked a lot of good questions at the end, and I didn’t swear too many times, so I’ll call it a success.

The gist of the talk is that creating user personas is a great way to empathize with the customer and get inside their head (remember – you are not the customer!). I kicked off the talk with the A/B game I played at STAR East last year – this was a smaller audience, but I stumped the room again. I’m not sure how many more times I can get away with that before people catch on.

I mentioned that we use personas internally at Microsoft to talk about tester career paths. A generic form of those personas is available here if you haven’t seen them before.

That’s it for now – time for me to go see what else is happening here.

Thoughts on PNSQC

PNSQC is still going strong (although it may be over by the time I post this). Sadly, my North American test conference tour forced me to miss the second day in order to make it to Toronto in time for the QAI Testrek conference.

My opinion of PNSQC is the same as always – it’s a fantastic conference, and an incredible value for the price. Those who have followed this conference for some time may remember a time in the not-so-distant past when Microsoft speakers were rare. This year’s conference, however, had Microsoft speakers in nearly every time slot (including a keynote by Harry Robinson).

I enjoyed Jim Sartain’s (Adobe) talk on pushing quality upstream – especially a system he uses to track “early bugs” (errors found in unit test or review). I would have really liked to see more depth or examples around this, as I think there’s a lot more complexity to this than he led us to believe.

The highlight of my one-day conference was Tim Lister’s keynote. First of all, I’m a huge fan of Peopleware (if you are a manager, buy a copy now. If you are not a manager, buy a copy for your manager). Then, as if I wasn’t already making lovey eyes at Tim from the front row, he talked about patterns – mentioning both the Alexander and GoF books before diving into examples of organizational patterns. I enjoyed his stories and style. It was a great start to the conference.

At lunch, I attended a discussion on solving complexity (hosted by Jon Bach). The big takeaway for me was an increased awareness that complexity is in the eye of the beholder – what seems complex to you may not seem complex to someone else. This deserves another post…but no promises.

My presentation, “Peering into the White Box: A Tester’s View of Code Reviews”, had a good turnout (better than I expected for a talk on code reviews). I can’t really evaluate my own talks very well, but it seemed to go OK. My slides (which, as usual, may not make sense on their own) are below.

Another highlight was meeting @mkltesthead (Michael Larsen) in person. I gave him a copy of hwtsam for his troubles. I also gave a copy to the top tweeter of the conference, @iesavage (Ian Savage). I’m only sorry I didn’t get a chance to meet more people…but there’s always next year.

Strategy Thoughts – Resources

Thanks to everyone who attended today’s brief web seminar. I’ll try another one in a few weeks (topic suggestions are all welcome).

Here is the video – audio tends to fade at some points (I’m investigating root cause), but hopefully it’s somewhat useful. Remember – this is my approach to (what I call) strategy – not a template that you need to follow to create your own. I hope you find some nuggets you can use.

Thoughts on Test Strategy from Alan Page on Vimeo.

And here are the slides.

Free Test Stuff

The background:

I posted some thoughts on how I approach test strategy and what the term means to me a week or so ago. The Rat Pack had some questions on the post and I answered them. But the conversation made me realize that it may be beneficial if I take some time to talk not only about what I choose to put into a test strategy, but also about why I approach the strategy that way. Chances are that my approach won’t work for you, but my “approach to the approach” likely will.

So I’m going to give one of those web-talk prezo things on Thursday, October 14 at 9:00am Pacific time.

Abstract: Come hear Alan Page talk about how he defines a test strategy and how he decides what to include. Ask questions, get answers, learn something, have fun.

I’ll post the login information here prior to the meeting – just add the time to your calendar, come here a bit early and sign in and we’ll see what happens. I’ll post details on twitter as well.

Important Logistics:

The discussion will end at 10:00am. I’ll plan to talk from 9:05 to 9:30. We’ll use the rest of the time to discuss your thoughts on the subject and to answer a whole bunch of questions.

I’ll be presenting from the brand new Microsoft Lync console – this gives you a few options that you can use to attend the meeting:

  • You can use the Lync Attendee Only Console. This comes in an Administrator version and a User-level version (the main difference being that you can install the non-admin version even if you’re not an administrator). If you want to use this, just install it ahead of time and you’ll be ready to go next Thursday.
  • If you don’t feel like installing software, you can use the web client. I’ve tested it on IE8, IE9, Firefox, and Chrome, so hopefully it will work for you. It does, however, require Silverlight, so if you install that ahead of time, you should be in good shape.
  • We do have a Mac client, but as far as I know, it’s not available as part of the public RC (sorry).
  • If you work at MS, and you really want to hear what I have to say…you’ll figure out what to do.

As an added incentive – since you are all testers, you’re welcome to do anything you want to try to make the meeting go badly (short of heckling me too much). I may regret this, but I challenge you to make bad (software) things happen while we talk about test strategy. My goal, of course, will be to be so engaging that you forget to break stuff – we’ll see how it goes.

Dabbling in the QA business

A week or so ago, I tweeted the following:

I loathe the use of "QA" as a verb to replace test. E.g. "Facebook needs to QA their features better" – that doesn’t make any  f*ing sense.

Paul Carvalho (@can_test) didn’t agree – he stated that test is a subset of QA, and that since he considered much more than testing when performing his activities, what he did was “QA a feature” (massive paraphrase – sorry).

On one hand, I could say that good software programming is also a subset of QA, so under that logic a dev could “QA up” a feature rather than code it up. I’ve been whining about QA vs. Test for half a dozen years, and Michael Bolton more recently (and more eloquently) wrote up his thoughts on the subject early this year. I think anyone who pays attention knows that QA is widely misunderstood. I think Paul gets it – he knows the difference between testing and quality assurance, and what he does is more than just testing, so he may actually do some quality assurance activities along with his testing. I don’t know the details of his testing activities, so I can’t guess any more than that.

Where I differ from Bolton’s thoughts is that I think testers can dabble in Quality Assurance. This statement doesn’t apply if you only test apps after they are complete or mostly complete in order to quickly find any “ship-stoppers”, but when development and test are well integrated, there is a possibility. In my world, testers get to be involved from the earliest stages of pre-design. Now, it’s one thing to be “in the room” during a design discussion, but many testers are capable of inserting some QA-type goo into the discussion, as well as finding ways to improve the creation of software throughout the product cycle. Some examples include:

  • Testers ask things like “How are we going to test this?” and “How will we know if we’re successful?” during design reviews.
  • Testers can look at the most severe bugs found during the last product cycle and develop tools or checklists to prevent those types of bugs from occurring in the future (see the sketch after this list).
  • Testers can instigate process change. I’ve never been on a team where testers weren’t involved in highlighting inefficiencies in software engineering processes and implementing changes to improve on those inefficiencies. The normal leadership gotchas on organizational change apply (i.e. it’s hard, but certainly possible).
  • A HUGE point I agree on with Michael (and others) is that test is not the gatekeeper of quality. When I hear testers talk about “signing off” on a release, I cringe. What testers can do early is say “this is the information we’re going to provide – and we think you can make a ship decision with this information”. Then, instead of signing off on a release, testers can say “only half the tests are running, and half of those fail. Customers are complaining, and perf is in the toilet. The scheduled ship date is next Tuesday – it’s your call”.
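
To make the tooling idea above concrete, here’s a minimal, hypothetical sketch of the kind of check a tester might build after a product cycle full of buffer-overrun bugs. Everything in it – the banned-call list, the file extensions, the check-in-gate idea – is my own illustration, not a description of anything a real team runs:

    # find_banned_calls.py - hypothetical check-in gate for a known bug pattern
    import os
    import re
    import sys

    # The calls behind our (hypothetical) worst bugs last cycle.
    BANNED_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

    def scan_file(path):
        """Yield (line_number, line) for each banned call in a source file."""
        with open(path, errors="ignore") as source:
            for number, line in enumerate(source, start=1):
                if BANNED_CALLS.search(line):
                    yield number, line.strip()

    def main(root):
        found = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith((".c", ".cpp", ".h")):
                    path = os.path.join(dirpath, name)
                    for number, line in scan_file(path):
                        found += 1
                        print("%s:%d: %s" % (path, number, line))
        return 1 if found else 0  # non-zero exit fails the (hypothetical) gate

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))

Run against a source tree before check-in, a check like this turns last cycle’s bug pattern into this cycle’s prevention – which is a quality assurance activity, not a test.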

Good testers have a critical eye – not just for product issues, but also for how the product is made – so it’s not completely out of the question for capable testers in the right environment to dabble a bit in QA activities.

The summary – in case you’ve read this far but I’ve lost you – is:

  • QA and testing are different
  • IMO, QA is not an activity you do to a feature
  • In the right circumstances, some people in test roles can perform QA activities
  • But QA activities are generally separate from test activities

Got it?

Presentation Maturity

Test conference season is upon us, and so begins the onslaught of “slides” from PowerPoint / Keynote / (the Google Docs presentation app, whatever it’s called). I have seen hundreds of presentations on a wide variety of subjects over the years and thought I’d share what I know about what a presentation tells you about the presenter.

The “freshman” presentation

“New” presenters typically have a lot of slides with a lot of bullet points. If someone says this is their first time ever presenting, you will want to sit in the front row where you can view the 8-point type clearly. If you need to, you can scootch your chair a bit farther forward for clarity. If you can’t get a seat in the front, don’t worry – fortunately these people will read every bullet point. If you are confused about the topic, the freshmen also have that covered – their first few slides typically contain definitions from wikipedia or dictionary.com, complete with pronunciation guides (“metric” is a very difficult word to pronounce).

Design is typically black text on a white background (aka the default PowerPoint design).

The “sophomore” presentation

The sophomore presentation experience is all about design. Two main things differentiate the sophomore from the freshman. The presenter has some experience (i.e. they’ve explored PowerPoint more). In order to show their presentation maturity, their presentations now use one of the “fancier” design templates available. Most often, these use dark text on a slightly darker background – something that looks “advanced” on a laptop screen, but looks like an oil spill on a portable conference projector. There is slightly less text per slide than in the freshman presentation, but sophomores make up for the space by splattering bits of clip art on each slide. Sometimes the clip art has something to do with the topic, but the main rule is that it has to fill up dead space.

Speaking-wise, sophomores don’t generally read every slide. Because they are experienced in presentation, they no longer practice presenting with their slides, and because they no longer practice with their slides, they tend to forget what they’re talking about.

The “junior” presentation

Now, you’re beginning to see the cream of the crop. These people have read about presenting, and are often (self-proclaimed) “experts”. For example, they’ve read that bullet points are bad, and pictures are good. Their presentations are filled with full-page photos stolen from web sites or taken on their trip around the world. The photos are very nice and give the audience something to focus on. Unfortunately, the photos rarely have anything to do with the presentation. And – since the juniors don’t practice their presentations either – they often end up talking about what’s in the photo rather than what they meant to talk about. You know when you’ve attended a presentation by one of these folks, because you’ll walk out talking about how good the slides were rather than saying anything about the content.

The “senior” presentation

These are the people you pay to see. They may use any of the techniques above – pictures are a must, as is enough text to show off their credibility. Also – and this is very important – senior presenters absolutely must dedicate at least 25% of their allotted presentation time to talking about themselves. If you are a senior presenter, it is imperative that you sound like you know your stuff, and to do that, you need to establish credibility. These people may include the dictionary.com definition, but the difference is that they invented the word!

Post-graduate presentations

These folks tell stories and structure their talk so that you remember the important points and why those points are important to remember. Slides don’t matter – they can be as effective with bullet points as they can with a picture of a cow farting. At a typical software conference, there are 2 of these (give or take 2). But they’re worth the search.

Trust and Testing

I threw out a few trust-based tweets this week.

I’m a big fan of #trust in the workplace – hire smart people, coach & guide them, but give them plenty of freedom to do the right things

You don’t have to (and shouldn’t) involve everyone in decisions. But build trust by sharing why and how you make decisions.

I’ve been thinking a lot about trust lately – specifically trust in the workplace and how much it benefits an organization. Organizational trust is a popular concept (assuming you know where to look): there’s the Management Innovation Exchange (I’m a huge fan), The Management Trust, and Management by Trust. Stephen Covey (no – not that Stephen Covey, his son) wrote a book on trust, and the topic shows up in HBR frequently.

But too many leaders – no wait, they can’t be leaders if they act this way. Too many people fail at trust. I’ve probably seen fifty tweets and blogs complaining about management keeping a thumb on workers by counting test cases, bugs, and time-on-task. My simple response to these people is to just quit (or find some less dramatic way to find a new boss if possible) – life’s too short to wallow in an organization without trust. (Ideally, you should find a new job first, and I know that’s not as easy as it sounds these days, but still…)

Fortunately, I work in an organization where trust is a huge part of the way we operate. We expect people to work hard and do what they think is right. Doing the right thing – and having the freedom to explore – is critical in testing (and in software engineering). I’m a fan of metrics, and think that when they’re used right, they are extremely valuable and help immensely in decision making (conversely, I know of people so freaked out by metrics misuse that they won’t use them at all – which is nearly as stupid as using them poorly). There is so much more to software testing than functional tests – I just don’t understand how there’s a chance in hell for testers to do what they’re supposed to do if they don’t have the freedom to change course when they need to without fear of repercussion (or missing their quota).

I know there are some people who actually prefer to have very explicit instructions on what to do – they work better if they are told what to do (and when to do it). I don’t fault these people – I hope they all find jobs with the managers who insist on giving explicit directions and hovering over their employees while they follow those directions.

Of course, “trust” doesn’t mean “do whatever you want, even if it’s useless” – you still need to show value, and you need to be able to explain what you did and why you did it – but hopefully that’s a better alternative than mindlessly running test cases all day.

A Test Strategy – or whatever you want to call it

I’m in the middle of a little project for work (one of many things I have going on these days), and I was reminded of the fact that we don’t have anything close to a consistent language in testing. It’s nothing new – I blogged about this subject before, but I still don’t have a reasonable solution. I sometimes convince myself that one advantage of certification is building a base of common terminology, but I never convince myself that certification is the right answer.

So, the project I’m working on is sort of a “test strategy” – but I’m pretty sure my definition of test strategy is different than yours. Matt Heusser, for example, recently blogged about test strategy – but in my world of definitions, what Matt is describing is test design (to be fair, he does call it “designing a test strategy”). Also – to be ultimately clear – there’s nothing wrong with what Matt says; we just have different interpretations of what a test strategy is. No big deal.

For me, a test strategy is a description of the general approach a team will take to testing over a product cycle. It describes a set of approaches and techniques a team will use, why those approaches are appropriate, and how they will benefit the team and product. I (in my definition) avoid details of implementation, and instead use the strategy message to align a test team and get everyone on the same page with values, approaches, and areas of growth. I like to think of it as an explanation of “why” and “what” rather than “how”. I prefer to think of “how” as a minor implementation detail – we have smart people on the team, and they’ll figure out the “how” when it’s necessary. I’ve approached test strategy this way for some time, and have taught test strategy to a few hundred testers at Microsoft (where I also emphasize that they are free to adapt it to their own definition and context). I should also stress that while I’m writing a document outlining our team strategy (or at least my thoughts on team strategy), a document isn’t required. An email, whiteboard, or even word of mouth may suffice depending on the situation.

When thinking of a testing strategy, I consider improvements to both the product and the team to be critical. The testing needs to have enough meat and breadth to help build a quality product, but the strategy also needs to consider growth opportunities for the team (perhaps I should call my definition a team testing strategy?). To meet these goals, a strategy basically has a bunch of testing ideas (ideas include everything from approaches to processes to tools) – some are challenging, but all are definitely achievable. Some ideas are part of the day-to-day workflow of the team, but the team doesn’t address every single idea every day. Instead, the team bounces around between ideas as appropriate to the context of the product cycle, feedback, etc. Sometimes owners are assigned to ideas (or pieces of the strategy) for the duration of the product cycle, while other ideas rotate between team members. Over the product cycle, some ideas may be dropped, others will certainly be added, and if the strategy is successful, both the test team and product quality improve over time. This has worked for me in the past, and the approach has resonated with a lot of testers I’ve talked to over the past several years, so I think it has legs. We’ll have to wait and see how it works in my team’s next product cycle – but I’m bound to keep you posted.

Upcoming stuff

If you’re interested in stalking me, here are some places I’ll be in the coming months.

On October 18th, I’ll be talking about code reviews at the Pacific Northwest Software Quality Conference. If you want to catch me there, you better catch me on the 18th, because I’m flying to Toronto on October 19th to present at the Toronto TesTrek Symposium and talk about ways of connecting with the customer.

I’ll be in Munich from January 24-28 for OOP 2011, delivering an evening tutorial on testing at Microsoft (an updated and expanded version of a talk I gave at Siemens last year), and a track presentation based on my 2009 STAR East keynote (for some reason, I rarely repeat presentations, but I’m guessing few people at OOP will have seen me at STAR). I’m also sort of excited that Robert “Uncle Bob” Martin will be giving a keynote, and look forward to (finally) getting a chance to hear him speak.

And finally, I’m fairly certain that I’d like to give some sort of webinar using the product I work on, Microsoft Lync. I honestly would love to get feedback from testers on how ad-hoc meetings work outside the corporate environment – especially from those who have used other meeting web clients before. But first, I probably need a compelling topic, and that’s where I’m hoping you can help out. The topic can be anything you’ve seen me speak, blog, or tweet about – I’ll keep the prezo short (30 minutes or so) so we can have ample time for discussions about the topic or technology. Post your ideas here, or send me a tweet (@alanpage). First topic to five votes wins (or the first topic that I think is a really cool idea)!

Dichotomy for Dummies

There are many clear examples of dichotomy (mutually exclusive or contradictory categories) in the world. A light switch is on or off. A gun may have fired, or not fired. A software program is running, or it is not running. But many things we deal with are not dichotomous, but continuous. A person can be happy or sad…or somewhere in between. The temperature outside may be cool or hot…or somewhere in between. Indeed, some things in the world are black and white, but others definitely have multiple shades of grey separating the two concepts.

Differentiating between dichotomous and continuous variables is (IMO) one of the basics of critical thinking – which is probably why I’m annoyed when people in software look for dichotomy where it doesn’t exist. For years, Agile proponents have denounced anything non-Agile as Waterfall (I’m not even completely convinced those two terms are on opposite ends of the same continuum). Software testing, I’m afraid, has had its share of dichotomy misdiagnosis in the past, but does seem to be improving as the profession matures.

Some key examples include:

  • Exploratory vs. Scripted. While it’s certainly possible to be entirely exploratory or entirely scripted in your test approach, grey areas between the two are common in the testing world. I suppose you could say that if a test script contains exploratory elements it’s really just ET, but see my note on Agile vs. Waterfall above for why I disagree.
  • Automated vs. Manual. It takes a lot of effort to have a fully automated test – most “automated” testing has some amount of “manual-ness” to it. The same can be said for manual testing. When I’m testing “with my hands”, I almost always use tools to help me. Registry and process monitors, macro recorders, and debuggers are all tools that (automatically) help me test better. The short story is that there’s a really blurry line filling the gap between automated and manual testing (see the sketch after this list).
  • SDET vs. STE. Guess what – very few SDETs write tools and automation all day, and a lot of STEs I know write plenty of code (once upon a time, I wrote IIS server extensions to help with some test scenarios – my title at the time: STE). I realize that these terms are Microsoft-ish, but although the titles are dichotomous, the roles are definitely continuous.
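
To show just how blurry the automated vs. manual line is, here’s a minimal, hypothetical sketch of a “semi-automated” test: launching, timing, and logging are automated, but a human still supplies the verdict. The application name and log file are made up for illustration:

    # semi_automated_check.py - hypothetical mix of automated and manual testing
    import subprocess
    import time

    def semi_automated_launch_check(app_path):
        start = time.time()

        # Automated: launch the app under test and time it.
        process = subprocess.Popen([app_path])
        time.sleep(2)  # crude wait for the UI to appear
        launch_seconds = time.time() - start
        print("Launched %s in %.1f seconds" % (app_path, launch_seconds))

        # Manual: a human judges whether the UI actually looks right.
        verdict = input("Does the main window render correctly? [y/n] ")

        # Automated again: record the result and clean up.
        with open("semi_auto_results.log", "a") as log:
            log.write("%s launch=%.1fs verdict=%s\n"
                      % (app_path, launch_seconds, verdict))
        process.terminate()
        return verdict.lower().startswith("y")

    if __name__ == "__main__":
        # Hypothetical app under test - substitute your own.
        print("PASS" if semi_automated_launch_check("notepad.exe") else "FAIL")

Is that an automated test or a manual one? It sits somewhere on the continuum between the two – which is exactly the point.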

What other examples of misdiagnosed dichotomy have you seen?