What I do – the new version

Part of our rhythm of business at Microsoft is setting yearly commitments. Technically, we do these in July for the fiscal year, but we were in the middle of shipping, so I didn’t get to my commitments until recently. On one hand, I don’t like commitments, as in many cases they don’t allow for change or intangibles, but if you’re careful – and have a good manager – you can work around the shortcomings. It’s also perfectly ok to update the commitments throughout the year, so one can adjust if work projects take an unexpected turn.

I’ve shared my work commitments in the past, but now that I’m working on software again, as you’d expect, I do different stuff. If you’re curious what I do – at a very high level, read on.

First off, there’s the test evangelist part of my role. Internally, I’ll give a few large-scale test talks and a handful of intact-team talks. I speak at our new employee orientation half a dozen times a year and speak with customers a few times a year at our Executive Briefing Center. I also chair our test architect group, and a larger internal group of senior testers. Externally, I’ll do a few conferences in fiscal year 2011 (FY11), write a few articles, and talk with some Lync customers about testing.

I’m working on a bunch of stuff around test design. I have some ideas (some would say wild ideas) about changing our test design approach to put customer needs a bit more front and center. Part of this effort includes adapting some research from customer support and user research into practical application – all of this will include experiments and pilots. Also included in this “bucket” is some testability work – this includes working with development to make sure they’re writing testable code, as well as some guidelines and coaching for testers on how to review specs, designs, and code for testability.
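Since “testable code” can sound abstract, here’s a minimal sketch of the kind of thing I have in mind – a hypothetical Python example (not anything from our product; the names are invented) where a hard-coded dependency on the system clock is turned into a seam a tester can control.

```python
import datetime


class TrialChecker:
    """Decides whether a trial period has expired."""

    # The clock is injected so a test can control "now" rather than
    # depending on the machine's real time – that's the testability seam.
    def __init__(self, now=datetime.datetime.utcnow):
        self._now = now

    def is_expired(self, start, trial_days=30):
        return self._now() - start > datetime.timedelta(days=trial_days)


# With the seam in place, a tester doesn't have to wait 30 real days
# (or reset the system clock) to cover the expiration logic.
def test_trial_expires_after_30_days():
    start = datetime.datetime(2010, 11, 1)
    fake_now = lambda: datetime.datetime(2010, 12, 2)  # 31 days later
    assert TrialChecker(now=fake_now).is_expired(start)


def test_trial_still_valid_inside_window():
    start = datetime.datetime(2010, 11, 1)
    fake_now = lambda: datetime.datetime(2010, 11, 15)  # 14 days later
    assert not TrialChecker(now=fake_now).is_expired(start)
```

Spotting spots like that hard-coded clock in a spec, a design, or a diff is exactly the kind of thing the testability guidelines and coaching are meant to cover.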

I also own a big chunk of our overall test strategy – this includes how we test better and how our testers grow. I’ve talked a bit about this in the past and done a lot of the research and thinking already – the big task in front of me is to ensure that we make progress on the strategy.

I also provide as much mentoring and coaching across the team as I can. I hold myself accountable for growing the careers of as many of the testers on the team as I can. This includes 1:1 mentoring, coaching or leading virtual teams, as well as subtler efforts at organizational change.

I’ve translated all of this vague sort of stuff into SMART goals that outline the execution plan (but that would be way too boring to share). If you’re internal to MS, I’ll post these on my //my site in the next few days. The rest of you will have to find another way to satisfy your curiosity (seriously, don’t you have something better to do?).

Careers in Test

You probably couldn’t tell, but my last post was part of my exploration into careers in test – an exploration driven by my worry that so many of the discussions in test haven’t changed in the last ten years.

I better try to come up with a better explanation before everyone thinks I’m crazy.

A few weeks ago, I expressed my concern about the lack of forward thinking in software testing. I wondered (and still wonder) where the new ideas are going to come from. The “hot topics” in test haven’t changed that much in the last ten+ years, and I wonder if they’ll change in the next ten years.

One possible reason for this is that not that many people stay in test long enough to get to a point where they have both the experience and influence to advance the state of the art. Now, I don’t think everyone with some arbitrary amount of experience in test is automatically going to be a forward thinker and come up with great new ideas and approaches – which is exactly why I think we need more people with extensive experience in test: the more who stick around, the better the odds that some of them will.

And with that thought, I started thinking about careers in test, because without a good idea of what a career in test looks like, people may not stay in test.

And that got me thinking about what I do, and what other testers do – and whether or not how we describe ourselves can make us a good (or bad…or neutral) role model for a career in test.

And, for better or for worse, that’s kind of the way I think about things. I keep recognizing more parts of the system and then I ponder how they work together, until eventually I come up with something.

Or not.

Anyway – I hope that brings a little more clarity to my recent rambles.

Who are you?

I’ve been thinking about how professional testers describe themselves. We have high level labels like Tester, SDET, or Quality Engineer, but beyond those (mostly meaningless) titles, many of us have our own personal missions or visions – the words we want people to associate with us.

For example, my twitter bio is, “Long time software tester and quality guy – author of hwtsam”, and my profile on an internal twitter-like site is “Tester, tweeter, blogger, author”. They’re short, but both descriptions give people an idea of who I am. More importantly, I can back up each of the points.

But those bios are sort of lame. They describe what I’ve done, but not much of what I do, or how I do it. Many of us describe not just what we do, but what we think we do, or how we want to be perceived. When someone asks what you do, do you say “I’m a tester”, or something like, “I’m a thought leader in testing who uses a combination of strategy and tactics to improve software and software teams”?

Both of these are perfectly valid answers. The difference in this case is that the first is immediately believable, while the second requires some proof. This isn’t necessarily a bad thing – if the description is what you strive to do and how you want to be perceived, it’s perfectly ok. But you probably have to show some signs that you can live up to the description.

For example, I could say, “I’m a thought leader in declarative testing approaches” – which would be cool, because I’m a fan of the concept, but I’m far from a thought leader in the subject. If I were to say something like that, I should have some talks, articles, blog posts, etc. to back up that claim (and you should expect that of me if I made that claim as well).

That one was easy – let’s explore something closer to a grey area and talk about how I do what I do. If I were to say “I use my view of the big picture to find patterns and connections to aid in my testing approach”, and you observed several examples of me failing to make connections within a system, do I lose credibility with you? I should – but then again, maybe you just don’t know me, you caught me on bad days, or your observations were inaccurate. At the very least, you should question that claim.

I’m not sure where I’m going with this – perhaps it’s a midlife crisis of a sort, but I find myself trying to be more purposeful about what I do and how I achieve what I do…and how I remain credible. Every three or so years, I attempt to write a personal vision / mission and reassess my values. As you can probably tell, I value credibility – to be able to “walk the talk”, both in myself and others.

Here’s an example of one of these frou-frou self-reflection exercises from a few years ago.

I am a leader in software testing, software quality, who balances thought leadership with execution, ties vision to strategies, and nurtures communities of practice

It’s not great, but it works, and I’m due to write a new one in another year or two. This is different than a bio, because until now, I haven’t shared this directly with anyone, but I still find it important to try to live up to it. I can take each of the points and ask: What am I doing to display this? Do I think the people who matter perceive me this way? What can I do in my current context to do more of these things? Am I credible? I also make notes about what else I’m doing to help when I rewrite the statements in the future.

This approach works for me. I don’t know if it will work for anyone else, or how you manage your own career growth, personal bios or labels, but it’s probably good for all of us to ask ourselves if we are really who we say or think we are.

Q&A about software testing

If you want to skip the preamble and get to the point, just page down to the end of this post.

The professional software testing community has a good support network of web sites dedicated (or partially dedicated) to answering questions about testing. Some of my favorites are:

(apologies if I didn’t include your favorite site – a comprehensive list wasn’t the point)

But as you may know, I’m a huge – no, gigantic – fan of the Stack Overflow model of q&a services. The model for reputation and badges lets the reader quickly find the best answer (as voted by peers) and encourages participation by both askers and answerers of questions.

About a year ago, when Stack Exchange (the generic engine behind Stack Overflow) went into beta, Justin Hunter kicked off a testing site based on the engine (http://testing.stackexchange.com). The site has had some traffic, but let’s face it – it doesn’t have the daily traffic of any of the four sites above (yet). If you haven’t been to http://testing.stackexchange.com yet, go ahead and ask (or answer) some questions, or vote on answers or questions you like.

Stack Exchange recently changed their model (in a good way) and now has a staged process for developing sites. There’s a testing / quality assurance (people still don’t get the difference, but bear with me) site in proposal right now. The plan is that once the site moves from proposal to beta, we’ll use the great questions on testing.stackexchange.com to seed the new site.

I know what you’re thinking – why do we need another q&a site? For me, it’s that I believe in the Stack Exchange model, and think it would be a wonderful addition to the testing community.

If you’re with me, and want to support a Stack Exchange site on software testing, please go to the proposal and pledge your support.

We need a lot more people to commit to help make the site successful, but I know you can help.

Leadership

When I was 9 years old, I’d play pick-up soccer at recess. A couple of kids – the “leaders” – would pick teams, and then we’d play. Since we were kids, the leaders of the teams were sometimes the best players, but usually the loudest kids. Leadership was short-lived, but effective for the purpose. I imagine that it works the same way with the 9-year-olds of today.

But we’re not kids anymore.

Leaders today – the good leaders – still may not be the best players on the team, but they’re no longer the loudest.

Leaders care about making progress in their work and in sharing their results. More importantly, they care about the work and progress far more than they care about their popularity. Good leaders are excellent decision makers (despite ambiguity), and are humble, honest, and accountable when they blow it. They know how to use conflict to draw out insights, and how to create harmony in a bad situation. Simply put, they lead.

I just wish we had more leaders. Leaders who care more about progress than fighting decade-old arguments; leaders who strive to collaborate more than alienate; leaders more concerned with communication than with being the loudest voice in the crowd; and leaders who foster and demand innovation from their followers.

But we don’t – or I can’t find them…but somehow I know they’re out there. Maybe they’re just quiet – or maybe they don’t know they’re leaders yet.

But we need them now.

Note: Sheesh! I’ve already heard from three people who think I’m picking on testing people. I guess it’s because testing people read this blog, but I wasn’t thinking about testing when I wrote this (although I suppose it does apply to leaders in testing).

I recently re-read Seth Godin’s Tribes and Patrick Lencioni’s The Five Temptations of a CEO. The latter (which is really about leadership) was the primary inspiration for this post – not the loud folks in the testing field.

Hope that slows down the hate mail.

Being Forward

I’m happy that my last post generated some discussion. I’m not sure about the rest of the blogosphere, but for some reason I seem to get half or more of the comments privately instead of through the blog comment system. Something I heard from a few people was this comment (anonymity retained).

Why don’t you list the forward thinking that testers should be concerned with? A list of topics may help advance things.

If I had the full list of where software testing is heading, I’d certainly include it. But since having that list would also mean I could predict the future, I’d buy a lottery ticket first.

But they probably(?) didn’t mean that. I hope the question was more about what I think some areas to explore might be. I’m not sure I know that either – but I suppose I can share some random ideas (that I may or may not explore in future blog posts).

  • I think we have a bit of an issue with data. On one hand, we don’t know what to do with the data we have. I’ll be stronger – we don’t have a freaking clue what to do with the software-related data we have today. On the other hand, we don’t have enough data to make good decisions. There are two issues here – we need more data, and we need a way to interpret that data in a way that lets us make great decisions.
  • We really have no idea (again, in general) how to design tests. We don’t know which sets of tests are relevant for a given software context, we don’t understand which tests should be automated or performed manually, and we don’t know how to design robust and reliable tests when we do automate (there’s a small sketch of what I mean after this list). I know that this doesn’t apply to some of you (or you think it doesn’t apply), but testers do a huge amount of under-testing and over-testing. Finding a much better balance is a huge challenge.
  • I’m concerned with the way most test teams (and many software teams in general) are organized and managed (I mentioned this in a comment, and Catherine Powell added her thoughts in a blog post). Software creation is knowledge work – yet we manage teams like they’re making widgets. That approach is idiotic at best.
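To put a concrete face on the “robust and reliable” point in the second bullet, here’s a hedged little sketch in Python – the export service and all of its names are invented for illustration, not real product code – of one of the most common reliability mistakes in automation (a fixed sleep) next to the polling alternative that makes the same check far less flaky.

```python
import time


def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll a condition instead of sleeping for a fixed amount of time.

    Fixed sleeps are a classic source of flaky automation: too short and
    the check fails on a slow machine, too long and the suite crawls.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()


# Brittle version: assumes the operation always finishes in exactly
# two seconds (a magic number that fails on a slow build machine).
def brittle_check(service):
    service.start_export()
    time.sleep(2)
    assert service.export_finished()


# Sturdier version: same intent, but tolerant of timing variation and
# explicit about what "too long" means.
def robust_check(service):
    service.start_export()
    assert wait_until(service.export_finished), "export did not finish in time"


class FakeExportService:
    """Tiny stand-in so the sketch can actually run."""

    def __init__(self, finish_after=0.5):
        self._done_at = None
        self._finish_after = finish_after

    def start_export(self):
        self._done_at = time.monotonic() + self._finish_after

    def export_finished(self):
        return self._done_at is not None and time.monotonic() >= self._done_at


if __name__ == "__main__":
    robust_check(FakeExportService())
    print("robust check passed")
```

Nothing deep – but multiplied across thousands of automated checks, small design choices like this are the difference between a suite the team trusts and one it ignores.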

What I’m most worried about is what I stated originally – I’m worried that not enough people are worrying about this stuff.

But then I wonder if I should worry at all? Maybe software will always sort of suck, and average quality software is enough. If that’s the case, then little or none of this matters – it’s all a waste of money. I’m serious – one possibility is that people just don’t care enough about software quality to pay for it. Is it a viable option to stop (or massively reduce) software testing and pass the cost savings on to customers?

Something else to explore later I suppose.

Forward thinking in software testing

I’ve been thinking a lot lately about what’s next in software testing. What are the new ideas in testing? What is our role in the future of quality software? How do we advance the state of the art in testing?

But I worry.

I think forward thinking will come from highly experienced testers with a breadth of knowledge and the ability to lead – testers who can help us define what’s next. But testing still seems to be (for most) a job you do for a while until you do something else, rather than a career.

When I go to industry test conferences, someone often will ask the keynote audience how long they’ve been testing. For as long as I’ve been attending conferences, over 75% of the audience has been testing less than a year. Most of the people who have been testing longer are presenting at the conference or attending in some other “official” capacity (vendor tools, consulting opportunities, etc.).

I think the obstacles to advancing testing are worse than the revolving-door pattern of the career choice. The role of the experienced (or “expert”) tester appears to be a role of helping the new testers get a handle on the basics. Now – don’t get me wrong, I have said a million times that the role of an expert tester is to make their team and peers better, but it’s a vicious circle when the career path of a tester ends at coaching and mentoring the noobs. Then again, it’s big money if you want to go into test consulting. Given the perceived turnover rate in testing and the growth of the industry, if you are good at bringing new testers up to speed, you should have gainful employment for years to come.

And it’s good to coach, mentor, and guide new testers – but it’s not enough. There’s a world of challenges out there in testing, but little exploration into advancements, let alone game-changing activities. I’m approaching 20 years in software testing, and I still love it. But I think there’s much more out there than most of us think.

So let’s go find it.

Peering into the white box – Live!

On Thursday, November 11, at 4:00pm PST, I’ll be delivering another free web cast thing using Microsoft Lync (pre-setup instructions here).

Sign-in information here:

https://join.microsoft.com/meet/alanpa/37MWC1GQ

Join by Phone:
+18883203585    
+14257063500      

Find a local number 

Conference ID: 86935295

The talk will be a re-hash of my PNSQC talk (shortened a bit and turned into more of a “how to, and why” rather than a strict case study). The mini-abstract is:

Code reviews are a long-time staple of software development – but what happens when the test team takes part in code reviews? We’ll talk about how my team at Microsoft has done this, what we’ve found, and why you may want to do something similar where you work.

I’ll post sign-in instructions here about 24 hours in advance. Mark your calendar now!

Oh – if you have ideas or suggestions for future webcasts, post them in the comments below.

Producers, players, and software

Chris McMahon (who has an extensive music background) recently gave a wonderful presentation at SoftwareGR where he stated:

The work of a performing artist is exactly the same as a functioning agile team.

He goes on to explain that it isn’t a metaphor – it’s exactly the same thing. And I believe him.

Later in the talk he discussed the role (including stories you have to hear) of the Producer in music, and pondered why we don’t have more of these folks in software engineering.

Where are the Producers in software development? Where are the people that can turn a (software) flop into a hit? (paraphrased – sorry Chris)

I listened to the talk last week, made a mental note to send some sort of thank you to Chris, and went on about my life.

This weekend, I spent a bunch of time organizing the “music room” in my house. Several months ago, I “acquired” a room in my house for my stuff, but due to travel and family issues, I’ve only recently got the room sort of in shape. I celebrated by reacquainting myself with the “stuff” I’ve acquired over the years. I’m horribly out of practice, but I was surprised how much came back quickly.

Anyway – I was sitting on the couch thinking about how every bit of equipment – each of my saxophones, my flute, clarinet, harmonica, microphones, drums, guitars (and even my ukulele!) – has a story…then I started thinking more about what Chris had said and reflected on my own musical background.

When I was playing music more, I was much more of a generalist than an expert – and I could read (music) and learn faster than most of my peers. I was also good at breaking down difficult problems (why does this sound like shit?) into smaller solvable pieces. This was my ticket – I got gigs because I was versatile. I played both vibraphone and saxophone in one of the best collegiate jazz bands in the country, and both timpani and clarinet in the university orchestra (none of these, btw, at the same time). I played in rock bands because I could play both saxophone and guitar – and sing (the latter two, only good enough for rock and roll). I knew when to help, when to blend, and when to lead. I was never the best soloist, and was rarely the best musician on stage or in the studio, and I certainly made my share of mistakes, but I still found a way to make my groups better and have more fun – and people recognized that. My peers also knew that if there was something I couldn’t do, I would be able to figure it out. I wasn’t always the first call for a session or to fill in, but my name always came up.

That part of my life seems so far away, but I realized that it’s exactly what I do today, and it’s probably why I’m so happy with what I do (even though I work for “the man” now). I’m never the one who knows the most about automation or tools or test techniques or leadership, but I know a “little about a lot” – definitely enough to be helpful and to make teams better. I’m still the “big picture” guy who figures out why shit sucks and finds the best way to improve without damaging egos and “the creative process”. I like being the session player / consultant, and I have no problem letting the better soloists step up to the mic when it’s their turn.

Maybe someday I’ll be a (software) producer – but at the very least, I hope I get a good chance to work with a great one.

Who owns quality?

I think I’m finally caught up and recovered from my brief North American tour last week. While it was fun to present at two conferences plus a customer talk in four days, I missed the second day of both conferences, and I didn’t get to meet nearly as many people as I would have liked.

I thought I’d write a bit about something I heard more than once last week. I summed up my reaction in a tweet.

I forget how often some companies blame testers for escaped bugs. It’s not their fault.

My flippant follow-up to the “bugs are the testers’ fault” statement is typically “How can it be test’s fault? We weren’t the ones who put the bugs there in the first place”, but there’s some truth to the remark. Think of it this way – if testers are responsible for letting bugs “slip” through to customers, you have an engineering system where programmers are paid to insert bugs into the software, and where testers are penalized for not finding all of the needles in the haystack.

That doesn’t seem right. Delivering quality software isn’t a game where programmers insert bugs for testers to chase after – everyone on the team has to have some skin in the game on delivering quality (and value) to the customer.

There’s also been a lot of buzz recently about the “cost of testing”, where testers (or teams of testers) attempt to justify the investment in testing. I have to admit that this bugs me – I don’t believe in the cost of testing – I believe in the cost of quality, and that the cost (and effort) is shared among the entire software team.

In hwtsam, I wrote:

Many years ago when I would ask the question, “who owns quality,” the answer would nearly always be “The test team owns quality.” Today, when I ask this question, the answer is customarily “Everyone owns quality.” While this may be a better answer to some, W. Mark Manduke of SEI has written: “When quality is declared to be everyone’s responsibility, no one is truly designated to be responsible for it, and quality issues fade into the chaos of the crisis du jour.” He concluded that “…when management truly commits to a quality culture, everyone will, indeed, be responsible for quality.”(STQE Magazine. Nov/Dec 2003 (Vol. 5, Issue 6))

Culture is hard to change, but it’s imperative for making quality software. If your programmers “throw code over the wall” and are surprised when the test team doesn’t find their bugs, you should rethink your work environment (or resign yourself to the fact that your job is to clean up the programmers’ mess and that quality is nowhere near your control).