Angry Weasel on the Web

If you read my blog, you’re probably already sick of me, but I’ll share a few links anyway.

I gave a webinar a few weeks ago for the fine folks at EuroSTAR. The embedded recording is below. It’s a twenty-minute ramble on ideas in testing. My laptop with the presentation was in front of me – and low on a table – so don’t be too distracted by my creepy eyes looking down all the time.

I took part in my first “Twinterview” (twitter interview) with the extremely nice people that run the Fusion and STANZ conferences in Australia and New Zealand. Here’s the link to the twinterview if you want to get a little more insight into what’s on my mind these days.

More details on those conference connections coming in a future post.

The Goal and the Path

When serendipity strikes, I know it’s something I should try to write about. At least three times in the last week I’ve had discussions about vision, tactics, and the balance needed between the two. Without vision, your daily work is just work – a grind that takes you in no particular direction at all. There’s nothing wrong with this approach for many roles. For example, when I was a bicycle messenger, my entire job was to pick up stuff from point A and bring it to point B. It was different every day, and I had fun – but there was no vision. The only goal I was working toward was paying my rent. On the other hand, you can certainly have vision (e.g. “to be the world’s greatest tight-rope walker”), but without a tactical plan to get you there, it’s probably not going to happen.

Here’s another example. During the summer, I usually have a chance to run more. Usually, I just run when I can, and run whatever distance I have time for on that day. I could do the same thing this summer and achieve all of the benefits one gets from exercise. I don’t need a vision – I can just run.

This summer, I have a goal – I’d like to run 10k in less than 65 minutes by the end of August (I’d love to go for an even 60, but I’m not sure I can reach that goal). I have no reason for the goal other than personal motivation, but the presence of the goal changes my tactics. In order to reach the goal, I can’t just run. The goal dictates that I need to work on pace and distance, so I need a strategy for both. I know my 5k pace (~33 minutes), and know I can run 10k (I don’t have a current pace, but know it’s slower than my 5k pace).
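
A little arithmetic makes the gap concrete – here’s a back-of-the-envelope sketch in Python (the times are the ones above; the rest is just division):

    # Pace math for the 10k goal, using the numbers above.
    current_5k_minutes = 33
    goal_10k_minutes = 65

    current_pace = current_5k_minutes / 5   # ~6.6 min/km, at half the distance
    goal_pace = goal_10k_minutes / 10       # 6.5 min/km, needed for the goal

    print(f"current 5k pace:   {current_pace:.1f} min/km")
    print(f"required 10k pace: {goal_pace:.1f} min/km")
    # The catch: the goal pace is slightly *faster* than my current 5k pace,
    # held for twice the distance -- hence the need to work on both pace
    # and distance.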

When coming up with a strategy, I’m fond of the Current State / Desired State format – e.g.:

[Image: Current State -> Desired State diagram for the running goal]

There are a lot of details hidden in the arrow pointing from Current State to Desired State. I need to work on increasing my pace and distance, so I’ll have to make sure I have longer runs every week, as well as interval training to help increase both my pace and strength. And, because I’ll be running more (and in some cases harder), I’ll need to take precautions to avoid injury (and at my age, do my best to avoid general aches and pains). Even with those details, aligning the tactics necessary to achieve this goal isn’t that complicated. More or less, I’ll still just run – I’ll just have a few different variations this year that will help me achieve my goal.

Of course, goals and visions on software teams are a bit more complicated. Team-wide changes require technical changes as well as people changes. If, for example, you want to move your team to using more agile practices, you may choose to deal with tdd/bdd frameworks and new project management practices, but you also have to deal with a web of concerns and hiccups as you guide team members through change. In a situation like this, your Current State->Desired State diagram may look more like this.

[Image: a messier Current State -> Desired State diagram for a team-wide change]

It’s ugly, but expected. I’ve seen many people with “visions” fail because they spent their time looking for shortcuts through the system rather than understanding that navigating the system is the key to achieving a vision (conversely, I’ve seen many others fail because they focused on navigating the system rather than on their initial goals).

The bottom line is that tactics and vision are each easy on their own. The challenge facing leaders is balancing execution with vision while showing results.

Systems, observation, and motorcycles

Our family spent a bunch of time this weekend cleaning out the garage and taking care of a variety of long-neglected household tasks. One thing I’d been meaning to do for over a year now is to get my Ducati up and running again. Between picking kids up (need a car for that), and riding my bike to work most of last summer, it’s probably been 18 months since I last started the thing. Keep in mind that Ducatis are fickle machines to start with, but I figured I’d work on it for a while before calling someone to load it on a truck and haul it to the shop.

The first thing I did was drain the gas tank. I couldn’t recall if I added fuel stabilizer, but after that long, I can pretty much guarantee that the fuel was bad. I took the bad gas to the hazardous waste site (open on Sunday from 9-5!), picked up some new fuel, headed home, and gassed the Duc up.

I had the battery on a battery tender, but was still slightly surprised that it had some starting power in it. Unfortunately, the engine turned over but just wouldn’t fire. I double-checked the fuel line (clear) and then pulled the spark plugs. The plugs were a little dirty, so I swapped them with a spare set from the toolbox.

Still nothing.

Sitting on the bike, I took a moment to think through how the engine worked. The starter was working, gas was flowing, but the engine wasn’t starting. Fortunately, I have a carbureted engine, and know all (or most) of the engine’s workflow. There could still be bad gas in the system, or there could be a problem with the carburetor. But neither of those seemed likely. I tried starting it one more time, and the engine just wouldn’t kick in.

While thinking through it some more, I noticed that I had forgotten to reattach one of the spark plug caps. I reattached it and…still nothing.

But – the behavior (i.e. engine sound) was identical with and without the spark plug cap attached – which pretty much guarantees that the spark plugs weren’t firing. I took them out one more time, cleaned them, and this time, checked the gap. For some reason, my spares were gapped really narrow (hint – always check the gap, even on brand new spark plugs selected for your vehicle). I widened the gap to spec, put them in and…

Vroom!

I immediately grabbed my helmet and gloves and went for a spin, and the bike ran great. No stalls, backfires or stutters. It probably still needs some more air in the tires and an oil change, but it sounds and runs just like a Ducati should.

In the end, this was just another debugging and diagnostic problem – much like the problems I face almost every day. The key points to remember are:

  • Know the system, and think about the system. When software (or Ducatis) fails, think through the entire system to note where failures may be occurring (see the sketch after this list).
  • Observe what’s going on. Products (and engines) fail for a reason. Chances are that there are unnoticed clues to the behavior you are seeing, so remember that anything you see may be helpful.
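
To make the analogy explicit, here’s a toy Python sketch of that fault-isolation loop. The subsystems and check functions are hypothetical stand-ins (hard-coding the observations from the story above), not a real diagnostic tool – the point is that walking an explicit model of the system beats guessing at a single cause:

    # Toy fault isolation: walk an explicit model of the system and flag
    # the subsystem whose check fails. All checks are hypothetical stand-ins.

    def check_starter():
        return True   # starter cranks, so battery and starter are fine

    def check_fuel():
        return True   # fuel line is clear, fresh gas in the tank

    def check_spark():
        return False  # identical sound with/without a plug cap attached

    system_model = [
        ("battery/starter", check_starter),
        ("fuel delivery", check_fuel),
        ("ignition/spark", check_spark),
    ]

    for subsystem, check in system_model:
        status = "ok" if check() else "SUSPECT -- investigate here"
        print(f"{subsystem}: {status}")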

Test Responsibility

I apologize in advance for yet another exploration of what testers do. More and more, I feel that Brent is right, and Test is a 4 Letter Word, but I feel we (whatever we want to call ourselves) can advance through discussion of our roles and responsibilities.

A few weeks ago, I was talking to a colleague about team responsibilities. As an exercise, he was trying to come up with a two-word action that described what a discipline was responsible for – the action that you can count on. For development, we agreed quickly on ‘quality code’. There’s certainly more that a developer does, but given the two-word action requirement, I can live with our conclusion.

The interesting conversation occurred when we discussed test. My initial answer, ‘provide information’, is accurate – but not the right answer (for me, at least). I love and hate the notion of tester as information provider. We do generate data (and ideally actionable data) as a side effect of our testing, but that description makes it appear as if test has no power or responsibility for decision making – which I also find wrong. We are not gatekeepers of quality or safety nets, and we’re probably not going to block a release, but I think that testers need to do much more than passively provide information.

My colleague countered with a phrase from the opposite end of the spectrum. He proposed ‘sign off’ – that the responsibility of test was to ‘sign off’ on the product (and in order to sign off, we’d generate information, make decisions, etc.). As you can imagine, I didn’t like this description. I’m not against test weighing in on the sign-off decision (or any other decision), but I dislike the idea of sign-off being the primary responsibility. (Note – Catherine Powell has a nice article on the Decision Safety Net on her blog.)

I don’t have a great answer yet for the responsibility of test. I like the idea of the role of test as an accelerant of quality – most of what I do has the end result of improving the efficiency and effectiveness of test and development work. ‘Accelerate Quality’ almost works for me, but I can’t say it’s the two-word action that a tester should be responsible for. I’ll figure something out, but I’m open to ideas if you have them.

I don’t have a time machine, but I think one positive note from this thought exercise is that I don’t think many (experienced) testers would list the primary action of a tester as ‘write tests’ or ‘find bugs’. At least not too many…

New Testing Ideas

I was checking out test conference programs, and came across a list of talk titles I found intriguing (this is a sampling of titles from the conference). I’m curious to know how interesting and innovative you think this conference would be.

  • The Art and Science of Load Testing Internet Applications
  • Model-Based Testing for Data Centric Products
  • Successful Test Management: 8 Lessons Learned
  • Managing User Acceptance Tests in Large Projects
  • Architectures of Test Automation
  • Testing Rapidly Created Web Sites
  • Measuring Ad Hoc Testing
  • The Habits of Highly Effective Testers

There are definitely a few interesting topics above, and I’d probably attend all of the sessions if I went to the conference. Unfortunately, it’s too late to register, as the conference I pulled these titles from is nearly twelve years old.

I’ve complained in the past about the lack of new ideas in testing, but despite a massive amount of idea-regurgitation, I think software testing is edging up on a growth spurt. It takes a bit of “cooking” to come up with new ideas (good ideas, at least), and two of the biggest ingredients seem to be falling into place.

First off, ideas take time to germinate. Testers are beginning to stay in testing longer (you’ll still see half or more of conference attendees with a year or less of testing experience), but anecdotal information, along with my testing spidey-sense, tells me that we have more testers staying in test longer than we did a decade ago. With that experience comes the ability to draw on past work and to experiment long enough to bring new ideas to fruition.

Big ideas are often collisions between smaller ideas. Your little tool or approach may not be much on its own, and my idea for short-circuiting the fizzbazz is a novelty at best. But when we discover each other’s ideas, a new idea may emerge (use the fizzbazz within your approach to make magic happen). In order to enable these collisions of small ideas, we need people with ideas – but we also need a network to bring those ideas together. Networks are massively larger, and the degrees of separation far smaller, than they were a decade ago. I think testing is ready to foster idea collision on a massive scale.

I could go on, and on…and I will – but not now.

I’ll be giving a talk on Where (Testing) Ideas Come From as part of the EuroSTAR Virtual Software Testing Conference. Register for free (’cause I’m all about making money :}) and hear more of my crazy ideas.

Exploring Test Roles

I’m not quite sure why, but once again I’m writing about test roles. I don’t know of another job in the world where discussions like these are common. On the other hand, I don’t know of a job in the world where people are so passionate about what they do (and don’t do) as part of their role. I’ll chalk it up to the continued growth of the role and see if I can convince myself to finish this and post it before I stop myself.

Here’s the short version for those already bored with the topic. Roles that testers play on teams vary. They vary a lot. You can’t compare them. That’s ok, and (IMO) part of the growth of the role. I find it disturbing and detrimental when testers assume that their version of testing is “the way”, or that some roles (or the people in those roles) do not qualify as testing.

And now for the longer version… The recent “test is dead” meme (which, interestingly, won’t die) brought to light (in semi-dramatic fashion) that in some situations, some traditional “test” roles don’t exist anymore. It wasn’t originally phrased that way, but if you looked under the covers, that’s all that was there. I’m still surprised that so many people got stuck on the three-word catch phrase and couldn’t see the value in the statement. But if they had, I suppose I wouldn’t be spitting out this blog post.

Last year, I had a surprisingly popular post about My Job as a Tester. I’ve changed roles since then, and I’ve been thinking about an update, but for a variety of reasons, I’m just not ready yet. The biggest reason is that although I’ve been on the team for five months now, my role is still evolving. Once it settles into a bit of a groove (and as other factors resolve), I’m sure I’ll post a recap.

Recently, I’ve been working a lot on implementing pieces of test infrastructure for my team. Although I’m still heavily involved in testing strategy and test “stuff”, the goal of most of my current work is to enable good testing. Since I sometimes describe my role as an improver of tests, testers, and testing, I’m still on target with my own vision.

While reflecting recently on what I’ve been doing vs. what [anyone else] does as a tester (and catching up on reading), I pondered the fact that what I’m doing now isn’t really testing as “traditionally” defined (whatever that means). However, what I do makes testing better – but am I more of a “productivity engineer” than a tester? Brent Jensen has this description:

A tester’s job is to accelerate the achievement of shippable quality

By that definition, I suppose I’m right on the money. But I know there are people (who will likely tell me I’m damaging the craft, or that I’m mean to them) who don’t call what I do testing – I’m cool with that. I still like my job and my business card still says “Tester”.

By far, my favorite thing to do as a tester is design tests. I love the challenge of crafting a suite of tests that enables team members to make well-informed decisions about product quality (at least that’s the plan). Testers in this role may be part of a whole-team approach where they have a test/quality focus, but have shared team goals. Or, there may still be “devs” and “testers”, but the wall between the two is minimal, and everyone works together (most of the time, at least) to make sure the product achieves both shipping and quality goals. Brent’s definition works pretty well for this role too.

Design overlaps execution. The Think-Design-Execute loop is tight in good testing – this is true whether it’s entirely, partially, or non-automated (or inversely, entirely, partially, or non-manual).

Which leads me to two test roles that, while they definitely exist, are dead to me. But they’re not really dead – I know that. What they are is more…irrelevant. Given what I do, where I do it, and a smattering of other context indicators, two test roles are off of my radar.

The first is the test-automation-only role. I think the role of taking manual test scripts written by one person and then automating those steps is a bad practice. I know some people like to do this stuff, but I think it’s a waste of time. What you end up with are tests that either should have been automated in the first place, or tests that should not be automated. Fortunately, while I acknowledge that these roles exist, I’m happy to work in a corner of the world where they don’t.

For lack of a better term, I’ll call the final role “waterfall-tester” – even though I know this role exists at some (fr)agile shops as well. This is the when-I’m-done-writing-it-you-can-test-it role. Test outsourcing is the most common manifestation of this role, but it exists anywhere testers only provide value at the end of the cycle. I know fantastic testers who love this role, and I’ve been in this role myself in the distant past. Today, however, I don’t want to think about testing something where my contribution hasn’t been part of the end result from the earliest stages. Again, while I fully acknowledge that testers live in this role, I’m happy that it isn’t part of my testing world.

In the end, I’m not exactly sure what this means to anyone but me. As I’ve mentioned (and tweeted) before:

[Image: embedded tweet]

Which is to say that, in nearly a thousand words, I’ve (once again) told you nothing new.

So Long, Tester Center

Earlier today, Ron posted the following about the Microsoft Tester Center: So Long, Tester Center.

There’s nothing I can say that Ron didn’t say – it was a fun effort, just hard to sustain without any full-time support (and as our individual jobs got more complex and demanding).

The good news is that I (and many others) remain passionate about sharing some of the cool ideas and innovations coming out of Microsoft. The smartest testers in the world work here, and if you pay attention, you may just get some unique sneak peeks into some of the coolest approaches, tools, and ideas in testing you’ll ever see.

The Robots are Taking Over

Probably not news to most of you, but the local company that sells everything under the sun just bought a robotics company (Amazon Acquires Kiva Systems).

Normally, I don’t blog about news stories, but on the way to work this morning I heard an interesting discussion on a local talk radio station. It turns out that there are people up in arms over the purchase because using robots in the warehouse puts people out of jobs. To be clear, I don’t like people losing their jobs either – but bear with me for another paragraph.

The arguments continued with comments like, “The robots can’t do all of the warehouse work – sometimes you need to use your brain to find a misplaced item”, or, “You will still need people to verify with their eyes that the right items were selected”. Others countered with comments like, “The robots can work 24 hours a day”, and, “You’ll still need people to program the robots and give them directions”.

I laughed out loud in my car as I realized that these people calling into the radio station were having the same discussion many people have about test automation. I won’t rehash or expand, but I will summarize with two blurbs of less than 140 characters each:

[Image: two embedded tweets]

Since I didn’t use the word test in those tweets, I think my comments apply to Amazon’s (probable) warehouse changes as well.

I was happy to hear that a few callers recognized that robots (and automation) can handle left-brained, monotonous work – and do it pretty well. The future of work is (IMO) creative work**, and (again, IMO), automation (and robots) are what we need in order to find the time and insights to develop our creative thoughts into game-changing innovation.

And that sure beats doing the boring shit.

**Daniel Pink wrote a whole book on this concept:  A Whole New Mind: Why Right-Brainers Will Rule the Future

Oops, I Did it Again

Here’s a story I hear often. The names have been changed to prevent the guilty.

Jake had barely taken a sip of his steaming coffee when he saw that thirty-two of the automated tests had failed in last night’s test pass. “Crap, I’m slammed today”, thought Jake, “I don’t have time to look at thirty-blanking-two failures”. Without a second thought, Jake clicked the ‘re-run failures’ button on the web page that displayed results and turned his attention back to his coffee. After finishing his coffee and filling a second cup, Jake was happy to see that twenty-five of the failing tests now passed. “Must be a flaky environment”, thought Jake as he took a big swig of coffee and got to work investigating the seven remaining failures.

A few weeks later, Jake was sitting in a meeting to go over a few of the top live-site failures reported by customers and the operations folks. Ellen, the development manager, was walking through the issues and fixes, and throwing in a little lightweight root-cause analysis where appropriate. “These three”, she began, “caused a pretty bad customer experience. When we first looked at the errors, we figured it had to be an issue with the deployment environment, but we discovered that we could reproduce all of these in our internal test and development environments as well.” Jake’s stomach sank a bit as Ellen continued. “It turns out that although the functionality is basically broken, it will work some of the time. I guess our tests were just lucky.”

In some versions of this story, Jake steps up to the plate and takes responsibility. In other versions, he merely learns a lesson. In a few versions of the story, Jake calls the whole thing a fluke and goes through the same thing later in his career.

The point of this story is simple. Every test failure means something. The failure may mean a product failure. It may mean you have flaky tests. When you start to assume flaky tests or environments, you’re heading into the land of broken windows and product failures you could have found earlier (actually, you probably did – you just ignored them).

Great testers rely on trustworthy tests. The goal is that every failed test represents a product failure, and any tests that fall short of that goal should be investigated and fixed – or at the very least updated with diagnostic information that lets you make a quick, confident decision about the failure. Relying on test automation for any part of your testing is pointless if you don’t care about the results and investigate failed tests every time they fail.
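
One way to keep re-runs honest is to record them instead of letting a retry silently turn red into green. Here’s a minimal Python sketch of that policy – the runner and test names are hypothetical stand-ins, not Jake’s (or anyone’s) real system:

    # Minimal re-run policy: a test that fails, then passes on retry, is
    # recorded as FLAKY (investigate!) rather than silently marked PASSED.
    # run_test() and the test names below are hypothetical stand-ins.

    import random

    def run_test(name):
        # Stand-in for a real test runner; simulates intermittent failures.
        return random.random() > 0.3

    def triage(test_names, retries=1):
        results = {}
        for name in test_names:
            if run_test(name):
                results[name] = "PASSED"
                continue
            # Re-run, but keep the original failure on the record.
            passed_on_retry = any(run_test(name) for _ in range(retries))
            results[name] = "FLAKY -- investigate" if passed_on_retry else "FAILED"
        return results

    print(triage(["checkout_total", "login_redirect", "cart_merge"]))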

Yes, I know. Your situation is unique, and you have a business reason for ignoring failed tests. My first response when I hear this claim is that you’re probably wrong. Probably, but not definitely – either way, don’t let flaky tests get through your reality filter. Otherwise, you’ll be sitting in Jake’s shoes before you know it.