That Damn Gorilla

Last week I happened to log on to Twitter just as some test folks were marveling over the massive parallels between the video with the gorilla and the basketball players and software testing. I made some cracks about the absurdity of the hype, and more than a few testers freaked out.

Somewhere in the tweet insanity, I promised to blog about my opinions on the concept, so here goes.

First off – this isn’t a new gripe of mine. Here’s a tweet from August, 2011 (edited to fix my typo). To be clear, I think all three of the items below are valuable – yet all also seem to be frequently hyped beyond their value. The gorilla especially comes up among testers – many of whom overreact to the value of the video to software testing.

The point of the video is to show that it’s possible to miss something that’s right in front of your eyes.

IB makes you confront the illusion of attention.

That’s it.

Knowing about inattentional blindness doesn’t make you better at noticing things, and whether or not you see the gorilla says nothing about your observational skills. While I agree that it’s critical for testers to know that inattentional blindness exists, it’s a small nugget of information in a pretty big pool of stuff that actually helps testers. Knowledge of IB is relevant to any knowledge work – furthermore, I’d argue that there are plenty of professions where IB is much more critical to know about than in software testing.

For example:

  • Lifeguards
  • Judges
  • TSA Agents
  • Script Supervisor (those are the people in charge of ensuring continuity in movies / tv)
  • NASCAR drivers

And I’m hard pressed to think of a profession where knowledge of IB isn’t at least equal to that of testing. My garbage man (sanitation engineer) needs to make sure he doesn’t miss any cans, and ensure I’m not throwing away anything illegal. Given that garbage collection is much more repetitive than anything I do from day to day, I’d expect viewing the gorilla video to be standard training material for the guys in the big stinky truck (and for all I know, it is). The gorilla is fascinating – for everyone. There’s no special appeal to testing that I can see.

Of course, beyond the video (which helped the people behind it earn many awards), Christopher Chabris and Daniel Simons also wrote a book called The Invisible Gorilla, which discussed the video and a ton of other really cool stuff (stuff I find much more interesting than the gorilla video – especially when considered as a whole). I’ve read the book twice, skimmed it several other times, and met the authors briefly after attending a talk they gave a year or so ago.

Some other cool topics covered include:

  • The illusion of memory (what you think you remember clearly may not be accurate. At all).
  • The illusion of confidence (most people overrate their abilities – includes a great story about chess players and chess rankings)
  • The illusion of knowledge (you probably don’t know as much as you think you do)

It’s good stuff.

The Gorilla Stunt is good.

It’s just completely over-hyped in much of the testing community.


So stop hyping the damn gorilla video.

Working From Home – My two cents on remote work

Anyone who reads this blog has probably also read about, or heard of, the recent policy from Marissa Mayer at Yahoo recalling home-office based employees back to the office (Bing search here in case you’ve been under a rock). Most of the reactions I’ve seen correctly identify this as a management problem (or a worker-milking-the-system problem), and I agree with that assessment.

Of course, remote workers all over the internet are completely up in arms whether they work at Yahoo or not. Perhaps they’re afraid that their employer will follow suit (not likely), or they don’t want to be discovered as yet-another-wfh-milker (slightly more likely), or they feel like their choice of work style is being threatened by the publicity of this decision (most likely).

To be clear, I am completely supportive of working from home. I’m fortunate enough to have an employer and a history of managers who let me work from home – or remotely as needed. In fact, the recent news reminded me that after I worked remotely for two solid weeks in the summer of 2011, I wrote up some thoughts for my (then) manager. Given that there’s nothing confidential in that write up, I’m sharing it unedited below as fodder for discussion.


From: Alan Page
To: Ross
Date: August 3, 2011
Subject: Working the Swing Shift in France

This summer, I’m spending two weeks working from Toulouse, France. My family came here for a vacation (and to house sit for some friends of friends). I had planned to only stay for about 10 days, but due to a variety of circumstances, I decided to extend my trip and stay with my family for an additional two weeks. I asked Ross if I could work from France for a few weeks and he graciously allowed me to do so.

The logistics

The house we’re staying in has a reasonably fast internet connection as well as an office where the door shuts, so a reasonable workspace wasn’t a problem. I decided that I’d work Redmond hours (I start work between 4:00 and 6:00 pm and work until 2:00 or 3:00 am). There was no requirement that I align my work with Redmond time, but it allowed me to spend some time on day trips with my family before beginning the workday. I was also able to attend a fair number of meetings over Lync.

The experience

I’ve worked from home before and have never had a problem staying focused on work outside of the workplace (I probably learned to excel in this area while writing hwtsam on evenings and weekends). My family (fortunately) “gets” that I’m working even though I’m close by and leaves me alone to concentrate.

One highlight of the experience is that Lync has worked flawlessly. I’ve made several calls to Redmond, and attended several meetings. Audio and video have worked well, and it’s helped keep me connected much of the time.

I was reflecting on my first week of working remotely, and had a bit of an insight. I have been able to get a ton of work done – but to be fair and honest, it’s different work than I would have done had I been in Redmond. I don’t think this is necessarily a bad thing, since some of the things I’ve worked on (e.g. writing up thoughts and experimenting with fault injection, figuring out how Lync should approach model-based testing, writing up debugging tutorials, or polishing up a thinkweek paper) are the right work for me to do, but it seems that working remotely changes priorities slightly. By this, I mean that a big part of my typical role involves interacting with people on the team in a somewhat random pattern – e.g. answering questions in the hallway, discussing topics of the day over lunch, or following up with people 1:1 after meetings. Not all Microsoft roles involve this sort of interaction, but it seems difficult to interact in this manner remotely.

One example I thought of that reflects the above is this scenario: Say Josh and Bob are talking in Josh’s office. I overhear the conversation and have some relevant (and valuable!) thoughts, so I get up, poke my head in and join the conversation. That scenario doesn’t happen if I’m not there.

An interesting Lync feature based on this would work like this. Say Bob and Josh are having an IM conversation. If Lync noticed that I was on both of their contact lists, and they mentioned a keyword that shows up in my “interests”, Lync would ask them if they wanted to add me to the conversation.

I think the casual interaction limitation is more of a cultural problem than something inherent to working remotely (and something that may solve itself if I was away for a longer period). My thought is that in general, people on our team don’t send IM’s casually – e.g. “Hey – I was just talking to Dan about test automation, and wondered if you had thoughts on how to do data driven testing”, or “Do you have any quick thoughts on blah?”. The questions I normally hear in the hallway, or from someone sticking their head in my office haven’t occurred over IM (or telephone for that matter).

Another example is the value (in our culture) of the in-person follow up. For example, I sent an email to a few peers on the team last week – I was expecting it to turn into a discussion, but only received two short replies (I replied to both, and then the conversation ended). If I were in Redmond, I probably would have had an additional casual conversation with a few of the recipients and attempted to clear up any ambiguity or answer any questions that were blocking an engaged conversation on the topic. This is difficult to do remotely (although it’s certainly possible to do all of this over IM, it takes some getting used to).


Now – since I wrote that email, I’ve changed teams, and the world has warmed up more to social media. Lots of Microsoftees use Yammer, and IM usage is on the rise. I (and others) still don’t think we (as a company, and as an industry) are doing enough to support and encourage remote work, and I’m discouraged a bit that Yahoo has seemed to take us a step or two backwards.


See Me…Hear Me

Although I really enjoy talking about testing, I’m (purposely) speaking a lot less these days. I have a day job that I love, and I like hanging out in the rainy Pacific Northwest. As of now (and I think a plan change is a long shot), I’m travelling to exactly one conference in CY13 (StarWest in Fall of 2013 – more on that later…but it will be epic).

However – if you are also in the Pacific Northwest, and want to hang out and talk about test innovation, I’ll be hanging out with the folks at qasig on the evening of March 13. I’ll be delivering an updated version of my EuroStar keynote on test innovation, and I expect it will be a fun night.

Beyond that (and depending on interest), I may do another free webinar soon (it’s been at least a year since the last one), but that will be about it for me talking about testing in person.

More here later.

-AW

Can you get me a repro?

“Hey – can you set up a repro of that bug for me?”

As a tester, how many times have you heard this phrase? How many times have you walked through the steps you outlined in the bug report so someone could look at an error for you? Or – how many times have you seen a test error, and immediately re-run the test to see if you could reproduce the error yourself?

Is it a big number? If it is, you’re not going to like what I have to say. If you need to reproduce all of your bugs to figure out what’s going on, you screwed up. Your logging is bad. Your diagnostics don’t exist. You wrote crummy code. You’re wasting time!

I sometimes see code like this:
int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

The next time the test is run, Joe Tester (assuming some sort of automation around the automation) gets an email saying his test failed. Of course, Joe doesn’t have a freakin’ clue why his test failed. It could have failed for either case above (assuming those are the only two failure points), so he needs to run it – perhaps under a debugger to see what happened.

Joe writes horrible test code.

But – Joe wants to improve, so he writes this:

int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
{
    return -2;
}

Now Joe has different return values, but he still writes crappy code. At least now he knows why his test failed; but he doesn’t know why the product failed.

So, I fired Joe, and hired Sally. Sally wrote this instead.

LOG("Calling CreateSomethingCool with object = %s", object.ToString());
int status = CreateSomethingCool(object);
VERIFY_COOLSUCCESS(status, "CreateSomethingCool Failed. status=%d, objectData=%s", status, DumpObject(object));

LOG("Calling MakeItWayCool with object = %s", object.ToString());
status = MakeItWayCool(object);
VERIFY_COOLSUCCESS(status, "MakeItWayCool Failed. status=%d, objectData=%s", status, DumpObject(object));

To be fair, Sally’s version was a little easier to read, and contained some comments (or it would if Sally were a real person), but when her test failed, instead of needing to set up a repro, the automation system sent her this email.

Test Failed

CoolTest failed with a verify failure. Log follows:
Calling CreateSomethingCool with object = LittleRedCorvette
CreateSomethingCool Failed. status=5, objectData=
    Name=LittleRedCorvette
    Size=0x100
    Active=false
    Running=false
    LastError=5

Even here, there may not be enough information, but chances are that if Sally (or her teammates) can say, “CoolTest is failing with object LittleRedCorvette, and it’s likely because the object was inactive and not running, and the error code was 5”, someone familiar with the code would know exactly where to look.

And – in the case where Sally (or a teammate) has to hook up a debugger anyway, they should add additional debug information to the log file for the next time a similar error happens. Setting up a repro wastes time. Doing it for every issue you find is irresponsible and wasteful. Be a professional, and stop setting up repros, and start writing tests (and code) that make your job easier and make your team better.
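Sally’s pattern translates to any language. Below is a minimal sketch of the same idea in Python – every name here (`create_something_cool`, `COOL_SUCCESS`, the error code 5, the object fields) is made up for illustration, not a real API. The point is the shape: log every call with its inputs, and make a failed check capture enough context that the log alone explains the failure.

```python
# Illustrative sketch of diagnosable test code: log each call with its
# inputs, and record full failure context before failing the test.
COOL_SUCCESS = 0

log_lines = []

def log(msg):
    # the automation system would mail these lines on failure
    log_lines.append(msg)

def verify_cool_success(status, context):
    if status != COOL_SUCCESS:
        # capture the failure context in the log, then fail the test
        log(f"{context} status={status}")
        raise AssertionError(f"{context} status={status}")

def create_something_cool(obj):
    # stand-in for the product API; fails with error 5 for inactive objects
    return 5 if not obj.get("active") else COOL_SUCCESS

def run_cool_test(obj):
    log(f"Calling create_something_cool with object = {obj['name']}")
    status = create_something_cool(obj)
    verify_cool_success(status, f"create_something_cool failed. object={obj}")

obj = {"name": "LittleRedCorvette", "active": False}
try:
    run_cool_test(obj)
except AssertionError:
    pass

print("\n".join(log_lines))
```

When this test fails, the log already says which call failed, with what status, on what data – nobody has to set up a repro to learn that much.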

Yet another future of testing post (YAFOTP)

I was talking with a colleague of mine this morning about his role and what it meant, and I made a mental note to blog about some of my ideas. Given that my ability to remember anything peaks at about a day, I thought I better write it down now. I predict there will be a lot of holes here – I’ll fill those in later or in the comments – but here goes.

Regardless of how you feel about the health of software testing (which depends largely on your ability to interpret a metaphor), for me, it’s getting easier and easier to see that testing is changing. Granted, it’s changing along with the software under test, so if you’re testing the same sort of desktop software products you have been for years, using development practices even older, the good news is that you’re safe – your testing world probably isn’t going to change either.

The rest of us are working on software that releases quickly and often, using development practices that support that cadence. In our world, it just doesn’t make sense for a test team to invest a bunch of time in functional testing. It’s cheaper and more efficient to have the programmers who write the code also write the tests that verify unit and functional correctness. This eliminates unnecessary back and forth, and forces programmers to write more testable (and often simpler) code from the beginning, resulting in code that is easier to maintain and extend.

Of course, you don’t need to tell me that it’s a long leap from functional correctness to usable software. Given a beer or two, I’ll give you names of products I’ve worked on that have been near functionally perfect, yet near failures in the market. This gap is where my future of test lives. Another colleague (one with no blog) says, “We can define what programmers do, and what program managers do fairly easily. Testers do the remainder.” This statement remains correct – even if “the remainder” is a moving target.

One big role falling into the remainder is that of data analysis / data science / data interpretation / whatever you want to call the analysis of the customer data rolling in. As products move more and more into the cloud, there are more and more opportunities to run tests and analysis in production and get data in near real time. I honestly think that the ability to provide actionable product insights from terabytes or more of data is the key to a six-figure-plus paycheck for decades to come. Some testers will fit naturally into this role – but I have a hunch we’ll find more people in this role with backgrounds in Mathematics or Statistics than in Computer Science.

When you think about “the remainder”, there’s another big hole. I think we’ll always need people to look at big end-to-end scenarios and determine how non-functional attributes (e.g. performance, privacy, usability, reliability, etc.) contribute to the user experience. Some of this evaluation will come from manually walking through scenarios, but there will be plenty of need for programmatic measurement and analysis as well (e.g. is there real value in manual performance tests, or manual stress tests?). I don’t know if there’d be more or less specialization than there is today, and don’t know if it matters…but it may.

There may be other new roles, while some roles abundant today may go away – although not immediately, as I still see several openings for “Functional Test Engineer” on a popular job site. Short story is that I’m cool with this future. Others may not be, and that’s ok. I’m just happy to ride the wave.

Debugging For Testers

I came across a few comments and statements recently that set off a slightly red (orange?) flag for me. The gist was that debugging was for coders, and that testers “just found the issues”. I get the context (or at least I think I get it) – I agree that all testers don’t have to be coders, and that all testers certainly don’t have to be debugging masters, but I don’t think that any testers should run away from a debugger in fear either.

In fact, I’d say there’s a minimum amount of information that every tester should know about debuggers, regardless of whether they are monster-rock-star programmers, or part-time IT application validators. I’ll start a list here, and likely add to it when I’m annoyed.

So here goes:

Things Every Tester Should Know About Debuggers

  • Call Stacks – A “call stack” is a list of functions that led up to the application crash. Different debuggers display this differently, but it will look something like this:
    FunctionThatCrashed
    Function_Three
    Function_Two
    Function_One

    Read this from bottom to top – Function_One called Function_Two, which called Function_Three, which called FunctionThatCrashed (which crashed).

    Typically you’ll see parameters after the functions, and those will probably give you a clue. If you’re working with developers, and you see a crash in the debugger, the call stack is probably the most important contextual information you, as a tester, can provide.

  • Crash vs. Assert – When a program breaks into the debugger (meaning the program stops executing, and the debugger springs into action), the program may have crashed (e.g. attempted to read/write to/from invalid memory), or it could have broken at a developer-induced breakpoint (often in a macro called an ‘assert’). On x86 / amd64 platforms this shows up as an int 3 or int 2C in the debugger. The important thing to note in the case of the assert type of error is that the reason for the break is obvious – or at least obvious if you can view the code. In practice, asserts often look like this:
      bool bSuccess = CreateUserAccount();
      // break into the debugger if the call to CUA fails so we can debug
      ASSERT(bSuccess == true);

When a tester sees this break, instead of saying, “The program crashed during logon”, they can say, “I’m hitting an assert because CreateUserAccount is failing on today’s build”. While both statements have value, the latter statement is much more informative, and in many cases actually actionable.

  • Basic Lingo – Know a little about a few basic types of crashes – e.g. Access Violation (AV), Stack Overflow, or Buffer Overrun, and know a little about what causes these errors (e.g. reading from a bad memory address, calling too many functions – usually via recursion, or putting too long of a string into a buffer). You don’t have to be an expert, but a little knowledge here goes a long, long way in diagnosing bugs and getting them fixed.
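To make the call-stack idea above concrete, here’s a small sketch in Python (which shows the same chain in a traceback, just printed top-down instead of the bottom-up view most native debuggers use). The function names mirror the example above; the “crash” is simulated with an exception:

```python
import sys
import traceback

# function_one calls function_two, which calls function_three, which calls
# the function that fails -- the same chain as the call stack shown above.

def function_that_crashes():
    raise RuntimeError("simulated crash")

def function_three():
    function_that_crashes()

def function_two():
    function_three()

def function_one():
    function_two()

try:
    function_one()
except RuntimeError:
    # extract the chain of function names from the traceback; a native
    # debugger would display this same information bottom-to-top
    tb = sys.exc_info()[2]
    stack = [frame.name for frame in traceback.extract_tb(tb)]

print(" -> ".join(stack))
# prints: <module> -> function_one -> function_two -> function_three -> function_that_crashes
```

Reporting that chain (“it crashed in function_that_crashes, called from function_three…”) is exactly the context a developer needs.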

Off the top of my head, that seems like enough, but not that much. There’s certainly more to learn, but I think knowing just this much should be a minimum for any professional tester.
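One of those failure classes is easy to demonstrate safely: runaway recursion. Natively, recursing too deep overflows the stack and crashes; Python guards its own call stack and raises a catchable `RecursionError` instead, so this is a toy analogy rather than how a native stack overflow actually presents:

```python
# Unbounded recursion eventually exhausts the call stack. In native code
# this is a stack overflow crash; Python raises RecursionError instead.

def recurse_forever(depth=0):
    return recurse_forever(depth + 1)

try:
    recurse_forever()
except RecursionError:
    crash_kind = "stack overflow (RecursionError)"

print(crash_kind)  # prints: stack overflow (RecursionError)
```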

More ideas – add ’em to the comments.

Training and Practice

I’m not sure if this is a musician’s perspective on learning or something more universal, but it’s something I think about often. When learning, there’s value in training, lessons, reading, and other learning opportunities, but there’s much more value (IMO) in practice – though I’m not sure that’s how everyone sees it (or that I’m even right, for that matter).

One common example from my life is the mediocre musician who decides to get better by taking lessons from a master. This isn’t a bad choice by itself, but in many cases, months go by, and while they may make some minor improvements, there are no dramatic (or often, noticeable) improvements. Meanwhile, another mediocre musician makes huge advancements without a single lesson. The difference is that the latter musician invested in deliberate practice, while the former assumed the lessons were enough. The joke about the way to get to Carnegie Hall applies well here.

The same thing applies outside of music as well. Taking a skating lesson will help you with basics, but you need to practice to improve those techniques. You need to learn what works for you, and how motions feel, and need to get confidence in your abilities until you hit a point where additional lessons or training may help you make another leap forward.

Of course, this also applies to software engineering practices. Whether you’ve taken a class, or read a book (or even a Wikipedia article), you don’t really know the practice until you’ve tried it yourself (you certainly wouldn’t give a skating lesson or write an article on learning how to skate after an introductory lesson). For anyone interested in improving or learning, you are responsible for finding approaches you haven’t used before (not just those you haven’t read about before), and using deliberate practice to discover the details about what works, and does not work for your context.

I’m not saying practice alone is enough, but you need a balance (note, “Balance” does not necessarily mean 50-50) between deliberate practice and increasing your capacity to learn. One way to increase your capacity is to take a lesson or a training course. As I mentioned, you can also read about something new. For musicians, playing with other musicians is a fantastic way to learn and increase capacity – this is why I recommend pairing and coaching in software engineering so much – it’s a great way for all parties to learn and increase their ability to take advantage of deliberate practice.

New Year, New Look

I’ve been playing with the Thematic Framework for WordPress a bit over the last few months. I like it because it’s simple, easily extendable, and massively tweakable. I’m not much of a designer, but I was having fun making my own child theme for Thematic when I came across the Child’s Play theme from scottnix.com. I liked it so much that I threw away my work and applied Child’s Play to angryweasel.com. I tweaked the hell out of my last theme (and I’m sure I’ll tweak this one…eventually) – but what you see now (unless you’re reading my RSS feed) is completely stock. Not much to it, but it’s exactly what I was looking for.

Posts related to software forthcoming…HNY.

2012 Recap

As 2012 comes to a close, I thought I’d recap the year. I think this little exercise is mostly for me, but I suppose new blog readers (or people with too much free time) may find some points of interest.

Some more from the non-Microsoft front follows.

My most popular blog posts from the last year are (were):

As I look at those topics, something tells me I should write about testing more, and skip non-testing topics (like year-end recaps, for example).

I spent most of the year close to home, hanging out with my family, and focusing on my job. I’ve been on the Xbox team for just over a year now, and I’m still having a great time. It’s a fantastic place to work, and I’m surrounded by more brilliant hard-working and passionate people than you could imagine. I will see if I can share some more stories and insights about the team in 2013.

I was visiting my parents this summer when my mother was diagnosed with cancer. I had committed to three speaking engagements in the fall, and in between those, work, and heading to my parents’ house every weekend to help out, late summer and fall were exhausting.

I was honored to be invited to be part of the STANZ tour of New Zealand and Australia in September, put on by SoftEd. I can’t say enough good things about the SoftEd staff – they took care of every detail before, during, and after the event. It was a fantastic experience, and I hope I can get myself invited again some day. I had planned to take an extra few weeks off to travel after the conference, but given my mom’s health, I headed back early.

In October I gave a talk (that I thought went quite well) at Intel in Oregon. It was my second time speaking at Intel (I spoke in Israel last year), and I’d speak there again any time.

My next speaking event wasn’t planned. I was at my mom’s side when she passed later in October, and I gave the eulogy at her funeral a few days later.

A week later, I was on a plane to Amsterdam to give the opening keynote at EuroStar (slides are in my previous post). The staff and program committee at EuroStar were well-organized, and the conference ran (from my view, at least) quite smoothly. In the end, I’m sorry to say that I delivered a flat (by my standards) presentation. I was thrilled that some people said they liked it, but I wasn’t happy with my delivery. Given everything going on at home, I flew home before the conference ended and began trying to get back into a more normal routine.

The “normal routine” for me these days is work, family, and when I can find time, a bit of Xbox gaming (gamertag: A Weezil). I may start blogging regularly again in the new year – or I may not – I don’t know yet. I’ve committed to one conference next fall, and may do one other one. And that’s about all I’ve figured out.

I’m thankful for all of the new people I’ve met this year (I met too many testers at conferences this year to list their names), and for people I got to know better. For the first time in years, I have a bit of excitement about where testing is going (although some readers may not like where I think testing is going…). If you’ve read this far, I wish you luck in 2013 (and for the rest of 2012). I hope to cross paths with many more of you in the months to come.