Can you get me a repro?

“Hey – can you set up a repro of that bug for me?”

As a tester, how many times have you heard this phrase? How many times have you walked through the steps you outlined in the bug report so someone could look at an error for you? Or – how many times have you seen a test error, and immediately re-run the test to see if you could reproduce the error yourself?

Is it a big number? If it is, you’re not going to like what I have to say. If you need to reproduce all of your bugs to figure out what’s going on, you screwed up. Your logging is bad. Your diagnostics don’t exist. You wrote crummy code. You’re wasting time!

I sometimes see code like this:
int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

The next time the test is run, Joe Tester (assuming some sort of automation around the automation) gets an email saying his test failed. Of course, Joe doesn’t have a freakin’ clue why his test failed. It could have failed at either check above (assuming those are the only two failure points), so he needs to run it again – perhaps under a debugger – to see what happened.

Joe writes horrible test code.

But – Joe wants to improve, so he writes this:

int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
{
    return -1;
}

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
{
    return -2;
}

Now Joe has different return values, but he still writes crappy code. At least now he knows which call made his test fail, but he still doesn’t know why the product failed.

So, I fired Joe, and hired Sally. Sally wrote this instead.

LOG("Calling CreateSomethingCool with object = %s", object.ToString());
int status = CreateSomethingCool(object);

VERIFY_COOLSUCCESS(status, "CreateSomethingCool Failed. status=%d, objectData=%s", status, DumpObject(object));

LOG("Calling MakeItWayCool with object = %s", object.ToString());
status = MakeItWayCool(object);

VERIFY_COOLSUCCESS(status, "MakeItWayCool Failed. status=%d, objectData=%s", status, DumpObject(object));
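
If you’re wondering what LOG and VERIFY_COOLSUCCESS might look like, here’s a minimal sketch – these macros are an illustration, not a real framework:

// Sketch only: COOL_SUCCESS and both macros are illustrations, not the real
// test framework behind this post.
#include <cstdio>

#define COOL_SUCCESS 0

// LOG: printf-style message to the test log (stderr here, for simplicity).
// (The ", ##__VA_ARGS__" form is a common compiler extension.)
#define LOG(fmt, ...) std::fprintf(stderr, fmt "\n", ##__VA_ARGS__)

// VERIFY_COOLSUCCESS: if status is not COOL_SUCCESS, log a formatted failure
// message and bail out of the enclosing test function with that status.
#define VERIFY_COOLSUCCESS(status, fmt, ...)   \
    do {                                       \
        if ((status) != COOL_SUCCESS) {        \
            LOG(fmt, ##__VA_ARGS__);           \
            return (status);                   \
        }                                      \
    } while (0)

The macros aren’t the point; the point is that every failure path writes enough context to the log that nobody has to ask for a repro.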

To be fair, Sally’s version was a little easier to read, and contained some comments (or it would, if Sally were a real person) – but when her test failed, instead of her needing to set up a repro, the automation system sent her this mail.

Test Failed

CoolTest failed with a verify failure. Log follows:
Calling CreateSomethingCool with object = LittleRedCorvette
CreateSomethingCool Failed. status=5, objectData=
    Name=LittleRedCorvette
    Size=0x100
    Active=false
    Running=false
    LastError=5

Even here, there may not be enough information, but chances are that if Sally (or her teammates) know that CoolTest is failing with object LittleRedCorvette, that the object was inactive and not running, and that the error code was 5, then someone familiar with the code will know exactly where to look.
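
For what it’s worth, the object summary in that mail is just a DumpObject helper flattening the interesting fields into a string. A hypothetical sketch – the CoolObject fields are guessed from the log above:

#include <sstream>
#include <string>

// Hypothetical object type; the fields are guessed from the log output above.
struct CoolObject {
    std::string name;
    int         size      = 0;
    bool        active    = false;
    bool        running   = false;
    int         lastError = 0;
};

// DumpObject flattens the state a debugger session would otherwise have to
// dig out, so the failure mail carries it automatically.
std::string DumpObject(const CoolObject& obj) {
    std::ostringstream out;
    out << "\n    Name=" << obj.name
        << "\n    Size=0x" << std::hex << obj.size << std::dec
        << "\n    Active=" << (obj.active ? "true" : "false")
        << "\n    Running=" << (obj.running ? "true" : "false")
        << "\n    LastError=" << obj.lastError;
    return out.str();
}

(With a printf-style LOG you’d pass DumpObject(object).c_str() for the %s, but you get the idea.)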

And – in the case where Sally (or a teammate) has to hook up a debugger anyway, they should add additional debug information to the log file for the next time a similar error happens. Setting up a repro wastes time. Doing it for every issue you find is irresponsible and wasteful. Be a professional: stop setting up repros, and start writing tests (and code) that make your job easier and your team better.
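
As an entirely made-up example of that habit: suppose a debugger session showed the failure only happens when the object comes in with a stale error code. The next revision of the test logs that state before the call, so the following failure mail answers the question on its own:

// Hypothetical follow-up to Sally's test: log the state the debugger session
// had to dig out, so the next failure mail already contains it.
LOG("About to call MakeItWayCool; lastError going in = %d, object = %s",
    object.lastError, DumpObject(object).c_str());
status = MakeItWayCool(object);
VERIFY_COOLSUCCESS(status, "MakeItWayCool Failed. status=%d, objectData=%s",
    status, DumpObject(object).c_str());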

