The broken bullet anti-pattern

I’ve been meaning to write about this particular anti-pattern for a while now, as I think it contributes far too much to the lack of progress in advancing software testing. Up until five minutes ago, I called this the anti-silver-bullet theory, in reference to Fred Brooks’s 1986 paper, “No Silver Bullet,” as much as to the silver-bullet idiom in general. “Silver bullets” are solutions that are extremely (or completely) effective for a given situation (like killing werewolves). In software, there are no silver bullets: no practice, tool, language, approach, or technique will solve all of your problems for you (and if a tool vendor tells you differently, don’t believe them!).

The broken bullet is the silver bullet in reverse. In the broken bullet anti-pattern, people dismiss ideas, approaches, and tools simply because they are not silver bullets. I caught myself applying the broken bullet a few weeks ago in a discussion about GUI automation. I don’t like most GUI automation because it’s fragile and rarely returns enough value to justify the investment and ongoing maintenance, but I made the mistake of dismissing GUI automation as a solution just because I knew it didn’t work everywhere and was easy to get wrong, even though it could have worked well in this particular situation. Fortunately, I caught myself before I made too much of a fool of myself.
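To make that fragility concrete, here’s a minimal, entirely hypothetical sketch using Selenium’s Python bindings (the URL and element locators are invented for illustration). The same click can be automated against a brittle positional locator or against a stable identifier, and that difference is most of the maintenance story:

```python
# Hypothetical sketch: why GUI automation is easy to get wrong.
# Selenium Python bindings; the URL and element IDs are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder page

# Fragile: an absolute XPath tied to today's markup. Any layout
# change (a new wrapper div, a reordered sibling) breaks the test.
submit = driver.find_element(
    By.XPATH, "/html/body/div[2]/div/form/div[3]/button[1]"
)

# More robust: a stable, purpose-built identifier that names the
# element's role rather than its position in the page structure.
submit = driver.find_element(By.ID, "login-submit")

submit.click()
driver.quit()
```

Neither version is a silver bullet, of course, but the second one fails far less often for reasons that have nothing to do with the product.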

Unfortunately, many others seem to embrace this anti-pattern regularly. The conversations usually go something like this:

Tester: hey everyone, I’m checking out floober as a test approach – seems like it will help me

Broken Bullet: don’t waste your time – floober is mostly a myth and doesn’t work unless you use your brain. Here, I wrote a paper…

Tester: thanks for the help – I’ll go back to what I was doing before

Broken Bullet: no problem – glad to keep you on the right track

Sometimes it’s more proactive. I can’t go a week without seeing an article or blog post saying, “Don’t do X – here are all the ways it can go poorly for you and ruin your product / team / company / life. Stay away, and don’t even think about X.”

The problem is that X (and floober, for that matter) do work when used carefully, and may be good solutions for some teams (and likely great solutions for others), but they will probably never get the attention they deserve because of broken bullets.

My call to action (if you care) is this: if you believe in No Silver Bullets (that there are no magic solutions to your software challenges), then you should also believe that there are no (ok, few) universally bad practices. Some practices are indeed much easier to get wrong, but that should only scare you, not stop you.

And the next time you see someone dismiss something because it can fail, tell them to take their broken bullets and leave you alone.


Comments

  1. Hi Alan,

    I’d like to add to this by pointing out another fallacy in the broken bullet approach.

    Failure is not an absolute category. A short-term “failure” may lead to long-term success, and vice versa. Moreover, a “failure” might be counted as a “success” depending on the values and point of view of the person, organization, or society doing the judging.

    Take automation as an example:

    * Maintenance failure. In fact, this is a failure to design and implement the automation so that it is maintainable (one common remedy is sketched after this comment).
    * Fragility. A failure to make the automation robust.
    * Cognition failure. A mix of test design failure and a failure to make the automation powerful enough to support the test design.

    Thank you,
    Albert Gareev
    http://automation-beyond.com/
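Albert’s maintainability point lends itself to a quick illustration. Here’s a hypothetical page-object sketch (again using Selenium’s Python bindings; the class name, URL, and locators are all invented) showing one common way to design automation for maintainability: locators live in one place, so a markup change means one edit instead of a broken test suite.

```python
# Hypothetical page-object sketch (Selenium Python bindings; the
# class, URL, and locators are invented). Centralizing locators is
# one way to address the maintenance and fragility failures above.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Wraps the login screen; tests call methods, not markup."""

    URL = "https://example.com/login"   # placeholder
    USERNAME = (By.ID, "username")      # all locators live here
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-submit")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# A test written against the page object survives locator churn.
driver = webdriver.Chrome()
LoginPage(driver).open().log_in("alice", "not-a-real-password")
driver.quit()
```

This isn’t the only remedy, and it won’t rescue automation that shouldn’t exist in the first place, but it directly answers the “failure to design for maintainability” diagnosis.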
