Just Fix It (mostly)

Chris McMahon’s latest post (Just Fix It) proposes that as far as bug tracking goes, the best course of action is to skip the “tracking” part of the workflow and “Just Fix It.” I’m a huge fan of this approach, and I think that for the most part, tracking a large number of bugs in a big fat bug system (and often overemphasizing the church of the bug curve) pretty much encourages a test-last / test-quality-in workflow.

I see this concept come up frequently, and I’ve noticed a bit of a trend. Teams that follow Just Fix It love it. Teams that prefer to fix bugs later are sure that the concept won’t work for their team, and that they need the curve and tracking data in order to ship their (undeniably unique) product. As a side note, one fun thing I’ve talked a few teams at Microsoft into doing is to pair on bug fixes – when Joe-Tester finds a bug, he Tells Kathy-Developer about the issue, and then the two of them pair-program the fix. I could write a whole post on why this is so cool, but I’ll leave it at that for now.

In short, I’m a huge fan of Just Fix It, but as usual, the totality of overlap of agreement between Chris and me is about 93%.

For example, let’s say on Monday, Alex-Tester notices that the froobnozzle isn’t working. He tells the developer, who fixes the problem immediately. Tuesday, Beth-Tester notices that the froobnozzle doesn’t work when interacting with another part of the system. She tells the developer, who fixes the problem right away. Over the following days and weeks, a lot of problems are discovered with the froobnozzle. The froobnozzle is, in fact, a piece of crap held together by the 1s and 0s version of spit and duct tape. A bug tracking system lets you see (I hate to say this out loud…) trends of where errors are. Bugs don’t appear randomly sprinkled throughout a product – they tend to congregate in clumps. Knowing where the clumps are can guide further testing, or risk decisions.

Source control almost mitigates this concern, but unless you have a diligent comment policy for check-ins, you probably won’t be able to differentiate between “I was adding new code or functionality” check-ins and “I was fixing shit I broke” check-ins.
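To make that concrete: with a check-in comment convention in place, separating the two is a one-liner. A minimal sketch, assuming a (hypothetical) team convention of prefixing bug-fix check-ins with “fix:” – the sample messages are invented:

```python
from collections import Counter

# Hypothetical check-in comments; assumes a team convention of
# prefixing bug-fix check-ins with "fix:".
commits = [
    "fix: froobnozzle crashes on empty input",
    "add export-to-csv feature",
    "fix: froobnozzle mishandles unicode",
    "refactor report generator",
    "fix: froobnozzle leaks handles",
]

# Tally check-ins by kind based on the comment prefix.
kinds = Counter("fix" if msg.startswith("fix:") else "other" for msg in commits)
print(kinds["fix"], "fix check-ins,", kinds["other"], "other check-ins")
```

Without the convention, every check-in looks the same and the trend data is gone – which is the whole point of being diligent about the comments.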

But I won’t go as far as to say you need a bug tracking system. As Chris describes it, and as most people use them, a bug system is really a work-item tracking system anyway. If you track work items on post-its or notecards, bugs should work the same way. As far as trends go, I think a simple tic-mark system would work just as well. When you discover a problem with the froobnozzle, write it on the board and put a tic-mark next to it. When a component gets n tics, schedule refactoring time, or do a design review (or both). Alternatively, look at the components with the highest number of tics during the retrospective or sprint planning and review those for potential re-work.
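The tic-mark idea really is just a tally per component with a threshold. A quick sketch, with made-up component names and a made-up value of n:

```python
from collections import Counter

# One tic per problem found; component names are hypothetical.
bug_tics = Counter()
for component in ["froobnozzle", "widget", "froobnozzle", "froobnozzle"]:
    bug_tics[component] += 1

THRESHOLD = 3  # the "n tics" that triggers refactoring time or a design review
needs_review = [name for name, tics in bug_tics.items() if tics >= THRESHOLD]
print(needs_review)
```

A whiteboard does exactly this, of course – the code just shows how little machinery the “trend” question actually needs.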

But other than that, yeah. Just Fix It!

Note: I would have posted this as a comment on Chris’s blog, but I, for one, find the comment forms on blogspot completely unusable.

4 Comments

  1. Posted December 5, 2011 at 11:19 am | Permalink

    I got interested in “fix & forget” vs. “track” a few years back when Lee Copeland asked me to write an article on it for Better Software, and I’ve done several conference sessions on the topic.

    One important thing you don’t mention, maybe you assume it, is that “fix and forget” *requires* that you write a unit test that reproduces the problem, then write the fix, then check in both the test and the fix. This way, you are improving the quality of froobnozzle.
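    A hedged illustration of that fix-with-a-test flow – the froobnozzle function and the bug it had are invented for the example:

    ```python
    import unittest

    def froobnozzle(text):
        # After the fix: returns "" instead of crashing on empty/None input.
        # (Function and bug are hypothetical.)
        if not text:
            return ""
        return text.strip()

    class FroobnozzleRegressionTest(unittest.TestCase):
        def test_empty_input(self):
            # Reproduces the reported bug; checked in alongside the fix.
            self.assertEqual(froobnozzle(""), "")
            self.assertEqual(froobnozzle(None), "")

    # Run the regression test programmatically.
    result = unittest.TextTestRunner().run(
        unittest.defaultTestLoader.loadTestsFromTestCase(FroobnozzleRegressionTest)
    )
    ```

    The test existing in the suite is what makes “forget” safe – the bug can’t silently come back.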

    I don’t know about other teams, but our team certainly notices when we get a lot of bugs in a particular part of the code, and we address it with refactoring or rewriting or whatever is appropriate.

  2. Posted December 5, 2011 at 11:38 am | Permalink

    Yes – of course you do that (unit test / review).

    I think that not all teams *notice* when a particular part of the code is particularly bad. I’m just suggesting that teams make some effort to track that (in what ever manner best suits the team).

    Of course, and as Chris mentioned on twitter, it’s best to just avoid spit and duct-tape code in the first place, and many of the practices of Agile do a nice job avoiding that problem in the first place.

  3. Michael Schnick
    Posted June 14, 2012 at 7:48 am | Permalink

    Hi Alan,

    thanks for your blog and sharing a view on testing from a different perspective.

    Here, at a small company, the same people are responsible for a couple of different software modules. Visiting them after finding a bug, and having them fix the issue pair-programming style, might be interesting for blocking issues. I’ll try that the next time I run into such a situation.

    However, when I am testing a new module and come across a couple of bugs, I write them down in separate tickets. Those tickets are created maybe every 5 or 10 minutes. Should I distract my co-worker 4 or 5 times every 10 minutes to fix things which they could easily fix when I’m done testing?

    That’s basically the “clumped bugs” situation, I guess.

    Are we doing it “wrong” here? What do you suggest?

    Thanks,
    Michael

    • Posted June 14, 2012 at 9:31 am | Permalink

      If it works, it’s not wrong.

      But there’s always the question of whether doing something else would work better or not – and to discover that, you’ll need to experiment. Every situation, scenario, product, and team is different. If you can find a nugget in my ramblings you want to try, by all means, go for it, but I can never know what’s right and wrong for your team – you’ll have to figure that out yourself.

