LOL – UR AUTOMASHUN SUCKZ!

There’s a pretty good presentation from the folks at Electric Cloud making the rounds on “why your automation isn’t” (and other variations on that title). The premise is that testers spend too much time babysitting (supposedly) automated tests, and end up doing a lot of manual work just to keep their automation running. I know nothing about their products, but they have some reasonably good ideas and apparently some tooling to help.

But it’s not enough. Crappy automation and crappy automation systems are an insanely huge problem. But we tend to ignore it because – hey – we’re running lots of automation, and that must be good. To be clear, I have nothing against test automation (although I’m leery of a lot of GUI automation). Done well, it absolutely aids any testing effort. The other 99% of the time, I bet it’s actually slowing you (and your team) down. Please, please, get it through your head that your job as a tester is to test software, not to create the largest test suite known to humankind.

…and here are some ideas to help you get to automation that doesn’t suck.

The obvious place to start is with your code. Do you treat your test code like production code? Do you do code reviews? Do you run static analysis tools? Do you step through the tests with the debugger to ensure they are doing what you think they are? Do you trust the results from your tests – i.e. if a test fails, are you confident that there’s a product bug? If you’re spending hours every day grepping through test results and system logs to try to figure out what happened, your automation sucks.
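
To make the “trust your results” point concrete, here’s a tiny Python sketch (the function under test and every name in it are made up for illustration). The idea is that a failing test should hand you the context you need right in the report, so a red result means “probable product bug,” not “go grep the logs for an hour.”

    # Minimal sketch: the test carries its own diagnostics, so a failure
    # is readable straight from the test report.
    import unittest


    def parse_config(text):
        """Toy stand-in for real product code: parses one 'key = value' line."""
        key, _, value = text.partition("=")
        return {key.strip(): value.strip()}


    class ParseConfigTests(unittest.TestCase):
        def test_parses_key_value_pair(self):
            result = parse_config("retries = 3")
            # Assert with enough context that nobody has to re-run the test
            # under a debugger just to find out what the inputs were.
            self.assertEqual(
                result,
                {"retries": "3"},
                msg=f"Unexpected parse result {result!r} for input 'retries = 3'",
            )


    if __name__ == "__main__":
        unittest.main()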

Now think about how your tests are run. Are they automatically built for you every day (or more frequently), and distributed to a test bed of waiting (physical and virtual) machines? Or do you walk around a lab full of computers manually typing in command lines? Or do you just run all of your automation on the spare machine in your office, then upload your results to some spreadsheet on a share where nobody can ever find it? In other words, do your tests execute automatically…or do they suck?
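
For what it’s worth, here’s a rough sketch of what “tests execute automatically” looks like at its simplest: a small Python dispatcher that fans suites out to a pool of lab machines instead of someone walking around typing command lines. Every name below (hosts, suites, the run_tests command, the results path) is a hypothetical placeholder, not a real tool.

    # Rough sketch of a test dispatcher: pair suites with lab machines and
    # run them in parallel, collecting exit codes for the report.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    TEST_MACHINES = ["lab-pc-01", "lab-pc-02", "lab-vm-03"]   # hypothetical test bed
    SUITES = ["smoke", "ui", "perf"]                          # hypothetical suite names


    def run_suite(host, suite):
        """Launch one suite on one machine and capture its result."""
        cmd = ["ssh", host, f"run_tests --suite {suite} --results /shared/results/{suite}"]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return host, suite, proc.returncode


    def dispatch():
        # Round-robin the suites across the machines and run them in parallel.
        jobs = [(TEST_MACHINES[i % len(TEST_MACHINES)], s) for i, s in enumerate(SUITES)]
        with ThreadPoolExecutor(max_workers=len(TEST_MACHINES)) as pool:
            results = list(pool.map(lambda job: run_suite(*job), jobs))
        for host, suite, code in results:
            print(f"{suite} on {host}: {'PASS' if code == 0 else 'FAIL'}")


    if __name__ == "__main__":
        dispatch()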

What about failures? Do you look at your automation failures in the morning, gather up some supporting data, then enter a bug? Or are bugs entered automatically – complete with logs, call stacks, screen shots, trace information, and anything else relevant? When a bug is resolved as fixed, do you go run the failing test manually to verify the fix – or does your automation system take care of manual (and mundane) tasks like this for you? How about reporting – how do you generate the all-important “Test Result Report”? Does your automation system take care of it, or is it a largely manual task?
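
Here’s one way that could look, as a hedged sketch: when a test fails, bundle up the call stack, log, and screenshot and file the bug through your tracker’s REST API. The endpoint, payload fields, and file paths below are all invented for illustration; the point is that the mundane data gathering happens in code, not in your morning routine.

    # Sketch of automatic failure triage: gather diagnostics and file a bug
    # through a (hypothetical) bug tracker REST API.
    import json
    import traceback
    import urllib.request

    BUG_TRACKER_URL = "https://bugs.example.com/api/issues"   # hypothetical endpoint


    def file_bug(test_name, exc, log_path, screenshot_path):
        """Bundle the supporting data and create a bug automatically."""
        payload = {
            "title": f"Automated test failure: {test_name}",
            "callstack": "".join(traceback.format_exception(type(exc), exc, exc.__traceback__)),
            "attachments": [log_path, screenshot_path],
            "area": "test-automation",
        }
        req = urllib.request.Request(
            BUG_TRACKER_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["id"]   # id of the new bug (hypothetical schema)


    def run_with_triage(test_name, test_fn):
        """Run one test; on failure, file the bug with diagnostics attached."""
        try:
            test_fn()
            return "passed"
        except Exception as exc:
            bug_id = file_bug(test_name, exc, "/logs/latest.log", "/screens/failure.png")
            return f"failed (bug {bug_id} filed automatically)"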

Do you really have automation – or just a suck-filled wrapper around some suck-infested tests? It’s ok, be honest. It’s your choice what to do, but I bet the maturity of how you test software has much more to do with end user quality than the number of tests you have (mostly because the latter metric is useless, but you get the point).

I don’t have a solution to sell you – but I can give you an architecture for a true end-to-end automation system for free. The chapter I wrote for Beautiful Testing covers this exact topic – and I’ve finally got around to posting it here. It’s all yours – read it and comment here if you’d like – or read it and delete it to free up some disk space. Regardless of my chapter, I still recommend you buy the book – like the other authors, I don’t make any money; the proceeds go to buy mosquito nets to prevent malaria in Africa. More importantly, the book is friggin’ cool and I think every tester should own it.

KTHXBBYE

p.s. No idea at all why I decided to use a lolcat title – it just sorta felt right.

Comments

  1. The root cause is elsewhere

    My previous test lead asked me, “We want to push testers to go a bit deeper for the root cause when opening bugs; what’s your suggestion?”

    My answer is that the system asks us to do it this way. Testers are pushed by the schedule, and they are not rewarded if they take time to find the root cause, so why would you expect them to take time to do that?

    Again, for the automation problem, the root cause is that our system “seems” to measure it that way. There are hardcoded numbers in commitment settings. People do it to live with the system.

    In my view, we should solve the problem elsewhere.

  2. Other questions (or heuristics) for assessment.

    Does your automation learn from failures, and does it help you improve it further? Is it capable of deciding whether a failed test should be re-run or logged as a bug? How much coding and maintenance effort is required per test instruction? How would that number change for another member of your team? For a newly joined person?
