To Automate, or not to Automate, that is the question that confuses testers around the world. One tester may say, "I automate everything, I am a test automation ninja!". Another tester may say, "I do not automate, if I automate I fail to engage my brain properly in the testing effort!". Another may say, “I automate that which is given to me to automate.” Yet another may say, "I try to automate everything, but what I cannot automate I test manually."
All of these testers are wrong (in my opinion, of course). So wrong, that if I had the power, I’d put them in the tester penalty box (sorry, hockey playoffs on my brain) until they came to their senses.
Good testers test first – or at the very least they think of tests first. I think great testers (or at least the testers I consider great) first think about how they’re going to approach a testing problem, then figure out what’s suitable for automation and what’s not. I say that like it’s easy – it’s not. My stock automation phrase is "You should automate 100% of the tests that should be automated." Nearly every tester knows that, but finding the automation line can be tricky business.
I have my own heuristic for figuring this out – I call it the "I’m Bored" heuristic. I don’t like to be bored, so when I get bored, I automate what I’m doing. When I’m designing tests, I try to be more proactive and predict where I’ll get bored and automate those tasks.
Just today, I was fixing some errors reported by a static analysis tool and found myself doing the following.
- copy the file path from the log
- check out the file from source control
- load the file in an editor
- make the fix (or fixes)
- rebuild to ensure the error was fixed
After the third time, I was bored. I spent the next two minutes and fifty-one seconds writing a doskey macro that would pull the file name from the log, print it to the console (for easy copying), check out the file, and finally, load the file into the currently open editor. Given that I had another 40 files to go through, I consider that a good automation investment.
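For the curious, here’s the flavor of that macro. This is a sketch rather than the real thing: it takes the file path as an argument instead of parsing it out of the log (log formats vary), "sd edit" stands in for whatever your source control’s checkout command is, and you’d swap notepad for your editor of choice.

    rem Sketch only: $1 is the file path; $T chains the commands together.
    doskey fix=echo $1 $T sd edit $1 $T start notepad $1

With that in place, typing something like fix c:\src\widget.cpp echoes the path (for easy copying), checks the file out, and opens it – three of the five steps collapsed into one command.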
In HWTSAM, I think I told the story about my first week at Microsoft. My manager handed me a list of test cases (in Excel) and told me to run the tests. I glanced at the list of eighty or so tests and asked when he expected me to finish automating them. He said, "Oh no, we don’t have time to automate those tests, you just need to run them every day."
As an aside, let me say that I think scripted manual tests are one of the most wasteful efforts any tester – no, anyone in the world – can take on. I know that some readers will cite examples of where manual scripted tests are valuable, but for the record, I despise them more than my daughter despises Brussels sprouts (you don’t know my daughter, but you may have heard her screams of dismay across the country the last time I tried sneaking a few onto her plate).
So anyway, I started work on a Monday, got bored running those tests by Tuesday, and automated all eighty tests on Wednesday. I used my newfound spare time to find all sorts of other issues (and to automate other scenarios). I don’t know if I ever told my manager that I automated those tests, but he was pretty dang happy with my results.
Not all automation efforts work this way (I’m talking to you, pointy haired manager). The dream panacea of automation for some folks is that testers will write a plethora of automation, then they’ll have time to do all kinds of additional testing while the automation runs seamlessly in the background. If all automation efforts were created equal (and by that, I mean equally simple), and if testers took the time to write reliable and maintainable code, this could be possible, but I don’t know anyone who lives in that world. Some things are difficult to automate, and we can waste our time trying to automate those tasks. Sometimes we write fragile tests because they appear to work (the illusion of progress). Then reality sets in and we discover we’re spending a big chunk of our time investigating and fixing failed tests (but that’s another story).
My parting advice is to remind you all that as software testers, our job is to test software. That probably sounds stupid (it sounds stupid to me as I type it), but test automation is just one tool from our tester toolbox that we can use to solve a specific set of testing problems. Like most tools, it works extremely well in some situations, and horribly in others. Use your brain and make the right choices.
Nice post, totally agree.
Reminds me of what I’ve read in a list of best practices for system administrators: “If you have to do it more than once, automate it; if you can’t automate it, document it!”
Regards,
Chriss
Hi,
I agree that the frequency of testing determines, to some extent, whether “to automate or not to automate”.
Also, there are some parts of an application that are not possible to automate even though they are frequently used. One example is the CAPTCHA image on a signup/login form. The text in the CAPTCHA image changes each time, which makes it impossible to automate.
Regards,
Aruna
http://www.technologyandleadership.com
“The intersection of Technology and Leadership”
Alan,
Yep, after all, “It’s Automation, Not Automagic!”.
And I believe in automating only those things that make sense to automate. Even if you can automate something, should you really do it? As you said, it is only a tool. Use your tools wisely. “A fool with a tool is still a fool.”
Totally agreed.
Some testers are eager to show off their automation scripting ability… management who do not have much technical knowledge might be misled by this kind of tester.
Wow, this is a very interesting post. Thanks for sharing your views on automation. I agree with you. When it comes to software testing services, automation is the latest buzz.
Hi Alan,
I’m interested in your comment “As an aside, let me say that I think scripted manual tests are one of the most wasteful efforts any tester – no, anyone in the world – can take on”
My question is – what do you replace them with?
I agree the ‘manual scripting’ approach is boring, but in some company environments it seems a good choice. For example, in my organisation people don’t write requirements down at all (QA have to ‘work out’ the test basis/oracle), so the scripted manual tests become a useful source of information on how the system is supposed to work for everyone, as well as being a basis for the actual testing. Additionally, the UI is brittle and hard to automate for regression testing purposes, so the manual scripted tests fill that gap. Is there a better alternative here? Many organisations seem to have vast repositories of scripted manual tests.
In my experience, scripted steps disengage the brain. As an alternative, document a scenario and key points/areas of coverage, and let the tester walk through the scenario (and variations!) rather than follow a do-this, then-do-this, verify-that-x-is-true kind of script. This approach *finds* more errors and regressions, and *misses* fewer (because the brain stays engaged).
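To make that concrete, here’s a made-up example. A scripted manual test might read:

- Click File, then Open
- Select report.doc
- Verify the document opens without errors

The scenario version of the same coverage might read more like: “Open documents of assorted types, sizes, and locations. Consider file associations, read-only files, network paths, and canceling mid-load.” The script tells the tester exactly what to do (and, implicitly, what not to think about); the scenario gives the tester a mission and room to explore.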