Exploring Test Automation

I try to read a lot about testing in blogs, articles, books, etc. A few days ago, I came across this quote, and it struck me in an odd way.

“Commonly, test automation involves automating a manual process already in place that uses a formalized testing process”

The source doesn’t matter; as it turns out, that sentence was copied directly from the Wikipedia article on Test Automation. I’ve been at Microsoft for a long time now, and although I do try to stay connected with testing outside of the Borg, sometimes I notice that *my* view of a subject is quite different from what’s “commonly” understood.

Or, maybe not – but let me try to explain my concerns here a bit more.

I’m all for saving time and money, but I have concerns with an automation approach based entirely (or largely) on automating a bunch of manual tests. Good test design treats manual and computer-assisted testing as two different attributes of testing, not as sequential tasks. That concept is so ingrained in me (and in the testers I get to work with) that the idea of a write-tests, run-them-manually, then-automate-those-tests workflow seems fundamentally broken. I know there are companies that have separate test automation teams that do exactly this, and I think it’s a horrible approach.

Let’s use the Numberz Challenge as an example. I know right away that I’m going to perform some manual tests: make sure the app can launch and close (the original version had a bug here), verify look and feel, check color choices, and so on. These sorts of tests could be automated, but I generally don’t automate them, for a few specific reasons.

  • The oracle for look and feel is difficult. Automated testing based on comparing screen colors or app dimensions is fragile. A pair of eyes for a few seconds every once in a while is much cheaper. There are, of course, exceptions, and if you’re convinced you can pull this off, just make sure you factor the time spent investigating failures (and chasing false positives) into your ROI calculation. I know many teams who have successful UI automation systems, but many more who have a bunch of test auto-crapation.
  • Most systems I work on (and certainly everything that has a UI) get at least a tiny bit of usage from the product team or a small pilot group before rolling out widely. If the screen is pink, or the font somehow changed to Comic Sans, it will be noticed without the need for an automated test.

When I look at something like Numberz, I also see where I need the power of a computer to answer some of the questions I have. For example, I know I need a large enough data set to have confidence in the random number generation, and I know I need extensive pattern matching to ensure that the next number is never predictable. Doing this manually is impossible (or a waste of time, depending on what you try to do).
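To make that concrete, here’s a rough sketch of the kind of computer-assisted checks I have in mind. It’s written in Python, and the roll() helper is a hypothetical stand-in for however you’d actually drive Numberz and read back a digit (simulated here so the sketch runs on its own):

   import random
   from collections import Counter

   def roll():
       return random.randint(0, 9)  # stand-in for driving the real app

   def test_distribution(trials=100_000):
       # Roll a large sample and check that every digit 0-9 shows up
       # roughly equally often. The fixed 5% tolerance is a crude screen;
       # a real test might use a chi-square statistic instead.
       counts = Counter(roll() for _ in range(trials))
       expected = trials / 10
       for digit in range(10):
           assert abs(counts[digit] - expected) < expected * 0.05, \
               f"digit {digit}: saw {counts[digit]}, expected ~{expected:.0f}"

   def test_predictability(trials=100_000):
       # Count every (previous, next) pair. If any pair shows up far more
       # often than chance allows, the next number is predictable.
       pairs = Counter()
       prev = roll()
       for _ in range(trials):
           cur = roll()
           pairs[(prev, cur)] += 1
           prev = cur
       expected = trials / 100  # 100 possible (previous, next) pairs
       for pair, count in pairs.items():
           assert count < expected * 1.5, f"pair {pair} occurred {count} times"

   test_distribution()
   test_predictability()

The specific checks are crude, but the point stands: a hundred thousand rolls and a pair-frequency table are exactly the kind of question a computer can answer and a pair of eyes can’t.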

Now, imagine this scenario in the write-tests-then-automate-them workflow:

   Test Case Steps:
   1) Launch App
   2) Press the Roll! Button

   Verify:
   1) Ensure that none of the numbers is less than 0 or greater than 9
   2) Ensure that the total field is correct

   Repeat this test at least 10 times

Now, the “automator” comes along and writes this test:

   Loop 10 times
     App.Launch("numberz.exe")
     Button.Click("Roll")
     // Validate numbers are within range
     // Validate correct total
   End Loop
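Translated literally into something runnable, that might look like the Python sketch below. The launch_app() and click_roll() helpers are hypothetical stand-ins for whatever UI-driving library the automator uses, and I’m guessing at three numbers per roll; the point is the shape of the test, not the plumbing:

   import random

   def launch_app():
       pass  # hypothetical: start numberz.exe via your UI driver

   def click_roll():
       # Hypothetical: click the Roll! button, then scrape the numbers and
       # the total field from the UI. Simulated here with random digits so
       # the sketch runs on its own; the real version would read the screen.
       numbers = [random.randint(0, 9) for _ in range(3)]
       total_field = sum(numbers)
       return numbers, total_field

   def test_roll():
       for _ in range(10):  # "repeat this test at least 10 times"
           launch_app()
           numbers, total_field = click_roll()
           # Validate numbers are within range
           assert all(0 <= n <= 9 for n in numbers), f"out of range: {numbers}"
           # Validate correct total
           assert total_field == sum(numbers), f"bad total: {total_field}"

   test_roll()

It runs, it passes, and it faithfully reproduces the manual steps.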

From what little context I have from the interwebs, this appears to be a common scenario, but it misses the critical aspects of testing this app (randomness and predictability). What we haven’t done at all in this situation is test design. To me, test design is far more holistic than thinking through a few sets of user tasks. You need to ask, “what’s really going on here?” and “what do we really need to know?”. Automating a bunch of user tasks rarely answers those questions. (Note: it does answer some questions, so if you have a good system and tests you can trust, by all means don’t stop what you’re doing.)

Where I think I have my big disconnect is in the definition of test automation. When I think of test automation, I don’t think of automating user tasks. I think, “How can I use the power of a computer to answer the testing questions I have?”, “How can I use the power of a computer to help me discover what I don’t know I don’t know?”, and “What scenarios do I need to investigate where using a computer is the only practical solution?”.

Perhaps test automation is purely the automation of manual tasks, and I’m attempting to overload the word. I know some folks prefer the term “computer-assisted testing”, and I suppose that’s fine too.

To me (and I’m sure I’ve used this line before), it’s just testing. But please stop thinking of test automation as the step that follows test design, and start thinking of test design first.

Comments

  1. I agree with Alan on that strange quote from Wikipedia: it’s missing the point.

    One might as well say, “Commonly, food production involves hitting the ground with a spade.”

    Unfortunately, many test managers believe these words. I think the maturity of software quality is 10-20 years behind that of software engineering (creation), and that quote is just one more sign of this sad state of affairs.

    Thanks Alan!
