Musings on Test Design

The Wikipedia entry on test automation troubles me. It says:

Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

I can’t decide whether it bothers me because it’s wrong, or because so many people believe the statement is true. In fact, I know the approach in some organizations is to “design” a huge list of manual tests based on written and unwritten requirements, and then either try to automate those tests, or pass the tests off to another team (or company) to automate.

This is an awful approach to test design. It’s not holistic. It’s short-sighted. It’s immature, and it’s unprofessional. It’s flat-out crummy testing. I frequently say, “You should automate 100% of the tests that should be automated”. Let me put it another way to be clear:

Some tests can only be run via some sort of test automation.
Some tests can only be done via human interaction.

That part (should be) obvious. Here’s the part I don’t think many people get:

You can’t effectively think about automated testing separately from human testing.

Test Design answers the question, “How are we going to test this?” The answer to that question will help you decide where automation can help (and where it won’t).

Here’s a screen shot of part of the registration form for outlook.com (disclaimer – I have no idea how this was tested).

[Screenshot: outlook.com registration form]

Let’s look at two different ways of answering the “How will we test this?” question.

The “automator” may look at this and think the following.

  • I’ll build a table of first and last names and use those for test accounts
  • I’ll try every combination of Days, Months, and Years for Birthdate
  • I’ll generate a bunch of different test account names
  • I’ll create a password for each account
  • Once I try all of the combinations, I can sign off on this form
  • (or they may think, “I wonder what sort of test script the test team will ask me to automate?”)
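As a rough sketch of the “automator” mindset above, here’s what cycling mechanically through combinations might look like. Everything here is invented for illustration — the data tables are tiny stand-ins, and `submit_registration` is a hypothetical placeholder for driving the real form.

```python
import itertools

# Hypothetical test data tables; a real suite would pull these from
# a much larger source.
first_names = ["Alice", "Bob"]
last_names = ["Ng", "O'Brien"]
days = range(1, 32)
months = range(1, 13)
years = range(1990, 1993)

def submit_registration(first, last, day, month, year):
    """Stand-in for submitting the real form; here it just records the input."""
    return {"first": first, "last": last, "birthdate": (day, month, year)}

# The "automator" plan: try every combination, then sign off.
results = [
    submit_registration(f, l, d, m, y)
    for f, l in itertools.product(first_names, last_names)
    for d, m, y in itertools.product(days, months, years)
]
print(len(results))  # 4 name pairs x (31 x 12 x 3) date combinations = 4464
```

Note the problem the post is driving at: the loop produces thousands of executions, but every one of them exercises the same narrow idea of what a “test” is.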

The “human” may look at the same form and think this:

  • I’ll want to try short names, long names, and blank names
  • I’ll see if I can find invalid dates (e.g. Feb 29 in a non-leap year)
  • Some characters are invalid for email names – I’ll try to find some of those
  • I’ll make sure the 8-character minimum and case sensitivity are enforced
  • Oh – I’ll try passwords with foreign characters too.
  • Once I go through all of that, and anything else I discover, I can sign off on this form.
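The “human” ideas above aren’t inherently manual, which is part of the point — the invalid-date probe (Feb 29 in a non-leap year) is trivial to express in code once someone thinks of it. A minimal sketch of that oracle using Python’s standard library (`is_valid_birthdate` is my own invented name; the form itself isn’t involved here):

```python
from datetime import date

def is_valid_birthdate(day, month, year):
    """Oracle for the birthdate fields: True only for real calendar dates."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Feb 29 exists in 2020 (a leap year) but not in 2019.
print(is_valid_birthdate(29, 2, 2020))  # True
print(is_valid_birthdate(29, 2, 2019))  # False
print(is_valid_birthdate(31, 4, 2020))  # False: April has 30 days
```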

I’ll fire off a disclaimer now, because I’ve probably pissed off both “automators” and “humans” with the generalizations above. I know there’s overlap. My argument is that there should be more.

In my contrived examples above, the “automator” is answering the question, “What can I automate?”, and the “human” is answering the question, “What can I explore or discover?”. Neither is directly answering the question, “How are we going to test this?”.

I could just merge the lists, but that’s not exactly right. Let’s throw away the “human” and “automator” labels for a minute and see if we can use a mind map to get the whole picture together. Here’s what I came up with, giving myself a 10-minute limit. There’s likely something missing, but it’s a starting point.

[Mind map: Registration Form]

Now (and at no time before now), I can begin to think about where automation may help me. My goal isn’t to automate everything; it’s to automate what makes the most sense to automate. Things like submitting the form from multiple machines at once, or cycling through tables of data, make sense to automate early. Other items, like checking for non-existent days or checking the max length of a name, are nice exploratory tasks. And then there are ideas, like foreign characters in the name or trying RTL languages, that may start as an exploratory test but lead to ideas for automation.
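To make the “submit from multiple machines at once” idea concrete, here’s a crude sketch where threads stand in for machines. A real test would drive separate clients against the live form; `submit_form` is a hypothetical placeholder I invented for this illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_form(account_name):
    """Stand-in for one simulated client submitting the registration form."""
    return f"registered:{account_name}"

# Eight concurrent "machines" submitting at once. Threads are a cheap
# approximation; the point is that this category of test can only
# realistically be run via automation.
accounts = [f"user{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(submit_form, accounts))
print(len(results))  # 8
```

This is exactly the kind of test from the “can only be run via automation” bucket earlier in the post — no amount of exploratory clicking reproduces eight simultaneous submissions reliably.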

The point, and this is worth repeating often, is that thinking of test automation and “human” testing as separate activities is one of the biggest sins I see in software testing today. It’s not only ineffective; I think this sort of thinking is a detriment to the advancement of software testing.

In my world, there are no such things as automated testing, exploratory testing, manual testing, etc.

There is only testing.

More musings and rants to follow.


10 Comments

  1. All sounds so simple and common sense when written like that – so how come it ends up in such a stinking mess most of the time?

    Looking forward to more musings and rants

  2. “Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.”

    Sadly, I believe this entry is correct. It is indeed “common” that folks approach test automation this way.

    I agree with you that this is a poor way to approach test automation, though.

  3. Good article!

    It could be even more holistic if the first questions are “what do we want to test?” and “what is important?” (called test analysis by some; an intertwined part of test design for me.)
    Thinking about relevant quality attributes often helps me discover what’s important, and when reflecting on e.g. stability, the way to do it usually comes naturally.
    I guess the best would be to just use what’s best suited without even reflecting about automation or manual, but I’m not there yet…

    The automated/exploratory/manual words surely are imperfect models, but they are often useful for communication, and as a start for learning things.

  4. Excellent points and very well stated. You continually convince me on the idea that all testing is just testing, and that there’s not much point in trying to categorize everything to make things more “efficient”.

  5. Alan,

    I like the way you roll. It drives me up the wall when I’m asked for an “automation strategy” when it’s clear there is no test strategy to speak of, or when I’m asked to set up an “automation team” when testing – in general – is challenged.

    Automation is, and always will be, just a tool. Automation decisions should ALWAYS be subservient to answering the question “how am I going to test *that*?”

    Cheers,

    Iain
