I’ve been pondering test automation recently. Maybe it’s because of my participation in the upcoming STP Summit (note: shameless self-promotion), but that’s only the surface of it. I’ve complained about misguided test automation efforts before, but it’s more than that too. For every tester who cries out that 100% automation is the only way to test software, someone else is simultaneously stating that only a human (with eyes and a brain) can adequately test software.
The answer, of course, is in the middle.
But I worry that even for those who have figured out that this isn’t an all-or-nothing proposition, many testers have no idea at all how to design an automated test – which means that they don’t know how to design a test in the first place. The problem I see most often is the separation of automated and human testing. When approaching a testing problem, you have failed if your first thought is about how you’re going to (or not going to) automate. The first step – and the most important one – is to think about how you’re going to test. From that test design effort, you can deduce which aspects of testing could be accomplished more efficiently with automation (and which could not).
A common variation of this is the automation of manual tests. It pains me to hear that some testers design scripted test cases, and then automate those actions. This tells me two things: the manual test cases suck, and the automation (probably) sucks. A good human brain-engaged test case never makes a good automated test case, and automating a scripted scenario is rarely a good automated test (although occasionally, it’s a start). Some teams even separate the test “writers” from the “automators” – which, to me, is a perfect recipe for crap automation.
An example would be good here. Imagine an application with the following requirements / attributes:
- The application has a “Roll” button that, when pressed, generates five random numbers between 0 and 9 (inclusive)
- The application totals the five random numbers and displays the sum in the “Total” field.
- There are no user editable fields in the application
For those of you with no imagination, this is how I imagine it.
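To make the example concrete, here’s a minimal sketch of what the application’s underlying logic might look like. The class and method names are my invention, not anything from a real app – the point is just that there’s a small, testable model underneath the buttons:

```python
import random

class RollApp:
    """Hypothetical model of the app's logic: five random digits
    plus their total. Names here are illustrative assumptions."""

    def roll(self):
        # Five random numbers between 0 and 9, inclusive.
        self.values = [random.randint(0, 9) for _ in range(5)]
        # The "Total" field is just the sum of the five values.
        self.total = sum(self.values)
        return self.values, self.total
```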
From a manual only perspective, layout, user experience, and interaction are definitely areas that need to be investigated, and that are usually best done manually. If I were to write a few scripted manual test cases for this (not that I would), they may look something like this:
Test Case 1
- Press Roll button
- Count (use a calculator or an abacus if necessary) to verify that the value in the Total field matches the sum of the values below
Test Case 2
- Press Roll
- Ensure that the values in the lower fields are within 0-9 inclusive
- Repeat at least n times
Test Case 3
- Press Roll
- Ensure that the value in the top section is between 0 and 45
- Repeat
I have two complaints about the above tests. The first is that executing them manually is about as exciting as watching a banana rot, and the second is that (as I predicted), they’re not very good automated tests either.
When designing tests, it’s important to think about how automation can (or won’t) make the testing more efficient. What I hope you’ve realized already (and if not, please take a moment to think about the huge testing problem we haven’t talked about yet with this application), is that we’ve done nothing to test for randomness or distribution.
Testing randomness is fun because it’s harder than most people bother to think about. We don’t have to get it perfect for this example, but let’s at least think about it. In addition to the above test cases (granted, the third test case above may be redundant with the first), we need to think about distribution of values within the bottom five boxes. Given the functional goal we’re shooting for, we can probably hit all those test cases and more in a reasonably designed automated test.
Pseudo-code:
Loop 100,000 (or more) iterations
    Press Roll button
    Verify that the sum of the output values and the Total field are identical
    Verify that each output value is between 0 and 9
    Store counts of values from output boxes 1-5
End Loop
Examine distribution of numbers
Examine sequence of numbers
Other pattern-matching activities as needed
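A runnable version of that pseudo-code might look like the sketch below. The `roll()` function here is a hypothetical stand-in for the application under test – in practice you’d drive the real model or object model (not the UI), as discussed later in this post:

```python
import random
from collections import Counter

def roll():
    """Stand-in for the application under test; assumes a hook into
    the app's logic rather than its UI."""
    values = [random.randint(0, 9) for _ in range(5)]
    return values, sum(values)

def run_roll_test(iterations=100_000):
    counts = Counter()
    for _ in range(iterations):
        values, total = roll()
        # Verify the Total field matches the sum of the output values.
        assert total == sum(values), "Total field mismatch"
        # Verify every output value is between 0 and 9 inclusive.
        assert all(0 <= v <= 9 for v in values), "Value out of range"
        # Store counts for later distribution analysis.
        counts.update(values)
    return counts

counts = run_roll_test()
# With a uniform generator, each digit should appear roughly 50,000
# times (100,000 rolls x 5 values / 10 digits).
```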
The first loop takes care of the main functionality testing, but where automation really helps here is in the analysis of the output. I’d expect a statistically accurate representation of values, repeated values, and sequences.
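One way to do that distribution analysis is a chi-square goodness-of-fit check against a uniform distribution. This is a sketch, not a complete randomness test suite (it says nothing about sequences or repeats), and the simulated counts below stand in for whatever the automated loop actually collected:

```python
import random

def chi_square_uniform(counts):
    """Chi-square goodness-of-fit statistic against a uniform
    distribution over len(counts) bins."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Simulated per-digit counts from 100,000 rolls of 5 digits each,
# standing in for the counts the automated loop would collect.
counts = [0] * 10
for _ in range(500_000):
    counts[random.randint(0, 9)] += 1

stat = chi_square_uniform(counts)
# For 10 bins (9 degrees of freedom), a statistic above ~16.9 would be
# suspicious at the 5% level; a fair generator stays below that most
# of the time.
```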
That’s stuff you want to use automation to solve. Yet, I keep discovering stories of testers who either don’t bother to test stuff like that at all, or put their automation effort into trying to navigate the subtleties of UI testing (yet something else I have an opinion about).
I don’t care whether you’re an automator, a tester, a coder, or a cuttlefish – your goal in testing is not to automate everything, nor is it to validate the user experience through brain-engaged analysis. Your job is to use the most appropriate set of tools and test ideas to carry out your testing mission. You can’t do that when you decide how you’re going to test before you start thinking about what you’re going to test.
Notes: I don’t know why I picked 100k for the loop – it may be overkill, but it’s also something you have an option of doing if you’re automating the test. I suppose you could do that manually, but you will go insane…or make a mistake…or both.
There’s more to this concept, and follow ups…maybe. The big point is that I worry that people think that coded tests replace human tests, when they really enhance human testing – and also that human testers fail to see where coded tests can help them improve their testing. I think this is a huge problem – and that it’s one of those problems that sounds bad, but is actually much, much worse than that…but I don’t know how to get that point across.
I should also point out that, given my loathing of GUI automation, I’d really, really hope that the pseudo-code I scratched out above could be accomplished by manipulating a model or an object model or the underlying logic directly. I want my logic/functional tests to test the logic and functionality of the application – not its flaky UI.