Beyond Regression Tests

In a recent talk on test design (link), I discussed the concept of "useful tests". In my definition, useful tests are tests that provide new information. Almost every test is useful…once – typically the first time it's run, when it shows that the underlying functionality is working. From that point on, many tests function primarily as regression tests. We run these tests as a safeguard to ensure that we don't break anything with future changes. They're great for this purpose – and when they find a bug, they are wonderfully useful. But 99% of the time (according to my definition) they are not useful – they don't provide new information. They do provide information – the information that nothing has changed – but it's not new information.

To be fair, these are exactly the types of tests I want for unit tests (and probably acceptance tests too, depending on the definition). I love using unit tests to give everyone on the team confidence that refactoring or feature additions do not break basic functionality – but unit tests (and acceptance tests) are usually just a small portion of the overall testing effort.

I should stop here to clarify. On some teams that "do automation", unit and acceptance test automation is all they do. In that case, a regression-only focus may be appropriate.

Fortunately, the teams I work on (and with) use automation for far more than regression testing – we need tests that can be "useful" (provide new information) far more often than when something that was working once stops working. Test automation isn't just for regression testing. It cannot (and shouldn't try to) replace human brain-engaged testing, but it can certainly do more than automate a bunch of rote tasks. In my opinion, an automation strategy that only performs regression testing is short-sighted and incomplete.

So – how do you write an automated test that provides new information more frequently? A big advantage of human testing is what I call the "I wonder…" principle. For example, "I wonder what will happen if I cancel this operation in the middle?" or "I wonder if this will work if I pull out the network cable?" Computer programs aren't very good at wondering, but they are good at brute force. Model-based testing (MBT) is a potential solution here, as you can use it to traverse decision points in the application (randomly, or by applying a graph traversal technique) – see the sketch below. Traversing an application in unexpected ways often results in finding new issues. Same test, different bugs – and that seems pretty useful to me. Some testers, for whatever reason, seem to be a bit afraid or skeptical of MBT. I think in many cases, potential adopters of MBT either try to do too much with it before they understand the concepts, or think that it's a replacement for some other types of testing. MBT is just another test design technique – and like other test design techniques, it works great in particular situations, and not so well in others.
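To make this concrete, here's a minimal sketch of a random model walk in Python. The media-player model, the `app` driver object, and its `state` attribute are all hypothetical placeholders, and real MBT tools add much more (guards, weights, coverage criteria) – this just shows the core idea:

```python
import random

# A hand-written state model of a hypothetical media player:
# state -> {action: expected next state}
MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def random_walk(app, steps=100, seed=None):
    """Randomly traverse the model, checking the app agrees at every step."""
    if seed is None:
        seed = random.randrange(2**32)
    print(f"model walk seed: {seed}")   # log the seed so failures reproduce
    rng = random.Random(seed)
    state = "stopped"
    for _ in range(steps):
        # pick any action that's legal in the current model state
        action, expected = rng.choice(sorted(MODEL[state].items()))
        getattr(app, action)()          # drive the application under test
        assert app.state == expected, (
            f"after '{action}': expected '{expected}', got '{app.state}'")
        state = expected
```

Every run takes a different path through the same model, so the same test can keep finding different bugs.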

But MBT isn't the only way to get tests to provide different information on subsequent runs. Simply introducing some randomness to a test can often turn up new issues. For example, if you have a "suite" of tests that always run in the same order, try running them in a random order on each run (note – log the random seed so you can reproduce any discovered issues later). Or, say you have a test that does a particular operation five times. Instead of running the operation exactly five times on every run, what about running it a random number of times between 1 and 10? Regression tests, for all the value they bring, tend to train the system to pass. By mixing things up, we often find new issues.
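A rough sketch of both ideas in Python, assuming the suite is a list of plain callables and `app.save()` is a hypothetical stand-in for the repeated operation:

```python
import random

def run_in_random_order(tests, seed=None):
    """Run a suite of test callables in a shuffled order.

    Logging the seed means any order-dependent failure can be
    reproduced later by passing the same seed back in.
    """
    if seed is None:
        seed = random.randrange(2**32)
    print(f"suite order seed: {seed}")
    rng = random.Random(seed)
    shuffled = list(tests)
    rng.shuffle(shuffled)
    for test in shuffled:
        test()

def test_repeated_save(app, rng):
    # Instead of always repeating the operation exactly five times,
    # repeat it a random number of times between 1 and 10.
    for _ in range(rng.randint(1, 10)):
        app.save()   # hypothetical operation under test
```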

I'm also a fan of data driven testing. Get the test data out of the automation code and into a data file (use XML or another DSL as appropriate). Then, randomize the data used by the test (and automate the randomization too). When you're testing a web page where users enter address information, for example, you can easily mix up a variety of names, addresses, zip codes, and so on (especially including invalid data), and see what happens. If you send the same data to the system every time, it will certainly pass all of your tests eventually, but I'm almost certain that you'll still miss issues too.
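As a simple illustration (using CSV rather than XML to keep the sketch short), assuming a hypothetical `addresses.csv` data file and a hypothetical `submit_address_form()` page driver:

```python
import csv
import random

def load_rows(path):
    """The test data lives in a data file, not in the automation code."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def random_address(rows, rng):
    """Mix fields across rows to build combinations (valid and invalid)
    that no single hand-written row ever paired together."""
    return {field: rng.choice(rows)[field]
            for field in ("name", "street", "city", "zip")}

seed = random.randrange(2**32)
print(f"data seed: {seed}")                      # log for reproducibility
rng = random.Random(seed)
rows = load_rows("addresses.csv")                # hypothetical data file
submit_address_form(random_address(rows, rng))   # hypothetical page driver
```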

Depending on what you're testing, I'm sure there are other ideas that could make your automated tests useful more often. There's nothing wrong with regression testing, but if that comprises all of your test automation effort, you're probably leaving some cards (and bugs) on the table.

Comments

  1. I like your thoughts around the data driven tests, and that's what I am trying my best to explain to my team. Most of our tests are around the web service or the web pages… I felt some resistance when talking about having our framework generate the test scripts based on the data provided in Excel or some other data file. We tend to write that many scripts for those variations.

    I am against the idea of inflating the automation count or test count by presenting all my data variations as different tests, as opposed to iterations (MTM term) of the same test. We have "n" number of tests with "n" number of XML files – one for each test. The problem I have with this approach is, if the "object under test" changes its input signature to add a few attributes or remove them, then there is a lot of work involved in updating the individual XML files to match. It is a lot more expensive to maintain at that point.

    We build the automation for the most part for regression use. But there comes the question you and @TestingMentor discussed over Twitter about running automated regression tests on different configurations: how many of those regression tests need to be run on every run? And, as you say, whether "they provide any new information".

    Great thoughts around regression tests.

    1. To be clear, I don’t see flaws in MBT – what I see are flaws in /how MBT is adopted/. Too often, people either try to use it where it’s not applicable, or they try to use an MBT “tool” before they really understand what MBT is.

      Over time, as teams fail to successfully adopt MBT (usually for those reasons), I think it builds up a bit of a stigma – e.g. “So and so tried MBT, but it didn’t work, so I won’t try it either”

    1. Yes, of course I like fuzz testing. I was actually going to go a bit deeper into fuzz testing, fault injection, scale, etc. (and other ways to apply programmer knowledge to testing besides regression tests) in a follow-up post.

      Thanks for the comment (and for reading).

      Oh – and of course I’ve read ALL of BT :}

      1. @Markus, Microsoft uses fuzzing extensively (or at least they make it look like they do, according to their publications 🙂 ).

        @Alan, I'd be very interested in reading more about your views on fuzzing. I didn't like the way it is explained in HWTSAM, where fuzzing was almost entirely reduced to randomness or noise…
        In fact, fuzzing isn't even that closely related to most of the randomization you mention in the post, right? 🙂

        (This is a matter close to my heart, Alan. I'll be presenting on both fuzzing and fault models (injection) at StarEast.)

  2. This was a great read. Our test team is just starting to write the test specs, and I’ve been pushing for something like this, but wasn’t sure I was getting my point across.
    I just forwarded them your article; I think we're all on the same page now about the increased value we can deliver 🙂
