Whew – what a week it’s been. The only drawback I can think of from being in Switzerland last week is that my work and meeting load this week has been nearly double. Given that I’m still catching up from my India trip last month, I’m quite thankful that my travel schedule is light in the coming months. I’m looking forward to taking on some fun and challenging work (which I’ll share here when appropriate).
A few weeks ago, I talked about regression tests and gave a few examples of how a tester could write automated regression tests that have a better chance of finding new issues than typical regression tests (for the full story, read here).
In that post (you did read it, didn’t you?), I mentioned the "I wonder…" principle with the example of "I wonder if this will work if I pull out the network cable?" One area where the often misconstrued "programmer-tester" can put coding skillz to work is automating the "I wonder" part of the test. If removing a network cable during a particular transaction is interesting, then removing it during almost any network transaction would be interesting. So why not add functionality to the test automation system that throttles or interrupts network activity on all transactions (or make it a configurable variation, so some test runs still work with a consistent network connection)?
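Here’s roughly what that could look like in Python – a minimal sketch, assuming your tests can tolerate a monkeypatched socket layer. The flaky_network name, the drop_rate knob, and the enabled switch are my own illustration, not part of any framework:

    import random
    import socket
    from contextlib import contextmanager

    @contextmanager
    def flaky_network(drop_rate=0.1, enabled=True):
        # Patch socket.socket.send so a configurable fraction of sends fail.
        if not enabled:
            yield  # some runs keep a clean, consistent network connection
            return
        real_send = socket.socket.send

        def unreliable_send(self, data, *args, **kwargs):
            if random.random() < drop_rate:
                raise ConnectionResetError("injected network fault")
            return real_send(self, data, *args, **kwargs)

        socket.socket.send = unreliable_send
        try:
            yield
        finally:
            socket.socket.send = real_send

Wrap an existing automated scenario in "with flaky_network(drop_rate=0.2):" and every test run becomes a small "I wonder" experiment about network failures.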
To be clear, I’m not suggesting replacing exploratory testing with automation. What I’m saying is that many things we do when exploring can be scaled using automation.
Fault injection is a pretty typical way for coder-testers to add value to automation (frankly, I don’t consider people who just write point A to point B scripted automation to be testers, but that’s something I suppose I’ll tackle in the comments section). Testing application behavior under low disk space, low system memory, and other common failure scenarios is a great way to make your stagnant automated tests find new issues.
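A low-disk-space check is easy to sketch with unittest.mock. Here, save_report is a hypothetical stand-in for whatever product code path writes files – the point is that injecting ENOSPC takes a few lines, not a lab full of full disks:

    import errno
    from unittest import mock

    def save_report(path):
        # Hypothetical stand-in for product code that writes a file.
        try:
            with open(path, "w") as f:
                f.write("report body")
            return "saved"
        except OSError as e:
            if e.errno == errno.ENOSPC:
                return "friendly-error"
            raise

    def test_save_report_handles_full_disk(tmp_path):
        full_disk = OSError(errno.ENOSPC, "No space left on device")
        # Every open() inside the call raises ENOSPC, simulating a full disk.
        with mock.patch("builtins.open", side_effect=full_disk):
            assert save_report(tmp_path / "report.dat") == "friendly-error"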
In a comment, Markus mentioned fuzzing – while not necessarily a way to make generic automation find more issues, it is another place where writing automation is the only (practical) solution. Fuzzing, in a nutshell, is the act of intelligently munging any sort of data used by the system under test (note – for a full definition of fuzzing, see the Wikipedia article). File fuzzing, for example, is the process of taking a valid file and then mucking with a bit of its internal structure in order to expose problems (including potential security problems) in the way the application handles that file. (Note that with what I call "dumb fuzzing," you can take any old file, mess with it randomly, and see what happens – it’s a reasonable test, but it typically won’t find as many security-class issues.) A binary file type (e.g. a Word doc) typically includes information about the file, including the number of sections in the file and the length of those sections. If you’re careful about how you tweak some of the bits in the headers and sections, you can potentially trick the application into crashing – or, if you’re really good, into running arbitrary code. You can do the same thing with any sort of binary data – for example, we do a lot of protocol fuzzing – mucking with bits to make sure the receiving end of the protocol does the right thing.
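To make the dumb/smart distinction concrete, here’s a tiny sketch. The function names, the bit-flip count, and the four-byte little-endian length field are illustrative; real targets need knowledge of the actual file format:

    import random
    from pathlib import Path

    def dumb_fuzz(seed_file, out_file, flips=16):
        # Dumb fuzzing: flip a handful of random bits in a known-good file.
        data = bytearray(Path(seed_file).read_bytes())
        for _ in range(flips):
            pos = random.randrange(len(data))
            data[pos] ^= 1 << random.randrange(8)
        Path(out_file).write_bytes(bytes(data))

    def fuzz_length_field(seed_file, out_file, offset, bogus_length):
        # Smarter fuzzing: overwrite a known length field so it no longer
        # matches the section it describes.
        data = bytearray(Path(seed_file).read_bytes())
        data[offset:offset + 4] = bogus_length.to_bytes(4, "little")
        Path(out_file).write_bytes(bytes(data))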
In order to cover all of the permutations, a good fuzzing approach may require thousands of files. Editing ten thousand files in a hex editor isn’t my idea of excitement, so automated fuzzing is a good solution. Oh – and loading all those files into an application doesn’t seem that exciting to me either, so that part should probably be automated too.
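A rough sketch of that loop, reusing the dumb_fuzz function above – the app path, the timeout, and the idea that a non-zero exit code or a hang means "interesting" are all assumptions you’d tune for your own product:

    import subprocess
    from pathlib import Path

    APP = "/path/to/app-under-test"   # placeholder for your parser/viewer binary
    SEED = Path("seed.doc")           # a known-good file to start from
    OUT = Path("fuzzed_cases")
    OUT.mkdir(exist_ok=True)

    interesting = []
    for i in range(10_000):
        case = OUT / f"case_{i:05}.doc"
        dumb_fuzz(SEED, case)         # from the earlier sketch
        try:
            # Load the fuzzed file in the app; crashes and hangs are what we're after.
            result = subprocess.run([APP, str(case)], timeout=10)
            if result.returncode != 0:
                interesting.append(case)
        except subprocess.TimeoutExpired:
            interesting.append(case)

    print(f"{len(interesting)} cases worth a closer look")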
There’s more stuff for the tester-coder to do. In a client-server org like the one I work in, we have to test against multiple network topologies. Luckily, with a bit of programming, we can do all of that automatically and save a bunch of time. Now – to be fair, there is some tax to having all of this infrastructure. With automated tests and fault injection and topologies and all of the other moving parts of the system, a reasonable amount of time goes to maintenance, and that part sucks, but for the most part, a team of testers with programming skills /has the potential/ to do a reasonably good job testing fairly complex software.
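As one illustration of how the topology part could look with pytest – the topology names and the setup fixture are hypothetical; in practice each entry might select a lab machine, a VM snapshot, or a network-emulator profile:

    import pytest

    # Hypothetical topology matrix for a client-server product.
    TOPOLOGIES = ["direct", "behind-proxy", "vpn", "slow-wan"]

    @pytest.fixture(params=TOPOLOGIES)
    def topology(request):
        # Stand-in for real setup: point the client at the right environment,
        # configure proxies, latency, and so on.
        return {"name": request.param}

    def test_end_to_end_sync(topology):
        # The same end-to-end scenario runs once per topology in the matrix.
        assert topology["name"] in TOPOLOGIES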
However – if your team only writes regression-ish automation, or if your view of the tester-developer is someone who only writes automated regression tests, you are completely missing the point of coding skills for testers. I know (and fear) that this is the case for way too many people in the profession today, and I find it a bit disheartening. I do hope, however, that the next generation of testers will have the big-picture thinking we need to find the harmony between brains and skills that testing needs to grow.