I frequently go days at work without writing code. On many days, I do “typical” test activities (seeing what works, what doesn’t, etc.). Some days I’m debugging all day. Other days, I’m sifting through the Perl and batch scripts that bolt our build process together. On still other days, I’m managing projects, tweaking Excel, or trying not to cause too many problems in SQL. This afternoon, at least after lunch, was coding time – probably the first extended coding time I’ve had in a few weeks.
Without too many boring details: I had a backlog item to write a service to automatically launch a memory analysis tool and periodically take snapshots of memory allocations in specified processes. The memory analysis tools (which I also wrote, adapted from another project) work really well, but need a lot of manual intervention to get accurate data. I wrote a quick “spec” on my whiteboard, wrote tests as I went along, and in about four hours I had it up and running successfully. I’m confident that the service will make it much easier for our team to find and isolate memory issues. To me, writing this tool was a testing activity. It doesn’t test anything, but it certainly makes testing the product much easier.
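The service described above boils down to a timer loop over a list of watched processes. Here’s a minimal sketch of that shape – the `take_snapshot` callback is a hypothetical stand-in for whatever the real memory-analysis tool exposes, not the actual tool:

```python
import time


class SnapshotService:
    """Periodically records memory snapshots for a set of processes.

    `take_snapshot` is a hypothetical callback standing in for the
    real memory-analysis tool; its name and signature are invented
    for illustration.
    """

    def __init__(self, processes, take_snapshot, interval_s=60.0):
        self.processes = list(processes)
        self.take_snapshot = take_snapshot
        self.interval_s = interval_s
        self.history = {name: [] for name in self.processes}

    def run(self, cycles):
        # A real service would loop until stopped; a fixed cycle count
        # keeps this sketch easy to demonstrate.
        for _ in range(cycles):
            for name in self.processes:
                self.history[name].append(self.take_snapshot(name))
            time.sleep(self.interval_s)
```

The real version hooks into our tooling and runs unattended; only the scheduling skeleton is shown here.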
On my drive home, I was pondering the universe (and my day) and remembered that there’s a sad bit of truth in the software world. It’s a fallacy that promotes poor strategies, bad hiring decisions, and endless confusion among management, contributors, and consultants. It may very well be the most misunderstood word in the world of software. It’s the “A” word that is practically ruining testing.

Automation.
Yes – as much as I hate them, I used a one-word paragraph. I did it because the abuse and misuse of “automation” in testing is causing too many smart people to do too many dumb things. That said, I suppose an explanation is in order.
Once upon a time, someone noticed that testers who knew how to code (or coders who knew how to test) were doing some wicked-cool testing. They had tools and stress tests and monitoring and analysis and they found great bugs that nobody else could find – “Test Developers” were all the rage…so we (the industry) hired more of them. Not much later (and possibly simultaneously), someone else decided that testers who could code could automate all of the tasks that the silly testers who didn’t code used to do. Instead of finding really cool bugs, coding testers were now automating a bunch of steps previously done manually. Not to be left out, UI automation frameworks began to sprout, with a promise that “anyone can automate – even a tester.” Now, test teams can spend the bulk of their time maintaining and debugging tests that never should have been automated in the first place.
And that was the first step into darkness.
This mindset, although common, kills me:
- Testers who code write automation
- Automation is primarily made up of a suite of user tasks
- Automation is primarily done through the GUI
All of these items are, in my opinion, idiotic – and likely harmful to software quality. This mindset focuses test teams in the wrong place and is, frankly, insulting to testers who know how to code.
If you’re a tester who writes code (or a coder who cares about testing), try this:
- Write tools that help you find important bugs quickly
- Write a few high level tests that ensure user actions “work”, but leave most of it for exploratory testing, user testing (assuming you have great monitoring tools), or both.
- If you must write GUI tests for the above, write fewer of them (that way you’ll spend less time investigating their flaky behavior).
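To make the second bullet concrete, here’s a sketch of what “a few high-level tests” might look like: one coarse check that the core user journey hangs together, instead of dozens of scripted GUI steps. The `FakeApp` class and its methods are invented for illustration – substitute whatever driver your product actually has:

```python
class FakeApp:
    """Hypothetical application driver; every method here is invented."""

    def __init__(self):
        self.user = None
        self.cart = []

    def login(self, user):
        self.user = user

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self):
        # Succeeds only if someone is logged in and the cart is non-empty.
        return "confirmed" if self.user and self.cart else "failed"


def test_checkout_smoke():
    # One high-level test that the whole flow "works"; anything
    # finer-grained is left to exploratory testing.
    app = FakeApp()
    app.login("demo-user")
    app.add_to_cart("widget")
    assert app.checkout() == "confirmed"
```

The point isn’t the fake app – it’s the suite size. One test like this answers “can a user get through the product?”; a hundred GUI scripts mostly answer “how flaky is our automation today?”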
If you’re a tester who writes code, you’re not a test automator, you’re not a test automation engineer, and you don’t write test automation or test code.
You write code. And you test.
You write code that helps the product get out the door sooner. And if the code you write (or the testing you do) doesn’t help the product, why are you doing it in the first place?