Stop, If You Want To…

Well – that was a fun post. The dust hasn’t quite settled, but a follow-up is definitely in order.

First, some context. I was committed to giving a lightning talk as part of STAR East’s “Lightning Strikes the Keynotes” hour. I purposely didn’t pick a topic before I left, figuring I would come up with something while I was there. On Wednesday morning (the day of the lightning talks), I was out for a run thinking about the conference when I had the idea to talk about testers writing fewer automated tests. Programmers should be writing more tests (ok, “checks” for those of you who insist), and just about every mention of automation I heard at the conference described the challenges of automating end-to-end scenarios and the equal challenges of maintaining that automation – so it seemed like a topic worth exploring. I could have called the post Testers Should Stop Writing Some of Their Automation Because Programmers Should Do Some of It, and Some of Your Automation Isn’t Very Good Anyway and It Is Getting in the Way of Testing You Should Be Doing, but I chose a more controversial (and shorter) title instead. I purposely left out the types of automation (and other coding activities) that testers still should do, because I was afraid it would distract from the two main points (programmers need to write a lot more tests, and testers spend too much time writing and maintaining bad automation).

Also – I only had five minutes to talk, so I stuck to the main points:

  1. Developers need to own more testing.
  2. Testers need to stop wasting time writing ineffective automation.

But the fun really began with the comments. I’ve never had more fun reading blog comments, Twitter, and my mailbox (for some reason, a bunch of people prefer to email me directly rather than comment publicly). I thought I’d respond to a few areas where I got a lot of questions.

Does this work for both services and “thick” clients?

Although I didn’t call it out in my post, this sort of approach works really well with services. It can work with thick clients too; you just need to be a little more careful with your deployment, since rollback and monitoring won’t happen in real time the way they do for a service. Mobile apps are a great example of where you might run experiments with a limited number of users, but Windows (or Mac) apps could follow the model as well. For always (or often) connected devices, I see no reason not to push updates – of course, these updates should probably go through a bit more testing than a service before being pushed, because if something is broken, getting back to a safe state will take a bit of work.
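
For the “limited number of users” part, the usual building block is a deterministic rollout gate. Here’s a minimal sketch in Python – the feature name, user id, and percentage are made-up placeholders, not anything from a real system:

```python
import hashlib

def in_experiment(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a staged rollout.

    The same user always lands in the same bucket for a given feature,
    so an experiment can grow from 1% to 100% of users without anyone
    flapping in and out of the new code path.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent  # stable bucket in [0, 100)

# Ship the new code path to 5% of users, watch the monitoring,
# then raise the percentage (or set it to 0 to roll back).
if in_experiment("user-12345", "new-sync-engine", percent=5):
    print("new code path")
else:
    print("old code path")
```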

The important thing to note for those still in disbelief is that deploying test items to production is done all the time. eBay does it, Amazon does it, Netflix does it (I could go on, but believe me, it’s done a lot). I don’t have a link, but in the comments, Noah Sussman tells me that NASA does it.

What Do Testers Do? Where did my cheese go? You are annoying!

The fear of cheese moving is strong (“If developers do functional testing, what will I do?”). There’s a lot of testing activity left to do, even after you take away writing developers’ tests for them and wasting time on unneeded automation. Stress and performance suites and monitoring tools (for example) should give the coding testers on the team plenty of work to do. Data analysis is also necessary if you’re gathering data from customers. And thanks to Roberto for pointing out that sanity-checking UI changes and testing for accessibility, localization, or color changes could all use an onsite tester (or sometimes some code) to help.
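
To make “stress and performance suites” a little more concrete, here’s a minimal sketch of the kind of probe a coding tester might own – the URL, request count, and concurrency are placeholders, and a real suite would obviously do much more:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/health"  # placeholder endpoint

def timed_request(_):
    """Make one request; return (success, elapsed seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.monotonic() - start

# Fire 200 requests, 20 in flight at a time, and summarize.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_request, range(200)))

failures = sum(1 for ok, _ in results if not ok)
latencies = sorted(t for _, t in results)
print(f"failures: {failures}/200, "
      f"p95 latency: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```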

And honestly, now that I’ve made you think about it, there are a few places where testers writing automation is useful. But it’s about time testers stopped trying to write automation for cases that shouldn’t be automated. Want to try logging in and out 500 times? Automate that (ideally NOT at the GUI level) – go for it. Want to automate the end-to-end scenario of setting up an account and logging in? Please don’t bother. Instead, just add some monitoring code that lets you know if login is failing (something like the sketch below) and save yourself some frustration.
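
By “monitoring code” I mean something as small as a synthetic login probe that runs on a schedule and pages someone on failure. A minimal sketch – the endpoint, credentials, and alert hook below are all placeholders:

```python
import json
import urllib.request

LOGIN_URL = "https://example.com/api/login"  # placeholder endpoint
# Dedicated synthetic-monitoring account; real credentials belong in
# a secret store, not in source code.
CREDS = {"user": "synthetic-monitor", "password": "placeholder"}

def alert(message: str) -> None:
    # Stand-in for your real paging/alerting hook.
    print(f"ALERT: {message}")

def login_is_healthy() -> bool:
    """Make one synthetic login attempt against production."""
    req = urllib.request.Request(
        LOGIN_URL,
        data=json.dumps(CREDS).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

# Run this every minute from cron or your monitoring system.
if not login_is_healthy():
    alert("login is failing in production")
```

The point isn’t the code – it’s that a few lines of monitoring plus real production data can replace a brittle end-to-end suite.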

One other thing – the point about developers owning more testing – that one is equally true for services and clients. It doesn’t make sense to me at all to have a separate team verify functional correctness. It’s not “too hard” or “too much work” for developers to write tests. Developers need to own writing quality code – and doing this requires that they write tests. I was surprised that some people felt it was bad for developers to test their own code – but I suppose years of working in silos will make you believe there’s some sort of taint in doing so (there’s not!).

This is obviously a point (and a change that is really happening now at a lot of companies) that causes no small amount of fear in testers. I get that, but ignoring the change, burying your head in the sand, or justifying why testers need to own functional testing isn’t going to help you figure out how to function when these changes hit your team.

Comments

  1. At Google, writing tests as a developer is not a nice-to-have; it is a requirement. If you don’t, it is the same as writing your resignation letter.

    Some friends go as far as asking “do devs write unit tests?” before anything else when interviewing. If the answer is no, they don’t bother.

  2. The best devs I know do not begrudgingly write tests – they INSIST on it. Conversely, when I meet a dev who does not think they need to write tests, I write them off as a dinosaur.

  3. In addition to the other valuable activities testers can do, is it worthwhile for the coding testers to review the unit tests that the developers have written and executed?

    Also, some testers interact a great deal with the business users (via requirement reviews, UAT and training workshops). How can we use their deeper knowledge of likely user behavior in testing?

    Alan, thank you for your useful posts.

    1. Not sure if you asked me, but I’ll chime in: yes. Writing bad unit tests is easy. They might not really test the most important behavior of the unit, and they can be written in a way that reflects the unit’s implementation – which might block refactoring with false negative results.

      Also, such a review is a learning and communication opportunity 🙂 For one thing, most testing coders could learn from their dev peers.

  4. NASA isn’t deploying test items to production. They do have the capability for an “over space” software upgrade, but it is costly, very slow, and risky, and it is not the same as pushing poorly tested code to production.

  5. IMO your example scenario of “create a new account, and log in” is itself an edge case: although it’s an obvious candidate for E2E automation, it also works well with DDQ, in the sense that a failure after deployment is low-impact anyway. An end user who discovers this failure will probably just try again, and possibly succeed on the second try in a server-farm or graduated-rollout setting. Also, analytics on the live environment would make discovery of this failure very simple – simpler than for more complex scenarios.

    The E2E scenario of “log in, modify account settings, log out, get a new client, log in again, verify updated settings” is a simple example of a higher-impact product quality issue, and IMO the potential cost of shipping a broken product here is significant. So E2E automation really makes sense here.

  6. It’s not “bad” for devs to write unit tests or E2E automation, but there are downsides to devs writing the E2E automation: devs might have special knowledge of how to break their own feature, but other than that, they are not in the best position to think about breaking their own feature. A good tester is better at that. Also, devs are always under pressure to ship product, and anything that conflicts with that is not going to get the same emotional investment (to put it mildly), so quality could suffer.

    Alan, I love your writing! Thanks

    1. This. Exactly this. It is nice to say that devs should and do write tests, but what is the first thing to get dropped when pushing for a release? You guessed it. A good tester will easily develop better tests, and do it more efficiently.

    2. In addition to Matt’s comment on E2E automation, I think it is better for a tester to do the E2E automation – and again, it depends on where the boundaries are for E2E. In the transactional applications space I am in, it is quite a bit harder for a dev to write E2E automation when there are (or could be) over 20 apps spanning a program portfolio, with data expected to flow appropriately across all the systems.

      Also, for the most part, devs are (or could be) feature experts. We should strengthen that aspect – build awareness of the immediate upstream and downstream integrations while accentuating the positive of writing super-cool, bug-free features. Test should be able to help with the overall end-to-end automation for the (scripted or unscripted) user scenarios, so the team can leverage its strengths accordingly.

  7. I appreciate that you’ve thought a lot about automation, that you have been working in the field of automation for a long time, and that you’ve seen the good and bad in it. I find it really interesting that two Microsoft employees both conclude that developers should be doing more work. My question to you, Alan, is why you did not go all the way, like James Whittaker (http://blogs.msdn.com/b/jw_on_tech/archive/2014/04/11/stop-testing-like-its-1999.aspx), in saying that testers should be replaced with more developers. I do wonder whether MS culture has affected your particular thinking or whether this is a real shift in the industry.

    For what it is worth, I do agree with you that oftentimes those who write automation do a poor job of it (be it a developer or a QA engineer). Perhaps recommending that automation not be done is a better bet, given the variable but often bad quality of automated tests. UI tests in particular are less than stable. I have been pondering whether testing should be done in a more ‘embedded’ mode, where asserts are embedded in code (and hit by unit tests), annotations/attributes describe what values are generated for testing, or something like Pex is used to generate smarter unit tests by using the code paths to find interesting values. Recording what users do is another tool in the toolbox, no doubt about it. I’m not unaware of how much software goes out into the wild untested (http://www.stickyminds.com/article/why-testers-need-get-used-change), much less automated.

    Does instrumentation, as a tool, have more value than all the other tools in the toolbox? Why even test manually if you have enough metrics from your instrumentation? “Fire your testers” seems to be the ultimate conclusion of this line of thinking. As a guy who could take a dev job, I am set in this possible future world, so I don’t see myself as threatened. If this makes sense, it will happen – but we have seen a world without QA in the past, and it certainly was a buggy world. Perhaps TDD and unit tests will solve it, assuming devs have actually studied testing and can do it well. Even if you think testers should often just do manual testing, by keeping testers from coding are we keeping them from learning whitebox-style testing? Does this put developers in a worse position, because their unit tests might be poorly done while a tester with no code experience has little ability to help write them?

    Lastly, saying ‘if you want to’ is great and all, but people take your advice with some degree of seriousness because of your reputation and time working in the field. I appreciate that you are noticing a change and pointing it out – but in my opinion, that should have been what you started with, not what you ended with. In any case, I really did enjoy your posts, so thanks!

    – JCD
