In my role on Teams, I was “in charge” of quality – which eventually grew to cover everything from the moment code was checked in until it was deployed to our end users. At one point during development, we had a fully usable product with no known blocking issues. We were missing key features, performance was sometimes slow, and there were a few UI tweaks we knew we needed to make. In what may seem like a strange role for a tester, I pushed and pushed to release our product to more (internal) users. Those above me resisted, saying it “wasn’t ready.”
I was concerned, of course, with creating a quality product, but I was equally concerned with whether we were creating the right product. To paraphrase Eric Ries – you don’t get value from your engineering effort until it’s in the hands of customers. I coined the phrase “technical ‘self-satisfaction’” to describe the process where you engineer and tweak (and re-tweak) only for yourself or your own team. While the product did improve continuously, I still believe it would have improved faster had we released more often.
In my previous post, I talked about how it’s OK to wait for a future release to get that next important feature to users. While I truly believe there’s no reason to rush, I’m absolutely not against giving customers early access to minimal features (or a minimum-minimum viable product – MMVP).
The decision on whether to release now or later isn’t a contradiction. It’s a choice driven (mostly) by how well you can validate the business or customer value of the feature in use – and, if necessary, remove the feature. If you have analytics in place that let you understand how customers are using the feature, and whether that feature is valuable, it’s a lot easier to make the decision to “ship” the feature to customers. On the other hand, if you’re shipping blind – i.e. dumping new functionality on customers and counting on Twitter, blog posts, and support calls to discover whether customers find value in the feature – I suggest you wait. And perhaps investigate new lines of work.
One thing I consistently ask teams to do during feature design is to include how they plan to measure the value of the feature to customers or to the business. Often only a proxy metric is available, but a proxy works far better than nothing at all. Just as BDD makes you think about feature behavior before implementation, this approach (Analysis Driven Development?) makes you think about how you’ll know whether you’ve made the right thing before you start building the wrong thing.
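To make that a bit more concrete, here’s a minimal sketch (in TypeScript, with an entirely hypothetical analytics client and event names – not any real product’s API) of what deciding on a proxy metric up front might look like in code:

```typescript
// Hypothetical sketch: recording a proxy metric for feature value.
// The analytics client interface and event names are illustrative assumptions.

interface AnalyticsClient {
  track(event: string, properties: Record<string, string | number>): void;
}

// A thin wrapper so the "how will we measure this?" decision lives next to the feature.
class FeatureMetrics {
  constructor(private client: AnalyticsClient, private featureName: string) {}

  // Proxy metric: how often the feature is invoked, and by whom.
  used(userId: string): void {
    this.client.track("feature_used", {
      feature: this.featureName,
      user: userId,
      timestamp: Date.now(),
    });
  }

  // Proxy metric: did the user actually complete the flow the feature enables?
  completed(userId: string, durationMs: number): void {
    this.client.track("feature_completed", {
      feature: this.featureName,
      user: userId,
      duration_ms: durationMs,
    });
  }
}

// Example wiring – substitute whatever analytics backend you actually have.
const consoleClient: AnalyticsClient = {
  track: (event, properties) => console.log(event, properties),
};
const shareToChannel = new FeatureMetrics(consoleClient, "share-to-channel");
shareToChannel.used("user-123");
```

The point isn’t the particular events – it’s that the team has agreed, before building, on what signal will tell them the feature was worth building.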
The short story is that an analytics system that lets you evaluate usage and other relevant data in production, along with a deployment system that lets you quickly fix (or roll back) changes, means you can try pretty much whatever you want with customers. If you don’t have this net, you need to be very careful. There’s a fine line between the fallacy of now and a failure to learn.
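As a rough illustration of the “net” I mean – and to be clear, the flag service and names below are assumptions for the sketch, not any particular product’s API – a feature gated behind a remotely controlled flag can be turned off without a redeploy if the usage data says it isn’t working:

```typescript
// Illustrative sketch: gating a new feature behind a remotely controlled flag so it
// can be rolled back quickly. The flag source and flag names are hypothetical.

type FlagSource = () => Promise<Record<string, boolean>>;

class FeatureFlags {
  private flags: Record<string, boolean> = {};

  constructor(private fetchFlags: FlagSource) {}

  // Refresh flags from the remote source (e.g., on startup and on a timer).
  async refresh(): Promise<void> {
    try {
      this.flags = await this.fetchFlags();
    } catch {
      // On failure, keep the last known flags rather than flipping features off.
    }
  }

  isEnabled(name: string): boolean {
    return this.flags[name] === true;
  }
}

// Usage: ship the code dark, enable it for a slice of users, and "roll back" by
// flipping the flag rather than redeploying.
function render(flags: FeatureFlags): string {
  if (flags.isEnabled("new-compose-box")) {
    return "new experience"; // instrumented with the proxy metrics above
  }
  return "existing experience";
}
```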
I am a big fan of the MMVP-style approach to product development (although to be fair I’d never actually heard that term until now – only MVP). The product guy I work with is a massive believer in getting the product we have out this way and then testing, iterating, tweaking, etc. What analytics system do you recommend? Mixpanel?
All I’ve ever used have been internal tools at Microsoft, with a bit of Google Analytics.
I’ve used a system called Interana before for data analysis and was quite happy with it. ymmv