I’m a big fan of exploratory testing. I used the approach long before I knew what it was called, and I think it’s the core of good testing. It’s so ingrained in the general approaches I’ve used in my career that I often don’t differentiate between exploratory testing and plain-old-testing (I’ve gone as far as to say explicitly that all good testing is exploratory in nature, but as with most times I’ve used the word “all”, I’ve found exceptions).
I’ve been working on my ET skills for over 18 years, so it’s nice to see so much recent emphasis on the approach in blogs and books. As I said, I think it’s the core of good testing, so a strong foundation in ET can only help testing improve overall. As with any hot topic in any field, there are many strong advocates, and there are definitely “camps” of thought on what exactly ET is. On a side note, those of you who know me know that I am certainly not afraid to take jabs at just about any of the subjects I’m passionate about – both in person and in this blog. I once made a small (a few words) comment in a blog post about ET and “belonging to a club” that brought down a deluge of email reactions that I still ponder frequently. To this day, I both regret the remark (it was a thoughtless jab) and remain somewhat dumbfounded by the reactions to it.
Anyway…I recently introduced ET to my (still sort of new) team at Microsoft. Actually, I didn’t so much introduce it as reveal it, as I discovered a natural talent throughout the team that went far beyond anything I was capable of teaching them.
But let me back up a bit…
I have given several presentations to my team on a variety of testing topics. I’m a big believer in testers having a big “toolbox” and knowing when and where to use those tools. I noticed early on in my time on the team that some testers would get so caught up in “running tests” that they forgot to engage their brains and think about what they were doing. I also noticed that although most testers were experts in their feature area, some had limited knowledge of the rest of the product. So, at a regularly scheduled brown bag (lunch time tech talk), I gave an intro / overview on ET. To follow up, I asked if there were any volunteers who would like to take part in an ET session with me to practice (with the secondary goal of learning more about the overall product). I quickly had a group of four volunteers, so I set up a time, and we were off to the races.
The format was:
- 90 minute meeting.
- I took the first 10 minutes to talk about our goals for the meeting (learn ET, learn about the product, and learn about tools that may help us with ET). Finding bugs is a probable, but not necessary, side effect of these goals.
- For the next 75 minutes, we tested together.
- The last 5 minutes were a quick debrief and sharing of thoughts.
Going into the first session, I didn’t know at all what to expect. I didn’t know if people would learn, and I didn’t know if we’d find bugs (I worried that if we didn’t find bugs, people wouldn’t see the session as successful). I was also worried about engagement – what if people got stuck and “checked out”?
It turns out that I didn’t really need to worry. The room rocked with engagement. We agreed on a random place within the application to start, I threw out a few ideas, thought out loud for a moment, and before I knew it, I couldn’t keep up with the comments and bugs and excitement in the room. I was blown away by how well it went (again, it had nothing to do with me – I just pointed them in the right direction).
Based on that success, I tried another session. The same worries came to mind. The first session had gone so well that I had a high bar to live up to, and I figured its success may have been due to the “early adopters” who signed up so quickly for the first session. Once again, I was proven wrong as a completely different set of people filled the room with ET energy.
Since then, I’ve moderated four more sessions (including one via teleconference) and have had similar results in each of them. Better yet, some of those attendees have conducted their own ET sessions within their own teams (and had similar success). Individuals are also using the approach outside of specific sessions.
Some points to share include:
- We’ve had 3-4 attendees (plus me) for each session, and I’m pretty happy with the learning experience. I think this size group is big enough that people can learn from each other rapidly, and small enough that everyone gets to be heard. I also don’t think I could keep up with notes for a larger group.
- The session length also seems to work well. People are engaged throughout, and there’s barely a slowdown before the session ends (one attendee mentioned that they felt guilty because the session didn’t feel like work!).
- Everyone should get used to thinking out loud – especially early in the session. It helps people learn and build off of each other’s ideas.
- The focus on learning has worked well for us. I think that anytime testing focuses on finding bugs, it veers off track.
And that’s pretty much it. We’ll definitely continue to hone our ET skills, and I’m sure we’ll continue to experiment, add to our skill set, and tweak our ideas and processes as we go.
That’s awesome, Alan. Thank you for sharing the approach you took in those exercises.
Thanks Ram – I’m always glad to share!
I think we need to catch up sometime to talk about this topic… I just heard your comments on this in a test strategy webinar recording as well, coincidentally while re-reading this post.
You know how to find me – feel free to set up some time to talk and catch up.