PNSQC is still going strong (although it may be over by the time I post this). Sadly, my North American test conference tour forced me to miss the second day so I could make it to Toronto in time for the QAI Testrek conference.
My opinion of PNSQC is the same as always – it’s a fantastic conference and an incredible value for the price. Those who have followed the conference for a while may remember a not-so-distant past when Microsoft speakers were rare. This year, however, there were Microsoft speakers in nearly every time slot (including a keynote by Harry Robinson).
I enjoyed Jim Sartain’s (Adobe) talk on pushing quality upstream – especially a system he uses to track “early bugs” (errors found in unit testing or review). I would have liked to see more depth and examples, as I suspect there’s a lot more complexity here than he led us to believe.
The highlight of my one-day conference was Tim Lister’s keynote. First of all, I’m a huge fan of Peopleware (if you’re a manager, buy a copy now; if you’re not, buy a copy for your manager). Then, as if I weren’t already making lovey eyes at Tim from the front row, he talked about patterns – mentioning both the Alexander and GoF books before diving into examples of organizational patterns. I enjoyed his stories and style. It was a great start to the conference.
At lunch, I attended a discussion on solving complexity (hosted by Jon Bach). The big takeaway for me was an increased awareness that complexity is in the eye of the beholder – what seems complex to you may not seem complex to someone else. This deserves another post…but no promises.
My presentation, “Peering into the White Box: A Tester’s View of Code Reviews,” had a good turnout (better than I expected for a talk on code reviews). I can’t evaluate my own talks very well, but it seemed to go OK. My slides (which, as usual, may not make sense on their own) are below.
Another highlight was meeting @mkltesthead (Michael Larsen) in person. I gave him a copy of hwtsam for his troubles. I also gave a copy to the top tweeter of the conference, @iesavage (Ian Savage). I’m only sorry I didn’t get a chance to meet more people…but there’s always next year.
Alan – thanks for your comments on my PNSQC presentation. I would love to discuss early defect removal best practices and metrics with you. At Adobe we’ve been finding that a lightweight metric – the percentage of defects found and removed early (e.g. personal review, peer review, test-driven development, static code analysis) vs. late (e.g. integration or system testing, prerelease testing, shipped defects) – can be a very intuitive measure of how effective and efficient we are at finding and removing bugs. Many Adobe teams are setting targets for what percentage they find early. Not only is it more effective and efficient to find defects early; simply counting these defects (rather than logging them in a defect tracking system) also provides an early indicator of code quality.
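To make the arithmetic behind that percentage concrete, here’s a minimal sketch in Python of the kind of early-vs-late tally Jim describes. The phase names, counts, and function are my own invention for illustration – they are not Adobe’s actual system.

    # Hypothetical illustration of the early-vs-late defect tally described
    # above. Phase names and counts are invented for the example; "early"
    # phases are those before integration/system testing.

    EARLY_PHASES = {"personal review", "peer review", "TDD", "static analysis"}

    def early_removal_percentage(defects_by_phase):
        """Percentage of all counted defects found in early phases."""
        early = sum(n for phase, n in defects_by_phase.items()
                    if phase in EARLY_PHASES)
        total = sum(defects_by_phase.values())
        return 100.0 * early / total if total else 0.0

    # 60 of 100 defects caught before integration/system testing -> 60% early.
    counts = {"peer review": 35, "static analysis": 25,
              "system test": 30, "shipped": 10}
    print(f"{early_removal_percentage(counts):.0f}% of defects found early")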
Thanks for the comment (although I took the liberty of correcting my name).
I think I’ll take you up on that offer – we’re slammed right now, but can I email you in a month or so to set up a phone conversation?