Five for Friday – May 25, 2018

Wow – where did that week go? Here are a few things I found worth pondering this week.

  • There will be a full blog post with details, but I had one too many pieces of hardware fail after an ill-timed Windows update, and a few too many settings changes after the same, and I flipped out a bit.
    The good news is that I was able to get everything I needed to run in order to do my job running on Ubuntu in only a few hours.
  • I’m giving a presentation to a small group of peers next week on data analysis, and I’m reminded of this quote by Josh Wills at Slack.
    “Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician.”
  • Yet another great post from Michael Lopp on professional growth – Your Professional Growth Questionnaire
  • I’ve mentioned Radical Candor here before, but this is a great post on giving feedback – A Manager’s Guide for Effectively Giving Feedback
  • It’s GDPR day. I won’t hyperlink GDPR, but I will give you a link to The GDPR Hall of Shame

Five for Friday – May 18, 2018

Five things on my mind – or that I found interesting – this week:

  • I recently re-read a chunk of The Lean Startup, and highlighted yet-another-quote-about-failure-and-learning.
    “When blame inevitably arises, the most senior people in the room should repeat this mantra: if a mistake happens, shame on us for making it so easy to make that mistake.” — Eric Ries
  • …which leads well into this article (it’s from last month, but I just found it this week), on what it really means to Go Fast and Break Things.
  • I tweeted to my web provider last week about getting free SSL (they currently charge a minimum of $8 a month for SSL on angryweasel.com). The bot(?)-driven replies and emails lead me to believe that web hosting is a highly competitive market. Overall, it looks like I can save a few bucks a month AND get SSL up and running. So far, the leading contenders are Interserver and Chemicloud. I am, of course, open to other options (needs are hosting of two WordPress sites, domain email, and at least 10GB of storage).
  • It’s almost World Cup time, and it was interesting to see that a simulation done by UBS predicted that Germany would win (they’re probably right), but more interestingly, they gave Italy a 1.6% chance of winning. For my non-football/soccer readers, this is interesting because Italy did not qualify for the World Cup.
  • And finally, it’s yet-another well written article on the Netflix Tech blog – Full Cycle Developers at Netflix – Operate What You Build

The user’s guide to working for me

A while back, inspired by Roy Rappaport’s manager readme, I created my own. Unfortunately too late for most of my Unity hires, but a fun reflection exercise anyway. I originally had this on our internal wiki, but I recently moved it to my github site as a more permanent location.

A few of you already saw this go public last week in the hackernoon article, but for total transparency, you’re welcome to view my “manager readme” on my github site here.

Five for Friday – May 11, 2018

It’s time for the weekly visit inside things going through my head (and browser) this week. A bit of self-promotion in the last two bullets, but I hope you find these interesting anyway.

  • I’ve been thinking a lot lately (on many fronts) about change – and resistance to change, and I’m reminded of this quote by Arnold Bennett:
    “Any change, even a change for the better, is always accompanied by drawbacks and discomforts.”
  • I liked this article by Jurgen Appelo on The Sense and Nonsense Of Empowerment. For those of you who are managers, I’m curious to hear what you think.
  • There are really only two (four if you count Windows on my home PC, and my Xbox One) Microsoft products I use much anymore. One is Excel – but only when I need to do analysis or create visuals that Google Slides can’t handle.
    But Visual Studio Code has become my favorite code editor – and it keeps getting better. They release monthly, and every release has multiple valuable features and fixes. To be fair, I don’t write much code these days, but when I do, I’ve been turning to Code every time.
  • I was honored and happy to be included in Abstracta’s list of the 75 Best Software Testing Blogs. I was impressed that they actually poked around and found out what Angry Weasel means to me, and the list includes a lot of great resources.
  • In the “I think this is really cool” category, Ministry of Testing made a poster (and an accompanying article) of the Modern Testing Principles. Print it and post it (and forward me the feedback :}).

The Test Automation Snowman

I was nothing short of blown away over the past few days when some comments I made on twitter about UI automation caused a lot of folks to raise their eyebrows.

Here’s the tweet in question.

Feedback (blowback?) ranged from accusations of harmful blanket statements to lectures on how what I really meant was “checking”, not testing – with a handful of folks who seemed worried that the world I was describing was too scary compared to the world where they lived. Testing evolves at different speeds in different places, so the last point, at least, was expected. But I stand by the sentiment in that tweet.

The test automation pyramid is a model for thinking about distribution of your tests. Mike Cohn first (I think) wrote about it here. In my opinion (and observation), there is an unhealthy obsession in software testing with writing “automation” (where “automation” means UI automation – i.e. using tools like Selenium to manipulate the UI to test the system under test). A few folks recently have complained about the pyramid (“it’s not a pyramid, it’s a triangle!”; “There are more than 3 types of testing”, etc.).

“All Models are wrong, some are useful” — George Box

It’s a good model, with a lot of practical application. Two key takeaways from the pyramid model are:

  • Write tests at the lowest level where they can find the bug
  • Minimize the amount of top/UI level tests

I’m passionate about the second point for three main reasons:

  1. UI Tests are flaky. This was true 25 years ago when I first wrote UI automation, and it’s true today. They’re just not as reliable and trustworthy as lower level tests.
  2. Despite the fact that reliable UI Tests are difficult to write, we (industry) seem to think that UI automation is a reasonable entry point to the world of coding for “manual” testers. UI automation is a horrible way to start programming. Shell (or other) scripts to help set up test environments or generate test data would be a much better use of time (and achieve more success) than learning to code by writing UI tests.
  3. UI tests are s l o w. This is fine if you have a handful of tests, but a huge issue if you have hundreds, or thousands of UI tests.
    Note – if you have a large amount of testing that can only be done at the UI level, that’s a big red testability issue you should probably address before investing in expensive testing.

Let’s assume for a moment that the problems with UI automation stability have been solved (companies like testim.io have used ML to make some strides in this area, and despite the entry path problem I mentioned above, there is improvement in automation tools and tester skills). If we go with this assumption, then point #1 – and possibly point #2 above are no longer an issue.

Point #3, however, is not solvable. Tests that automate the UI are slow. Way slow. Like a glacier stuck in molasses slow. I once wrote a UI based networking test to create a folder, share it, connect to it, write files to it, delete files, and then unshare the folder. That test took a little less than two minutes. Problem was that I needed to test that process for every character possible in isolation (due to issues with DBCS code pages on non-Unicode Windows, where details would fill pages of no longer relevant information). On Chinese Windows, for example, this was (IIRC) somewhere near 8,000 characters.

I wrote an API level test that tested the entire code page – including varying lengths of folder and share names – that ran in under 5 minutes (and less than a minute for Western code pages). Of course, we still did spot checking (both exploratory, and via some UI automation), but testing at the level closest to where we could find bugs was the most efficient – both in proximity and speed.
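To make that comparison concrete, here’s a minimal sketch of what an API-level character sweep can look like. The share_folder/unshare_folder functions are hypothetical stand-ins of my own invention (the real test used the Windows networking APIs of the day) – the point is simply that iterating thousands of characters is cheap when no UI is involved.

```python
# Sketch of an API-level character sweep. share_folder/unshare_folder are
# hypothetical stand-ins for a real platform sharing API.

import tempfile
import time
from pathlib import Path

def share_folder(path: Path, share_name: str) -> str:
    """Hypothetical stand-in: a real test would call the OS sharing API here."""
    (path / ".share").write_text(share_name, encoding="utf-8")
    return (path / ".share").read_text(encoding="utf-8")

def unshare_folder(path: Path) -> None:
    (path / ".share").unlink()

def sweep(code_points: range) -> list:
    """Share/unshare a folder once per character; return characters that failed."""
    failures = []
    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)
        for cp in code_points:
            name = f"share_{cp:04x}_{chr(cp)}"
            folder = root / f"f{cp:04x}"
            folder.mkdir()
            try:
                if share_folder(folder, name) != name:
                    failures.append(chr(cp))
                unshare_folder(folder)
            except OSError:
                failures.append(chr(cp))
    return failures

start = time.perf_counter()
failures = sweep(range(0x20, 0x7F))  # printable ASCII as a stand-in for a code page
print(f"{len(failures)} failures in {time.perf_counter() - start:.2f}s")
```

Even with real filesystem work per character, a sweep like this finishes in seconds rather than the days a two-minute UI pass per character would take.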

Another view of tests I like is the size model from Google. Rather than dwell too much on what makes a test a unit or integration test, think of tests in sizes – where tests of a certain duration are classified at different levels. This model works well (and solves the pyramid complaints I’ve seen recently), but it doesn’t have a visualization.
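As a rough illustration of the size model, here’s a sketch of how a test could declare its size and fail if it outgrows its time budget. The budget numbers here are my own illustrative assumptions, not Google’s official limits.

```python
# Sketch of size-based test classification: each test declares a size, and a
# decorator fails any test that exceeds that size's time budget. Budgets are
# illustrative assumptions, not Google's official numbers.

import functools
import time

SIZE_BUDGETS_SECONDS = {"small": 1.0, "medium": 60.0, "large": 900.0}

def sized(size: str):
    """Decorator that fails a test exceeding its declared size's time budget."""
    budget = SIZE_BUDGETS_SECONDS[size]
    def wrap(test_fn):
        @functools.wraps(test_fn)
        def run(*args, **kwargs):
            start = time.perf_counter()
            result = test_fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > budget:
                raise AssertionError(
                    f"{test_fn.__name__} took {elapsed:.1f}s, over the "
                    f"{budget:.0f}s budget for a {size} test")
            return result
        run.size = size  # lets a runner report or filter by size
        return run
    return wrap

@sized("small")
def test_addition():
    assert 1 + 1 == 2
```

A runner could use the `size` attribute to run only small tests on every commit and save the large ones for a nightly pass – the classification is by duration, not by what the test touches.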

So – without further babbling, I created this alternate view – The Test Automation Snowman.

Use it, or ignore it. But I still beg you to consider writing far fewer UI based tests.

Five for Friday – May 4, 2018

It’s Star Wars Day! Here’s what I found interesting this week.

  • Quote I’m pondering (or quote within a quote, as it’s the authors of The Coaching Habit who are quoting Bernard Shaw):
    “Bernard Shaw put it succinctly when he said, ‘The single biggest problem with communication is the illusion that it has taken place.’”
  • Book I’m reading now: Why We Sleep: Unlocking the Power of Sleep and Dreams
  • I’m big on learning from failure – and this article on Blind Spots in Learning and Inference makes a lot of interesting points about the blind spots we often have when looking at failures (it focuses on two widely famous failures).
  • Are you kidding me? More CPU Flaws?
  • I taught a workshop earlier this week on web testing tools. We spent a chunk of time on Postman, but I wanted to give credit to Danny Dainton, as I borrowed from (and referenced) his github repository on All Things Postman. Thanks Danny.

Output, Results, and Things That Are Difficult to Measure

This is a brainstorm rather than a well thought out (or thought out at-all) blog post. I briefly considered a tweet stream, but wanted to give my few followers a break.

I’ve been thinking a bit recently about what it means to be productive on a software team – or how that productivity is viewed, tracked, or measured. I’m not going to dive into performance reviews, as those have a lot more moving parts and dynamics – just the general view of results – or output – from any individual, and any organization.

Traditionally, “output” is a bullet point list of things-you-did. This is output – or production. It’s certainly measurable, and if someone has no output or results, there’s likely an issue. But there’s another kind of output that doesn’t get talked about enough. What about the person who makes other people better? The person who coaches, mentors, and finds ways to make people around them productive?

Jessica Kerr calls this attribute ‘generativity’.

I like the term, and have begun using it recently…which is probably why I’ve been thinking more about this over the past few days.

Some points to consider (and potentially discuss):

  • Output is linear. I can produce 0 things, and I can produce n things. More things == more production. (Note that not all produced things are equal.)
  • Generativity can be positive or negative. I can help the team achieve more than the sum of their parts, or I can be an asshole and hold everyone back.
  • You can be both Generative and Productive. While you can be Productive without being generative, I don’t know that it’s a good thing to be Generative and not Productive. More on this below the bullets.
  • Generativity requires leadership – this means that generative people tend to be more experienced (and that less experienced people may focus entirely on output).
  • For those who care about measuring, Productivity is much easier to measure than Generativity.

I don’t know of a system that’s done a decent job figuring out how to measure the non-tangible output (Generativity) of team members. I’ve seen highly generative (IMO) people viewed poorly because their output was difficult to measure (which is why I included the third bullet above), but I think generative people are critical for success in most teams. Of course, the best generative people I know also produce some measurable results as well, so perhaps that combination is necessary.

Five for Friday – April 27, 2018

Here are five of the things I found interesting this week.

  • It’s no secret that I’m a big fan of Patrick Lencioni – this quote from him still makes me chuckle…and also wonder a bit about what happens when I’m not around.
    “As a leader, you’re probably not doing a good job unless your employees can do a good impression of you when you’re not around.”
  • I’ve mentioned (or think I have) that I’m not afraid of working myself out of a job (or role). I ponder this a lot, and came across this (not new) article on Working Yourself out of a Job to Accelerate your Career.
  • I enjoyed this article on how important it is to stratify data in order to get accurate analysis.
  • Jesper Ottosen ponders, Could Modern Testing Work in the Enterprise?
  • Most importantly, I have a new dog. Meet Terra!

Five for Friday – April 20, 2018

  • I’ve been thinking a lot about leadership lately, and this quote from The 5 Levels of Leadership (John Maxwell) rings true in so many of my thoughts and reflections.
    “If you think you’re leading but no one is following, then you are only taking a walk.”
  • I’ve mentioned my love of personal kanban here before, but I have a thing for productivity in general. I read this article on time blocking which reflects a lot of the things I’ve figured out on my own over the last decade or more.
  • Within time blocks, when I really need to focus, I do two things. First, I listen to orchestral music – jazz and music with lyrics are both distractions to me. Some people swear by pomodoro, but when I was writing HWTSaM, I fell in love with (10+2)*5 – which is simply 10 solid minutes of focus, followed by a 2 minute break, repeated 5 times – followed by a longer break. At one point in writing that book, I was so far behind that I took a week of vacation to catch up (which is a little ridiculous for a book where I received 0% of the sales) – but I would do 3 sets of (10+2)*5 every morning, then another every afternoon, and I was cranking out pages.
  • To help, I even wrote a Windows Vista gadget (anyone remember those?) app for this technique (link if you’re massively curious). It’s odd that I procrastinated on writing the book to write an app to help with procrastination – but that’s me!
  • Finally, this is mine (and Brent’s) but worth another share. Our Modern Testing principles (listen to the podcast for a whole bunch more background) can be found by going to moderntesting.org
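For the curious, the (10+2)*5 cadence mentioned above is simple enough to sketch in a few lines of Python – a toy stand-in for that old Vista gadget, with function names and defaults of my own choosing:

```python
# A minimal (10+2)*5 timer sketch: five cycles of 10 minutes of focus plus a
# 2 minute break, then a longer rest. Durations and callbacks are parameters
# so you can dry-run the schedule without waiting an hour.

import time

def ten_plus_two(focus=10 * 60, rest=2 * 60, cycles=5,
                 announce=print, sleep=time.sleep):
    """Run one (focus+rest)*cycles set; returns total seconds scheduled."""
    total = 0
    for cycle in range(1, cycles + 1):
        announce(f"Cycle {cycle}/{cycles}: focus for {focus // 60} min")
        sleep(focus)
        announce(f"Cycle {cycle}/{cycles}: break for {rest // 60} min")
        sleep(rest)
        total += focus + rest
    announce("Set complete – take a longer break")
    return total

# Dry run to preview the schedule without waiting:
# ten_plus_two(sleep=lambda s: None)
```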

Five for Friday – April 13, 2018

  • Quote I’m pondering: “Anyone operating with a theory of leadership that assumes that experts know what is best, and that then the leadership problem is basically a sales problem in persuasion, is in our experience doomed at best to selling partial solutions at high cost” – Heifetz, et al in The Practice of Adaptive Leadership.
    I’ve been (re)studying adaptive leadership quite a bit recently, and this hit home on a few of my poor-leadership experiences.
  • What I’m reading – The Five Levels of Leadership, by John Maxwell. I’ve read one of Maxwell’s other books, but I’m finding this one really valuable in forming a model of how leaders grow into better leaders.
  • I enjoyed (and shared with my team at Unity) this article on Continuous Improvement and Feature Branching. Well worth the read.
  • There was a time when PSP (Personal Software Process) was – well, not popular, but far more interesting than it should have been. I thought of PSP when I read this article on tracking time, but completely without the icky feeling of bean-counting that PSP always gave me. In fact, I’m going to try tracking what I do for a month purely for self-reflection on my own prioritization. Definitely interesting.
  • Finally, for all of my readers who enjoy my love of wine, there’s this article from National Geographic – Our 9,000-Year Love Affair With Booze