In my last post, I tried really hard to focus on the message of employee reviews, and how that message shouldn’t be a surprise. I said things like: “I’m not a fan of the system…”, and “It doesn’t matter how messed up the system is…” – yet the mention of the dreaded curve of the MS review system was still the focal point of some of the comments, and of many private emails. I had always planned to follow the Three Surprises post with my view of the curve-based review system, but I’ve gone back and forth several times in the last day. If you’ve skimmed ahead, you probably figured out that I decided to go ahead and share my thoughts. There’s probably nothing new here, but I think there’s something here that both MS and non-MS employees will be able to learn from.
As I also mentioned in my last post, I’m not particularly fond of the curve, but consider it the tax I pay in order to get paid extremely well to do a job I enjoy. The most recent version of the review curve is the third variation I’ve experienced at Microsoft and is only 18 months old. All versions have their pros and cons, and over beers, I will gladly share the full story and dozens of anecdotes. For this forum, however, I’ll keep things (relatively) short and focus on the latest revision.
The most positive aspects of the curve are its simplicity and transparency. There’s little mystery to the review system – peer groups (people in the same level band) are stack ranked from 1-n, and then the rank is broken into ratings groups based on known percentages. The bonus percentages for each score and level band are known, so there’s not much mystery in the system (other than the surprises I mentioned in the last post). It’s easy to apply and easy to manage at all levels of the organization.
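The mechanics really are that simple – simple enough to sketch in a few lines of code. To be clear, this is just an illustration: the percentages, the function, and the team below are all invented for the example, not the actual numbers or tooling.

```python
# Hypothetical sketch of curve-based bucketing. The distribution
# percentages here are made up for illustration only.

def assign_ratings(ranked_names, distribution):
    """Split a stack-ranked list into rating buckets by percentage.

    ranked_names: employees ordered from top (index 0) to bottom.
    distribution: list of (rating, fraction) pairs; fractions sum to 1.0.
    """
    n = len(ranked_names)
    ratings = {}
    start = 0
    for rating, fraction in distribution:
        end = start + round(n * fraction)
        for name in ranked_names[start:end]:
            ratings[name] = rating
        start = end
    # Any rounding leftovers fall into the last (lowest) bucket.
    for name in ranked_names[start:]:
        ratings[name] = distribution[-1][0]
    return ratings

team = [f"emp{i}" for i in range(1, 11)]  # already stack ranked 1-10
curve = [(1, 0.2), (2, 0.5), (3, 0.3)]   # invented percentages
print(assign_ratings(team, curve))
```

With this toy curve, emp2 (the last “1”) and emp3 (the first “2”) sit one slot apart in the ranking but land in different buckets – which is exactly the cliff-edge effect discussed below.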
On the negative side (from my view, at least), there are just a few things worth pointing out. Microsoft values differentiation of employees. What this means in practice is that once the lines are drawn to determine review rankings, everyone in each ratings group receives the exact same reward. In a rating system of 1-5, this means that the person who just missed getting a “1” rating receives the exact same rewards as the person who barely squeaked out of a “3” to get a “2”. As I mentioned above, this makes the system simple, but it can also send some difficult messages. Despite this, since the goal is to differentiate employee rewards, this aspect of the curve works as promised.
It’s also a big challenge for a ‘superstar’ team. The curve applies to the team (or, for higher levels, across a division), but it’s possible to get a lower review score just by being on a good team. Even worse, the opposite is true. One can guarantee a good review score by finding a dysfunctional team and stepping in as the superstar.
And that leads to the part I really worry about – the side effects of the curve and competition. Behaviors visibly change as calibration season approaches. Some people begin to suck up in an attempt to play the “review game”, while others refuse to take risks they would take at any other part of the year. A co-worker was commenting recently on strange behavior in a meeting when I asked him, “would the conversation have been different at a different time of the year?” (the answer was a resounding yes). I am a huge believer in collaboration (I once had a manager who said collaboration was my super-power). The problem is that collaboration and competition don’t work well together. It’s in my best interest (as far as rewards go) to do everything I can to make sure all of my peers fail – and they, likewise, should feel the same. Given the system, that could indeed happen, but fortunately most teams are able to rise above it – despite the potential hit to our own rewards and career growth. Alfie Kohn has written some fantastic books on this subject – including two I keep on my desk this time of year: Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes, and No Contest: The Case Against Competition. To me, Kohn’s work says a lot about review systems in general. Combine these with Dan Pink’s work on motivation, and you can form a pretty good picture of review systems for yourself.
Psychologically, as you can imagine, it’s a mess. Once the posturing is over and the calibration happens, news of the actual review scores slowly leaks out – causing a zombie-like depression over a fair share of employees. After a few months, the fog lifts, and it’s back to business as usual – for another 8-10 months or so.
I realize that, by sharing these thoughts, I’m opening the floodgates for criticism and complaints about the system. I live with this system in order to get to do what I do. If I ever get to a point where my job isn’t the best possible job I can imagine for myself, or if a review system conflicts with what I get to do, my story will change. Until then, it’s definitely a price I can pay.