I was asked this via email:
I just read your article
http://www.developer.com/tech/article.php/3715196
And I came across this principle:
Don’t use metrics to compare teams. Metrics are best viewed as probes and indicators of problem areas that may warrant further investigation.
Interestingly, this is one of the drivers behind my employer’s push to collect metrics.
My responses:
Simply put, no two teams are alike.
From another perspective, comparing two teams’ numbers completely misses the goal of doing agile software development. A good agile team is continually looking for ways to improve its productivity, measured within the team itself. Most teams have massive room for improvement on their own, never mind worrying about what the other teams are up to. They don’t need the distraction of worrying about keeping their jobs; what they need instead is good leadership and support.
Competition in the marketplace can be a great way to end up with a better product, but creating competition between two teams producing different products is artificial and pointless.
There’s also no “fair” way to compare two teams working on different products. Many factors can give one team a significant advantage or disadvantage over another: difficult vs. rapid technology (e.g. C++ vs. Ruby), team size, geographical considerations, quality of the existing code, weak team members, how long the team has worked together, bad managers, fire drills, crankier customers, and so on. How do you provide a good measurement of productivity that accounts for all these factors?
You might come up with something that seems reasonably fair, but you’re better off just focusing on getting each team to use their own past productivity as a baseline. The only realistic measurements are relative (and even those are skewed by things such as departing team members).
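To make “relative to their own past productivity” a little more concrete, here is a minimal Python sketch (my own illustration, with made-up numbers and a hypothetical function name, not anything from the original article): it compares each iteration’s velocity against the same team’s trailing average, rather than against another team’s totals.

```python
def relative_velocity(velocities, window=3):
    """Return each iteration's velocity as a ratio of the same team's
    trailing average over the previous `window` iterations."""
    ratios = []
    for i in range(window, len(velocities)):
        baseline = sum(velocities[i - window:i]) / window
        ratios.append(velocities[i] / baseline if baseline else None)
    return ratios

# Example: one team's story points per iteration (made-up numbers).
print(relative_velocity([21, 19, 24, 26, 23, 30]))
# -> [1.21875, 1.0, 1.232...]  -- a trend within the team, not a cross-team score
```

Even a simple relative measure like this is only a probe: a dip might mean a departing team member or a gnarly story, not a “worse” team.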
It can also be counter-productive to focus on racing other teams: What if one team finishes 50% more stories but delivers 20% more defects, resulting in a customer threatening to drop your product? What if you worry about technical metrics (such as code coverage), and one team gets high coverage but their tests stink? (I’ve seen this happen. Test quality is subjective, and easy for a seasoned TDD developer to judge, but not for anyone else.)
If management focuses on a small number of metrics and mandates certain goals for them, you can guarantee that the smarter teams will do what they can to play the numbers game. More often than not, a single metric has little to do with the legitimate goals: customer satisfaction, on-time delivery, high quality. Due to a management mandate, I saw one team rapidly increase their code coverage by cutting and pasting tests, producing tests of such low quality that they eventually abandoned them. The end result was a complete waste of the investment in unit tests, because the team wasn’t concerned with what really mattered.
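A contrived Python example (not from that team’s actual code, just an illustration) of how coverage can be gamed: the first “test” executes every line of the function, so a coverage tool reports 100%, yet it asserts nothing and would never catch a bug. The second test is what a meaningful test looks like.

```python
def apply_discount(price, percent):
    return price - price * percent / 100

def test_apply_discount():
    # Runs the code, so coverage tools count every line as covered...
    apply_discount(100, 10)
    # ...but there's no assertion, so any behavior "passes".

def test_ten_percent_off_of_100_is_90():
    # A test that actually matters checks behavior, not just execution.
    assert apply_discount(100, 10) == 90
```

Both tests contribute identically to the coverage number; only one contributes to quality.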
Short-term goals may make the numbers look good, but in the long run you will pay for the misguided focus.
Worse, it can be demoralizing. I worked in a large Fortune 500 company where a VP with seemingly good intentions pitted teams against one another by issuing letter grades (A, B, C, D, F). He insisted on meeting with each of the 30+ teams in his organization and lambasting those he perceived to be inadequate. Never mind that some teams were graded down for factors that had nothing to do with team productivity and everything to do with customer or internal politics.
The only time I would even consider a team-vs-team competition is if both were high-performing, kicking-rear teams. That might be fun. But it’s hard enough to find one top-notch team, and almost impossible to find two in the same place.