Pitting Teams Against One Another

I was asked this via email:

I just read your article
http://www.developer.com/tech/article.php/3715196

And, I came across this principle:
Don’t use metrics to compare teams. Metrics are best viewed as probes and indicators of problem areas that may warrant further investigation.

Interestingly, this is one of the drivers behind my employer’s desire to collect metrics.

My responses:

Simply put, no two teams are alike.

From another perspective, comparing two teams’ numbers completely misses the goal of doing agile software development. A good agile team is continually looking for ways to improve its own productivity, i.e. improvement within the team itself. Most teams have massive room for improvement on their own, never mind worrying about what the other teams are up to. They don’t need the distraction of worrying about keeping their jobs; what they need instead is good leadership and support.

Competition in the marketplace can be a great way to end up with better product, but creating competition between two teams producing different products is artificial and pointless.

There’s also no “fair” way to compare two teams working on different products. Many factors can give any one team a significant advantage or disadvantage over another: difficult vs. rapid technology (e.g. C++ vs. Ruby), team size, geographical considerations, quality of existing code, weak team members, how long the team has worked together, bad managers, fire drills, crankier customers, and so on. How do you provide a good measurement of productivity that accounts for all these factors?

You might come up with something that seems reasonably fair, but you’re better off just focusing on getting each team to use their own past productivity as a baseline. The only realistic measurements are relative (and even those are skewed by things such as departing team members).
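To make “each team against its own past productivity” concrete, here is a minimal sketch of one way such a relative measurement might look. The three-iteration rolling window and story points as the unit are illustrative assumptions on my part, not a prescription.

```python
# Sketch: measure a team only against its own recent history.
# The three-iteration window and story-point velocity are assumptions.

def rolling_baseline(velocities, window=3):
    """Average velocity over the most recent `window` iterations."""
    recent = velocities[-window:]
    return sum(recent) / len(recent)

def relative_to_baseline(velocities, window=3):
    """Latest velocity expressed as a ratio of the team's own prior baseline."""
    baseline = rolling_baseline(velocities[:-1], window)
    return velocities[-1] / baseline

# A team drifting upward against itself, regardless of what other teams do:
velocity_history = [18, 20, 19, 23]   # story points per iteration
print(round(relative_to_baseline(velocity_history), 2))  # 1.21, i.e. ~21% above its own baseline
```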

It can also be counterproductive to focus on racing other teams. What if one team finishes 50% more stories but delivers 20% more defects, resulting in a customer threatening to drop your product? What if you worry about technical metrics (such as code coverage), and one team gets high coverage but their tests stink? I’ve seen this; test quality is subjective, easy for a seasoned TDD developer to spot but not for anyone else.

If management focuses on a small number of metrics and mandates certain goals for them, you can guarantee that the smarter teams will do what they can to play the numbers game. More often than not, a single metric will have little to do with legitimate goals (customer satisfaction, on-time delivery, high quality). Under one such management mandate, I saw a team rapidly increase their code coverage by cutting and pasting tests, producing tests of such low quality that they were eventually abandoned. The end result was a complete waste of the investment in unit tests, because the team wasn’t concerned about what really mattered.
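The coverage version of the numbers game is easy to illustrate. In this hypothetical snippet (the function and tests are made up for the example), both tests execute the same lines, so a coverage tool scores them identically, but only the first one would ever catch a defect.

```python
import unittest

def apply_discount(price, percent):
    """Made-up production code for the sake of the example."""
    if percent < 0 or percent > 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

class MeaningfulTest(unittest.TestCase):
    def test_ten_percent_off(self):
        # Asserts on the result: fails if the calculation regresses.
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

class CoverageOnlyTest(unittest.TestCase):
    def test_just_call_it(self):
        # Executes the same lines, so coverage goes up, but verifies nothing.
        apply_discount(200.0, 10)

if __name__ == "__main__":
    unittest.main()
```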

Short-term goals may make the numbers look good, but in the long run you will pay for the misguided focus.

Worse, it can be demoralizing. I worked at a large Fortune 500 company where a VP with seemingly good intentions pitted teams against one another by issuing letter grades (A, B, C, D, F). He insisted on meeting with each of the 30+ teams in his organization and lambasting those he perceived to be inadequate. Never mind that some teams’ grades were driven by factors that had nothing to do with team productivity and everything to do with customer or internal politics.

The only time I would even consider a team-vs-team competition is if both were high-performing, kicking-rear teams. That might be fun. But it’s hard enough to find one top-notch team, and almost impossible to find two in the same place.

Tradeoffs

[Graph: time to “done,” time to “done done,” and total code size under TDD vs. code ‘n’ fix]

This graph, intended to show the difference in outcomes between TDD and not, is a sketch of my observations combined with information extrapolated from other people’s anecdotes. One data point is backed by third-party research: studies show that it takes about 15% longer to produce an initial solution using TDD. Hence the graph shows the increased amount of time under TDD to get to “done.”

The tradeoff mentioned in the title is that TDD takes a little longer to get to “done” than code ‘n’ fix. It requires incremental creation of code that is sometimes replaced with incrementally better solutions, a process that often results in a smaller overall amount of code.
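For anyone who hasn’t watched that rhythm up close, here is a toy illustration of the incremental creation I’m describing (my own example, not anything taken from the graph): each new test forces a small change, and cruder passing versions get replaced by incrementally better ones along the way.

```python
import unittest

def describe(n):
    # Final shape after a few red-green-refactor cycles; earlier passing
    # versions were cruder (the very first could simply return str(n)).
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class DescribeTest(unittest.TestCase):
    def test_plain_number(self):               # first test: a hard-coded return passes
        self.assertEqual(describe(1), "1")

    def test_multiple_of_three(self):          # forces the first real branch
        self.assertEqual(describe(9), "Fizz")

    def test_multiple_of_three_and_five(self): # forces the combined rule
        self.assertEqual(describe(30), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```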

When doing TDD, the time spent to go from “done” to “done done” is minimal. When doing code ‘n’ fix, this time is an unknown. If you’re a perfect coder, it is zero! With some sadness, I must report that I’ve never encountered any perfect coders, and I know that I’m not one. Instead, my experience has shown that it almost always takes longer for the code ‘n’ fixers to get to “done done” than they optimistically predict.

You’ll note that I’ve depicted the overall amount of code in both graphs as about the same. In a couple of cases now, I’ve seen a team take a TDD mentality, apply legacy test-after techniques, and subsequently refactor a moderate-sized system. In both cases they drove the amount of production code down to less than half the original size while regularly adding functionality. But test code is usually as large as, if not a bit larger than, the production code, so in both of these “legacy salvage” cases the total amount of code ended up being more or less a wash. Of course, TDD provides a large bonus: it produces tests that verify, document, and allow further refactoring.
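In round, invented numbers, the arithmetic behind that wash looks something like this:

```python
# Invented numbers, just to show why halved production code plus
# similarly sized test code nets out to roughly the original total.
before_production = 100_000   # legacy production lines, no tests
after_production  = 45_000    # less than half after the salvage work
after_tests       = 50_000    # test code about the size of the production code
print(before_production, after_production + after_tests)  # 100000 vs. 95000: a wash
```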

Again, this graph is just a rough representation of what I’ve observed. A research study might be useful for those people who insist on such things.

Function Points Are a Crock

Every once in a while, someone will ask about the value of incorporating function points. For those under, say, the age of 40: function points were devised in the late 1970s as an attempt at a consistent metric for estimating the size of a software system.

Sure, function points can end up being fairly accurate from time to time, and they may help when comparing effort across two software projects.

The problem is the investment required to derive function points. Function point analysis is expensive (although there have been several initiatives to simplify the effort). In order to be useful across projects, a metric has to be calculated consistently, and in the case of function points, consistent calculation is a meticulous effort. Generally, you need an “expert” to do well with function points.
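To give a feel for why, here is a rough sketch of just the unadjusted count; the weights mirror the classic IFPUG low/average/high table, and the component tallies are invented. The arithmetic is the easy part. The meticulous, expert-dependent work is classifying every input, output, inquiry, and file and judging its complexity consistently, and that’s before any value adjustment factor is applied.

```python
# Unadjusted function point count: weights follow the classic IFPUG table;
# the example tallies are invented for illustration.
WEIGHTS = {
    "external_input":          {"low": 3, "average": 4,  "high": 6},
    "external_output":         {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":        {"low": 3, "average": 4,  "high": 6},
    "internal_logical_file":   {"low": 7, "average": 10, "high": 15},
    "external_interface_file": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(tallies):
    """tallies: {component type: {complexity: number of components}}"""
    return sum(WEIGHTS[ctype][complexity] * count
               for ctype, by_complexity in tallies.items()
               for complexity, count in by_complexity.items())

example = {
    "external_input":          {"low": 12, "average": 5},
    "external_output":         {"average": 8},
    "external_inquiry":        {"low": 6},
    "internal_logical_file":   {"average": 4, "high": 1},
    "external_interface_file": {"low": 2},
}
print(unadjusted_function_points(example))  # 179
```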

Fortunately, to save the day, we have high-priced consultants who can come in and explain (a) why agile sucks and (b) why they are the ones who can do these ridiculous calculations.

Horse hockey. I’ve seen results as good or better from agile estimating and planning techniques, or, agile aside, from measures that are much simpler to calculate (“how many screens are there?”). These techniques cost less, are far less onerous, and anybody can quickly learn them without hiring an overpriced consultant (or “software economist”).
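A screen-count style of proxy really is as simple as it sounds; something like this, with every number made up for illustration:

```python
screens = 24                   # counted off the wireframes
iterations_per_screen = 0.75   # taken from this team's own past releases
print(f"~{screens * iterations_per_screen:.0f} iterations of work")  # ~18 iterations
```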

Should you trust me, a “high-priced consultant” myself, to tell you function points are dead? Look around. There’s a good reason most organizations have moved off such heavyweight nonsense. All of these wondrous efforts and calculations make managers and executives feel good: function points look like a lot of effort went into them, and fancy-looking calculations back that effort up. But that’s about it. They don’t really add value to a project. What adds value is getting quality product in front of a customer on a consistent basis, giving them what they ask for and expect.

A good consultant will teach you how to start solving your own problems. A questionable one will sell you complexity you don’t need.

Comments:

Why measure an application with function points, a “metric” that is not really a metric but a rating?

Imagine calculating “car points”: trying to establish a single value for a car with points for wheels, chassis, gears, back seat, and so on. Has anyone tried?

Not possible, I guess!

Our experts group makes function points available at the lowest cost in the whole world, I guess. We charge around 50 cents to 1 dollar for each RFP page, which converts to something like a $3 to $5 per hour rate.

We have also come up with a function point tool that is used primarily by our experts to make function point submissions, but it can be used by anyone who wishes to practice function points.

People are invited to use this free online FP tool

http://www.econcinnity.com/eConcinnity/faces/work/estimation/functionpoints/FunctionPointCapture.jsp

If you register on the site (go to the home page to register), it will let you save your work as you go. There are other tools as well, such as WBS and RCL (a requirements capture language under research) tools; please have a look at the services tab on the home page.

Do send your suggestions for improvement.

Thanks
Animesh
