On Email Lists and Advocacy

I try to keep up with all of the various agile “Yahoo! Group” lists: extremeprogramming, scrumdevelopment, refactoring, testdrivendevelopment, agile-testing, and so on. I post once in a while when I see a topic I feel qualified to respond to or that I’m passionate about. Lately I’ve been getting a lot less value out of these lists, some of which I’ve been following for ten years.

Many of the lists degrade into passionate statements of position, i.e., advocacy. Inevitably, people get burnt: tempers flare, people get upset, some stomp their feet, some simply withdraw. I myself have gotten burnt on this same list, feeling that someone (probably not coincidentally, the same person who is withdrawing now) was looking simply to pick apart every single word written, to find as much fault with it as possible.

I withdrew completely from that discussion. I’ve also withdrawn from perhaps a couple of other difficult discussions (out of the hundreds I’ve engaged in over the years). Sometimes it was because I felt the environment was too hostile, and I justified it to myself by thinking that the individuals involved were being immature or offensive or whatever. But that’s not courage speaking. I looked back at my most recent example and regretted how I handled it. Withdrawing didn’t solve anything.

Sometimes a simple statement, meant to be innocuous by the person who wrote it, is taken as a stab or an affront. We can pout, take our toys, and go home, but that’s neither useful nor commendable. Courage can help us find a way to face the challenge and learn how to get past the issue. We don’t have to agree with everyone or even get along with them, but sometimes a sour incident can lead to a valuable relationship.

So how is this at all relevant to anything? Well, if we are to succeed, agile software development requires that we face challenges rather than bury them in isolated cubes. Similar clashes occur not only on email lists but in real life, and we’re going to see them more frequently in highly collaborative teams. Conflicts that are not handled properly detract from a team’s ability to deliver. The XP value of courage is essential to learning how to regularly deliver quality software.

Trailing the Way in 2009

I started reading a very recently published book. I don’t want to mention the name of the book, but it is a tome targeted at “application developers.” After reading a small number of paragraphs from the book, I had to check my calendar to see whether this was 2009 or 1995.

Some choice sentences from the book that just grabbed me right off the bat:

  • “Software applications should simulate (model) the real world with close affinity to the problem domain.”
  • “…the average developer should spend 40 to 50 percent of his or her time in design and not writing code.”
  • “Similar to comments and just as important [emphasis added], the design documents a program.”
  • “An artist does not start with a paintbrush and a canvas. There is considerable preparation before painting can begin. … Similarly, developers do not simply start writing code. The requirements analysis must be undertaken, a design drafted, the prototyping [emphasis added] of class operations, and only then, finally, the implementation.”
  • “The goal of TDD is simply to be a framework for addressing customer requirements with software through an iterative approach to testing and coding.”

[ All told, the author uses around 10 paragraphs in the book to describe TDD, not providing a single example. ]

For me, one of the most illuminating benefits of all this up-front emphasis on design seemed to be realized in the brilliant class diagram for a simple retail banking system. It looks something like:

  Employee <|---- Teller  <------  Customer
              |
              |-- Manager

The power of the “real world” comes alive! This ingenious class diagram existed to support building the following code:

static void Main(string[] args)
{
    Customer cust = new Customer();
    cust.Teller = new Teller();
    cust.Deposit();
    cust.Withdrawal();
}

Per the author, this code “validates” the design.

I’m speechless. (Well, no, I’m fingerless, or something. There must be a better word for being so flabbergasted that you can’t start to type a response.) I could rail on this book to no end, but it would only draw attention to the book (and you’ll note that I’m not providing a link to it either).

Reviewing TAD

I’ve recently participated in a number of after-the-fact code reviews. Developers are supposed to be writing tests first, and are supposed to ask for help when they’re not sure how to do so. Here’s what it looks like:

C = one unit of time spent coding
T = one unit of time spent testing
I = one unit of time spent in an inspection or review meeting
R = one unit of time spent doing rework based on results of review
x = one unit of time spent doing other work

CCCC xxxx TTTTT xxx IIIII RRR II

We’re obviously in TAD (test-after development) mode here. Pay attention to the right-hand side, which roughly indicates when this code could actually ship.

Note that there are multiple Rs because many people are involved in the review meeting. Also note that there is a little more time spent on testing, and yet the coverage never ends up good enough (rarely more than 75%). And finally, note the rework and re-inspection that must take place.

What the same model looks like with TDD:

TCTCTCTC xxxx IIII xxx RR I

So, a little less time is spent building the right code in the first place, which also means a little less time in rework. It also means a little less time in review meetings, since the tests help describe whether or not the developer captured the requirements correctly.

With pairing and TDD:

TTCCTTCCTTCC xxxx xxx

I’ve doubled up the Ts and Cs because there are now two people involved. I’m not even going to claim that pairing developers will produce a better solution faster, even though I’ve seen this to be the case. Note that this is now the shortest line by far. (It will even accommodate some amount of formal review, much smaller now because the code has already been actively reviewed at least once, and preferably twice, by the pair building it, and still be shorter than the original process.)

I’m also not accounting for wasted “ramp-up” time. The intervening xxx’s in the inspection-based flow mean that minds have moved on to other things. When it comes time to review code, or rework it, developers must wrap their heads back around something they might have done days ago. Particularly without good tests, this can waste considerable time.

I’m in an environment now where test-first is only beginning to take root. Every day I see ample evidence of why test-after is simply a bad, bad idea.

Two Greens in a Row

I’m not refactoring; I’m test-driving new code, writing assertions first, yet I get two green bars in a row. Stop! I need to figure out what’s going on. The first possibility is that I didn’t expect to get green:

  • Did I compile? Am I picking up the right version of the code? – D’oh!
  • Does the test really specify what I think it does? – I take a bit of time to read through the test.
  • Do I really understand the code well enough? – I dig into the code, and in rare circumstances fire up the debugger. Maybe someone wasn’t following YAGNI.

Alternatively, if I did expect to get green:

  • Did I take too large a step? – I consider restarting from the prior red, looking for a way to take more incremental steps. Obviously a more granular increment of behavior exists, since I felt compelled to write a distinct assertion for it.
  • Am I simply being completist? – I’m not testing, I’m doing TDD, where two greens in a row suggests that I don’t need the second assertion. Test-driving is about incrementally growing the system, not ensuring that everything works. But from a confidence standpoint, I want to probe at interesting boundary conditions. Sometimes my compulsion to probe is because I don’t understand the code as well as I should. Sometimes there’s just too much going on, and I find that adding confidence tests is worthwhile. And finally, I remember that my tests should act as documentation. So most of the time, I’m ok with being a “completist.”
  • Are there “linked” (i.e., redundant) concepts in the design? – Maybe the interface is overloaded, deliberately so. More often than not, I can link the two concepts in the test as well, building a custom assert; conceptually I end up with one assert per test. If I find that I have a lot of tests with more than one postcondition, my designs are probably getting less cohesive. Or maybe I’m just writing too broad a test.
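
The custom-assert idea in that last bullet can be sketched briefly. This is a hypothetical illustration (the Account class and assertAccountState helper are invented for the example, shown in Java rather than the C# used earlier): two postconditions that always change together get verified by a single custom assert, so each test conceptually makes one assertion.

```java
// Hypothetical example: a balance and a transaction count are "linked"
// concepts that change together, so a custom assert checks both at once.
public class CustomAssertExample {
    static class Account {
        private int balance = 0;
        private int transactionCount = 0;

        void deposit(int amount) {
            balance += amount;
            transactionCount++;  // the two concepts always move together
        }

        int balance() { return balance; }
        int transactionCount() { return transactionCount; }
    }

    // One conceptual assert that verifies both linked postconditions.
    static void assertAccountState(Account account, int expectedBalance,
                                   int expectedCount) {
        if (account.balance() != expectedBalance)
            throw new AssertionError("balance: expected " + expectedBalance
                    + " but was " + account.balance());
        if (account.transactionCount() != expectedCount)
            throw new AssertionError("transactions: expected " + expectedCount
                    + " but was " + account.transactionCount());
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(100);
        account.deposit(50);
        assertAccountState(account, 150, 2);  // one assert per test, conceptually
        System.out.println("green");
    }
}
```

If a lot of tests need more than one such helper, that’s the cohesion smell mentioned above.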

There are no doubt other reasons for two greens in a row. Whatever the cause, the event should always trigger a stop to think about why.
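
To make the event concrete, here is a minimal, invented scenario (the Range class is hypothetical, and shown in Java) where a second assertion, written in expectation of red, goes green immediately:

```java
// Hypothetical "two greens in a row": while test-driving a small Range class,
// a second assertion written in expectation of red passes with no new code.
public class TwoGreensExample {
    static class Range {
        final int low;
        final int high;
        Range(int low, int high) { this.low = low; this.high = high; }
        boolean contains(int n) { return n >= low && n <= high; }
    }

    public static void main(String[] args) {
        Range range = new Range(1, 10);

        // First assertion: this one drove contains() into existence
        // (red, then green).
        if (!range.contains(5)) throw new AssertionError("interior point");

        // Second assertion, expected to go red -- but contains() already
        // handles the upper boundary, so it goes green immediately. Stop:
        // was the step too large, or is this a confidence test worth
        // keeping as documentation?
        if (!range.contains(10)) throw new AssertionError("upper boundary");

        System.out.println("two greens in a row");
    }
}
```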
