Responding to a Comment on “Tradeoffs”

As mentioned two posts ago, it’s not possible to see new comments made on my old blog. Here’s a comment I received on Tradeoffs.

The basic thrust of the “Tradeoffs” blog post concerns code & fix cycles: after development thinks they’re done and the software gets run through a test team, we typically hit a cycle where QA encounters no end of problems that must be fixed.

“I think you’re comparing a project that uses TDD to a project that writes no unit tests until, ‘Done,’ right?

“Why are you ignoring the projects that write unit tests after each unit is written?

“This is faster than both the approaches you describe. It’s faster than TDD because you write far fewer tests that you don’t need and faster than your other (waterfall?) approach because you find problems sooner.”

/Schmoo

Schmoo asks a good question–writing unit tests shortly after a “unit” is written could conceivably be a way of minimizing the code & fix cycle.

I’ve seen it tried, however. In reality, it helps a little–not a lot, and not nearly as much as TDD. The code & fix cycle is still there, just a bit shorter than it would have been otherwise.

Now, add good automated acceptance tests into the mix, and I believe Schmoo’s premise is closer to true. But unit tests? No. The primary reason is that most test-after-development (TAD) unit tests are insufficient with respect to coverage, particularly in the more complex areas. Why? Check out my earlier blog post, The Realities of Test-After Development.
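As a contrived sketch of the coverage pattern I typically see (entirely hypothetical code, not from any real project): the one happy-path test gets written after the fact, while the guard clauses–the “more complex areas”–remain unexercised.

import org.junit.*;
import static org.junit.Assert.*;

// Hypothetical example: a typical test-after unit test covers the
// happy path but leaves the edge-case branches untested.
public class DiscountCalculatorTest {
   // A stand-in "unit," included here only to keep the sketch self-contained.
   static int discountedPrice(int price, int percent) {
      if (percent < 0 || percent > 100)
         throw new IllegalArgumentException("bad percent");
      if (price <= 0)
         return 0;
      return price - price * percent / 100;
   }

   @Test
   public void appliesDiscount() {
      // Often the only test written after the fact; neither guard
      // clause above is ever exercised.
      assertEquals(90, discountedPrice(100, 10));
   }
}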

As far as TDD generating more tests than you need: that depends on what your goals are. I’ve found that TDD, which requires you to write tests for just about everything, results in much faster development overall than TAD, even though TAD produces fewer tests and less coverage. More on these topics in a future blog post.

You Don’t Build Stories

I recently responded to a poster at JavaRanch who was asking about how stories are used in agile. I think a lot of people use the word “story” improperly–they use it in the sense that a story is a feature. “I built 10 stories into the product.” No, a story is a discussion around a feature. A story is also not a card.

While this may seem like a nit that I’m picking, I believe misuse of the word “story” has led to unfortunate insistences, such as rigid formats for the cards that capture feature conversations. (And I’m going to stop calling these cards “story cards”–not quite as bad a term, but one that still helps reinforce the wrong things.)

(Most of the rest of this post is taken directly from the exchange on JavaRanch.)

“Story cards” are simply placeholders, or tools. The word “story” is the important thing here. A story is something we tell people; the card is simply a reminder, a summary of that oral storytelling. A card is a convenient medium. I suppose I’m thankful I don’t hear about “story whiteboards.”

Story cards (I tried… it’s not easy!) are not artifacts! Once you complete work on building a feature, the card is meaningless. You might think you could use it to track what had been done in a given iteration, but a 5-word summary of a several-day conversation (between customer and developers and other folks) scribbled on a card will not likely be meaningful to anyone six months down the road.

Instead, we get rid of the conversation piece, and replace it with something that documents how the system behaves once that new feature is in place. The best form for that document is a series of well-designed acceptance tests that anyone can easily read and understand. (See our Agile in a Flash card on Acceptance Test Design Principles for a definition of well-designed.)

Acceptance tests are the closest analogs to use cases (a tool popularized almost 20 years ago now), but with slightly differing granularities. The name of an acceptance test can map to the goal, or name, of a use case; it can also map to a much smaller slice of functionality. This is because features in agile are intended to be very small, taking only a day or so to complete. So someone might tell a story: “allow users to filter sort results by age.” That’s likely not an entire use case; it’s an alternate path on a larger use case.

Otherwise, the focus is the same: Jacobson said you could produce user guides and documentation from well-written use cases. The same can hold true for acceptance tests (with a little assembly work), which have the vastly superior quality of eternal accuracy: As long as all your acceptance tests pass, you know that they describe how the system works.
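To make that concrete, here’s a minimal sketch of what such tests might look like in JUnit, against an entirely hypothetical results-filtering story (all names invented for illustration):

import java.util.*;
import org.junit.*;
import static org.junit.Assert.*;

// Hypothetical example: acceptance-style tests whose names document
// the "filter results by age" story.
public class FilterResultsByAgeTest {
   static class Result {
      final String name;
      final int ageInDays;
      Result(String name, int ageInDays) {
         this.name = name;
         this.ageInDays = ageInDays;
      }
   }

   // A trivial stand-in for the system under test.
   static List<Result> filterByMaximumAge(List<Result> results, int maxAgeInDays) {
      List<Result> filtered = new ArrayList<Result>();
      for (Result result : results)
         if (result.ageInDays <= maxAgeInDays)
            filtered.add(result);
      return filtered;
   }

   @Test
   public void retainsOnlyResultsNoOlderThanRequestedAge() {
      List<Result> all = Arrays.asList(new Result("fresh", 1), new Result("stale", 90));

      List<Result> filtered = filterByMaximumAge(all, 30);

      assertEquals(1, filtered.size());
      assertEquals("fresh", filtered.get(0).name);
   }

   @Test
   public void retainsEverythingWhenNothingExceedsRequestedAge() {
      List<Result> all = Arrays.asList(new Result("fresh", 1));

      assertEquals(1, filterByMaximumAge(all, 30).size());
   }
}

As long as those tests pass, their names alone read as an accurate description of how the feature behaves.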

Stories, on the other hand, are just conversations, and all conversations eventually fade from memory. The cards act as great tools while the story is in progress–they help us track things and remind us what we were talking about–but beyond that, we should just recycle the cards once we deliver the corresponding features.

Learning Great Practices From Constraints

The Maryland high school I attended helped me get my early career off the ground. In 1980, I took my first computer class. A friend and his geek cohorts had already learned enough to swarm all over the HP/3000 machine we used. They quickly knew more about it than the operator hired by the school.

I learned that our machine and the terminals (HP-2640A’s, I believe) had been donated to the school. But before giving up the terminals, the donor cracked them open and removed some of the terminal memory, no doubt to help beef up the terminals for their newer system. The terminal memory allowed you to scroll back: by default, you got one page at a time (25×80, or maybe 24×80 to accommodate the f-keys line at the bottom). Later, when I started working at the University of Maryland–a job I got mostly because of my high school experience with the HP/3000–I had a wondrously souped-up terminal that allowed me to scroll back a whopping ten or so pages!

For whatever dumb reason, however, it was possible to remove even some of the first screen’s worth of memory. So instead of a 25×80 display, we effectively had a 17×80 display. You’d see 17 lines of output, followed by a functionally dead, black bottom third of the screen.

The effect this constraint had on editing programs was overwhelming. We used a line editor (TDP/3000 was the nicer one that HP offered), which meant you had to explicitly tell the editor to display certain lines when you wanted to view them. It went something like:
> list 34-49
… 16 lines of code displayed here …
>
The list command itself could scroll off the screen, but a new command prompt still took up one more line, effectively leaving a maximum of 16 lines of code visible on-screen at any given time. I got very good at figuring out n:n+15.

The result: I learned very early on to compose my programs so that many, if not most, of my functions were short–no more than 16 lines! Otherwise, I’d spend far too much time listing lines appearing earlier or later in a function (to emulate scrolling up and down).

This habit became a key ingredient in the successful parts of my programming career. My programs were usually structured to be very easy to maintain, and I rarely had problems pinpointing and fixing defects. Creating short methods is a great practice that I still find value in today.
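As a small illustrative sketch (hypothetical names, certainly not code from the HP/3000 days), the habit looks like this: the public method reads as a table of contents, and every method fits easily within a 16-line window.

import java.io.PrintStream;
import java.util.List;

// Hypothetical example of composing a program from short methods.
public class ReportPrinter {
   private final List<String> lines;

   public ReportPrinter(List<String> lines) {
      this.lines = lines;
   }

   public void print(PrintStream out) {
      printHeader(out);
      printBody(out);
      printFooter(out);
   }

   private void printHeader(PrintStream out) {
      out.println("== Report ==");
   }

   private void printBody(PrintStream out) {
      for (String line : lines)
         out.println(line);
   }

   private void printFooter(PrintStream out) {
      out.printf("(%d lines)%n", lines.size());
   }
}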

PrizCon 2010

I traveled to Marion Correctional Institution (MCI) in Ohio on Saturday, where I delivered a couple of talks to about sixty inmates at the MCI Careers in Technology Conference, informally known as PrizCon. A bug I picked up on the airplane has sidelined me for the day (I’ve worked from home before when sick, but talking is uncomfortable), so I’ll take the time to relate my experience.

First and foremost I must thank Dan Wiebe for creating both the program behind the conference and the conference itself. I also thank Pillar for sponsoring the event. Thanks also to the prison staff, who were plenty gracious in supporting something that only creates more work for them. And finally, thanks to the other presenters and attendees for turning this into as real a conference as any other I’ve attended.

Dan has been working with inmates at MCI for a while, and he’s been blogging about his encounters; please take a look at his postings for additional background information. He’s already posted about the conference itself, as has Joel Helbling.

With this conference, Dan put together a fantastic opportunity for both inmates and people from the outside to get together and talk about software development. Presenters included both external folks and prisoners. All in all, it was perhaps the most rewarding experience I’ve had over the past decade.

What was most striking was how enthusiastic and passionate the conference attendees were. Their knowledge about software development ranged from zero (“what is code?”) to having built software professionally in the past. Yet all listened intently. I had several great conversations with inmates who wanted to know where the industry was heading, or who wanted to talk about their experiences with reading through Agile Java, or just wanted to thank me for coming out. I was also happy to meet some of my peers in the industry who came both to present and to listen.

As far as being in a prison, a suggestion Dan had made helped me avoid being at all uptight about being in there–he said to simply treat them in a professional capacity, as if they were coworkers. It worked like a charm. I believe our talks left them with something they’ll remember for a while (including a few laughs), and I hope they are able to take some of the messages to heart.

Badges?

“Badges? We ain’t got no badges. We don’t need no badges! I don’t have to show you any stinkin’ badges!”
– Gold Hat, The Treasure of the Sierra Madre

Once again the talk of agile certification rears its ugly head, this time in the form of merit badges.

As someone who has interviewed and/or phone-screened hundreds of candidates, I understand the motivation. One lame recruiting firm had me talk to about a dozen candidates in live, speed-dating form (~20 minutes each) plus another ten or so phone screens. We were looking for a technical agile coach who could pair and help developers with TDDing C++ code. Out of the 20+ candidates, only a couple were worthy of more consideration, and ultimately they too proved to be not up to the task. The rest? “I haven’t used C++ in 10 years.” Or, “I ran some unit tests once.” Or, “Yeah, C++ is on my resume, but I have zero interest in working with it again.” Or, one of my favorites, “Did you want me to show you my TDDs?”

Turns out whoever wrote the requisition had asked for the wrong thing. Their laundry list was lengthy, several sentences worth, and didn’t get to the essence of what we really needed. I reworded it. “Seeking agile coach to pair and teach developers TDD in a C++ environment” was about all it said. World of difference. Out of the (much smaller) handful of applicants we subsequently received, we could have hired just about any one of them. Don’t ask for more than you really need.

I would never consider discarding developer applicants due to lack of certification–whether that certification be SCJP, ScrumMaster(tm), Badge o’ Agile Merit (BAM!), or Bachelor of Science (yes, university degrees are simply an advanced and entrenched form of certification). I’ve never found a certification to be The Great Differentiator. I’ve encountered enough certified individuals who were complete wastes of time, and enough brilliant individuals whom others might have passed over due to their lack of certification.

The premise is that certification would save us money in hiring developers. It might save a small amount of money, but I contend that it’s really only worthwhile, as Tom DeMarco says, for the certifiers.

I can eliminate at least 90% of the chaff by asking for the right thing. And, from the rest, within five minutes or so, I have a very good notion as to whether someone is suitable as an agile developer. This isn’t an advanced skill. Sure, you need to have done enough agile to spot the phonies, but most people who’ve been on a good agile team for at least a year or so have this experience. If you don’t have any such individuals with solid agile experience, then you should seek out a reputable firm that can talk agile, such as Improving Enterprises in Dallas, and seed your initial team with a couple of people who will truly make the difference.

There is a huge cost in a revenue-driven certification scheme for everyone who is not a certifier. For those of us who took to agile on our own and are already passionate about it, certification is little more than a very expensive tax, and insulting to boot. For those who want to learn, there are far better ways than taking classes. Yes, some of the proposed schemes suggest more creative elements, such as conference attendance, but none of these demonstrate that someone’s worthy of hiring. They only demonstrate that someone has spent a good amount of money and time on “something.”

About the only real way to know whether or not to hire someone is to have worked with them for a good amount of time. Some of the proposed schemes center around this notion of a trust circle, which might work, but it looks like there would still be costs involved.

(I did propose a scheme that would require already-certified developers to pair a certain number of hours annually with the would-be certified, as a pro bono service. This would allow people to become certified with an investment of time only, not money. No bites yet–my cynical nature sees this as reinforcing my notion that the certifiers are more interested in income potential than anything.)

In this down economy, I find it deeply concerning that we are asking already-salary-beleaguered developers to relinquish even more of their income.

In lieu of requiring those looking for employment to spend lots more money, we should just filter them better, then talk and pair with them for a little bit of time. It is agile, after all, where we value face-to-face communication and not so much written documentation.

Tradeoffs

[Graph: sketched comparison of TDD vs. code ’n’ fix–time to “done” and “done done,” and overall amount of code]

This graph, intended to show the differences in outcome between TDD and not, is a sketch of my observations combined with information extrapolated from other people’s anecdotes. One data point is backed by third-party research: studies show that it takes about 15% longer to produce an initial solution using TDD. Hence the graph shows the increased amount of time under TDD to get to “done.”

The tradeoff mentioned in the title is that TDD takes a little longer to get to “done” than code ‘n’ fix. It requires incremental creation of code that is sometimes replaced with incrementally better solutions, a process that often results in a smaller overall amount of code.

When doing TDD, the time spent to go from “done” to “done done” is minimal. When doing code ‘n’ fix, this time is an unknown. If you’re a perfect coder, it is zero time! With some sadness, I must report that I’ve never encountered any perfect coders, and I know that I’m not one. Instead, my experience has shown that it almost always takes longer for the code ‘n’ fixers to get to “done done” than what they optimistically predict.

You’ll note that I’ve depicted the overall amount of code in both graphs to be about the same. In a couple cases now, I’ve seen a team take a TDD mentality, apply legacy test-after techniques, and subsequently refactor a moderate-sized system. In both cases they drove the amount of production code down to less than half the original size, while at the same time regularly adding functionality. But test code is usually as large, if not a bit larger, than the production code. In both of these “legacy salvage” cases, the total amount of code ended up being more or less a wash. Of course, TDD provides a large bonus–it produces tests that verify, document, and allow further refactoring.

Again, this graph is just a rough representation of what I’ve observed. A research study might be useful for those people who insist on them.

Framing Your Design

A while back, I worked with a team in a cube bullpen to help mentor them primarily in test-driven development (TDD). Their goal was to deliver a decent-sized web product–not huge, not trivial.

The first time I paired with the programmer nearest the bullpen opening, I noticed a large framed document on a nearby cube wall, outside the bullpen, and asked what it was. All I could tell from that distance was that it was a highly detailed diagram of some sort.

I asked my pair about the document.
“It’s our class model.”
“How do you use it?”
“We don’t. It’s out of date.”
Of course it was. We went back to building code.

When we took a break, I walked to the diagram. Sure enough, it was a UML class model–with gobs of detail. Public methods, attributes, private methods, annotations on the associations, attribute types, return types, and so on, all specified in what looked like a couple hundred classes. Arrows all over the place. As I previously mentioned, the document was framed (albeit cheaply), which meant that the model itself was enshrined in a protective layer of glass (plastic?).

The framed model sure looked pretty! And it no doubt looked quite impressive to any non-technical observer (such as a vice president): “They built all that!”

Of course, the team that actually produced the detailed model no longer found it useful. During the remainder of my engagement, I never once saw a developer look at the diagram. Most amusing of all, when I took my one visit to inspect the model, a closer look revealed that the programmers had briefly attempted to keep it up to date: various scribbles and attempted modifications had been written directly on the glass.

Your software is special, but your design models are not, and they change rapidly. Don’t enshrine your design.


Easier Testing Using the SRP

This simple file monitor class notifies listeners when a watched file is modified:

import java.io.*;
import java.util.*;

public class FileMonitor {
 private Set<FileChangeListener> listeners =
    new HashSet<FileChangeListener>();
 private int duration;
 private File file;
 private long lastModified;

 public FileMonitor(String filename, int duration) {
     this.file = new File(filename);
     lastModified = file.lastModified();
     this.duration = duration;
 }

 public void start() {
     boolean isDaemon = true;
     long initialDelay = duration;
     new Timer(isDaemon).schedule(
        new FileMonitorTask(), initialDelay, duration);
 }

 public void addListener(FileChangeListener listener) {
     listeners.add(listener);
 }

 class FileMonitorTask extends TimerTask {
     @Override
     public void run() {
         if (file.lastModified() > lastModified) {
             lastModified = file.lastModified();
             for (FileChangeListener listener: listeners) {
                 listener.modified();
             }
         }
     }
  }
}
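(The FileChangeListener interface isn’t shown; judging from its use above, it’s presumably just a single-method callback:)

// Presumed definition, inferred from the calls to listener.modified().
public interface FileChangeListener {
   void modified();
}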

A FileMonitor schedules a TimerTask that just checks the modified date from time to time, and compares it to the last modified date. The code above seems like a typical and reasonable implementation. FileMonitorTask could be an anonymous inner class, of course. I spent more time than I would have preferred test-driving this solution, since I made the mistake of test-driving it as a single unit. These tests had to deal with the vagaries of threading due to the Timer scheduling.

import java.io.*;
import java.util.*;
import org.junit.*;
import static org.junit.Assert.*;

public class FileMonitorTest {
 private static final String FILENAME = "FileMonitorTest.properties";
 private static final int MONITOR_DURATION = 1;
 private static final int TIMEOUT = 50;
 private FileMonitor monitor;
 private File file;
 private List<FileChangeListener> fileChangeNotifications;
 private Thread waitThread;

 @Before
 public void initializeListenerCounter() {
     fileChangeNotifications = new ArrayList<FileChangeListener>();
 }

 @Before
 public void createFileAndMonitor() throws IOException {
     FileUtils.writeFile(FILENAME, "property=originalValue");
     file = new File(FILENAME);
     monitor = new FileMonitor(FILENAME, MONITOR_DURATION);
 }

 @After
 public void deleteFile() {
     FileUtils.deleteIfExists(FILENAME);
 }

 @Test
 public void shouldNotifyWhenFileModified() throws Exception {
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(1);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();
     assertEquals(1, numberOfFileChangeNotifications());
 }

 @Test
 public void shouldSupportMultipleListeners() throws Exception {
     monitor.addListener(createCountingFileChangeListener());
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(2);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();
     assertEquals(2, numberOfFileChangeNotifications());
     assertAllListenersDistinctlyNotified();
 }

 @Test
 public void shouldOnlyReportOnceForSingleModification()
         throws InterruptedException {
     // slow--must wait on timeout. Better way?
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(2);

     monitor.start();

     alterFileToChangeLastModified();

     joinWaitThread();
     assertEquals(1, numberOfFileChangeNotifications());
 }

 @Test
 public void shouldReportMultipleTimesForMultipleModification()
     throws InterruptedException {
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(1);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();

     waitForFileChangeNotifications(2);
     alterFileToChangeLastModified();
     joinWaitThread();

     assertEquals(2, numberOfFileChangeNotifications());
 }

 private FileChangeListener createCountingFileChangeListener() {
     return new FileChangeListener() {
         @Override
         public void modified() {
             fileChangeNotifications.add(this);
         }
     };
 }

 private int numberOfFileChangeNotifications() {
     return fileChangeNotifications.size();
 }

 private void waitForFileChangeNotifications(final int expected) {
     waitThread = new Thread(new Runnable() {
         @Override
         public void run() {
             while (numberOfFileChangeNotifications() != expected) {
                 sleep(1);
             }
         }
     });
     waitThread.start();
 }

 private void sleep(int ms) {
     try {
         Thread.sleep(ms);
     } catch (InterruptedException exception) {
         fail(exception.getMessage());
     }
 }

 private void alterFileToChangeLastModified() {
     file.setLastModified(file.lastModified() + 1000);
 }

 private void joinWaitThread() throws InterruptedException {
     waitThread.join(TIMEOUT);
 }

 private void assertAllListenersDistinctlyNotified() {
     Set<FileChangeListener> notifications =
         new HashSet<FileChangeListener>();
     notifications.addAll(fileChangeNotifications);
     assertEquals(fileChangeNotifications.size(), 
                  notifications.size());
 }
}

Wow, that’s a bit longer than I would have hoped. Yes, there are better ways to test the threading. And there may be better threading mechanisms to implement the monitor, although I think the Timer is pretty simple. I don’t do enough threading, so I usually code a “familiar” solution (i.e. something I already know well) initially, and then refactor into a more expressive and concise solution. I’m pretty sure this test is flawed. But rather than invest in fixing it, I reconsidered my design (by virtue of wanting an easier way to test it).

The SUT is really two classes, the monitor and the task. Each one can easily be tested separately, in isolation, as a unit. Sure, I want to see the threading in action, but perhaps that’s better relegated to an integration test (which might also double as an acceptance test). I decided to change the solution to accommodate more focused and more robust tests. The file monitor tests are pretty much what was there before (with some small additional refactorings), except that there’s now no concern over threading–I just test the run method:

import java.io.*;
import org.junit.*;
import static org.mockito.Mockito.*;
import org.mockito.*;

public class FileMonitor_TaskTest {
 private static final String FILENAME = "FileMonitorTest.properties";
 private FileMonitor.Task task;
 @Mock
 private FileChangeListener listener;

 @Before
 public void initializeMockito() {
     MockitoAnnotations.initMocks(this);
 }

 @BeforeClass
 public static void createFile() throws IOException {
     FileUtils.writeFile(FILENAME, "property=originalValue");
 }

 @AfterClass
 public static void deleteFile() {
     FileUtils.deleteIfExists(FILENAME);
 }

 @Before
 public void createTask() {
     task = new FileMonitor.Task(FILENAME);
 }

 @Test
 public void shouldNotNotifyWhenFileNotModified() {
     task.addListener(listener);
     task.run();
     verify(listener, never()).modified();
 }

 @Test
 public void shouldNotifyWhenFileModified() throws Exception {
     task.addListener(listener);
     changeFileLastModifiedTime();

     task.run();

     verify(listener, times(1)).modified();
 }

 @Test
 public void shouldSupportMultipleListeners() throws Exception {
     task.addListener(listener);
     FileChangeListener secondListener = mock(FileChangeListener.class);
     task.addListener(secondListener);

     changeFileLastModifiedTime();
     task.run();

     verify(listener, times(1)).modified();
     verify(secondListener, times(1)).modified();
 }

 @Test
 public void shouldOnlyReportOnceForSingleModification() 
         throws InterruptedException {
     task.addListener(listener);

     task.run();
     changeFileLastModifiedTime();
     task.run();
     task.run();

     verify(listener, times(1)).modified();
 }

 @Test
 public void shouldReportMultipleTimesForMultipleModification() 
          throws InterruptedException {
     task.addListener(listener);

     task.run();
     changeFileLastModifiedTime();
     task.run();
     changeFileLastModifiedTime();
     task.run();

     verify(listener, times(2)).modified();
 }

 private void changeFileLastModifiedTime() {
     File file = new File(FILENAME);
     file.setLastModified(file.lastModified() + 1000);
 }
}

I introduced Mockito to provide a simple verifying stub for the listener. I suppose I should also stub out interactions with File, but I’ll incur the speed penalty for now. Next, I needed to test the remaining FileMonitor code. Doing so involved proving that it creates and schedules a task appropriately using a timer, and that it handles listeners appropriately.

import xxx.CollectionUtil;
import java.util.Timer;
import org.junit.*;
import org.mockito.*;
import static org.mockito.Mockito.*;

public class FileMonitorTest {
 @Mock
 private Timer timer;

 @Before
 public void initializeMockito() {
     MockitoAnnotations.initMocks(this);
 }

 @Test
 public void shouldScheduleTimerWhenStarted() {
     String filename = "filename";
     long duration = 10;
     FileMonitor monitor = new FileMonitor(filename, duration) {
         @Override
         protected Timer createTimer() {
             return timer;
         }
     };
     monitor.start();
     verify(timer).schedule(any(FileMonitor.Task.class), 
         eq(duration), eq(duration));
 }

 @Test
 public void shouldDelegateListenersToTask() {
     FileChangeListener listener = mock(FileChangeListener.class);
     FileMonitor monitor = new FileMonitor("", 0);
     monitor.addListener(listener);

     CollectionUtil.assertContains(
         monitor.getTask().getListeners(), listener);
  }
}

The design of the production code had to change based on my interest in better tests. A number of small, incremental refactoring steps led me to this solution:

import java.io.*;
import java.util.*;

public class FileMonitor {
 private final long duration;
 private Task task;

 public FileMonitor(String filename, long durationInMs) {
     task = new Task(filename);
     this.duration = durationInMs;
 }

 public void start() {
     long initialDelay = duration;
     createTimer().schedule(task, initialDelay, duration);
 }

 protected Timer createTimer() {
     final boolean isDaemon = true;
     return new Timer(isDaemon);
 }

 Task getTask() {
     return task;
 }

 public void addListener(FileChangeListener listener) {
     task.addListener(listener);
 }

 static class Task extends TimerTask {
     private final File file;
     private long lastModified;
     private Set<FileChangeListener> listeners = 
         new HashSet<FileChangeListener>();

     Task(String filename) {
         this.file = new File(filename);
         lastModified = file.lastModified();
     }

     public void addListener(FileChangeListener listener) {
         listeners.add(listener);
     }

      Set<FileChangeListener> getListeners() {
         return listeners;
     }

     @Override
     public void run() {
         if (file.lastModified() > lastModified) {
             lastModified = file.lastModified();
             for (FileChangeListener listener: listeners) {
                 listener.modified();
             }
         }
     }
 }
}

Well, most interestingly, I no longer have any tests that must deal with additional threads. I test that the monitor delegates appropriately, and I test that the task works properly when run. I still want the integration tests, but I think they can be relegated to a higher, acceptance level test for the story:

reload properties at runtime
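A sketch of what that higher-level test might look like, exercising both classes end to end with a real Timer and a real file (reusing the FileUtils helper assumed by the earlier tests):

import java.io.*;
import java.util.concurrent.*;
import org.junit.*;
import static org.junit.Assert.*;

// A sketch, not a definitive implementation: drives FileMonitor end to
// end, waiting on a latch rather than a hand-rolled polling thread.
public class FileMonitorIntegrationTest {
   private static final String FILENAME = "integration.properties";

   @After
   public void deleteFile() {
      FileUtils.deleteIfExists(FILENAME);
   }

   @Test
   public void notifiesListenerWhenFileChangesOnDisk() throws Exception {
      FileUtils.writeFile(FILENAME, "property=originalValue");
      FileMonitor monitor = new FileMonitor(FILENAME, 10);
      final CountDownLatch notified = new CountDownLatch(1);
      monitor.addListener(new FileChangeListener() {
         @Override
         public void modified() {
            notified.countDown();
         }
      });
      monitor.start();

      File file = new File(FILENAME);
      file.setLastModified(file.lastModified() + 1000);

      // Generous timeout to keep the test from being flaky on slow machines.
      assertTrue(notified.await(2, TimeUnit.SECONDS));
   }
}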

Otherwise, at this point, I have a high level of confidence in my code. I know the Timer class works, and I know my two classes work, and I’ve also demonstrated to myself that they all work together. I’m ready to move on. Some additional relevant points:

  • Threading is tough to get right, and it can be even tougher to test.
  • My final solution is a few more lines of code, in support of my interest in more focused testing. I’m OK with the additional code because of the flexibility the tests give me; more often than not, having them allows me to easily eliminate duplication elsewhere.
  • It’s hard to overdo the SRP. Far more classes err in the direction of too many responsibilities than too few.
  • The original solution, probably fine otherwise, is what you might expect from someone uninterested in TDD. It works (I think), but it represents rigid code–code that is more costly to change.


The Bowling Game

Sometimes the “toy” examples we build lead to something real. A few years ago, I wrote a 13-part article series for Informit, demonstrating TDD using the game of hold ’em poker. Shortly thereafter, I received a call from a company looking for Java developers to help them build an automated poker application for in-casino tables. The pitch was that this would eliminate the need for costly human dealers; it would also speed up gameplay, allowing more hands and thus increasing the house’s rake. Someone at the small startup had read my articles and figured I at least knew the domain!

That work never materialized. Today I received an email (probably broadcast to wide distribution) from a headhunter with an interesting need: “We’re looking for someone in DFW with C# and C++ development experience who loves to bowl! They will be creating software for the sport of bowling and should have an interest in OR knowledge of the sport…” Ha! I don’t know that this is for an agile shop, but is there a hardcore agile developer out there who hasn’t coded the bowling game at least once? I wonder if that qualifies. 🙂

I’d alert Ron, but I’m sure he wouldn’t consider moving to Dallas. Well, if you’re interested, let me know and I’ll forward the recruiter info. Or if you’re looking for work in an agile shop elsewhere, check out the agile jobs list.

ScrumAlliance™

On the scrumdevelopment Yahoo! group, a discussion revolves around user group concerns over the trademarking of the phrase “Sc*um User Group.” (Note that I’ve starred the name in order to avoid trademark violations.) A few inquiries resulted in an official response from a Sc*umAlliance representative.

I find it odd that (a) no one belonging to the “Sc*umAlliance” (whoever they really are–I’m suspecting maybe Mars, but given their legal nature, maybe Uranus) frequents the scrumdevelopment group, (b) none of them chose to respond directly on the group once apprised of the concerns, and (c) Ken Schwaber didn’t have an immediate answer (regardless of his affiliation with the Sc*umAlliance). I suppose there must be people (lawyers, no doubt) who don’t have to be at all interested in the product they support, but it just seems wrong somehow.

Agile requires lots of communication and continual negotiation. Bits of command and control have their value at times, but in most cases C&C begins to eat away at the trust relationship required for continual negotiation. I suggest people considering or doing Sc*um dispense with the trappings of a trademarked process and look instead to understanding what agile is really about.

The progenitors of agile–i.e. the agile manifesto signatories–rightly eschewed “heavyweight” processes–processes centered around lots of documentation and meetings. I think there’s a new category of “heavy” that should be spurned, and that includes processes controlled by people so heavily wrapped up in legal and licensing protections that they can’t bother discussing changes with the community who would be affected.
