The Consulting Legitimacy Cycle

Pausing to determine where you’ve come from and why you are where you are can be a valuable exercise in knowing where you want to go, as well as in understanding the value you can provide. An exploration of my past dozen consulting years taught me that taking an occasional break from consulting has improved my ability to understand and help subsequent customers. :-)

In 1998, after more than 15 years of professional software development, I thought I’d be Mr.-Retire-From-Big-Corporation-With-Nice-Pension. I had just reached my five-year mark with MCI. When WorldCom came around seeking souls to connect to the hive mind, my research determined that they weren’t such a hot place to work. I was in the 1% of shareholders who voted against the merger. Sometimes being right isn’t gratifying (what a debacle).

I moved on to a local dot-com, the now-defunct ChannelPoint. In 1999, I learned about XP** and did what I could of it–mostly some TDD–while isolated as a professional services developer in my shared office. Cool stuff!

In 2000, with ChannelPoint’s collapse pending, I excitedly began my foray into the world of consulting. I worked proudly for Object Mentor for two years. I then joined an XP team and enjoyed pair-programming day-to-day as “just another developer” for seven months.

An important revelation hit me: Promoting XP as a consultant and practicing XP as a team member are two completely different things. I’d preached about a few things–pairing, most significantly–that I’d done only as an external party, not as a true team member. Until you experience what pairing is like, day-in day-out for an extended period of time, you can’t possibly understand what it really feels like. And pairing felt good, but I also learned to recognize some problems with it.

I knew I had gathered very relevant information, and started consulting on my own. Cool! Well, the wife didn’t think so; she preferred me home. In 2004, I took on a local position again as a developer, and ended up a developer/manager for the better part of a year. I learned a lot more about what makes teams excel–I had a kick-ass development team, and they weren’t even doing much in an XP sense.

The call of consulting took me again on the road to help other teams learn what I’d learned. I transitioned to being an internal consultant at a large shop pushing the mandate of Thou Shalt All Be Agile All At Once (it doesn’t work, believe me). I started feeling out of touch, and the unending barriers presented by a rigid corporate structure made things all the more frustrating.

I returned to development as a remote programmer for GeoLearning for most of 2010. More learning about what works and what doesn’t! After GeoLearning was bought out and all remote individuals terminated, and after a local 3-month “just a developer” contract, I figured it was time to spread words again.

(The sneaky part: Employee vacation time over the years provided an opportunity to take on short training and consulting engagements here and there, largely to give me a fix for my books and training addictions.)

I’ve been back in full consulting swing for about 15 months. I wonder when that will need to end again. Yes, I could remain a full-time consultant and still provide good value for my customers. But I’ve found I can be even better by finding a way to occasionally immerse in a development team, to remain current, open-minded, and relevant.

** Extreme Programming. That’s for you, J.B. It’s not dead yet.

On Email Lists and Advocacy

I try to keep up with all of the various agile “Yahoo! Group” lists: extremeprogramming, scrumdevelopment, refactoring, testdrivendevelopment, agile-testing, and so on. I post once in a while when I see a topic I feel qualified to respond to or that I’m passionate about. Lately I’ve been getting a lot less value out of these lists, some of which I’ve been following for ten years.

Many of the lists degrade into passionate statements of position, i.e. advocacy. Inevitably, people get burnt. Tempers flare, people get upset, some stomp their feet, some simply withdraw. I myself have gotten burnt on one of these lists, feeling that someone (probably not coincidentally, the same person who is withdrawing now) was looking to simply pick apart every single word written to find as much fault with it as possible.

I withdrew completely from that discussion. I’ve also withdrawn from perhaps a couple other difficult discussions (out of the hundreds I’ve engaged in over the years). Sometimes it’s because I felt the environment was too hostile, and I justified it to myself by thinking that the individuals involved were being immature or offensive or whatever. But that’s not courage speaking. I looked back at my most recent example and regretted how I handled it. Withdrawing didn’t solve anything.

Sometimes a simple statement, meant to be innocuous by whoever wrote it, is taken as a stab and an affront. We can pout and take our toys and go home, but that’s not at all useful or commendable. Courage can help us find a way to face the challenge and learn how to get past the issue. We don’t have to agree with everyone or get along with them, but sometimes a sour incident can lead to a valuable relationship.

So how is this at all relevant to anything? Well, if we are to succeed with agile software development, we must face challenges rather than bury them in isolated cubes. Similar clashes occur not only on email lists but in real life, and we’re going to see them more frequently in highly collaborative teams. Conflicts that are not handled properly will detract from a team’s ability to deliver. The XP value of courage is still essential to learning how to regularly deliver quality software.

XP Pros & Cons: Reflecting on XP Experiences

A colleague in a graduate degree program recently posed a question to me on email:

> We’ve had quite a bit of eXtreme Programming discussion this week in my
> Software Processes class. Unfortunately, none of us have actually been
> involved in a real, live XP project. I’ve tried to defend it, but my XP
> background is pretty weak.
>
> I know you’ve been involved in a lot of XP work. Can you give me some info on
> what type of projects you’ve worked on, how successful they were, and what
> you think the pros and cons of XP are? If you don’t mind I’d like to quote
> you a bit in our online class discussion as someone who has really been in
> the XP trenches.

My response, with some name changes to protect the “innocent” and some additional cleanup, follows.

In general, the XP projects I’ve worked on were slightly more successful than the non-XP projects. I’ve probably coached 20 teams; this includes a mixture of small teams (2-4 people), lots of medium-sized teams (6-12), and a few large ones (18+). Some of the shop types included government, retail, airline industry, grocery, financial, telecommunications; everything was all over the map.

Prior to XP, I did lots of waterfall-based development at places like MCI, Marriott, and ChannelPoint [a failed Colorado Springs dot-com]; I also worked in a number of smaller shops where some flavor of waterfall was in place. I also did some spiral development at MCI, and I’ve consulted in a couple of shops where they had some implementation of RUP.

The most extensive XP effort I had experience on was with [a grocery chain I’ll refer to as IFC] in Texas, where I was initially in the trenches doing paired development. About two months after starting the contract I ended up running a short-term subproject to develop an offline solution for their pharmaceutical processing system.

The IFC effort was a good example of both success and failure in XP. Using test-driven development (TDD), the development team had managed to build a significant amount of reasonable quality production code within a short period of time. They reported a very low number of defects. The code had an extensive body of tests. The IFC team had an open environment and mandated pairing. However, there were a number of XP practices that the team did not follow. Specifically, they did not adhere to fixed-length iterations, and their planning was somewhat ad hoc; also, they did not refactor as much as necessary.

The upshot was that at some point after I joined, they were unable to sustain reasonable delivery periods, partly due to some network dependency issues that exacerbated poor design in the code. Making changes to the code base was difficult, since the tests were overly dependent upon the implementation (they abused mocks). Still, because the tests existed at all, the team was able to make a significant number of changes and improvements rapidly. Management insisted we develop an offline solution.

The offline solution I directed as coach/team lead was to me a testament to the value of XP. In a matter of 8 weeks, we built a complete and successfully deployed solution that allowed offline pharmaceutical fills. As a coach with full ability to direct the XP team as I wanted, I took a number of “lesser” developers and showed them how to adhere to the simple rules that they had neglected over time: fixed iterations with simple but comprehensive plans, and hardcore as-you-go refactoring. The “senior” guys (including some expensive Vitria BusinessWare consultants) were left alone to solve the stability problems; they floundered and ultimately failed. My mostly junior team learned how to trust each other as a team (despite some early ugliness) and enjoyed working together. They also saw success and progress on a two-week basis.

I’ve seen many similar examples. At [a failed dot-com in Orange County], the development team continued to enjoy working on and delivering product while the walls and rest of the company crumbled around them due to a bad business model. At [a large company in Dallas], I worked with a number of different teams on a short-term basis; some of the teams had abysmal performance while others delivered spectacularly to expectation. The differences often could be attributed to adherence to practices.

The teams that took the values to heart and believed in what they were trying to do were successful. The teams who resisted and thought they were too smart for anyone else’s ideas on how to build things failed. Ultimately you could say this about most processes, not just XP. The best thing you can do is to get a team to gel: to learn how to communicate well and to trust each other. This means trusting each other enough to say when things aren’t going the right way. Or to speak up when something seems like it’s a waste of time (e.g. attending meetings where little is accomplished).

I believe XP promotes these values. If you want to talk specifics, the main value that I get out of XP practices boils down to three things:

  • testing/refactoring. These go hand in hand. Test-driven development (TDD) is the most valuable programming practice I’ve ever come across. The value of test-after pales in light of test-first. Refactoring is extremely important in extending the potential lifetime of a software product; it is only really possible if you are writing comprehensive tests. And it’s really only effective if you do it all the time, i.e. if you build it into your TDD cycle (test, code, refactor, repeat).
  • constant planning. The problem with most projects is they attempt to plan once and assume they’re done with it. They spend a lot of time up front to come up with a date that everyone knows is bogus. XP takes the view that planning is good, but you really want to be planning and replanning all the time. Reality always intervenes with plans.
  • pair programming. Take a look at the article on my site called “Pair Programming Observations.” It contains a lot more information that I won’t repeat here. When done well, pairing is a wonderfully enjoyable and productive practice.
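
The TDD cycle named above (test, code, refactor, repeat) can be sketched in miniature. This is a hypothetical example, not code from any of the projects mentioned: the test is written first, the simplest passing code follows, and refactoring happens only once the test is green.

```python
# Step 1 (test): specify the behavior in a test before any production code exists.
def test_fizzbuzz():
    assert fizzbuzz(1) == "1"
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"

# Step 2 (code): write the simplest thing that makes the test pass.
# Step 3 (refactor): with the test green, remove duplication and improve names.
def fizzbuzz(n):
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)

test_fizzbuzz()  # green; the next test drives the next small increment
```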

Here are what I consider the cons of XP:

  • As experience at IFC and elsewhere suggests, understanding and following through on quality design is essential. It is possible to depend upon “emergent design,” but if and only if you remain dogmatic about refactoring. A team needs at least a couple “refactoring zealots.” You have to eliminate duplication at all costs, and you have to ensure that the code remains expressive and clean. You have to write tests for everything. Tests are specs, and if you’re not going to write specs on paper, you have to write them in code. Write tests for everything, including exceptional conditions. Don’t write code unless there’s a test for it. Obviously you can’t test every possible condition, but you can at least test what you do know you have to code.

    The con here is that this is very hard to do. It requires a good coach to initiate and ingrain the rules into people’s heads, and it requires the people within the team to care enough not to let things go sour. Unfortunately, it’s almost impossible to get a team to be that meticulous. In retrospect, I’ve learned to invest a bit more time up front in doing design. However, I temper that by ensuring everyone is fully aware that the initial design is a road map; reality will always take you on detours that end up redrawing your TripTik.

  • The temptation to let iteration deadlines slip is high. “Let’s just get that one more feature finished and then we’ll ship.” No. You have to learn to deal with deadlines in the small before you get anywhere near good at dealing with deadlines in the large. Management, unfortunately, has a hard time understanding this. So they slip. And then you lose all sorts of things: consistency, trust, understanding of how to deliver product, and accuracy in estimates.
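
The “tests are specs” point above extends to exceptional conditions. Here’s a minimal sketch, using a hypothetical withdraw function (invented for illustration, not from any project described here): the overdraft path gets a spec just as the happy path does.

```python
# Hypothetical unit under test; the exceptional condition is part of its spec.
def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Happy-path spec.
def test_withdraw():
    assert withdraw(100, 30) == 70

# Exceptional-condition spec: an overdraft must raise, with a useful message.
def test_withdraw_overdraft():
    try:
        withdraw(100, 150)
        assert False, "expected ValueError"
    except ValueError as e:
        assert "insufficient funds" in str(e)

test_withdraw()
test_withdraw_overdraft()
```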

The pros:

  • Done well, XP gets a team to gel more than anything.
  • It builds true competency in all team members.
  • It makes for an enjoyable and honest work day.
  • It gets people out of their cubes and talking to one another.
  • TDD teaches developers how to write quality code and how to improve their notions of design; it also helps them improve their estimates. And it improves developers’ resumes.
  • It gives management many tools, including predictability, flexibility of resources, consistency, and visibility into what’s really going on.
  • It gives customers the ability to see whether or not a company can deliver on its promises.
  • A personal favorite: you don’t spend a lot of time in stupid, wasteful meetings, and you don’t produce a lot of useless documents that no one reads and that aren’t trustworthy anyway.

With respect to TDD, the Jerry Jackson story about TDD is always good. I don’t remember if you’d ever met him while at ChannelPoint. He was one of the first authors of Java books; his book Java By Example was the fourth ever published on the Java language. I hadn’t seen him in a while; two years ago I met him in the fall while at Tim’s chess tournament. He said that he’d read a bit about TDD but hadn’t tried it yet, and that he was going to look into it. I saw him again about three months later, at another tournament. Before I could say anything to him, he walked up with his face lit up and said something like, “That TDD stuff is amazing!” This is the testimony I get from anyone who’s given TDD an honest try. They all say similar things, about how cool it is and about how much it’s taught them about their coding and their design. Jerry now works at Cassatt; apparently they are all doing TDD (and a lot of their technique and inspiration is based on a paper I wrote for OOPSLA 2001).

Oh yeah… if you’re going to do XP, or RUP, or Scrum, or Crystal: Hire a process coach! It’s easy to screw any of them up. You would never field a football team without a coach, even if all of the guys had played before. It seems ludicrous that we slog away at building software, spending millions of bucks, without someone to make sure the process and techniques make sense.

I hope this helps.

-Jeff-

Book Review: XP Refactored

A copy of the Amazon review of the book Extreme Programming Refactored: The Case Against XP, by Matt Stephens and Doug Rosenberg

Note: This review appeared at Amazon for three months before it was removed, due to a customer complaint about the reference to another reviewer. It was a spotlight review for at least a couple weeks due to the large number of “helpful”/”not helpful” votes it received. The review appears unchanged, except for added links, below.

Additional note (10-Feb-2004): If you’ve arrived at this review from the SoftwareReality web site, you’ll expect a “rant,” “paranoia,” and “anger.” Judge for yourself, but I think the below review is a reflection of what passes for humor in XP Refactored. If my review is a rant (it may well be), then the whole of XP Refactored is one big rant. If my review demonstrates anger (doubtful), then XP Refactored is one massive hate-piece.

**
Some good points ruined by vindictiveness/unsupported claims
Reviewer: Jeff Langr from Colorado Springs, CO United States, September 18, 2003

In XP Refactored, the authors do a good job of covering many of the problems with how XP has been applied. But they waste a large chunk of the book on the Chrysler project (he said, she said), goofy songs (do you old geezers know anything other than the Beatles?), and some things that even the XP community has already admitted were problematic and that have been corrected.

The authors claim (pg93) that the C3 project was cancelled in February 2000, and that Beck’s book Extreme Programming Explained came out “later the same year.” This is patently false; a search on this site itself proves that the book came out in October 1999. (Which also means that Beck probably finished writing it sometime in the summer of 1999.) My immediate reaction was that if the authors are willing to make up dates in order to rip XP, then what else in the book is dishonest?

I have a bigger problem with the guest testimonials than the authors’ own material. It all started with the disclaimer that many of the testimonials were anonymous, due to the contributors’ fears of reprisal. Once I read that, I wasn’t sure what was written by real people and what–er, who–was made up. Is someone as arrogant and cocksure as David Van Der Klauw a real person?

Each testimonial is just one (made-up?) person’s opinion, and is rarely backed up by any meaningful facts. For example, the possibly fictitious Van Der Klauw claims (pg192) that “many things of moderate or greater complexity cannot be test-first coded.” I’d say he’s wrong. Yes, there is a small number of things that cannot be tested, but by gosh, if any significant portion of your system is so complex that you can’t figure out how to test it, you don’t know much about design. In any case, Van Der Klauw isn’t even sure about his own assertions: “I may change my opinion over time.”

The best fact that Mr. Van Der Klauw has to offer is that he has “13 years of software development experience,” and that should be enough for anyone to believe his countless diatribes.

My biggest beef with the authors’ slam against the actual XP practices is related to refactoring, and to their misunderstanding of its value either with or without respect to XP. While refactoring can result in constantly altering the object-level design or even the architecture of the system, that is something that happens only infrequently in practice (and it can work). Instead, refactoring is best viewed as a means of constantly keeping code clean and maintainable through low-level code changes such as reorganizing methods and renaming things. In fact, this is the bulk of what good refactoring involves. Rarely is there any of the thrashing that the authors repeatedly suggest.
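
A small sketch of the kind of low-level refactoring described above: renaming and extracting, with behavior left unchanged. The functions here are hypothetical, invented purely for illustration.

```python
# Before: terse names and a duplicated expression obscure the intent.
def calc(d, r):
    return d * r + d * r * 0.05

# After: renamed and extracted; no behavior change, just cleaner, self-describing code.
TAX_RATE = 0.05

def total_with_tax(days, daily_rate):
    subtotal = days * daily_rate
    return subtotal + subtotal * TAX_RATE

# The refactoring is safe precisely because tests can confirm equivalence.
assert calc(3, 100.0) == total_with_tax(3, 100.0)
```

This is the bulk of day-to-day refactoring: not architectural thrashing, but continual small renames and extractions that keep the code maintainable.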

At one point (page 212), the authors recommend that you should never touch a piece of code once it works: “leave it well enough alone.” Wow. Everyone out there is just a perfect coder. Not. Anyone who’s ever hired a high-priced consultant (like good old Doug or Matt) to produce code has been burned by overly complex and often overdesigned code that had to be rewritten because it was completely indecipherable and unmaintainable.

The authors repetitively promote up-front design as a panacea. Fortunately they quietly admit that it’s virtually impossible to produce a perfect design up front, something that just about everybody has agreed upon. The question becomes, how much design should you do up front? The more you do up front, the more of a waste of time it is, if indeed it can’t be perfect.

Nowhere does XP preclude you from spending a half day per iteration–or even longer–to discuss design, produce UML diagrams, etc. In fact, that’s what Object Mentor (the consulting firm of Bob Martin, who is frequently quoted in the book) has recommended in their XP courses.

The authors also recommend writing all requirements down in detail at the start of the project (as in waterfall), and getting a signoff. Never mind the folly of trying to finalize everything up front; worse still is that the authors think that it’s ok for programmers to “spend a few months doing nothing” while waiting for the specs.

Unfortunately, I’ve seen contracts cancelled, jobs lost, and companies fold where the customer wanted to change their mind a few months after the “requirements were complete.” Too bad! You signed here on the requirements, our design is perfect, it’s done, and you can’t change it!

The authors do make concessions to the fact that XP has brought a lot of good things to the table, such as the heavy emphasis on test-driven development. Even the authors’ agile approach resembles much of XP. With this in mind, the absolute hatred that comes through in much of the remainder of the book strikes me as just a bunch of personal vendettas. I understand where they are coming from, as some members of the XP community can be arrogant and annoying (just like the authors), and a lot of XP has been overhyped and misunderstood. Still, one shouldn’t charge people this kind of money in order to air their dirty laundry.

By the way, this book is nowhere near serious, as another well-known reviewer states below. XP Refactored frequently uses words like “moronic” and “asinine” and thus smacks of being an abusive ad hominem attack, not a serious critique.

I was prepared to give the book 3 stars based on some of the value contained within, but there’s just too much vitriol and defensiveness in the book to make it an enjoyable read. Docked a star to 2.

Sustainable Pace

XP is often defined by the dozen or so practices that comprise it. Pair programming, test-first design, planning game, metaphor, etc. It’s an interesting way to define a development process, as a bunch of simple practices that work together to form a greater whole. Waterfall, on the other hand, is defined by its phases, which are simply breakdowns of what is delivered over time. Analysis produces an analysis document, design produces a design document, and the completion of the implementation phase results in actual production software.

XP’s premise is that following a handful of simple rules—practices—over and over will result in solid product. The process is the same from project inception to completion. In fact, there is no such notion of inception or of any other phase within XP. Other processes such as RUP are insistent that there is a “full lifecycle” of development: a very clear start to the effort (inception), a demarcation of when it is complete (transition), and lots of definition for the phases in between (elaboration and construction).

40-Hour Week

One of the XP practices was originally defined by Kent Beck as the “40-Hour Week.” The basic idea is that programmers must be well-rested in order to produce quality work. The less rested a team, the more likely it is to introduce defects into the software. Studies show that working additional hours does not yield a one-for-one productivity increase. One study described an environment where the development team increased its work hours by 50%, to 60 hours per week, but produced only 20% more lines of code. Not only that, but the number of defects doubled.1
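
The arithmetic behind that study is worth making explicit. Using an illustrative baseline of 1,000 lines of code and 10 defects per 40-hour week (my numbers, not the study’s), per-hour output actually drops while defect density climbs:

```python
# Study's figures: +50% hours, +20% code, 2x defects. Baseline values are illustrative.
base_hours, base_loc, base_defects = 40, 1000, 10

over_hours = base_hours * 1.5      # 60-hour weeks
over_loc = base_loc * 1.2          # only 20% more code
over_defects = base_defects * 2    # defects doubled

loc_per_hour_before = base_loc / base_hours   # 25.0 LOC/hour
loc_per_hour_after = over_loc / over_hours    # 20.0 LOC/hour: 20% less productive per hour

# Defect density gets worse too: twice the defects in only 1.2x the code.
density_increase = (over_defects / over_loc) / (base_defects / base_loc)  # ~1.67x
```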

In many software development environments, developers work far more than forty hours per week. Fifty and sixty hour weeks are common, and in the rush to meet deadlines, many developers find themselves clocking in far more. To some managers, developers who work forty hours per week are seen as doing only the minimum required to get by. In some shops, a culture is created where those who stay late and on weekends are the ones recognized as truly dedicated. A heavy time card becomes a badge of honor.

What is the value of these additional hours? Past a certain point, any additional time expenditure results in rapidly diminishing returns. The potential for mistakes rises dramatically as developers tire, and more defects are introduced. Fixing these mistakes and defects can and will negate any benefit of having logged the extra hours.

Eight Hour Burn

As XP has matured, the names of some of its practices have changed. The phrase “40-Hour Week” had too many connotations and problems (not the least of which was an American cultural assumption), so the XP community searched for a replacement.

One of the suggested new names for “40-Hour Week” was “8-Hour Burn.” No different if you do the math, but the “burn” indicates that in a given day, there is only so much energy most people have to burn. That energy is used up after eight hours of solid work, and is replenished by the next morning, after some personal time and a good night’s sleep. This radical notion of an 8-hour work day is a model that has proven sustainability over long periods of time.

In fact, eight hours at any one activity is usually more than enough. Consider skiing as an example activity. In Colorado, ski lifts open at 9am and close at 4pm—only seven hours, not a lot of time considering the high lift ticket prices!

But the reality of skiing is that it is an activity that cannot be sustained for longer periods of time. No matter what level you ski at, your muscles can only hold up so long. You ski to your abilities: if you are a beginner, you are on the green (easy) slopes, but you are exerting a considerable amount of muscle power due to your inexperience. If you are an advanced skier, you are challenging yourself on the double black diamonds (expert) slopes, also exerting a lot of muscle power. Regardless of your ability, by three or four in the afternoon, the muscles in your legs are starting to turn to jelly. Most skiing accidents occur around this time of day. No matter how good a skier you are, it is easy to make stupid mistakes when you are exhausted.

So it goes with software development. After a solid day of coding, most minds are mush. Beck tells a story about a shop that discovered the bulk of their defects were introduced after 4:30pm. Stop! Go home, rest, come in refreshed and enthusiastic the next morning. Defects will decrease.

Sustainable Pace

Both terms, 40-Hour Week and 8-Hour Burn, have the same problem: they are defined by hard numbers. These numbers are certainly not internationally applicable, nor are they applicable in different environments, such as government shops.

The official term for this XP practice of being well-rested is now “Sustainable Pace.” The idea is that team members “work hard, and at a pace that can be sustained indefinitely.”2

The name “sustainable pace” is appealing from the standpoint that it suggests the ultimate goal of the practice. It doesn’t constrain a team to a narrow definition of what is a full day (or week) of work. If a team can sustain 50 hour work weeks without burning out team members, that is their pace. Government workers running at 37.5 hours per week won’t have to feel that they aren’t doing XP fully.

The sustainable pace practice is a good example of the benefits of defining a process by its practices. To explain this, let’s take waterfall as a counter-example.

In a waterfall world, there is a beginning and an end to our efforts. Waterfall postulates that if you do certain things at certain times (like doing lots of analysis up front), you shouldn’t have to revisit them. This life-cycle-based definition also assumes that given a starting point, there is a concrete amount of work that can be done by a given endpoint, and that we can always make it fit.

Well, you can usually “make it fit,” but this can involve some tradeoffs at the lower levels. If the project is running late, the best solution is to either extend the deadline, or negotiate the details of just what the customer expects to receive by that time. Unfortunately, the scope is not always negotiable.

In lieu of changing scope, another solution is to eliminate reviews, testing, refactoring, and other safeguards of software development, in order to get the work done in the allotted time. You can also make it fit through overtime.

Yet excessive overtime directly damages the notion of sustainability. There is plenty of evidence that people can get “burned out” by working excessive hours week after week. So overtime is not something you want to happen too often. (The general rule of thumb in XP is that developers cannot work more than one week of overtime in a row.)

Other solutions—elimination of refactoring, testing, etc.—will kill sustainability just as easily as overtime. A system that isn’t refactored constantly will turn to an unmaintainable mess. A system without tests becomes completely undependable over time, as new code introduces an unknown number of defects.

Successful software development hinges on the quality of what is produced. The problem with defining a process by deadlines is that deadlines don’t give easily. Instead, the quality controls give.

Practices and Sustainability

You want to be able to deliver software again and again, with success and predictability. This idea of a sustainable pace is one of the central drivers behind XP. Any development team can hack out gobs of code over a one-year period, but sustaining code much beyond this period is a costly and difficult proposition. I’ve seen shops throw away a million lines of code (and millions of dollars worth of investment). The software was so crufty that further development on it could not be sustained in a cost-effective fashion.

Many shops during the early-to-mid-90s tried to figure out a way of sustaining short iterations over time, an extremely difficult goal. There is just too much overhead if you try to fit an artifact-heavy waterfall process within a two- or three-week period. You might do it successfully for a few iterations, but it is oh so easy to start slipping things. One nasty bug discovered in the inspection meeting, and there’s no way to close out the iteration in time.

The overhead of an inspection cycle makes completion within two-week cycles nearly impossible. Having to throw untested code over the wall to be manually tested by a QA team means that you will soon run out of time to test everything in an iteration. These two examples demonstrate the need for extreme practices: practices that build in the notion that we inspect things all the time (pair programming, collective code ownership), and that we specify the system through automated tests all the time.

XP provides a set of practices that, done collectively, form a framework for success. When its simple practices are followed, an emergent behavior arises that allows software to be predictably and successfully delivered over and over. We insist that each of these practices be followed, because we know that skipping them will limit how long our software can survive.

By defining a process this way—from the ground up with a set of required practices—XP ensures that everything that must happen will happen. The process itself doesn’t allow for skimping on the essentials. The process itself is what allows for the delivery of quality software to be sustained over long periods of time.


Adopting XP

XP is a software development process that is as much about learning how to improve the process as it is about doing software development well. You don’t necessarily have to fully understand every “why” or even every “how” behind XP in order to get started doing it. You could start with the assumption that you will grow the process as the result of acquiring experience-based knowledge. But should you? Just what do you need to know to get started in XP?

There are many approaches to instantiating XP. Some of the mechanisms that can be used to help you and your shop get started include:

  • read the books
  • join the XP mailing list(s) and ask for help
  • take a class
  • find developers who have already done it
  • hire experienced coaches
  • wing it

You should have a fundamental understanding of what the XP process is about before tackling it. This understanding can be obtained very quickly by reading one or two of the many books already available. In all cases, start with Kent Beck’s manifesto that started the landslide: Extreme Programming Explained: Embrace Change. Then proceed to one of the more practical books. I prefer Ron Jeffries’ pink book, Extreme Programming Installed, as a short but information-packed overview of how to practice XP.

While the books are something you can and should start on right away, they will introduce more questions than they answer. Also, it is too easy to come away from books with the techniques of XP but not the spirit–something that can quickly lead to failure when you try to make XP work for you.

A course in XP can help jump-start the process by giving you and your team a hands-on feel for how XP works as a whole. The better courses available are taught by actual practitioners of XP and keep pace with its rapidly changing application in the real world. By attending such a course, you will see first-hand how it’s meant to be done, and the chance for misinterpretation should diminish.

However, a course is only the beginning. XP promotes the idea that you can and should begin development almost immediately. Its incremental, feedback-driven approach to everything means that you can quickly adapt to issues as they arise and take corrective action. But knowing what the proper corrections should be can be a hit-and-miss proposition. This is where experience and a strong hand come into play.

If I recommend anything with respect to applying XP–or any new software development process, language, or technique–it’s that you find the right people to guide your team. While there is nothing in XP that is inherently complex or difficult to learn, you can save significant time and pain by learning about some of its traps and pitfalls. The right people can tell you how to take smaller steps, how to properly do test-driven development, or how to get the proper behaviors to emerge from the team itself.

A coach is an absolutely essential investment. You wouldn’t field a professional football team with players who had never even learned the rules. Your development team similarly represents hundreds of thousands or millions of dollars worth of talent. Make sure they have support from someone who is qualified to lead the way.
Coaching should be a short-term engagement: the core job of a coach is to teach the developers how to work as a self-managing team. A coach should be technically strong, but should also be able to provide leadership for the team.

The coach is responsible for recognizing trouble spots and for negotiating when issues arise. It is imperative that the coach work constantly with the team during its introduction to XP.

The coach can be someone from within the development team who has worked on a successful XP project, or it can be an external consultant. The coach does need to maintain a level of impartiality.

A manager is rarely the right person for the job. A development team can be intimidated by the constant presence of a manager, and most managers would not be able to dedicate 100% of their time to sitting with the development team. Most managers are not technically competent enough for the role.

You can buck the odds and wing it. You will experience some level of success from applying XP practices, but this progress can be deceptive. It is very easy to produce a significant amount of bad software in a few months’ time when the proper controls of XP have not been applied.

What Can Go Wrong With XP

Extreme programming (XP), on the surface, is a hot, trendy way of developing software. Its name appeals to the reckless and annoys the cautious (“anything extreme can’t be good!”). A recent article in Wired magazine attests to its popularity amongst the cool set, who believe that XP is a silver bullet. The article also talks about some of XP’s harshest detractors, who view XP as dangerous hype. The truth lies somewhere in between. XP alone will not save your project. Applied properly, XP will allow your team to become very successful at deploying quality software. Applied poorly, XP will allow your team to use a process as an excuse for delivering–or not delivering–low-quality software.

Applying XP

XP is a highly disciplined process that requires constant care. It consists of a dozen or so core disciplines that must all be followed in order for XP to work. The practices bolster one another, and each exists not for one reason but for many. For example, test-driven development and constant code refactoring are critical to ensuring that the system retains high quality. High code quality is required in order to sustain iterative, incremental development. And without constant peer review, many developers would quickly lapse from these critical practices. XP provides this review through the discipline of pair programming.

Yet many shops shun pairing, viewing it as unnecessary for developers to pair all the time. Not pairing is not necessarily a bad thing, but in lieu of pairing, some other form of peer review must take place. (Fagan inspections are an acceptable second choice.) Without peer review, code quality quickly degrades, and you may now have a huge liability that no one is aware of. The lesson is that if you choose not to do an XP practice, you must understand what you have just taken out. You must then find another way to provide what that practice brought to the table.

Good software development will always require these things:

  • planning
  • requirements gathering and analysis
  • design/coding
  • testing
  • deployment
  • documentation
  • review
  • adherence to standards

Contrary to popular misconception, XP says that you should be doing all of these things, all of the time.

Common Traps

The most common mistakes I’ve seen in the application of XP include: not creating a dedicated customer team, not doing fixed iterations, misunderstanding test-driven development, not refactoring continuously, not doing design, and not pairing properly.

Not Creating a Dedicated Customer Team

If you read earlier literature on XP, you will be told that there is one customer for an XP team. The practice was called “On-site Customer.” Since then, Beck and others have clarified this practice and renamed it “Whole Team.” The whole team aspect is that we (developers and non-developers) are all in this together.

The customer side of the fence has been expanded to include all people responsible for understanding what needs to be built by the developers. In most organizations this includes the following roles.

Customer / Customer Proxy: In most cases, the true customer (the person buying and using the system) is not available. Product companies have a marketing representative who understands what the market is looking for. The customer or proxy is the person who understands what functionality the system being built needs to provide.
Subject Matter Experts: Used on an as-needed basis.
Human Factors Experts: Developers are good at building GUIs or web applications that look like they were built by… developers. The user experience of a system should be designed and presented to the development team as details attached to the stories presented at iteration planning.
Testers: Most customers have neither the background nor the time to define acceptance tests (ATs), which are the responsibility of the customer team. Testers work closely with the customer to build specific test cases that will be presented to the development team.
Project Managers: While a well-functioning XP team has little need for hands-on project management, project managers in a larger organization play a key role in coordinating efforts between multiple projects. They can also be responsible for reporting project status to management.
Business/Systems Analysts: Customers, given free rein, will throw story after story at the development team, each representing the addition of new functionality. The development team will respond well and rapidly produce a large amount of functionality. Unfortunately, other facets of the requirements will often be missed: usability, reliability, performance, and supportability (the URPS in FURPS).

In an XP project, it is debatable whether these types of requirements should be explicitly presented as stories or accommodated by functionality stories. I’ve seen the disastrous results of not ensuring that these requirements were met, so I recommend not simply trusting that your development team will meet them.

The business or systems analyst in an XP team becomes the person who understands what it takes to build a complete, robust system. The analyst works closely with the customer and/or testers to define acceptance tests, and knows to introduce tests representing the URPS requirements.

Note that these are roles, not necessarily full-time positions. Members of the development team will often be able to provide these skills. Just remember that the related work comes at a cost that must be factored into iteration planning.

Not Doing Fixed Iterations

Part of the value of short-cycle iterative development is the ability to gather feedback on many things, including rate of development, success of development, happiness of the customer, and effectiveness of the process. Without fixed-length iterations (i.e. in an environment where iterations are over when the current batch of stories is completed), there is no basis for consistent measurement of the data produced in a given iteration. This means there is also no basis for comparison of work done between any two iterations.

Most importantly, not having fixed iterations means that estimates will move closer to the meaningless side of the fence. Estimating anything with any real accuracy is extremely difficult. The best way to improve estimates is to make them more frequently, basing them on past work patterns and experiences, and to make them at a smaller, more fixed-size granularity.
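The arithmetic this enables is simple enough to sketch. Assuming fixed-length iterations, a reasonable forecast for the next iteration is just the average of past velocities–the “yesterday’s weather” heuristic. The class and method names below are invented for illustration:

```java
// Why fixed-length iterations matter for estimation: velocities are
// only comparable when every iteration is the same length.
// All names here are illustrative, not from any particular XP tool.
public class VelocityForecast {
    // Story points completed in each past (fixed-length) iteration.
    private final int[] pastVelocities;

    public VelocityForecast(int[] pastVelocities) {
        this.pastVelocities = pastVelocities;
    }

    // "Yesterday's weather": plan the next iteration around the
    // average of what the team actually finished before.
    public int forecastNextIteration() {
        int total = 0;
        for (int v : pastVelocities) {
            total += v;
        }
        return total / pastVelocities.length;
    }
}
```

With past velocities of 8, 10, and 12 points, the forecast for the next iteration is 10 points. Without fixed-length iterations, averaging these numbers would be meaningless.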

Putting short deadlines on developers is painful at first. Without fixed deadlines, though, customers and management are always being told, “we just need another day.” Another day usually turns into another few days, another few weeks. Courage is needed by all parties involved. Yes, it is painful to not finish work by a deadline. But after a few iterations of education, developers learn the right amount of work to target for the next two weeks. I’ll take a few failures in the small over a failed project any day.

Misunderstanding Test-Driven Development

Small steps.
Small steps.
Small steps.

I can’t repeat this enough. Most developers who learn TDD from a book take far larger steps than intended. TDD is about tiny steps. A full test-code-refactor cycle takes a few minutes on average. If you don’t see a green bar from passing tests within ten minutes, your steps are too large.

When you are stuck on a problem with your code, stand up and take a short break. Then sit down, toss the most recent test and associated code, and start over again with even smaller steps. If you are doing TDD properly, this means you’ve spent up to ten minutes struggling; I guarantee that you’ll save gobs more struggling time by making a fresh attempt.

Tests are specs. If you do test-driven development correctly, there is no code in the system that can’t be traced back to a spec (test). That means that you don’t write any code without a test. Never mind “test everything that can possibly break.” Test everything. It’s much harder to get into trouble.

As an example of a naive misunderstanding of TDD, I’ve seen tests that look like this:

public void testFunction() {
   SomeClass thing = new SomeClass();
   boolean result = thing.doSomething();
   assertTrue(result);
}

In the production class, the method doSomething is long, does lots of things, and ends by returning a boolean result.

class SomeClass {
   boolean doSomething() {
      boolean result = true;
      // lots and lots of functionality
      // in lots and lots of lines of code
      // ...
      return result;
   }
}

Of course, the test passes just fine. Never mind that it tests none of the functionality in the doSomething method. The most accurate thing you could do with this code is reduce the doSomething method to a single statement:

boolean doSomething() {
   return true;
}

That’s all the test says it does. Ship it! 😉

Test everything until you can’t stand it. Test getters and setters, constructors, and exceptions. Everything in the code should have a reason for being, and that reason should be documented in the tests.
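As a counterpoint to the naive example above, here is a minimal sketch of what “test everything” can look like. The Account class is hypothetical; each behavior–constructor, getter, state change, exception–is pinned down by a plain assertion so the example stays self-contained:

```java
// Account is a hypothetical class, invented for illustration.
// Every behavior it has exists because a test demanded it.
public class Account {
    private int balance;

    public Account(int openingBalance) {
        if (openingBalance < 0) {
            throw new IllegalArgumentException("negative opening balance");
        }
        this.balance = openingBalance;
    }

    public int getBalance() {
        return balance;
    }

    public void deposit(int amount) {
        balance += amount;
    }

    public static void main(String[] args) {
        // Constructor and getter: the opening balance is recorded.
        assert new Account(100).getBalance() == 100;

        // State change: deposits accumulate.
        Account account = new Account(100);
        account.deposit(50);
        assert account.getBalance() == 150;

        // Exception: a negative opening balance is rejected.
        boolean thrown = false;
        try {
            new Account(-1);
        } catch (IllegalArgumentException expected) {
            thrown = true;
        }
        assert thrown;
    }
}
```

Unlike the testFunction example, removing any line of Account’s production code here makes at least one assertion fail.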

Not Refactoring Continuously

Understanding how to approach refactoring and do it properly is probably the most misunderstood aspect of XP. Not refactoring well is also the quickest way to destroy a codebase, XP or not.

Refactoring is best viewed as the practice of being constantly vigilant about your code. It fits best as part of the test-driven development cycle: write a test (spec), write the code, make the code right. Don’t let another minute go by without ensuring you haven’t introduced junk into the system. Done in this fashion, refactoring just becomes an engrained part of your every-minute every-day way of building software.
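A minimal sketch of that third step, using an invented Receipt class: the moment the tests pass, duplicated money formatting is pulled into one helper rather than left to rot:

```java
// Hypothetical example of the "make the code right" step of the cycle.
// Before refactoring, lineItem and total each repeated the same inline
// cents-to-dollars formatting; the duplication now lives in one place.
public class Receipt {
    public String lineItem(String name, int cents) {
        return name + " " + dollars(cents);
    }

    public String total(int cents) {
        return "Total " + dollars(cents);
    }

    // The extracted helper: one obvious home for the money format.
    private String dollars(int cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}
```

It is a tiny change, but it is exactly this scale of cleanup, done every few minutes, that keeps junk from accumulating.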

As soon as you aren’t on top of it, however, code turns to mush, even with a great design.

XP purists will insist that constant, extreme refactoring allows your design to emerge constantly, meaning that the cost of change remains low. This is absolutely true, as long as your system always retains the optimal design for the functionality implemented to date. Getting there and staying there is a daunting task, and it requires everyone to be refactoring zealots.

Many shops experimenting with XP, however, understand neither what it means to be a refactoring zealot nor what optimal design is. Kent Beck’s concept of simple design can be used as a set of rules: the code passes all its tests, contains no duplication, expresses the programmer’s intent, and uses the fewest possible classes and methods. The rules are straightforward; adhering to them takes lots of peer pressure.

Not Doing Design

I’ve heard far too many people saying that “you don’t do design in XP.” Hogwash (and I’m being nice).

As with everything else in XP, the “extreme” means that you do design all the time. You are designing at all sorts of levels. Until you get very good at following all the XP disciplines, the iteration planning meeting should be your primary avenue for design.

Iteration planning consists of task breakdowns. The best way to do task breakdowns is to start by sketching out a design. UML models should come into play here. Class diagrams, sequence diagrams, state models, and whatever other models are needed should be drawn out and debated. The level of detail on these models should be low–the goal is to solve the problem of general direction, not specify all the minute details.

For a typical two-week iteration, a planning session should last no longer than half a day (how much new stuff that needs to be designed can possibly fit into two weeks?). But if you’re just getting started, and the design skills on the team are lacking, there is no reason you can’t spend more time in the iteration planning meeting. The goal should be that you spend less time next iteration.

Design should also be viewed as a minute-to-minute part of development. With each task tackled, a pair should sketch out and agree on general direction. With each new unit test, the pair should review the code produced and determine if the design can be improved. This is the refactoring step. Ideally you’re not seeing large refactorings that significantly alter the design at this point, but things like commonality being factored out into a Template Method design pattern will happen frequently.
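A sketch of the kind of refactoring described above, with invented report classes: two reports shared the same rendering skeleton, so the common sequence moves into an abstract base–the Template Method–leaving only the varying step to subclasses:

```java
// Illustrative Template Method refactoring; all names are invented.
abstract class Report {
    // The template method: the invariant skeleton lives in one place.
    public final String render() {
        return header() + body() + "-- end of report\n";
    }

    protected String header() {
        return "REPORT\n";
    }

    // The step that varies is the only thing subclasses supply.
    protected abstract String body();
}

class SalesReport extends Report {
    protected String body() {
        return "units sold: 42\n";
    }
}

class InventoryReport extends Report {
    protected String body() {
        return "items on hand: 7\n";
    }
}
```

This is the scale of design change a pair should be making at the refactoring step: small, local, and driven by duplication that the last few tests exposed.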

At any given time, all members of the development team should have a clear picture of the design of the system. Given today’s tools, there is no reason that a detailed design snapshot can’t be produced at a moment’s notice. In fact, I highly recommend taking a snapshot of the existing system into the iteration planning meeting to be used as a basis for design discussions.

In XP, design is still up-front; it’s just that the increments are much smaller, and less of it is written down. In a classic, more waterfall-oriented process, more design is finalized and written down up front. The downside in XP is that there is some rework based on changing requirements. The downside in the classic process is that there is no such thing as a perfect design, and following an idealistic model results in overdesigned and inflexible systems. There are always new requirements, and a process that teaches how to be prepared to accommodate them is better than one that does not.

Not Pairing Properly

Part of pairing is to ensure knowledge sharing across the team, not just amongst two people. Ensure that your pairs switch frequently–at least once per day if not more often. Otherwise you will get pockets of ability and knowledge. Corners of your system will turn into junk, and you will have dependencies on a small number of knowledgeable people.

Developers will resist switching pairs frequently. It is a context switch, and context switches are expensive. If you are in the middle of a task, and the person you were working with leaves only to be replaced by someone else, you must take the time to get that person up to speed. Frequent pair switching forces you to get good at context switching, which forces you to improve your code maintainability. If your code is poorly written and difficult to understand, the newly arrived pair will waste far more time getting up to speed. Ideally, the new pair should balk and insist upon more code clarity before proceeding.

The ability to quickly understand code and associated unit tests is what makes software maintenance costs cheap and consistent. Maintainable code is what allows you to be able to sustain the quality of your system over time.
