Extreme programming (XP), on the surface, is a hot, trendy way of developing software. Its name appeals to the reckless and annoys the cautious (“anything extreme can’t be good!”). A recent article in Wired magazine attests to its popularity among the cool set, who believe that XP is a silver bullet. The article also talks about some of XP’s severest detractors, who view XP as dangerous hype. The truth lies somewhere in between. XP alone will not save your project. Applied properly, XP will allow your team to become very successful at deploying quality software. Applied poorly, XP will allow your team to use a process as an excuse for delivering (or not delivering) low-quality software.
Applying XP
XP is a highly disciplined process that requires constant care. It consists of a dozen or so core disciplines that must all be followed in order for XP to work. The practices bolster one another, and each exists not for one reason but for many. For example, test-driven development and constant code refactoring are critical to ensuring that the system retains high quality. High code quality is required to sustain iterative, incremental development. And without constant peer review, many developers would quickly lapse from these critical practices. XP provides this review through the discipline of pair programming.
Yet many shops shun pairing, viewing it as unnecessary for developers to pair all the time. Not pairing is not necessarily a bad thing, but in lieu of pairing, some other form of peer review must take place. (Fagan inspections are an OK second choice.) Without the peer review, code quality quickly degrades, and you may now have a huge liability that no one is aware of. The lesson is that if you choose not to do an XP practice, you must understand what you have just taken out. You must then find another way to provide what that practice brought to the table.
Good software development will always require these things:
- planning
- requirements gathering and analysis
- design/coding
- testing
- deployment
- documentation
- review
- adherence to standards
Contrary to popular misconception, XP says that you should be doing all of these things, all of the time.
Common Traps
The most common mistakes I’ve seen in the application of XP include: not creating a dedicated customer team, not doing fixed iterations, misunderstanding test-driven development, not refactoring continuously, not doing design, and not pairing properly.
Not Creating a Dedicated Customer Team
If you read the earlier literature on XP, you will be told that there is one customer for an XP team. The practice was called “On-site Customer.” Since then, Beck and others have clarified this practice and renamed it “Whole Team.” The point of the new name is that we (developers and non-developers) are all in this together.
The customer side of the fence has been expanded to include all people responsible for understanding what needs to be built by the developers. In most organizations this includes the following roles.
Customer / Customer Proxy: In most cases, the true customer (the person buying and using the system) is not available. Product companies instead have a marketing representative who understands what the market is looking for. The customer or proxy is the person who understands what functionality needs to be in the system being built.

Subject Matter Experts: Used on an as-needed basis.

Human Factors Experts: Developers are good at building GUIs or web applications that look like they were built by… developers. The user experience of a system should be designed and presented to the development team as details attached to the stories presented at iteration planning.

Testers: Defining acceptance tests (ATs) is the responsibility of the customer team, but most customers have neither the background nor the time to do it. Testers work closely with the customer to build specific test cases that will be presented to the development team.

Project Managers: While a well-functioning XP team has little need for hands-on project management, project managers in a larger organization play a key role in coordinating efforts between multiple projects. They can also be responsible for reporting project status to management.

Business/Systems Analysts: Customers, given free rein, will throw story after story at the development team, each representing the addition of new functionality. The development team will respond well and rapidly produce a large amount of functionality. Unfortunately, other facets of the requirements will often be missed: usability, reliability, performance, and supportability (the URPS in FURPS). In an XP project, it is debatable whether these types of requirements should be explicitly presented as stories or accommodated within the functionality stories. I’ve seen the disastrous results of not ensuring that these requirements were met, so I recommend not trusting that your development team will meet them on its own. The business or systems analyst on an XP team becomes the person who understands what it takes to build a complete, robust system. The analyst works closely with the customer and/or testers to define acceptance tests, and knows to introduce tests representing the URPS requirements.
Note that these are roles, not necessarily full-time positions. Members of the development team will often be able to provide these skills. Just remember that the related work comes at a cost that must be factored into iteration planning.
Not Doing Fixed Iterations
Part of the value of short-cycle iterative development is the ability to gather feedback on many things, including rate of development, success of development, happiness of the customer, and effectiveness of the process. Without fixed-length iterations (i.e. in an environment where iterations are over when the current batch of stories is completed), there is no basis for consistent measurement of the data produced in a given iteration. This means there is also no basis for comparison of work done between any two iterations.
Most importantly, not having fixed iterations means that estimates will move closer to the meaningless side of the fence. Estimating anything with any real accuracy is extremely difficult. The best way to improve estimates is to make them more frequently, to base them on past work patterns and experience, and to make them at a smaller, fixed granularity.
Putting short deadlines on developers is painful at first. Without fixed deadlines, though, customers and management are always being told, “we just need another day.” Another day usually turns into another few days, then another few weeks. Courage is needed by all parties involved. Yes, it is painful not to finish work by a deadline. But after a few iterations of education, developers learn the right amount of work to target for the next two weeks. I’ll take a few failures in the small over a failed project any day.
Misunderstanding Test-Driven Development
Small steps.
Small steps.
Small steps.
I can’t repeat this enough. Most developers who learn TDD from a book take far larger steps than intended. TDD is about tiny steps: a full test-code-refactor cycle takes, on average, a few minutes. If you don’t see a green bar from passing tests within ten minutes, you’re taking steps that are too large.
When you are stuck on a problem with your code, stand up and take a short break. Then sit down, toss the most recent test and its associated code, and start over with even smaller steps. If you are doing TDD properly, this means you’ve thrown away at most ten minutes of struggling; I guarantee you’ll save far more struggling time by making a fresh attempt.
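To make the size of a step concrete, here is a minimal sketch of two consecutive test-code cycles in JUnit; the ShoppingCart class and its behavior are invented for illustration. Each test forces only a minute or two of production code before the bar is green again.

import junit.framework.TestCase;

import java.util.ArrayList;
import java.util.List;

public class ShoppingCartTest extends TestCase {

    // Step one: the simplest spec that can possibly fail.
    public void testTotalIsZeroForAnEmptyCart() {
        assertEquals(0, new ShoppingCart().total());
    }

    // Step two: forces the real accumulation logic into existence.
    public void testTotalIsTheSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(300);   // prices in cents keep the arithmetic exact
        cart.add(199);
        assertEquals(499, cart.total());
    }
}

// Production code grown one green bar at a time: the first test was
// satisfied by returning 0; the second forced the loop below.
class ShoppingCart {
    private final List<Integer> pricesInCents = new ArrayList<Integer>();

    void add(int priceInCents) {
        pricesInCents.add(priceInCents);
    }

    int total() {
        int sum = 0;
        for (int price : pricesInCents) {
            sum += price;
        }
        return sum;
    }
}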
Tests are specs. If you do test-driven development correctly, there is no code in the system that can’t be traced back to a spec (test). That means you don’t write any code without a test. Never mind “test everything that can possibly break”; test everything. It’s much harder to get into trouble that way.
As an example of a naive misunderstanding of TDD, I’ve seen tests that look like this:
public void testFunction() {
    SomeClass thing = new SomeClass();
    boolean result = thing.doSomething();
    assertTrue(result);
}
In the production class, the method doSomething is long, does lots of things, and ends by returning a boolean result.
class SomeClass {
    boolean doSomething() {
        boolean result = true;
        // lots and lots of functionality
        // in lots and lots of lines of code
        // ...
        return result;
    }
}
Of course, the test passes just fine. Never mind that it tests none of the functionality in the doSomething method. The most accurate thing you could do with this code is reduce the doSomething method to a single statement:
boolean doSomething() {
    return true;
}
That’s all the test says it does. Ship it! 😉
Test everything until you can’t stand it. Test getters and setters, constructors, and exceptions. Everything in the code should have a reason for being, and that reason should be documented in the tests.
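As a small, invented sketch of that attitude (the Account class and its rules are hypothetical), even a constructor and an expected exception get their own spec:

import junit.framework.TestCase;

public class AccountTest extends TestCase {

    public void testConstructorStoresOpeningBalance() {
        Account account = new Account(500);
        assertEquals(500, account.balance());
    }

    public void testWithdrawingMoreThanBalanceThrows() {
        Account account = new Account(100);
        try {
            account.withdraw(200);
            fail("expected an InsufficientFundsException");
        } catch (InsufficientFundsException expected) {
            assertEquals(100, account.balance());   // a failed withdrawal must not touch the balance
        }
    }
}

// Minimal production code so the sketch is self-contained.
class InsufficientFundsException extends RuntimeException {
}

class Account {
    private int balanceInCents;

    Account(int openingBalanceInCents) {
        this.balanceInCents = openingBalanceInCents;
    }

    int balance() {
        return balanceInCents;
    }

    void withdraw(int amountInCents) {
        if (amountInCents > balanceInCents) {
            throw new InsufficientFundsException();
        }
        balanceInCents -= amountInCents;
    }
}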
Not Refactoring Continuously
How to approach refactoring and do it properly is probably the most misunderstood aspect of XP. Not refactoring well is also the quickest way to destroy a codebase, XP or not.
Refactoring is best viewed as the practice of being constantly vigilant about your code. It fits best as part of the test-driven development cycle: write a test (spec), write the code, make the code right. Don’t let another minute go by without ensuring you haven’t introduced junk into the system. Done in this fashion, refactoring becomes an ingrained part of your every-minute, every-day way of building software.
As soon as you aren’t on top of it, however, code turns to mush, even with a great design.
XP purists will insist that constant, extreme refactoring allows your design to emerge constantly, meaning that the cost of change remains low. This is absolutely true, as long as your system always retains the optimal design for the functionality implemented to date. Getting there and staying there is a daunting task, and it requires everyone to be refactoring zealots.
Many shops experimenting with XP, however, understand neither what it means to be a refactoring zealot nor what optimal design is. Kent Beck’s concept of simple design can be used as the set of rules: the code passes all its tests, contains no duplication, clearly expresses the programmers’ intent, and has no superfluous classes or methods. The rules are straightforward; adhering to them takes lots of peer pressure.
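A sketch of what that constant vigilance looks like in the small, using an invented Invoice class: moments after the green bar, the duplicated tax rule gets factored out, before the next test is written.

// The first green-bar version repeated the tax expression in both
// public methods (subtotal + subtotal * 0.08, linePrice + linePrice * 0.08).
// Refactoring immediately puts the rule in exactly one place.
class Invoice {
    private static final double TAX_RATE = 0.08;   // invented rate, for illustration

    double totalWithTax(double subtotal) {
        return withTax(subtotal);
    }

    double lineWithTax(double linePrice) {
        return withTax(linePrice);
    }

    private double withTax(double amount) {
        return amount + amount * TAX_RATE;
    }
}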
Not Doing Design
I’ve heard far too many people saying that “you don’t do design in XP.” Hogwash (and I’m being nice).
As with everything else in XP, the “extreme” means that you do design all the time. You are designing at all sorts of levels. Until you get very good at following all the XP disciplines, the iteration planning meeting should be your primary avenue for design.
Iteration planning consists of task breakdowns. The best way to do task breakdowns is to start by sketching out a design. UML models should come into play here. Class diagrams, sequence diagrams, state models, and whatever other models are needed should be drawn out and debated. The level of detail on these models should be low; the goal is to settle on a general direction, not to specify every minute detail.
For a typical two-week iteration, a planning session should last no longer than half a day (how much new design can possibly fit into two weeks?). But if you’re just getting started and the design skills on the team are lacking, there is no reason you can’t spend more time in the iteration planning meeting. The goal should be to spend less time in the next iteration.
Design should also be viewed as a minute-to-minute part of development. With each task tackled, a pair should sketch out and agree on a general direction. With each new unit test, the pair should review the code produced and determine whether the design can be improved. This is the refactoring step. Ideally you won’t see large refactorings that significantly alter the design at this point, but things like commonality being factored out into a Template Method design pattern will happen frequently.
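As an invented sketch of that kind of refactoring, two report generators that repeated the same rendering sequence might have the skeleton pulled up into an abstract base class, leaving only the varying step to the subclasses:

// Template Method: the base class owns the invariant sequence;
// subclasses supply only the step that differs.
abstract class Report {

    final String render() {                 // the template method
        return header() + body() + footer();
    }

    String header() {
        return "=== Report ===\n";
    }

    String footer() {
        return "=== End ===\n";
    }

    abstract String body();                 // the varying step
}

class SalesReport extends Report {
    String body() {
        return "Total sales: 42 units\n";   // invented content
    }
}

class InventoryReport extends Report {
    String body() {
        return "Items on hand: 17\n";       // invented content
    }
}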
At any given time, all members of the development team should have a clear picture of the design of the system. Given today’s tools, there is no reason that a detailed design snapshot can’t be produced at a moment’s notice. In fact, I highly recommend taking a snapshot of the existing system into the iteration planning meeting to be used as a basis for design discussions.
In XP, design is still up-front; it’s just that the increments are much smaller, and less of it is written down. In a classic, more waterfall-oriented process, more design is finalized and written down up front. The downside in XP is that there is some rework based on changing requirements. The downside in the classic process is that there is no such thing as a perfect design, and following an idealistic model results in overdesigned and inflexible systems. There are always new requirements, and a process that teaches how to be prepared to accommodate them is better than one that does not.
Not Pairing Properly
Part of pairing is to ensure knowledge sharing across the team, not just between two people. Ensure that your pairs switch frequently, at least once per day if not more often. Otherwise you will get pockets of ability and knowledge, corners of your system will turn into junk, and you will have dependencies on a small number of knowledgeable people.
Developers will resist switching pairs frequently. It is a context switch, and context switches are expensive. If you are in the middle of a task, and the person you were working with leaves only to be replaced by someone else, you must take the time to get the new person up to speed. Frequent pair switching forces you to get good at context switching, which in turn forces you to improve your code’s maintainability. If your code is poorly written and difficult to understand, the newly arrived partner will waste far more time getting up to speed. Ideally, the new partner should balk and insist on more code clarity before proceeding.
The ability to quickly understand code and its associated unit tests is what keeps software maintenance costs low and predictable. Maintainable code is what allows you to sustain the quality of your system over time.