TDD and Speculative Design

Student: “I know that I’ll need a HashMap somewhere in my very near coding future. Why shouldn’t I just introduce it now?”


Jeff: “This is one of the better questions about TDD, and one of the most common concerns about how it’s practiced. Do you really need a HashMap? Maybe you won’t by the time you’re done. By introducing this speculative design element now, you’d be creating complexity prematurely. You will incur additional cost every time you revisit the code in the interim. You might even end up with the unnecessary complexity forever.”

Student: “But I know I’ll need it in about 20 minutes.”

Jeff: “Maybe. Sometimes I’ve found that my most-cocksure design speculations were wrong, and TDD drove out a simpler solution. But for now, let’s say you’re right. With TDD, you’re learning how to think and build a solution more incrementally–a skill that can improve your ability to accommodate new, never-before-conceived behaviors into the codebase.”

Student: “Right, but this whole incremental thing requires some reworking of code. You told us that we start with the solution for the simplest case, which for this example requires only a hardcoded return value. Then we write another test that requires us to replace the hardcoding with a scalar. Then another test that requires segregating data, which I can do using a hash-based collection. Seems wasteful to write some code, only to throw it away a few minutes later.”
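
[Aside: here’s a minimal sketch of the increments the student describes, using a hypothetical Portfolio example–none of these names come from an actual exercise. Each step adds just enough code to pass the newest test; the map appears only when a test for a second symbol demands it.]

import java.util.HashMap;
import java.util.Map;

public class Portfolio {
   // Step 1, first test: sharesOf() simply returned a hardcoded 0.
   // Step 2, a test for a single purchase: the hardcoding became a scalar int field.
   // Step 3, a test involving two different symbols: the data had to be segregated
   //         per symbol--only now does the HashMap earn its place.
   private Map<String, Integer> sharesBySymbol = new HashMap<String, Integer>();

   public void purchase(String symbol, int count) {
      Integer current = sharesBySymbol.get(symbol);
      sharesBySymbol.put(symbol, count + (current == null ? 0 : current));
   }

   public int sharesOf(String symbol) {
      Integer current = sharesBySymbol.get(symbol);
      return current == null ? 0 : current;
   }
}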

Jeff: “We’re trying to adhere to the rhythmic TDD discipline of red-green-refactor. Part of the success of TDD hinges on seeing a failing (red) test for a new behavior. One important reason: If you never see a test fail, you have little clue that it’s a legitimate test. When you write a more generalized solution than you need, writing a next test that fails first may be near impossible. Ultimately, not following the cycle leads to you writing fewer tests.”

Student: “Remind me, why are TDD and adhering to its cycle important?”

Jeff: “Foremost, the cycle lets us know we’re on track with building the correct code every few minutes. Also, as developers, we’re good at constantly creating lots of cruft–code that’s not as easily understood or maintained. In fact, cruft buildup accounts for a significant portion of your rising development costs. The cycle gives you continual opportunities to clean up the junk as soon as you create it–before it’s too late. The tests minimize your fear of making the necessary changes.”

“If you stick to the cycle, and write a failing test for each new behavior, you’ll end up with a set of tests for virtually all bits of logic you intended. These tests will document all behaviors you chose to drive into the system. Well-crafted tests can save everyone the countless hours otherwise needed to understand how the system behaves over time and as it grows.”

Student: “So, it’s also important that the cycles are short.”

Jeff: “Yes. It’s generally easier to find and fix problems when you find out about them sooner. Also, if you take larger steps, you’ll create more code than you’ll likely clean during the refactor step.”

Student: “But what about the wasted bits of code, the rework?”

Jeff: “You’re making a tradeoff. You are the tortoise, taking an incremental, steady route as opposed to the hare, who takes larger leaps requiring unpredictable amounts of route correction. The amount of code rework is often trivial, and if you learn to always take on the next-smallest case, the amount of new code is also small. You’ll learn to get good at taking small, incremental steps.”

Student: “Yeah, but rework?”

Jeff: “Let’s not forget that debugging cycles and defects represent rework costs that we rarely account for, and they are significant, even monstrous, on most development efforts. There’s also considerable cost when, however rarely, your speculation is wrong.”

Student: “Right, but I know more than just what’s needed for the current test case. Shouldn’t I create a design that accommodates all of that knowledge?”

Jeff: “It’s always valuable to create a design model given the needs you know, whether in your head or on the whiteboard. But don’t force that design into the system until the behaviors described by the tests demand it. And don’t spend a lot of effort on the details.

“As you test-drive, you can shift the design in the direction of your speculative model, as long as you’re keeping the code as clean as possible. Beck’s simple design rule #4 can help tell you when you’ve gone overboard: don’t add any more moving parts or complexity than you need to pass the current set of tests.”

Student: “I would rather build the design to what I know I need over the next few hours.”

Jeff: “That’s your prerogative, though I suggest starting with the smaller, more incremental steps. It’s the better way to learn TDD: It’ll keep you honest, and generally result in more tests that describe the behavior you’re test-driving into the system.

“Once you’re comfortable with the TDD rhythm, you might take larger steps. I sometimes do. Much of the time, I end up ok, but every once in a while, I get bitten. That pushes me back in the direction of taking smaller steps.”

Student: “Right, but I know I need the HashMap. Here, see, I’m just about done, and there it is.”

Jeff: “Nice. But it turns out that the PO changed her mind. When you come in tomorrow, she’ll explain that we need to track things transactionally. The HashMap is the wrong abstraction for this–we want a time series collection. You’re going to have more rework than if you’d kept the design simple.”

Student: “Isn’t this a rationale for spending just a bit more time on pinning down requirements up front?”

Jeff: “One of the reasons we do agile is to allow for changes in direction based on feedback–from the marketplace, from the PO, from what we learn. TDD provides a cyclic mechanism that supports agile and its goal of producing high-quality software that meets the current, changing needs of the customer.”

“TDD is all about continual, just-in-time, sufficient design. I like to say that design is so important, you can’t just consider it once. You have to address it continually.”

Student: “Sounds like it’s all your opinion. I’d like to hear from someone else.”

Jeff: “Tim Ottinger also suggested that TDD gives you the ability to abandon the code after any integration step (which could be as often as every R-G-R cycle)–knowing it’s as clean and complete as the tests indicate.”

“Tim also hinted at something else important: None of us are perfect, and many of our predictions about design are flat-out wrong. TDD provides continual humility lessons for most developers, helping us rein in our natural hubris at minimal cost.”


Note: this contrived dialog is based on real questions and challenges from actual TDD students (and they keep coming–I had much the same discussion two weeks ago).

10 Reasons Code Duplication Increases Cost

Eliminating duplication is rule #2 of Beck’s Simple Design.


Here’s a starter list of the costs that duplication creates (a brief code sketch of consolidating such duplication follows the list):

  • increased effort to change all occurrences in the system. Think about 5 lines of logic duplicated 30 times throughout your codebase, and now they need to all be changed.
  • effort to find all occurrences that need to change. This is often compounded by small changes (e.g. renaming) made over time to the duplicated code.
  • risk of not finding all occurrences in the system. Imagine you need to change the 5 lines of logic; you locate 29 occurrences but miss the 30th. You’ve now shipped a defect.
  • risk of making an incorrect change in one of those places. Making the same change in 30 places multiplies your chances of screwing up by 30… or more. Tedium induces sleepiness.
  • increased effort to understand variances. Is that small change to one of the occurrences intentional or accidental?
  • increased effort to test each duplicate occurrence in its context. And more time to maintain all these additional tests.
  • increase in impact of a defect in the duplicated logic. “Not only is module A failing, but so are modules B through Z and AA through DD.”
  • increased effort to understand the codebase overall. More code, more time, less fun.
  • minimized potential to re-use code. Often the 5 duplicated lines are smack-dab in the middle of other methods, minimizing your potential for some form of algorithmic substitution.
  • self-perpetuation. A codebase and culture that doesn’t promote code sharing just makes it that much harder to do anything about it.
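
Here’s the sketch promised above–hypothetical names, and real duplication is of course messier. The point is that the 30 copies collapse into 30 one-line calls, and the logic gets exactly one home:

// Before: roughly these lines appear, with slight variations, in 30 places:
//    double discounted = total > 100.0 ? total * 0.90 : total;
//
// After: one home for the logic; changing the threshold or rate is now one edit.
public class Pricing {
   private static final double DISCOUNT_THRESHOLD = 100.0;
   private static final double DISCOUNT_RATE = 0.10;

   public static double discountedTotal(double total) {
      return total > DISCOUNT_THRESHOLD
            ? total * (1.0 - DISCOUNT_RATE)
            : total;
   }
}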

No doubt some of you can add additional costs associated with duplicate code in a system. Please do (in the comments).

The Compulsive Coder, Episode 6: this Duplication is Killing Me

A few months ago, I got caught up in a series of contentious debates about duplication in test code. I’m the sort who believes that most of our design heuristics–simple design in this case–fall victim to the reality of the pirate’s code: “The code is more what you’d call ‘guidelines’ than actual rules.” Which is OK by me; rules are made to be broken.


Unit tests are different beasts than production code. They serve a different purpose. For the most part, they can and should follow the same standards as production code. But since one of their primary purposes is to provide summary documentation, there are good reasons to adopt differing standards. (I solicited some important distinctions on Twitter a few months ago; this will be fodder for an upcoming post.)

With respect to Beck’s four rules of simple design, many have contended that expressiveness slightly outweighs elimination of duplication, whether in production code or in tests. With production code, I’m on the fence.

But for tests? The documentation aspect of tests is more relevant than their adherence to OO design concepts. Yes, duplication across tests can cause some future headaches, but the converse–tests that read poorly–is a more typical and bigger time-waster. Yes, get rid of duplication, absolutely, but not if it causes the readability of the tests to suffer.

My dissenting converser was adamant that eliminating duplication was king, citing Beck himself in Test-Driven Development: By Example (2002), on page one, no doubt: Step four in the rhythm of TDD says you run all the tests, and step five, “Refactor to remove duplication.” In that summary of steps, Beck suggests no other reasons to refactor! I can understand how a literalist-slash-originalist might be content with putting their foot down on this definition, never mind that “refactoring” has a broader definition anyway, never mind Beck’s own simple design rules to the contrary, and never mind the 14 years since, in which we’ve discussed, reflected on, and refined our understanding of TDD.

Why a contentious debate? The dissenter was promoting a framework; they had created some hooks designed to simplify the creation of tests for that framework. A laudable goal, except for the fact that I had no clue what the tests were actually proving. That just wads my undies.

Where am I going with all this? Oh yeah. So as I watched the dissenter scroll through some of their tests & prod code onscreen, I noted this repeated throughout:

public void doSomeStuff() {
   this.someField = someMethod();
   int x = this.someField + 42;
   // etc.
}

Apparently their standard is to always identify field usage by scoping it with this. (I bit my lip, because at that point I was weary of the protracted debate, and figured I could write about it later. Here I am.)

“Hey, over-using this might be a good idea, because we can more easily identify where fields are used, which is important information!”

OK, I’m offended that duplication-monists don’t view the use of this in this case (i.e. not to disambiguate) as unnecessary duplication. It is. Squash it. Instead, use the power of your sophisticated IDE–IntelliJ IDEA or Eclipse–and colorize fields appropriately. (I like orange.) They’ll stand out like sore thumbs.
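
For contrast, here’s the same hypothetical fragment with the qualifiers squashed–the IDE’s field colorization carries the information instead:

public class SomeClass {
   private int someField;

   public void doSomeStuff() {
      someField = someMethod();   // no qualifier needed; field coloring already flags it
      int x = someField + 42;
      // etc.
   }

   private int someMethod() {
      return 42;   // hypothetical stand-in
   }
}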



A TDD Bag of Tricks

Over the many years I’ve been practicing TDD, I’ve sought to incorporate new learning about what appears to work best. If you were to look at my tests and code from 1999, you’d note dramatic differences in their structure. You might glean that my underlying philosophy had changed.

Yet my TDD practice in 2012 is still TDD. I write a test first, I get it to pass, I clean up the code. The core goal of sustaining low-cost maintenance over time remains, and I still attain this goal by crafting high quality code–TDD gives me that opportunity.

TDD is the same, but I believe that new techniques and concepts I’ve learned in the interim are helping me get better. My bag of TDD “tricks” acquired over a dozen years includes approaches such as one assert per test and Given-When-Then (more on these below).

There is of course no hard data to back up the claims that any of these slightly-more-modern approaches to TDD are better. Don’t let that stop you–if you waited for studies and research to proceed, you’d never code much of anything. The beauty of TDD is that its rapid cycles afford many opportunities to see what works best for you and your team.

(But remember: You only really know how good your tests and code are when others must maintain them down the road.)

Should you push more in the direction of one assert per test? Is Given-When-Then superior to Arrange-Act-Assert? Those are entertaining things to debate, but you’re better off finding out for yourself what works best for your team. Also, be willing to accept that others might have an even better approach.
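
If the terms are unfamiliar, here’s a minimal sketch of the two structures (JUnit 4, with a hypothetical Account example). Structurally they’re the same three chunks; the difference is mostly in how you narrate them:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountTest {
   // Arrange-Act-Assert: three blocks, typically separated by blank lines.
   @Test
   public void depositIncreasesBalance() {
      Account account = new Account();         // arrange

      account.deposit(100);                    // act

      assertEquals(100, account.balance());    // assert
   }

   // Given-When-Then: the same shape, narrated in behavioral terms.
   @Test
   public void balanceReflectsDeposit() {
      Account account = new Account();         // given an empty account

      account.deposit(100);                    // when 100 is deposited

      assertEquals(100, account.balance());    // then the balance is 100
   }

   // Minimal hypothetical class under test.
   static class Account {
      private int balance = 0;
      void deposit(int amount) { balance += amount; }
      int balance() { return balance; }
   }
}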

I recommend investigating all of the above ideas and more–keep your eyes open! Just don’t get hung up on any one or assume it’s the One True Way. The practice of TDD is a continual voyage of exploration, discovery, and adaptation.

TDD: A Minority Report

Tim, a modern urbane developer (i.e., he willingly practices TDD), talks about the reality of living in his team, circa 2008, where his organization was attempting to grow TDD.

I am not gonna lie, this place is crazy with buzz words and terms. They use words like Agile and Scrum as adjectives as opposed to ways of thinking and common sense still. I hear, “Well, we are doing Agile, so we don’t have time for design.” My jaw hits the floor… and then I start drinking heavily.

The team I am on currently is a “center of excellence,” so to speak. We consult development teams within the organization (along with owning development projects, components, and standards). I feel pretty confident in my abilities to help teams out. I have no problem saying that I do not think of myself as an expert or perfect (yet). I have always been of the opinion that this should be a constant learning experience. I find it difficult to relay my thoughts, though, to the rest of the guys on my team. It is basically the three of them attacking me. It’s not totally me vs. the world, but it turns into a joke. I hear:

“…so basically you want me to double my development…”
“…there is no design in TDD…”
“…there is no empirical evidence that says TDD improves code…”
“…I have been in shops before where the test cases were bogus…”

So…I think I know how you must feel sometimes 🙂 I never once say that it is a silver bullet. What I tell them is I didn’t realize how little I knew until I started using it as an approach. Primarily from a design and coding point of view. Don’t get me wrong, I have some smart-ass retorts, but I am just outnumbered 🙂 It’s a tad difficult for me to try and persuade them, because we all are strong developers (I think) on the same hierarchy, and everyone has their own opinions.

Fortunately or unfortunately, they will have to come on board though, because this is coming from the top down. The quality of code and the overall abilities of the development teams here are borderline atrocious for most projects. At least the ones we see on a regular basis. The good teams obviously don’t need our help. To put it bluntly, I have seen more 5000-line classes than I ever care to see again.

I think they (my teammates) have just had either bad personal experiences with TDD/Agile/etc., or bad education on it.

A commenter on last week’s post indicated that he viewed being the minority as an opportunity to show how much better things can go with TDD–fewer defects and getting code done sooner. That’s true, and it can feel gratifying, but you’re not likely to gain new converts by effectively showing them up–at least not immediately.

In a den of dogs with old tricks, some of the smart ones eventually come around, but in the short term they’re unlikely to admit that the new tricks might be better. Tim himself was sure his old way of coding (no TDD) was better until well after I’d left his scene.

In any case, from Tim we once again hear how bad experiences with TDD and agile can sour people. (Remember, this is four years ago.) What can we do to help? Here are a couple thoughts:

  1. Don’t do agile or TDD unless you’re willing to invest in its core notion of continual, honest retrospection and adaptation. You will not succeed with it over time otherwise.
  2. Relay more stories. Is your company succeeding with TDD, or do you know of one that is? Please tell your success story, or help your buddy write a blog post telling theirs. (And hope they’re not at a place where they view TDD as a competitive advantage to be kept hush-hush.)


Framing Your Design

A while back, I worked with a team in a cube bullpen to help mentor them primarily in test-driven development (TDD). Their goal was to deliver a decent-sized web product–not huge, not trivial.

The first time I paired with the programmer nearest the bullpen opening, I noticed a large framed document on a nearby cube wall, outside the bullpen, and asked what it was. All I could tell from that distance was that it was a highly detailed diagram of some sort.

I asked my pair about the document.
“It’s our class model.”
“How do you use it?”
“We don’t. It’s out of date.”
Of course it was. We went back to building code.

When we took a break, I walked to the diagram. Sure enough, it was a UML class model–with gobs of detail. Public methods, attributes, private methods, annotations on the associations, attribute types, return types, and so on, all specified in what looked like a couple hundred classes. Arrows all over the place. As I previously mentioned, the document was framed (albeit cheaply), which meant that the model itself was enshrined in a protective layer of glass (plastic?).

The framed model sure looked pretty! And it no doubt looked quite impressive to any non-technical observer (such as a vice president): “They built all that!”

Of course, the team that actually produced the detailed model no longer found it useful. During the remainder of my engagement, I never once saw a developer look at the diagram. And most amusing, when I took my one visit to inspect the model, a closer look revealed that the programmers had briefly attempted to keep the model up to date. On the surface of the glass, there were various scribbles and attempted modifications, all written directly on the glass.

Your software is special, but your design models are not, and they change rapidly. Don’t enshrine your design.

 

Trailing the Way in 2009

I started reading a very recently published book. I don’t want to mention the name of the book, but it is a tome targeted at “application developers.” After reading a small number of paragraphs from the book, I had to check my calendar to see whether this was 2009 or 1995.

Some choice sentences from the book that just grabbed me right off the bat:

  • “Software applications should simulate (model) the real world with close affinity to the problem domain.”
  • “…the average developer should spend 40 to 50 percent of his or her time in design and not writing code.”
  • “Similar to comments and just as important [emphasis added], the design documents a program.”
  • “An artist does not start with a paintbrush and a canvas. There is considerable preparation before painting can begin. … Similarly, developers do not simply start writing code. The requirements analysis must be undertaken, a design drafted, the prototyping [emphasis added] of class operations, and only then, finally, the implementation.”
  • “The goal of TDD is simply to be a framework for addressing customer requirements with software through an iterative approach to testing and coding.”

[ All told, the author uses around 10 paragraphs in the book to describe TDD, not providing a single example. ]

For me, one of the most illuminating benefits of all this up-front emphasis on design seemed to be realized in the brilliant class diagram for a simple retail banking system. It looks something like:

  Employee <|---- Teller  <------  Customer
              |
              |-- Manager

The power of the “real world” comes alive! This ingenious class diagram existed to support building the following code:

static void Main(string[] args)
{
    Customer cust = new Customer();
    cust.Teller = new Teller();
    cust.Deposit();
    cust.Withdrawal();
}

Per the author, this code “validates” the design.

I’m speechless. (Well, no, I’m fingerless, or something. There must be a better word for being so flabbergasted that you can’t start to type a response.) I could rail on this book to no end, but it would only draw attention to the book (and you’ll note that I’m not providing a link to it either).

Test Abstraction

I’m staring at a single CppUnit test function spanning hundreds of source lines. The test developer inserted visual indicators to help me pick out the eight test cases it covers:

//++++++++++++++++++++++++++++++++++++++++++++++++++

Each of these cases is brief: four to eight lines of data setup, followed by an execution statement enclosed in a CPPUNIT_ASSERT. Of course they could be broken up into eight separate test functions, but otherwise they are reasonable.

Prior to the eight tests there are two hundred lines of setup code. Most of the initialization sets data to reasonable default values so that the application code won’t crash and burn while being exercised.

I don’t know enough about the test to judge it in terms of its appropriateness as a “unit” test. It seems more integration test than anything. But perhaps all I would need to do is cleverly divorce the target function from all of those data setup dependencies, and break it up into eight separate test functions.

The aggregation of tests is typical, and no doubt comes from a compulsion to not waste all those 200 lines of work! The bigger problem I have is the function’s lack of abstraction. Uncle Bob always says, “abstraction is elimination of the irrelevant and amplification of the essential.” When it comes down to understanding tests, it is usually a matter of how good a job the developer did at abstracting intent. Two hundred lines of detailed setup does not exhibit abstraction!

For a given unit test, I always want to know why a given assertion should hold true, based on the setup context. The lengthy object construction and initialization should be encapsulated in another method, perhaps createDefaultMarket(). Relevant pieces of data can be layered atop the Market object: applyGroupDiscountRate(0.10), applyRestrictionCode(), etc. Not only does it help explain the data differences and correlate the setup with the result, it makes it easier to read the test, and easier to write new tests (reuse!).
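
Here’s a rough sketch of that shape–written in Java/JUnit for brevity, though the idea maps directly onto the CppUnit test. The Market stand-in and its behavior are invented for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MarketPricingTest {
   // The two hundred lines of "reasonable default" initialization get one home.
   private Market createDefaultMarket() {
      Market market = new Market();
      // ...default-value setup elided...
      return market;
   }

   @Test
   public void groupDiscountReducesPrice() {
      Market market = createDefaultMarket();
      market.applyGroupDiscountRate(0.10);     // only the relevant difference appears here

      assertEquals(90.0, market.priceFor("ticket"), 0.001);
   }

   @Test
   public void restrictionCodeSuppressesDiscount() {
      Market market = createDefaultMarket();
      market.applyGroupDiscountRate(0.10);
      market.applyRestrictionCode();

      assertEquals(100.0, market.priceFor("ticket"), 0.001);
   }

   // Hypothetical stand-in for the real class under test.
   static class Market {
      private double basePrice = 100.0;
      private double discountRate = 0.0;
      private boolean restricted = false;

      void applyGroupDiscountRate(double rate) { discountRate = rate; }
      void applyRestrictionCode() { restricted = true; }
      double priceFor(String item) {
         return restricted ? basePrice : basePrice * (1.0 - discountRate);
      }
   }
}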

I often get blank stares when I ask developers to make their tests more readable. Would they respond better to requests to improve their use of abstraction?

Patterns and Alexander

The architect Christopher Alexander wrote a handful of books in the late 1970s on architecture. These books, among them The Timeless Way of Building and A Pattern Language, have a very spiritual and poetic quality to them. Alexander strived for a classic sense of building, something that he described as the “quality without a name.” Alexander could be construed as an architectural philosopher.

More relevant today is the influence these books have had on modern software development. One of the notions Alexander had was that he could express an architectural ideal in terms of a somewhat structured form he called a pattern. A pattern is simply a recommended approach to a solution, given a certain context. For example, one of Alexander’s patterns is known as “six foot balcony.” The pattern expresses the conflict: no one uses balconies or porches that are less than six feet in depth. Alexander then provides the resolution: make the balcony more than six feet deep, and recess it into the building if you have to. Pretty simple stuff.

Kent Beck and Ward Cunningham in the late 1980s expressed a series of Smalltalk GUI idioms in a similar pattern fashion. The 1995 book Design Patterns formalized this concept further. It contained 23 software design patterns that the four authors (Gamma, Helm, Vlissides, and Johnson, aka the Gang of Four) had recognized from commonly occurring software design solutions. Now the software community had common names for these solutions as well as a de facto standard pattern form.

Since then, patterns have been all the rage. Publishers have released gobs of books on patterns. Conferences and web sites are devoted to patterns. It’s rare to go on an interview anymore and not be asked about patterns.

The best thing patterns do is provide everyone with a common means to communicate problems and solutions. The fact that interviewing almost universally includes pattern discussions is a testament to the success of the patterns movement.

Like all movements, the patterns movement went a bit overboard. Novice developers demonstrated that it’s easy to learn one hammer-like pattern and then wedge it into every possible space in software. Worse, people seemed to get the idea that by having patterns in your system, you automatically had a good design. Sure, most patterns espouse good design, but stuffing a few patterns into code doesn’t impart good design to the rest of the system. You have to understand what good design is all about to end up with it.

Ultimately, it is a good thing to learn about software design patterns. Agile development teaches you to let the pattern emerge, not force it where it might not fit. While there is still a bit of art to software development, it is more a craft. Patterns provide a craft-oriented, structured tool that can provide a growing foundation for the art.

Odder still, some software developers began to worship Christopher Alexander. Patterns became gospel, and Alexander became God-like. Discussions about Alexander are couched in glowing terms, and I suspect some software developers read his material more than they read software books.

Recently, I talked with an architect for the first time since I’d heard of Alexander. I mentioned the name. “Never heard of him!” I was dismayed. I surmised one of two things: either Alexander has more sway in the software industry than in the architecture industry, or architects today are like any other profession, where most of the people in it know nothing beyond their cubicle. I posed this thought on an XP coaches list. The respondents, some of whom were well-known names in the software design industry, had all had similar experiences: either the architect had never heard of Alexander, or scoffed at the mention of the name.

Digging a little, I confirmed that Alexander has indeed fallen out of favor (which he may never have had) in the architectural world. His ideas appeared either threatening or ridiculous to other architects. Threatening, because one of the things Alexander suggested was that people, not architects, are capable of designing their own structures. This idea carries into the software industry with XP, which suggests that programmers are capable of architecting and designing systems on their own. XP believes that a separate non-programming software architect is not necessary. Naturally this threatens non-programming software architects.

In any case, I find it quite amusing that many software developers worship a guy who is scorned by many of his peers. But I still believe that software patterns have added a lot to the knowledge base of software development. You owe it to yourself and your career to understand what software design patterns are all about, if you don’t already. Just don’t turn it into a religion.

Small Methods: Nine Benefits of Making Your Methods Shorter

I’ve espoused, on many occasions, making your methods short. The pattern Composed Method, defined by Kent Beck in Smalltalk Best Practice Patterns, says that methods should do things only at one level of abstraction. Methods should be short; a good size for Smalltalk methods is one to a half-dozen lines. That translates to about one to twelve lines (not counting braces!) for Java methods.
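
Here’s a small sketch of Composed Method, with hypothetical names: the top-level method reads like a table of contents, and each helper stays at a single level of abstraction.

import java.util.List;

public class PaycheckRun {
   public void issuePaychecks(List<Employee> employees) {
      for (Employee employee : employees) {
         double gross = grossPayFor(employee);
         double net = gross - deductionsFor(employee, gross);
         deliver(employee, net);
      }
   }

   private double grossPayFor(Employee employee) {
      return employee.hoursWorked() * employee.hourlyRate();
   }

   private double deductionsFor(Employee employee, double gross) {
      return gross * employee.taxRate();
   }

   private void deliver(Employee employee, double netPay) {
      System.out.printf("Pay %s: %.2f%n", employee.name(), netPay);
   }

   // Minimal hypothetical collaborator.
   interface Employee {
      String name();
      double hoursWorked();
      double hourlyRate();
      double taxRate();
   }
}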

I recently engaged in a discussion at JavaRanch where someone thought “20 to 25 lines” was too short. After being taken aback, my response was that there are many reasons that your average method size should be considerably smaller. Here are nine benefits to short methods:

  1. Maintenance costs. The longer a method, the more it will take you to figure out what it does, and where your modifications need to go. With shorter methods, you can quickly pinpoint where a change needs to go (particularly if you are coding using test-driven development).
  2. Code readability. After the initial learning curve, smaller methods make it far easier to understand what a class does. They can also make it easier to follow code by eliminating the need for scrolling.
  3. Reuse potential. If you break down methods into smaller components, you can start to recognize common abstractions in your code. You can minimize the overall amount of code dramatically by reusing these common methods.
  4. Subclassing potential. The longer a method is, the more difficult it will be to create effective subclasses that use the method.
  5. Naming. It’s easier to come up with appropriate names for smaller methods that do one thing.
  6. Performance profiling. If you have performance issues, a system with composed methods makes it easier to spot the performance bottlenecks.
  7. Flexibility. Smaller methods make it easier to refactor (and to recognize design flaws, such as feature envy).
  8. Coding quality. It’s easier to spot dumb mistakes if you break larger methods into smaller ones.
  9. Comment minimization. While comments can be valuable, most are unnecessary and can be eliminated by prudent renaming and restructuring. Comments that restate what the code says are unnecessary.

You should get the idea that most of the benefits are about improving the design of your system. Breaking methods up into smaller ones obviously leads to lots of smaller methods. You will find that not all of those methods should remain in the same class in which the original method was coded. The small methods will almost always point out to you that you are violating the basic rule of class design: the Single Responsibility Principle (SRP).

The SRP states that a class should have only one reason to change. Put another way, a class should do one thing and one thing only. A class to present a user interface (UI) should do just that. It shouldn’t act as a controller; it shouldn’t retrieve data; it shouldn’t open files; it shouldn’t contain calculations or business logic. A UI class should interact with other small classes that do each of those things separately.

Break a class into small methods, and you will usually find all sorts of violations of the SRP.
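
A hedged sketch of the delegation described above, with all names hypothetical: the UI class presents, while retrieval and calculation live in their own small collaborators.

public class InvoiceScreen {
   private final InvoiceRepository repository;
   private final InvoiceCalculator calculator;

   public InvoiceScreen(InvoiceRepository repository, InvoiceCalculator calculator) {
      this.repository = repository;
      this.calculator = calculator;
   }

   public void show(String invoiceId) {
      Invoice invoice = repository.find(invoiceId);      // retrieval: not the UI's job
      double total = calculator.totalWithTax(invoice);   // business logic: not the UI's job
      System.out.printf("Invoice %s: %.2f%n", invoiceId, total);
   }

   // Minimal hypothetical collaborators.
   interface Invoice {}
   interface InvoiceRepository { Invoice find(String id); }
   interface InvoiceCalculator { double totalWithTax(Invoice invoice); }
}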

Initially, you may find it more difficult to work with code with lots of methods. There is certainly a learning curve associated with doing so. You will find that the smart navigational features of your IDE (for example, Eclipse) can go a long way toward helping you understand how all the little methods fit together. In short time, you will find that well-composed code imparts much greater clarity to your system.

One oft-repeated objection to short methods is that they can degrade the performance of your system. Indeed, method calls are usually fairly expensive operations. However, you rarely create performance problems with more methods. Poor performance can usually be attributed to other factors, such as IO operations, network latency, poor choice of algorithm, and other inefficient uses of resources.

Even if additional method calls do noticeably impact performance (performance is not an issue until someone recognizes it as one), it’s very easy to inline methods. The rule of performance is always: make it run, make it right (e.g. small methods), make it fast (optimize).

While it’s difficult to do, you can go too far and create too many methods. Make sure each method has a valid reason for existing; otherwise inline it. A method is useless if the name of the method and the body of the method say exactly the same thing.

There are exceptions to every rule. There will always be the need for the rare method larger than 20 lines. One way to look at things, however, is to look at smaller methods as a goal to strive for, not a hard number. Instead of debating whether 25 lines, 12 lines or 200 lines is acceptable, see what you can do to reduce method length in code that you would’ve otherwise not touched. You’ll be surprised at how much code you can eliminate.

Few people promote pushing in the other direction; increasing your average method size represents sheer carelessness or laziness. Take the time and care to craft your code into a better design. Small methods!
