Why We Create Unnecessary Complexity

Q. What’s “unnecessary complexity?”
A. Any code that doesn’t need to be there, or that is more difficult to understand than necessary.

Your system might exhibit unnecessary complexity if it contains duplicate code, abstractions that aren’t needed, or convoluted algorithms. Unnecessary complexity permanently increases the cost to understand and maintain code. We create it for various reasons, most of which we can begin to eliminate.

Time pressure. “We just have to ship and move on–we don’t have time to make it look nice!” You’ll regret this sadly typical choice later (not even that much later) when everything takes longer. Learn to push back when you know short-term time-saving measures will cost everyone dearly later.

Lack of education. To create quality designs that you can maintain at low cost, you have to know what that looks like. Most novices have little clue about how they are degrading their system. The concepts of cohesion and coupling (or SRP and DIP) are essential, but most everything you can do to learn about good design will pay off. Consider starting by learning and living the concepts of Simple Design.

Existing complexity. The uglier your system is, the more likely that the average programmer will force-fit a solution into it and move on to the next thing. Long methods beget longer methods, and over-coupled systems beget more coupling. Incremental application of simple techniques (such as those in WELC) to get the codebase under control will ultimately pay off.

Premature conjecture. “We might need a many-to-many relationship between customers and users down the road… let’s build it now.” Sometimes you never need to travel down that road. Even if you do, you pay for the complexity in the interim. Deferring the introduction of complexity usually increases the cost only marginally at the point when it’s actually required. And when you guess wrong about which abstractions you’ll need, the speculative complexity can significantly increase the cost of changing to the right ones down the road.

Fear of changing code. “If it ain’t broke, don’t fix it.” Maybe we have the education we need, but we often resist using it. Why? Imagine you must add a new feature. You’ve found a similar feature implemented in a 200-line method. The new feature must be a little bit different–five lines worth of different code. The right thing would of course be to reuse the 195 lines of code that aren’t different.

But most developers blanch at the thought–that would mean making changes to the code that supports an existing feature, code that many feel they shouldn’t touch. “If I break it, I’ll be asked why I was mucking with something that was already working.”

Good tests, of course, provide a means to minimize fear. In the absence of good tests, however, we tend to do the wrong things in the code. We duplicate the 200 lines and change the 5 we need, and make life a bit worse for everybody over time.
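To make the reuse concrete, here’s a toy sketch (all names invented, and the “195 common lines” shrunk to a couple): extract the variant lines behind a small seam, so both features share the common logic instead of duplicating it.

```java
import java.util.function.UnaryOperator;

// Hypothetical: the 200-line method reduced to its shape. The long common
// logic lives in one place; only the five variant lines are passed in.
public class ReportGenerator {
   // Common pipeline shared by both features (stands in for the 195 lines).
   private String generate(String input, UnaryOperator<String> variation) {
      String normalized = input.trim().toLowerCase(); // ...common setup...
      String transformed = variation.apply(normalized); // the 5 variant lines
      return "[" + transformed + "]";                   // ...common teardown...
   }

   public String existingFeature(String input) {
      return generate(input, s -> s.replace(' ', '-'));
   }

   public String newFeature(String input) {
      return generate(input, s -> s.replace(' ', '_'));
   }
}
```

With tests around `generate`, adding the next five-line variation becomes a one-lambda change rather than another 200-line copy.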

Code Socialization

The first rule of Fouled-up-code* Club is you don’t talk about the code.
The second rule of …

We don’t talk about the code, and so most of us are members of F’d-up-code Club. We’re not socializing the code enough. At shop after shop I visit, I ask how often the programmers discuss what’s in the code with each other. The usual answer? “Once in a while.”

If you’re not regularly talking about the code as a team, it’s getting worse. And “it” includes the time to understand the code, the time to fix the code, the pain the code causes you, the defect count, and ultimately the extent to which you have a real team.

We create standards but they idle and adherence falls off, and the code shows it, and attempts to get back on track fall short. We pair-program, but don’t switch pairs mid-story, and the solutions show it. (Yes, two heads produce a better-than-one solution, but a pair deep into understanding of their feature can easily produce a solution that makes little sense to others.) We hold tepid brown-bags that bore attendees and ultimately taper off in quantity. We hold retrospectives, sometimes, but the process crud dominates, and the audience isn’t usually right for talking about code problems. We sit in the same area (well, a small number of us do), but rarely call others over to our monitor to look at some cool or challenging code.

Try this (the opposite of the prior paragraph, duh):

  • Revisit your standards at least once a quarter, or simply whenever someone wants to raise a specific standards issue.
  • Pair program and take advantage of the two-heads effect. If you can’t pair, incorporate a streamlined inspection process. If you can pair, switch pairs at least once mid-feature. See if the third party can make sense of your tests and solution with minimal assistance from you.
  • Run end-of-day code sharing sessions on the LCD projector. Limit them to 15 minutes, and shoot for 5 minutes. “Hey, can we stop doing this in the code?” Or, “Here’s a new reusable construct that we should all be using.” Or, “Here’s an effective tool tip I learned.”
  • Find 60 minutes for a programmer-only retrospective each iteration. Make sure what comes out of them is a set of commitments to improve that actually get met (otherwise, don’t bother).
  • Don’t wait for the end of the day or a pair switch to discuss something important in the code. People should feel comfortable about standing up and finding an audience for a 5-minute ad hoc code discussion.

* More blunt folks can substitute other words for “fouled.”

How’s Your Daily Standup Working for You?

a daily scrum
needing the walls to stand!

I posted a response to a blog post entitled “Why I hate SCRUM daily standup meetings,” but it’s still awaiting moderation after a couple days. I’m impatient, so here’s my comment:

====== From: @jlangr ======

“On top of this, they need to setup a meeting to learn what my collegues are working on feels so wrong to me.”

Agreed. If you are a good team that already finds ways to get together and talk about what’s important, a formal meeting is a waste of time. Sitting in a common area where this can happen throughout the day can make it even less useful.

Having said that, it’s great starter discipline, and can be useful in environments where it’s not easy to get people together (I’ve been in places where I wasted way too much time trying to track people down or when my attempts to discuss things were rebuffed by people who were “too busy”). I’d start a new team on daily standups, but would push the team to find ways to eliminate the need for them once they got better at working together.

Also, most shops that run daily scrums and don’t get much out of them aren’t collaborating enough. It becomes one person reporting status, while the others worry about what they’re going to say when it’s their turn (because “that stuff” has little to do with what they’re doing). If that’s the case, you may as well revert to people sending an email with their status to the project manager, who gathers and emails a summary of what’s important to the team.

But…that’s not what works best in agile (or lean). See Stories and the Tedium of Daily Standups: What works best is real collaboration, which in turn makes the stand-ups far more useful and engaging. There’s also an Agile in a Flash card for that!

I’d love to hear more positive stories about stand-ups, given that most of the time I hear from people who’ve learned to detest them. Good or bad, how’s your stand-up meeting working for you?

A Smackdown Tool for Overeager TDDers

Image source: https://commons.wikimedia.org/wiki/File:Cross_rhodes_on_gabriel.jpg

I’ve always prefaced my first test-driven development (TDD) exercises by saying something like, “Make sure you write no more code than necessary to pass your test. Don’t put in data structures you don’t need, for example.” This pleading typically comes on the tail of a short demo where I’ve mentioned the word incremental numerous times.

But most people don’t listen well, and do instead what they’ve been habituated to do.

With students in shu mode, it’s ok for instructors to be emphatic and dogmatic, smacking students upside the head when they break the rules for an exercise. It’s impossible to properly learn TDD if you don’t follow the sufficiency rule, whether deliberately or not. Trouble is, it’s tough for me to smack the heads of a half-dozen pairs all at once, and some people tend to call in HR when you hit them.

The whole issue of incrementalism is such an important concept that I’ve introduced a new starting exercise to provide me with one more opportunity to push the idea. The natural tendency of students to jump to an end solution is one of the harder habits to break (and a frequent cause of students’ negative first reaction when they actually try TDD).

I present a meaty first example (latest: the Soundex algorithm) where all the tests are marked as ignored or disabled, a great idea I learned from James Grenning. In Java, the students are to un-@Ignore tests one-by-one, simply getting them to pass, until they’ve gotten all tests to pass. The few required instructions are in the test file, meaning they can be hitting this exercise about two minutes after class begins.

Problem is, students have a hard time not breaking rules, and always tend to implement too much. As I walk around, I catch them, but it’s often a little too late. Telling them that they need to scrap their code and back up isn’t what they want to hear.

So, I built a custom test-runner that will instead fail their tests if they code too much, acting as a virtual head-smacking Jeff. (I built a similar tool for C++ that I’ve used successfully in a couple C++ classes.)

Here’s the (hastily built) code:

import org.junit.*;
import org.junit.internal.*;
import org.junit.internal.runners.model.*;
import org.junit.runner.*;
import org.junit.runner.notification.*;
import org.junit.runners.*;
import org.junit.runners.model.*;

public class IncrementalRunner extends BlockJUnit4ClassRunner {

   public IncrementalRunner(Class klass) 
         throws InitializationError {
      super(klass);
   }

   @Override
   protected void runChild(
         FrameworkMethod method, RunNotifier notifier) {
      EachTestNotifier eachNotifier = 
         derivedMakeNotifier(method, notifier);
      if (method.getAnnotation(Ignore.class) != null) {
         runIgnoredTest(method, eachNotifier);
         return;
      }

      eachNotifier.fireTestStarted();
      try {
         methodBlock(method).evaluate();
      } catch (AssumptionViolatedException e) {
         eachNotifier.addFailedAssumption(e);
      } catch (Throwable e) {
         eachNotifier.addFailure(e);
      } finally {
         eachNotifier.fireTestFinished();
      }
   }

   // Ignored tests run, but are expected to fail.
   private void runIgnoredTest(
         FrameworkMethod method, EachTestNotifier eachNotifier) {
      eachNotifier.fireTestStarted();
      runExpectingFailure(method, eachNotifier);
      eachNotifier.fireTestFinished();
   }

   private EachTestNotifier derivedMakeNotifier(
         FrameworkMethod method, RunNotifier notifier) {
      Description description = describeChild(method);
      return new EachTestNotifier(notifier, description);
   }

   private void runExpectingFailure(
         final FrameworkMethod method, EachTestNotifier notifier) {
      if (runsSuccessfully(method)) 
         notifier.addFailure(
            new RuntimeException("You've built too much, causing " + 
                                 "this ignored test to pass."));
   }

   private boolean runsSuccessfully(final FrameworkMethod method) {
      try {
         methodBlock(method).evaluate();
         return true;
      } catch (Throwable e) {
         return false;
      }
   }
}
(Note: this code is written for JUnit 4.5 due to client version constraints.)

All the custom runner does is run tests that were previously @Ignored, and expect them to fail. (I think I was forced into completely overriding runChild to add my behavior in runIgnoredTest, but I could be wrong. Please let me know if you’re aware of a simpler way.) To use the runner, you simply annotate your test class with @RunWith(IncrementalRunner.class).
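Usage might look something like this (a sketch with invented test names; it presumes the runner above plus whatever Soundex skeleton you hand out to the class):

```java
import org.junit.*;
import org.junit.runner.RunWith;

@RunWith(IncrementalRunner.class)
public class SoundexTest {
   @Test
   public void retainsSoleLetterOfOneLetterWord() {
      Assert.assertEquals("A000", new Soundex().encode("A"));
   }

   // Students un-@Ignore these one at a time. If an implementation is built
   // out far enough to make a still-@Ignored test pass, the runner fails it.
   @Ignore @Test
   public void padsWithZerosToEnsureThreeDigits() {
      Assert.assertEquals("I000", new Soundex().encode("I"));
   }
}
```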

To effectively use the tool, you must provide students with a complete set of tests that supply a definitive means of incrementally building a solution. For any given test, there must be a possible implementation that doesn’t cause any later test to pass. It took me a couple tries to create a good sequence for the Soundex solution.

The tool is neither fool-proof nor clever-proof; a small bit of monkeying about and a willingness to deliberately cheat will get around it quite easily. (There are probably a half-dozen ways to defeat the mechanism: For example, students could un-ignore tests prematurely, or they could simply turn off the custom test-runner.) But as long as they are not devious, the test failure from building too much gets in their face and smacks them when I’m not around.

If you choose to try this technique, please drop me a line and let me know how it went!

TDD for C++ Programmers

Recently I’ve been getting a good number of calls from C++ shops interested in doing TDD, despite my heavy Java background. I took on some of the business and had to turn away some to avoid being swamped. Many other folks I know (name dropping time!)–Tim Ottinger, James Grenning, JB Rainsberger, others–have also reported doing C++ work recently.

Is TDD finally taking hold in C++ shops? Does TDD even make sense for C++? I think so, and two current customers believe they’ve been seeing great benefits come from applying it. Building and delivering a C++ TDD course recently helped me come back up to speed in the language to the point where I was comfortably familiar with all of the things I hated about it. 🙂 It makes no sense to take such a difficult language and stab at it without the protection of tests.

I’ve been simultaneously writing more (after a typical winter writing freeze) and looking at Erlang–a much cooler language, challenging in a different kind of way. Meanwhile, my editor at PragProg has been asking for new book ideas. Here were some of my thoughts:

  • Refactoring 2012
  • Modern OO Design (not template metaprogramming!) / Simple Design
  • Object-Oriented Design “In a Flash” (card deck, like Agile in a Flash)

No matter how hard I try to run screaming from C++, there it is right behind me. It’s indeed a powerful language, there are gobs and gobs of code written in it, and it’s about time we started trying to figure out how to make the best of it. It’s not going away in my lifetime. I also think C++ programmers are not well served by the TDD writings currently out there.

So… I decided it was going to be TDD in C++. Tim Ottinger and I put together and just sent out a proposal for a book tentatively named TDD for C++ Programmers (with a catchy subtitle, no doubt). We hope there’s enough demand and interest to get the proposal accepted. If all goes well, we’ll be soliciting reviewers in a few weeks.

I look forward to writing again with Tim! More in an upcoming blog post about our collaborative writing experience.

My First TDD Exercise

Finding a good first exercise for TDD learners can be challenging. Some of the things I look for:

  • Not so simple that it’s trivial, e.g. Stack, lest students dismiss it as pointless. There should be opportunities for students to make a few simple mistakes, but…
  • …not so complex that students mire in the algorithm itself instead of focusing on technique.
  • Something students can relate to–not so obscure that students struggle with understanding the domain or problem.
  • Games and fluff (e.g. bowling, fizz-buzz) can create a first impression–and first impressions can be very hard to shake–that TDD is for trivialities only, despite what you say. These things can be fine as later exercises.
  • Not something that already exists in the library. (“Why am I building a multi-map when C++ already has one?”)
  • Can be test-driven in 10 to 15 minutes by an experienced TDDer, meaning that it’s probably a 30-to-45-minute exercise for newbies pairing up.
  • Provides a good demonstration of the ability to incrementally grow a solution, including opportunities to spot and remove duplication.
  • Provides an opportunity to show how to deal with at least one exceptional case.

I’ve been demoing a stock portfolio tracker for some time–a simple collection class that allows purchases of stock symbols. With Java students, I follow up with a multi-map, a class that would be useful in most shops (though a similar Jakarta implementation exists). Both have worked well.

The message an exercise sends can be damaging. As a second or third exercise, lately I’ve been using the Roman numeral converter kata. I think it’s a cool exercise that can show how you can build an elegant algorithm by following a simple, incremental approach. It’s had 99% effectiveness: Out of the past ~100 students, one guy–an algorithm specialist–took a completely negative view of TDD after it. His stance was that he could have derived the algorithm much more quickly using a top-down, non-test-driven approach. From that he dismissed TDD completely. During subsequent pairings, I think he saw some of its benefits (we talk a lot about algorithms and TDD), but it’s an uphill battle and it might have been too late.
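For reference, here’s the sort of elegant end state the kata tends to converge on (one common shape; details vary by class). Each new test–1, 4, 5, 9, 10, and so on–forces one more value/symbol pair into the tables, and the duplication in the growing conditional chain collapses into a single loop.

```java
public class RomanConverter {
   private static final int[] VALUES =
      { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
   private static final String[] SYMBOLS =
      { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

   // Greedily append the largest symbol that still fits.
   public static String convert(int arabic) {
      StringBuilder roman = new StringBuilder();
      for (int i = 0; i < VALUES.length; i++)
         while (arabic >= VALUES[i]) {
            roman.append(SYMBOLS[i]);
            arabic -= VALUES[i];
         }
      return roman.toString();
   }
}
```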

Currently I’m experimenting with an exercise to implement the soundex algorithm. More on that in an upcoming blog post.

Dave Thomas’s PragProg CodeKata site is a great place to get ideas. You might also check out the TDD Problems site (to which I contributed a handful of problems).

Refactoring and Performance

In the first portion of the book Refactoring, Martin Fowler presents an example where he whittles a verbose method into an improved design. Here’s the original, more or less:

   public String statement() {
      double totalAmount = 0;
      int frequentRenterPoints = 0;
      Enumeration rentals = this.rentals.elements();
      String result = "Rental Record for " + getName() + "\n";

      while (rentals.hasMoreElements()) {
         double thisAmount = 0;
         Rental each = (Rental)rentals.nextElement();
         switch (each.getMovie().getPriceCode()) {
            case Movie.REGULAR:
               thisAmount += 2;
               if (each.getDaysRented() > 2)
                  thisAmount += (each.getDaysRented() - 2) * 1.5;
               break;
            case Movie.NEW_RELEASE:
               thisAmount += each.getDaysRented() * 3;
               break;
            case Movie.CHILDRENS:
               thisAmount += 1.5;
               if (each.getDaysRented() > 3)
                  thisAmount += (each.getDaysRented() - 3) * 1.5;
               break;
         }

         frequentRenterPoints++;
         if (each.getMovie().getPriceCode() == Movie.NEW_RELEASE && 
             each.getDaysRented() > 1)
            frequentRenterPoints++;

         result += "\t" + each.getMovie().getTitle() + 
                   "\t" + String.valueOf(thisAmount) + "\n";
         totalAmount += thisAmount;
      }

      result += "Amount owed is " + String.valueOf(totalAmount) + "\n";
      result += "You earned " + String.valueOf(frequentRenterPoints) +
                " frequent renter points";
      return result;
   }
And here’s the (relevant portion of the) factored code. I think this is as far as Fowler takes the example:

   public void addRental(Rental rental) {
      rentals.add(rental);
   }

   public String statement() {
      StringBuilder result = new StringBuilder();
      for (Rental rental: rentals)
         appendDetail(result, rental);
      return result.toString();
   }

   private int calculateFrequentRenterPoints() {
      int totalFrequentRenterPoints = 0;
      for (Rental rental: rentals)
         totalFrequentRenterPoints += rental.getFrequentRenterPoints();
      return totalFrequentRenterPoints;
   }

   private double calculateTotalCost() {
      double totalCost = 0;
      for (Rental rental: rentals)
         totalCost += rental.getCost();
      return totalCost;
   }
The while loop in the original statement method intertwines three things: construction of a text statement, addition into a total cost, and addition into a total for frequent renter points. The refactored version separates and isolates each of these three behaviors.

At the point of presenting this code, Fowler discusses the performance implications: Instead of a single set of iterations, the code now requires three iterations over the same collection. Many developers object strongly to this refactoring.

I’ve done quick (and rough, but relatively accurate) performance measurements for this example. Things aren’t three times worse, as a naive guess might suggest; instead we’re talking about a 7% degradation in performance. That’s still a significant amount in many systems, maybe even too much to accept. In many other systems, the degradation is negligible, given the context.

Some conclusions: First, always measure, never guess. Second, understand what the real performance requirements are. Create end-to-end performance tests early on. Run these regularly (nightly?), so that you know the day someone introduces underperforming code.
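In that spirit, here’s a crude sketch of the kind of measurement I mean (my invention, not the book’s code; a real system deserves end-to-end performance tests run under realistic load):

```java
import java.util.ArrayList;
import java.util.List;

// Compare one fused loop against three separate loops over the same
// collection. Measure; don't guess.
public class LoopMeasurement {
   static double[] fused(List<Double> costs) {
      double total = 0, doubled = 0, tripled = 0;
      for (double cost: costs) {
         total += cost; doubled += cost * 2; tripled += cost * 3;
      }
      return new double[] { total, doubled, tripled };
   }

   static double[] separate(List<Double> costs) {
      double total = 0, doubled = 0, tripled = 0;
      for (double cost: costs) total += cost;
      for (double cost: costs) doubled += cost * 2;
      for (double cost: costs) tripled += cost * 3;
      return new double[] { total, doubled, tripled };
   }

   public static void main(String[] args) {
      List<Double> costs = new ArrayList<>();
      for (int i = 0; i < 1_000_000; i++) costs.add((double) (i % 7));

      long start = System.nanoTime();
      fused(costs);
      long fusedNanos = System.nanoTime() - start;

      start = System.nanoTime();
      separate(costs);
      long separateNanos = System.nanoTime() - start;

      System.out.printf("fused: %d us, separate: %d us%n",
         fusedNanos / 1000, separateNanos / 1000);
   }
}
```

(A one-shot timing like this is noisy–JIT warmup alone can dominate–so treat the numbers as indicative and repeat the runs.)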

The refactored, better design provides the stepping stone to many other good things. The small, cohesive methods can each be read and understood in a few seconds. You can glance at each and almost instantly know it must be correct. They’re also easily tested if you aren’t so trusting of glances.

Looking further at the code, you might also spot an abstraction waiting to come into existence–the collection of rentals itself. Producing a statement is a responsibility separate from managing this collection. You could create another class named RentalSet or RentalCollection. Moving the calculation operations into the new class is a trivial operation, given that the calculation logic is now completely divorced from statement generation.
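A minimal sketch of that extraction, with invented names (Rental is stubbed here only to make the example self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical extraction: rental management and totals move out of Customer.
public class RentalCollection {
   private final List<Rental> rentals = new ArrayList<>();

   public void add(Rental rental) { rentals.add(rental); }

   public double totalCost() {
      double total = 0;
      for (Rental rental: rentals) total += rental.getCost();
      return total;
   }

   public int totalFrequentRenterPoints() {
      int total = 0;
      for (Rental rental: rentals) total += rental.getFrequentRenterPoints();
      return total;
   }
}

// Minimal stand-in for the book's Rental class.
class Rental {
   private final double cost;
   private final int points;
   Rental(double cost, int points) { this.cost = cost; this.points = points; }
   double getCost() { return cost; }
   int getFrequentRenterPoints() { return points; }
}
```

Customer then delegates its two calculations to this class and concerns itself only with statement generation.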

Other benefits accrue. Fowler talks about the ability to provide support for an HTML statement, in addition to existing support for a plaintext statement. Migration to a polymorphic hierarchy is simple and safe with the refactored design.
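One possible polymorphic shape, sketched with invented names (Fowler’s own example proceeds through its own sequence of steps; this just illustrates how the formatting variation slots behind an interface once statement generation is isolated):

```java
// Each output format becomes its own small class behind a common interface.
public class StatementFormats {
   public interface StatementFormat {
      String header(String customerName);
      String detail(String title, double amount);
   }

   public static class TextStatementFormat implements StatementFormat {
      public String header(String customerName) {
         return "Rental Record for " + customerName + "\n";
      }
      public String detail(String title, double amount) {
         return "\t" + title + "\t" + amount + "\n";
      }
   }

   public static class HtmlStatementFormat implements StatementFormat {
      public String header(String customerName) {
         return "<h1>Rental Record for <em>" + customerName + "</em></h1>";
      }
      public String detail(String title, double amount) {
         return "<p>" + title + ": " + amount + "</p>";
      }
   }
}
```

The statement method iterates rentals exactly as before, asking whichever format it was handed to render each piece.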

Following the mantra of “make it run, make it [the design] right, make it fast,” you can return to the performance issue now that you have an improved design. Suppose you must restore performance and eliminate the 7% degradation. You could un-factor the loop extract and again intertwine three elements in the body of one for loop.

But since you have a simpler design, the optimization doesn’t need to impact the design so negatively:

   public void addRental(Rental rental) {
      rentals.add(rental);

      // optimization--cache totals:
      totalFrequentRenterPoints += rental.getFrequentRenterPoints();
      totalCost += rental.getCost();
   }

   private int calculateFrequentRenterPoints() {
      return totalFrequentRenterPoints;
   }

   private double calculateTotalCost() {
      return totalCost;
   }
(Note the rare, debatably worthwhile comment, which I could eliminate by another method extract.)

Small methods are about many things, ranging from comprehension to reuse. In this case, having an optimized design allowed for an even simpler solution to the optimization of performance.

Agile 2011: Live Speaker Feedback

I delivered a talk on “The Only Agile Tools You’ll Ever Need” at Agile 2011, and had a great time doing so. Most of the written feedback from the session indicated that people were very happy with the talk. Negative feedback, received from a handful or so of the ~60 attendees: It didn’t necessarily meet expectations based on what the session summary said. I apologize for that–I’d prepared the summary months before putting together the talk, and what I’d felt important to say changed during that time.

I also muffed a bit–I made a forward reference to the notion that task tracking is a smell, but didn’t quite fully close that thought out as I talked about limiting work in process. If it wasn’t clear, the key point was: if you minimize work in process, the need for tracking tasks in a software tool diminishes significantly.

I handed out index cards at the outset of the session, with the intent of gathering names so I could give out a couple copies of Agile in a Flash. But that’s boring! What else could I have people do with the cards (to bolster my contention that they’re wonderful tools)?

Aha. I gave the following instructions:

“I’m looking to get live feedback during my session. On one side of the index card, draw a big fat smiley face. If you’re happy with what I’m saying, or you think I’m making an astute point, hold up the smiley face. On the other side, let’s see. If you think the believability of what I’m saying is suspect, put something down to represent that. Hmm. Believability Suspect, just abbreviate that, and write down ‘BS‘ in big fat letters. Finally, put your name below the smiley face for the drawing.”

I got a number of smiley faces throughout, but no BS cards. One guy looked like he was about to hold up a BS sign, but changed his mind–probably the one guy who hated the talk. So, my takeaway is that the silly mechanism worked in terms of getting positive feedback–but I suspect that it’s probably a bit too intimidating for most people to challenge a speaker with negative feedback.

Buying Back My Soul

I recently traded away my beliefs for security and comfort. I had taken a three-month contract-to-hire at a local software company.

What had I hoped to gain?

  • The company is three miles away from home, a commute of 6 minutes with only one traffic light on the way.
  • They are stable. I received an offer at the end of three months. Due to the industry (government “defense” software), it’s likely I could have stayed there for many years to come.

What was my alternative?

  • Travel. I ultimately chose to return to a life of sporadic travel, consulting and training for Langr Software Solutions. The work is great–I love what I do and love helping people. But it’s travel. For those who’ve never traveled extensively, it’s entertaining the first couple times out. After that, you quickly realize that living in hotels, suffering the hell of airline travel, eating in often-lame restaurants, and being away from home and family is simply not fun at all.

So what was wrong with the local gig?

  • Money. I’m not all about money, and we don’t live extravagantly, but this was the second time I’d taken a pay cut in two years. It’s always tough to cut back on your lifestyle.
  • Industry. I’m a realist, so I understand that “defense” work is necessary, but I’d just as soon not be part of it.
  • Process. I thought I’d be ok with skipping process for a while (they had close to zero process elements). I’d hoped to help introduce some useful techniques and ideas, but it quickly became clear that the culture wasn’t going to support it. They had been reasonably successful without any real process. Not to say that they couldn’t have taken their game to the next level, but one individual can’t change a multi-hundred-person company with zero support.
  • Culture. The folks were nice enough, but I didn’t feel like I was a part of anything. Developers tended to stay in their cubes and keep to their existing cliques. I worked on a two-man development effort (fun stuff; I learned a good amount of Flex/ActionScript). I saw my team member and my boss (both good guys) a few times a week each. The remaining 97%+ of the week, I had close to no additional human interaction.

I sold my soul. Granted, I got to build software, something I love to do, but it was a lonely existence. Had I felt like I was part of a team, as opposed to a bunch of individuals who just happen to reside in nearby cubes, I might have stayed.

I’ve bought back my soul. What’s interesting is that I’m still not part of a team–as a consultant, you are an outsider when you’re traveling, and on your own when at home. But the tradeoff is worth it: I get to help others experience the joy of being part of a true, effective team.

Pitting Teams Against One Another

I was asked this via email:

I just read your article

And, I came across this principle:
Don’t use metrics to compare teams. Metrics are best viewed as probes and indicators of problem areas that may warrant further investigation.

Interestingly, this is one of the drivers that my employer wants to collect metrics.

My responses:

Simply put, no two teams are alike.

From another perspective, comparing two points completely misses the goal of doing agile software development. A good agile team is continually looking for ways to improve their productivity–i.e. within the team itself. Most teams have massive potential room for improvement, never mind worrying about what the other teams are up to. They don’t need distraction and concern about keeping their jobs–what they need instead is good leadership and support.

Competition in the marketplace can be a great way to end up with better product, but creating competition between two teams producing different products is artificial and pointless.

There’s also no “fair” way to compare two teams working on different products. Many factors can give any one team a significant advantage or disadvantage over another: difficult vs. rapid technology (e.g. C++ vs. Ruby), team size, geographical considerations, quality of existing code, weak team members, duration working together as a team, bad managers, fire drills, crankier customers, and so on. How do you provide a good measurement of productivity based on all these factors?

You might come up with something that seems reasonably fair, but you’re better off just focusing on getting each team to use their own past productivity as a baseline. The only realistic measurements are relative (and even those are skewed by things such as departing team members).

It can also be counter-productive to focus on racing other teams: What if one team finishes 50% more stories but delivers 20% more defects, resulting in a customer threatening to drop your product? What if you worry about technical metrics (such as code coverage), and one team gets high coverage but their tests stink? (I’ve seen this happen; test quality is subjective, easy for a seasoned TDD developer to judge but not for most anyone else.)

If management focuses on a small number of metrics, and mandates certain goals for them, you can guarantee that the smarter teams will do what they can to play the numbers game. More often than not, a singular metric will have little to do with legitimate goals (customer satisfaction, on-time delivery, high quality). Due to a management mandate, I saw a team rapidly increase their code coverage by cutting and pasting tests, producing such low-quality tests that the team eventually abandoned them. The end result was a complete waste of the investment in unit tests, because the team wasn’t concerned about what really mattered.

Short-term goals may make the numbers look good, but in the long run you will pay for the misguided focus.

Worse, it can be demoralizing. I worked in a large Fortune 500 company where a VP with seemingly good intentions pitted teams against one another by issuing letter grades (A, B, C, D, F). He insisted on having a meeting with each of 30+ teams in his organization and lambasting those who he perceived to be inadequate. Never mind that there were factors for some of the teams that had nothing to do with team productivity, but instead with customer or internal politics.

The only time I would even consider a team-vs-team competition is if both were high-performing, kicking-rear teams. That might be fun. But it’s hard enough to find one top-notch team, and almost impossible to find two in the same place.