Why We Create Unnecessary Complexity

Q. What’s “unnecessary complexity”?
A. Any code that doesn’t need to be there, or that is more difficult to understand than necessary.

Your system might exhibit unnecessary complexity if it contains duplicate code, abstractions that aren’t needed, or convoluted algorithms. Unnecessary complexity permanently increases the cost to understand and maintain code. We create it for various reasons, most of which we can begin to eliminate.

Time pressure. “We just have to ship and move on–we don’t have time to make it look nice!” You’ll regret this sadly typical choice later (not even that much later) when everything takes longer. Learn to push back when you know short-term time-saving measures will cost everyone dearly later.

Lack of education. To create quality designs that you can maintain at low cost, you have to know what that looks like. Most novices have little clue about how they are degrading their system. The concepts of cohesion and coupling (or SRP and DIP) are essential, but almost anything you do to learn about good design will pay off. Consider starting by learning and living the concepts of Simple Design.

Existing complexity. The uglier your system is, the more likely that the average programmer will force-fit a solution into it and move on to the next thing. Long methods beget longer methods, and over-coupled systems beget more coupling. Incremental application of simple techniques (such as those in WELC) to get the codebase under control will ultimately pay off.

Premature conjecture. “We might need a many-to-many relationship between customers and users down the road… let’s build it now.” Sometimes you never need to travel down that road. Even if you do, you pay for the complexity in the interim. Deferring the introduction of complexity usually increases the cost only marginally at the time when it’s actually required. And when you choose the wrong abstraction up front, it can significantly increase the cost of changing to the right one down the road.

Fear of changing code. “If it ain’t broke, don’t fix it.” Maybe we have the education we need, but we often resist using it. Why? Imagine you must add a new feature. You’ve found a similar feature implemented in a 200-line method. The new feature must be a little bit different–five lines’ worth of different code. The right thing would of course be to reuse the 195 lines of code that aren’t different.

But most developers blanch at the thought–that would mean making changes to the code that supports an existing feature, code that many feel they shouldn’t touch. “If I break it, I’ll be asked why I was mucking with something that was already working.”

Good tests, of course, provide a means to minimize fear. In the absence of good tests, however, we tend to do the wrong things in the code. We duplicate the 200 lines and change the 5 we need, and make life a bit worse for everybody over time.
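
Here’s a minimal sketch of the better path, with hypothetical names: extract the shared body of the long method and inject the handful of lines that vary, so both features reuse the same code instead of a copy.

```python
# Hedged sketch with hypothetical names: the shared ~195 lines live in one
# place; only the few lines that differ are passed in.

def generate_report(orders, format_line):
    """Shared body: the bulk of the logic both features have in common."""
    lines = []
    for order in orders:
        # ... imagine the long, shared preparation logic here ...
        lines.append(format_line(order))  # the few lines that vary, injected
    return "\n".join(lines)

def format_summary_line(order):
    # The existing feature's variation.
    return f"{order['id']}: {order['total']:.2f}"

def format_detail_line(order):
    # The new feature's variation -- five lines' worth of different code, no more.
    return f"{order['id']} | {order['customer']} | {order['total']:.2f}"

# Both features share one body instead of two 200-line copies:
# generate_report(orders, format_summary_line)
# generate_report(orders, format_detail_line)
```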

Legacy Quadrants for Increasing Confidence Coverage

Your legacy system presents you with a choice: Put a stake in the ground and attempt to do something about the daily pain it causes, or let it get incrementally and certainly worse. Often its challenges seem insurmountable–small improvements appear like ripples in the ocean. Perhaps an investment to improve the software product is not worth the return in value, given its future.

Chances are, though, that you’ll be stuck with your legacy system for some time to come, and it makes sense to put your foot down to stem the degradation. Working Effectively With Legacy Code provides you with primarily tactical guidance for preventing entropy, focusing mostly on individual code-level challenges. The Mikado Method provides you with primarily strategic guidance, focusing mostly on managing large-scale refactorings across a code base (legacy or not).

In terms of where to put your effort, the best strategy is always to begin with the areas of current impact–what needs to change for the feature you’re currently tackling? But if you can afford additional effort to add characterization tests–i.e. to increase “confidence coverage”–consider this quadrant chart, which pits rate of change against defect count for a given class.

Legacy Quadrants

How might you determine which classes belong in which quadrants? Tim Ottinger devised a heat map while at one shop, a simple Python tool that cross-referenced trouble tickets reported in JIRA with the change history in Git. (I got to help build it!) You might consider filtering the defects to ignore those with low severity.
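
I don’t have the original tool to show, but a rough sketch of the idea might look like this. It assumes a per-class defect-count export from the tracker and a local Git checkout to derive change frequency from; the names and cutoffs are placeholders, not the real tool’s.

```python
# Rough sketch of the heat-map idea -- not the actual tool. Assumes a CSV of
# per-class defect counts exported from the tracker, plus a local Git checkout
# for change counts. The cutoffs are arbitrary placeholders; tune them.
import csv
import subprocess
from collections import Counter

def change_counts(repo_path):
    """Count how many commits have touched each file."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True).stdout
    return Counter(line for line in log.splitlines() if line.strip())

def defect_counts(csv_path):
    """Read 'path,defects' rows exported from the issue tracker."""
    with open(csv_path) as f:
        return {row["path"]: int(row["defects"]) for row in csv.DictReader(f)}

def quadrant(changes, defects, change_cutoff=10, defect_cutoff=3):
    """Bucket a class by rate of change vs. defect count."""
    if changes >= change_cutoff:
        return "Garbage Disposal" if defects >= defect_cutoff else "Mystery"
    return "Can of Worms" if defects >= defect_cutoff else "Trusty Old Engine"

def heat_map(repo_path, csv_path):
    changes = change_counts(repo_path)
    return {path: quadrant(changes.get(path, 0), defects)
            for path, defects in defect_counts(csv_path).items()}
```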

The Trusty Old Engine. “If it ain’t broke, don’t fix it!” Classes that work and aren’t changing should be the last thing you worry about. Sometimes code in these areas is tempting: it may exhibit obvious duplication, for example, that a refactoring weenie might view as a quick-win opportunity. Don’t waste your time–there are bigger fish to fry.

The Can of Worms. You should have very few classes in this quadrant–having lots of stable classes that exhibit many defects suggests a needed reset in how your team does triage. Fixing these troubled classes might seem like an opportunity to make your customer happy. Open the code up, however, and you’re likely to find fear-inspiring code. You might view this quadrant as a sandbox for ramping up on legacy code techniques: classes here will provide you the time to experiment without having to worry as much about contention with the rest of the team.

The Mystery. Getting all the tiny bits correct in the face of rapid change is a significant challenge. A single, low-defect class with lots of change could suggest at least a couple of things: the changes are generally trivial in nature, or the class rampantly violates the single responsibility principle (for example, a large controller class that doesn’t simply delegate but also does work). With respect to the latter, perhaps you’ve been fortunate so far, or perhaps you have good test coverage.

It might be worth a small amount of effort to prevent any future problems in such an SRP-violating class. The nice thing is that splintering off new classes can be a fairly quick and risk-free effort, one that should begin to isolate areas of risk. You might then be able to code a few key characterization tests without extreme difficulty.
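
As a sketch of what one of those characterization tests might look like (assuming a hypothetical PriceCalculator splintered off such a controller): you simply pin down whatever the code does today, right or wrong.

```python
# Characterization-test sketch with hypothetical names: the expected value was
# captured by running the existing code once, not derived from a spec.
import unittest
from billing import PriceCalculator  # hypothetical class splintered off the controller

class PriceCalculatorCharacterization(unittest.TestCase):
    def test_bulk_order_total_matches_current_behavior(self):
        # Right or wrong, this is what the code produces today; now we'll know
        # immediately if a refactoring changes it.
        self.assertEqual(PriceCalculator().total(quantity=100, unit_price=2.50), 237.50)

if __name__ == "__main__":
    unittest.main()
```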

The Garbage Disposal. Usually there are at least a few classes in your system that are very large and fraught with problems, and they change all the time. This is the obvious place to improve your confidence coverage.

Good design, which begins with applying the SRP, starts to solve many of the problems that the garbage disposal represents. Small, single-purpose classes provide more opportunities for reuse, make performance profiling easier, expose defects more readily, and can dramatically improve comprehension of the code, to name a few benefits. A garbage disposal is a dangerous place–with lots of hands in one big sinkhole, contention is frequent and merges can be painful. Design that adheres to the SRP thus also makes continuous integration easier.
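
A tiny before/after sketch of that idea, with hypothetical names: the grab-bag controller becomes a thin coordinator over small, single-purpose classes that can each be tested, profiled, and changed in isolation.

```python
# Hypothetical sketch: responsibilities splintered out of a grab-bag controller.

class OrderParser:
    """Knows only how to turn raw input into an order."""
    def parse(self, raw):
        sku, quantity = raw.split(",")
        return {"sku": sku, "quantity": int(quantity)}

class OrderPricer:
    """Knows only how to price an order."""
    def __init__(self, price_list):
        self.price_list = price_list
    def price(self, order):
        return self.price_list[order["sku"]] * order["quantity"]

class OrderController:
    """Thin coordinator: delegates rather than doing the work itself."""
    def __init__(self, parser, pricer):
        self.parser, self.pricer = parser, pricer
    def handle(self, raw):
        return self.pricer.price(self.parser.parse(raw))

# Usage: OrderController(OrderParser(), OrderPricer({"WIDGET": 2.5})).handle("WIDGET,100")
```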

Scaring You Straight: Legacy Madness

[Image: Legacy Madness comic book cover]

I saw this classic comic book cover and couldn’t resist. There could be a whole True Coding Crime series.
