The unit tests you produce in TDD represent an investment. You’ll want to ensure they continue to return value and don’t turn into a liability.
As a first step, find the right frame of mind. Start by thinking about your unit tests as part of your production system, not as an adjunct effort or afterthought. We consider the diagnostic devices built into complex machines (e.g. automobiles) part of the product. Software is a complex machine. Your tests provide diagnostics on the health of your software product.
Here are some common concerns about TDD:
I’ll tackle each one of these in a separate post, offering some thoughts about how to reduce your anxiety level. (No doubt I’ll add other concerns along the way.) Then I’ll move on to how to craft tests that are more sustainable and anxiety-free.
It is absolutely possible for a unit test to be defective. Sometimes it’s not verifying what you think it is, sometimes you’re not executing the code you think you are, sometimes there’s not even an assert, and sometimes things aren’t in the expected state to start with.
Strictly adhering to the TDD cycle (red-green-refactor) almost always eliminates this as a real-world problem. You must always see the completed test fail at least once, and for the right reason (pay attention to the failure messages!), before making it pass. And writing just enough code to make it pass should help reinforce that it was *that code* that passed the test.
Other ways to ensure your tests start and stay honest:
Next: The tests are handcuffs.
Over the many years I’ve been practicing TDD, I’ve sought to incorporate new learning about what appears to work best. If you were to look at my tests and code from 1999, you’d note dramatic differences in their structure. You might glean that my underlying philosophy had changed.
Yet my TDD practice in 2012 is still TDD. I write a test first, I get it to pass, I clean up the code. The core goal of sustaining low-cost maintenance over time remains, and I still attain this goal by crafting high-quality code; TDD gives me that opportunity.
TDD is the same, but I believe that new techniques and concepts I’ve learned in the interim are helping me get better. My bag of TDD “tricks” acquired over a dozen years includes:
There is of course no hard data to back up the claims that any of these slightly-more-modern approaches to TDD are better. Don’t let that stop you–if you waited for studies and research to proceed, you’d never code much of anything. The beauty of TDD is that its rapid cycles afford many opportunities to see what works best for you and your team.
(But remember: You only really know how good your tests and code are when others must maintain them down the road.)
Should you push more in the direction of one assert per test? Is Given-When-Then superior to Arrange-Act-Assert? Those are entertaining things to debate, but you’re better off finding out for yourself what works best for your team. Also, be willing to accept that others might have an even better approach.
I recommend investigating all of the above ideas and more–keep your eyes open! Just don’t get hung up on any one or assume it’s the One True Way. The practice of TDD is a continual voyage of exploration, discovery, and adaptation.
I’m staring at a single CppUnit test function spanning hundreds of source lines. The test developer inserted visual indicators to help me pick out the eight test cases it covers:
Each of these cases is brief: four to eight lines of data setup, followed by an execution statement enclosed in a CPPUNIT_ASSERT. They could of course be broken up into eight separate test functions, but otherwise they are reasonable.
Prior to the eight tests there are two hundred lines of setup code. Most of the initialization sets data to reasonable default values so that the application code won’t crash and burn while being exercised.
I don’t know enough about the test to judge it in terms of its appropriateness as a “unit” test. It seems more integration test than anything. But perhaps all I would need to do is cleverly divorce the target function from all of those data setup dependencies, and break it up into eight separate test functions.
The aggregation of tests is typical, and no doubt comes from a compulsion not to waste all those 200 lines of work! The bigger problem I have is the function’s lack of abstraction. Uncle Bob always says, “abstraction is elimination of the irrelevant and amplification of the essential.” When it comes to understanding tests, it is usually a matter of how good a job the developer did at abstracting intent. Two hundred lines of detailed setup does not exhibit abstraction!
For a given unit test, I always want to know why a given assertion should hold true, based on the setup context. The lengthy object construction and initialization should be encapsulated in another method, perhaps createDefaultMarket(). Relevant pieces of data can then be layered atop the Market object: applyGroupDiscountRate(0.10), applyRestrictionCode(), etc. Not only does this help explain the data differences and correlate the setup with the result, it also makes the test easier to read and new tests easier to write (reuse!).
I often get blank stares when I ask developers to make their tests more readable. Would they respond better to requests to improve their use of abstraction?