The problem with TDD is that, in the real world, you often have to work with people who refuse to do it. In my experience with rules of thumb, policies, and best practices, everything comes down to my ability to sell the technique, technology, or belief system. If I can’t sell the solution, I don’t put much weight on it. Minimalistic testing is my solution to the TDD game. The TDD game is played when everyone says they do TDD … and therefore reaps the status benefits … but no one actually does it.
I believe in no-excuse policies. A no-excuse policy is when you take a rule and then do not allow yourself to make an excuse when you break it. This is distinct from a zero-tolerance policy, which enforces a rule, under threat of penalty, no matter what. I learned about no-excuse policies first from the Marine Corps, and then from a book called Focus by Jack Canfield. In the Marine Corps, people don’t apologize (at least, none of the Marines I knew did). In boot camp you learn that if you make a mistake and say “I’m sorry” (actually, you literally say “This recruit is sorry, sir!”), your superior’s standard reply will be something supremely negative. For whatever reason, Marines don’t like to be apologized to. They want you to just avoid failing. You have to live with your failure until you succeed. This is the same mentality that makes no-excuse policies work. Under a no-excuse policy, you aren’t allowed to make an excuse when you break the rule. You can only acknowledge that you failed.
For a no-excuse policy to work, you need a very simple rule. The rule has to be so simple and so easy to accomplish that it’s hard to imagine a good reason why you wouldn’t accomplish it. I have a no-excuse policy for a two-mile run in the morning. I’ve decided that I always have an extra 15 minutes in the morning, and if I don’t then I’m fooling myself. So effectively, if I don’t run at least two miles in the morning, I’ve failed.
The same thing can be said about testing. My no-excuse policy for testing is:
No one is allowed to check in anything that breaks the tests.
If I check in code that breaks existing tests, then I have failed. There are numerous benefits to just this one rule. The biggest is that it takes away a lot of the standard excuses for not even trying to test in the first place. I feel I can get the most buy-in from the largest number of programmers with this rule. I think this rule is worth having a heated, emotional debate over (when working with people who don’t want to test), without worrying about fallout (people quitting). Basically, if someone quits over this rule, then I believe the project as a whole is better off without them (or wouldn’t have succeeded anyway).
There are numerous objections to this rule, ranging from “it does too little” to “it goes too far” (yes, seriously). Here are thirteen common objections and responses:
1. A no-excuse policy for actually writing tests is much more important than one for not breaking existing tests!
I can write tests that cover someone else’s code, even if they don’t write tests.
If you force people who don’t like writing tests to write tests all of the time, you get passive resistance, perhaps in the form of gamed tests.
Not everything can be tested.
Not everything that can be tested can be sold as “testable”.
Not everything that is “testable” can be sold as needing testing.
2. You need to avoid teams that don’t test!
Most teams don’t test, and I’m not willing to avoid most teams without considering other factors.
Given that most teams don’t test, I think it is good to have rules that are applicable to most teams.
3. You should be doing 100% code coverage!
Then do it. This rule is compatible with that.
4. 100% test coverage is not worth it!
Then don’t do it. This rule is compatible with that.
There is nothing stopping you from having this rule along with more ‘no-excuse’ policies/rules, but I would suggest that you have at least this rule.
5. You should have automation that prevents people from checking in things that break the code!
I think the signal when someone checks in code that is broken (whether it is a mistake or it is on purpose) is more valuable than stopping people from checking broken code into development. Why do you want to check in broken code?
Repeated check-ins of broken code are a good signal for a one-on-one discussion. This is the segue into selling testing and the existing test suite to another developer. If I can’t sell the existing test suite (it is too painful) or testing itself, then either there is a problem with how I am expressing myself, the test suite quality needs to be fixed, or the code style needs to change into a more testable style.
The response to forced testing compliance is passive resistance (gamed tests, gamed code that avoids tests). If everyone tests via natural motivations and incentives, then you don’t need to force them. If everyone is making mistakes checking in broken code, then they aren’t running the test suite enough.
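The “signal, not gate” idea can be sketched as a small script: run the suite, and if it fails, report the most recent commit and its author rather than rejecting the check-in. This is a minimal sketch, assuming a pytest suite and a git repository; the `pytest -q` command and the report wording are placeholders, not a prescribed tool.

```python
import subprocess

def suite_passes(command=("pytest", "-q")):
    """Run the project's test suite; True means all tests passed.

    The pytest command is an assumption -- substitute whatever
    runs your project's tests.
    """
    return subprocess.run(command).returncode == 0

def latest_commit():
    """Read the most recent commit's short hash and author via git."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%h %an"],
        capture_output=True, text=True,
    ).stdout.strip()
    commit, _, author = out.partition(" ")
    return commit, author

def breakage_report(author, commit):
    """Format a signal, not a gate: name the commit that broke the
    build so it can start a one-on-one conversation, instead of
    silently blocking the check-in."""
    return f"Tests broken by {author} in commit {commit}."

def signal_if_broken():
    """The policy as code: report breakage; never block the check-in."""
    if not suite_passes():
        commit, author = latest_commit()
        print(breakage_report(author, commit))
```

Running `signal_if_broken` from a CI job or post-commit hook keeps the history of who broke what, which is exactly the conversation-starter the rule needs.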
6. Everything that goes in production should be tested!
Some teams can’t be sold on this.
Some teams can generally be sold on this, but within specific scenarios those same teams can’t be sold on this (high profile, extreme deadline scenarios).
7. Some tests are broken and shouldn’t exist!
Fix that test before checking in your code.
8. This is a high profile project with an extreme deadline … and this test is in my way!
If the test isn’t valid, remove the test … but you own the code (when it fails) and you have no right to complain when it is overwritten on a whim, perhaps with a new test.
9. We can’t test, it’s too hard to test this domain!
If I find out a way to test something in your domain, then don’t be mad.
If you break one of my tests in the ‘hard’ domain, then be ready to talk about why this is not a valid test. If it is not a valid test I need to be more conservative with my tests. If it is a valid test then you shouldn’t break features (perhaps talk to the stakeholders and have the requirement removed).
10. The requirements change too fast to have tests!
Usually when requirements change extremely fast, there are a lot of situations where a requirement is removed, then added back, removed again, and so on. You can just comment out the test in anticipation of the stakeholder changing their mind again.
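One low-cost way to shelve such a test without losing it is to mark it skipped with a reason, so it can be revived when the requirement comes back. A sketch using the standard library’s `unittest`; `bulk_discount` and its numbers are hypothetical stand-ins for a flip-flopping requirement:

```python
import unittest

def bulk_discount(order_size, price):
    """Hypothetical feature whose requirement keeps coming and going:
    orders of 100+ items get 10% off."""
    return price - price / 10 if order_size >= 100 else price

class BulkDiscountTests(unittest.TestCase):
    # Skipping (rather than deleting) preserves the test for when the
    # stakeholder changes their mind again; the reason records why.
    @unittest.skip("Requirement removed; expected to be reinstated")
    def test_bulk_orders_get_ten_percent_off(self):
        self.assertEqual(bulk_discount(100, 50.0), 45.0)

    def test_small_orders_pay_full_price(self):
        self.assertEqual(bulk_discount(10, 50.0), 50.0)
```

The skip reason shows up in the test runner’s output, so the shelved requirement stays visible instead of silently disappearing from version control.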
11. I can’t write tests and compete with the walk-on-water guy next to me that seems to check in lots of working code and never tests!
Then don’t write tests all of the time. Write tests when you have the time for the code that gives you, personally, the highest return since you are in such a competitive situation.
12. No one will obey this rule except for me! This can’t work unless everyone does it!
It can work even if only you do it. In this situation the returns are more for your personal sanity and reputation.
Here is a minimal test process that always works:
1. Find something that is prone to break but is high-visibility and easy to test.
2. Write a test for it.
3. Privately tell people when they have broken important code.
4. If (3) doesn’t work, publicly charge people with breaking the high-profile code.
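As an illustration of steps (1) and (2), here is what such a test might look like. The pricing function is hypothetical; the point is picking code that is high-visibility (shown to every customer), historically fragile (config typos), and trivially cheap to test:

```python
import unittest

def displayed_price(config):
    """Hypothetical high-visibility code: the price shown on every
    page, historically broken by malformed config values."""
    base = float(config["base_price"])
    tax = float(config["tax_rate"])
    return round(base * (1 + tax), 2)

class PriceSmokeTest(unittest.TestCase):
    """Cheap insurance for the code everyone notices when it breaks."""

    def test_price_includes_tax(self):
        self.assertEqual(
            displayed_price({"base_price": "10.00", "tax_rate": "0.08"}),
            10.8,
        )

    def test_zero_tax_passes_price_through(self):
        self.assertEqual(
            displayed_price({"base_price": "10.00", "tax_rate": "0"}),
            10.0,
        )
```

A handful of tests like this is enough to make steps (3) and (4) possible: when the visible thing breaks, you have evidence rather than an argument.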
13. People should just own sections of the code base, regardless of test coverage. People get upset when you rewrite their code!
No one should own any code.
Code bases are often interdependent: if one part fails, everyone is responsible.
Some basic rules for covering and rewriting code:
1. You shouldn’t rewrite someone else’s code unless you can write a test for it.
2. You shouldn’t/can’t write a test unless you understand the requirement.
3. If you don’t understand the requirement, you shouldn’t rewrite the code for it.