Here's a philosophical problem that I've just come across while attempting to follow as many XP practices as I can (subject to ExtremeProgrammingForOne). If it matters, I'm programming in Java, and using JavaUnit for my unit tests.
I've captured a load of UserStories. I've assigned importance and cost values to them and decided which are suited for this release. I've worked out the interaction between objects using CrcCards, so the next stage seems to be to write UnitTests.
So I start writing some "high level" unit tests. As I write them, I often need to think through some of the details of the object relationships and interfaces, and that's fine; it shows that the XP process is working. Then I spot something:
One or more of the unit test cases I'm writing are very similar to ones which would be needed for a user story not in scope for this release.
I seem to have several options:
1. Ignore this coincidence and leave it for me or someone else to find (or not find) later. A YouArentGonnaNeedIt approach, even though I know I "will" need these tests.
2. Create appropriate UnitTests for the out-of-scope user story while I'm here.
3. Note the similarity down somewhere else and carry on coding.
4. Bring the other story into scope for this release anyway, and write the tests for it now.
Option 1 worries me. I really don't like the idea of just throwing information away. There's no guarantee that this similarity will be so obvious to someone later, as once a user story has been done and tested it's no longer bubbling around at every PlanningGame.
Option 2 has its problems too. I need to be able to say that all tests pass at the end of this release, so I can't have tests for future stories kicking around in the test suite. On the other hand, code that's not used by anything else is an obvious candidate for merciless refactoring, or for simply being forgotten by the time it comes to be used. After all, no one wants to have to look through a potentially infinite pile of "in case you need it" code before making any change.
Similarly for Option 3, except that the "infinite" pile of stuff to look through is not even code but some sort of external documentation. Anathema.
Option 4 undermines the strongly "user-driven" side of XP. It's adding work to a release that the users are not aware of, and there was probably a reason for not choosing that particular story for this release. Even if there is enough spare time to do it, the users might have preferred another story instead.
So, what should I do? Do I have any other choices? Am I missing something?
You're right to note that Option 4 is a mortal sin.
If you can't stand the possibility that you won't be smart enough later to remember this test, what form of Option 3 would be least costly?
You're right to note that Option 2's code will be subject to being removed because it's not used. C3 sometimes puts tests that should work later into a separate testing protocol. It hasn't ever helped much. What would have to be true for it to help?
You comment under Option 1 that you know you will need these tests. How many circumstances can you list where in fact you will not need them? (Hint: there are more than three.)
Therefore, what should you do? -- guessWho
P.S. I'm not aware of the concept of "high level" UnitTests. What are they? Compare to XP UnitTests, which exercise everything about a single class that could possibly break.
Sorry, a sloppy phrase. I mean that I started writing the UnitTests at the same level of detail as the CrcCards, and used these to prompt me into more detailed tests and therefore more detailed design. I find this the only reasonable approach to go from the loose design embodied in the cards to the very tight design represented in the code. Do you consider all UnitTests to be at the same conceptual level? --FC
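For concreteness, here is a minimal sketch of what "exercise everything about a single class that could possibly break" might look like. The Money class and the check helper are hypothetical, and the sketch uses plain Java rather than the JavaUnit framework so it stands alone:

```java
// Hypothetical Money class and a hand-rolled test harness.
// In practice these checks would be JavaUnit test methods.
class Money {
    private final int cents;
    Money(int cents) { this.cents = cents; }
    Money add(Money other) { return new Money(cents + other.cents); }
    int cents() { return cents; }
    public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).cents == cents;
    }
    public int hashCode() { return cents; }
}

public class MoneyTest {
    static void check(boolean condition, String name) {
        System.out.println((condition ? "pass: " : "FAIL: ") + name);
    }
    public static void main(String[] args) {
        // One check per thing about Money that could plausibly break.
        check(new Money(100).add(new Money(50)).cents() == 150, "addition");
        check(new Money(100).equals(new Money(100)), "value equality");
        check(!new Money(100).equals(new Money(1)), "inequality");
        check(new Money(0).add(new Money(0)).cents() == 0, "zero");
    }
}
```

The point is the granularity: every test targets one class, not a scenario spanning several objects.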
You could describe the similarity to your customers and suggest to reschedule the user stories. Based on that information, let them decide whether they want the extra functionality or not (or not now). XP is about the right people deciding; it doesn't abolish thinking or suggesting. -- HaskoHeinecke
I realise that this example gets a little more abstract as it goes along, but for the purposes of this, I am assuming that although the tests are similar, there may be a significant code difference between the implementations, which would cost time and money to implement. Consider two device drivers for different devices which conform to the same driver interface. I would guess that a mere similarity in UnitTests is little justification for a reschedule in most users' eyes. --FC
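The driver scenario can be sketched like this. The interface and both driver classes are invented for illustration; the point is that the tests are near-identical while the bodies (stubbed here) would be entirely different work:

```java
// Hypothetical shared driver interface.
interface Driver {
    byte[] read(int count);
}

class SerialDriver implements Driver {
    public byte[] read(int count) {
        return new byte[count];  // the real version would poll a serial port
    }
}

class ParallelDriver implements Driver {
    public byte[] read(int count) {
        return new byte[count];  // the real version would strobe a parallel port
    }
}

public class DriverTest {
    // The same checks apply to every Driver: this is exactly the
    // similarity between in-scope and out-of-scope tests.
    static void exercise(Driver d, String name) {
        System.out.println((d.read(4).length == 4 ? "pass: " : "FAIL: ") + name);
    }
    public static void main(String[] args) {
        exercise(new SerialDriver(), "serial read length");
        exercise(new ParallelDriver(), "parallel read length");
    }
}
```

Writing `exercise` once for both drivers costs almost nothing; implementing the second driver is where the real cost lies.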
And therefore, grasshopper, you now see that the answer is ... <yourAnswerHere>
Tests at the level you describe usually get turned into AcceptanceTests. One could view them as high-level unit tests in that they have many similarities. But they are handled differently and have a different ultimate purpose. You can and should write and include them early. Just note that they will not run at 100% until later in the project. That is allowed with AcceptanceTests and is a difference between them and the UnitTests. I suspect that you may have mixed your unit tests and acceptance tests together to some degree. We did this at the VcapsProject. But as you have already seen there are implications in doing so, some good and some bad.
UnitTests are tests that exercise the code you have already created and will create during the next 4 hours or less. KentBeck suggests adding them one at a time, coding to get just that one working, and then adding another. I admit to trying to do that and not often succeeding, because it is just so tempting to add a bunch of tests, but it is the ideal to strive for. So in this case you would not want to add those tests early or even consider their existence before adding the code. It is good to have the unit test code be more reliable than the code under test. So you do want to keep your test code simple and never add anything before you need it.
So, if the tests you are considering adding really belong to the acceptance test suite, choose option 2. Go ahead and add them, but keep them separate and don't expect them to pass. Having them will help you track your progress. If, however, the tests really are unit tests, wait until just before you create the code. Choose option 1 and let go of solving problems prematurely. If you just can't let go, choose option 3 and make a note on a 3x5 card. But you are going to be surprised at how many of those cards either become obsolete or get done anyway without you worrying about it.
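One way to keep the future-story tests separate, sketched here with hypothetical class and story names: run them in their own suite whose failures are reported as progress information rather than treated as a broken build.

```java
import java.util.ArrayList;
import java.util.List;

// A separate suite for checks belonging to future stories.
// Unlike the unit-test suite, failures here are expected and
// simply tell you how much of the future work is already done.
public class FutureStorySuite {
    static List<String> notYetPassing = new ArrayList<>();

    static void expect(boolean condition, String story) {
        if (!condition) notYetPassing.add(story);
    }

    public static void main(String[] args) {
        // Hypothetical future-story checks; the booleans stand in
        // for real assertions against not-yet-written code.
        expect(true,  "story already half-covered by current work");
        expect(false, "story deferred to the next release");
        System.out.println("future stories not yet passing: " + notYetPassing.size());
    }
}
```

The in-scope suite still has to be at 100% at the end of the release; this suite is allowed to fail, which is what distinguishes it from the UnitTests proper.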
"I was going to call my next child 'Yagni', but now we've decided not to have another one." And the student was enlightened.