If "the future tense" refers to the near, foreseeable future, and we consider only changes that are (very) likely to happen within that short time frame, then I think these two principles do not necessarily exclude one another.
Item 32: Program in the future tense
- ... Good software adapts well to change. It accommodates new features, it ports to new platforms, it adjusts to new demands, it handles new inputs. Software this flexible, this robust, and this reliable does not come about by accident. It is designed and implemented by programmers who conform to the constraints of today while keeping in mind the probable needs of tomorrow. This kind of software -- software that accepts change gracefully -- is written by people who program in the future tense. ...
- -- ScottMeyers, MoreEffectiveCeePlusPlus
While this is only a short quote from the beginning of this chapter, I hope it gets the idea across. -- OliverKamps
Is there any reason why the XpSimplicityRules are not enough to accomplish this task thoroughly?
It depends on how far your program is to adapt, and what the constraints on the system are. The basic tenets of the XpSimplicityRules - DoTheSimplestThingThatCouldPossiblyWork (DTSTTCPW) then RefactorMercilessly until you DontRepeatYourself - tend to avoid provisioning for, and thus to resist, certain classes of pervasive changes (those that are 'cross-cutting', including reflection, optimization, security, dynamism (configurations, scripts, pubsub hooks, soft layers), etc.) that might come along in the future. In particular, the cost to perform these changes rises with every feature added unless provision was made for their eventual inclusion from the start, in which case the cost increases only marginally with each feature added. To implement configuration options or security late in the game, and especially to handle the necessary refactoring afterwards, can often require taking an XP project almost entirely apart and then reassembling it - which is not 'software that accepts change gracefully'.
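The cost asymmetry described above can be illustrated with a minimal sketch (all names here are hypothetical, chosen for illustration): if a cross-cutting concern such as call auditing is given a single seam from the start, each new feature pays only the marginal cost of one decorator, rather than a scattered edit in every function body later.

```python
import functools

def audited(fn):
    """A single seam for a cross-cutting concern: every decorated call
    is recorded before the business logic runs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"call: {fn.__name__}{args}")
        return fn(*args, **kwargs)
    return wrapper

@audited
def transfer(src_balance, dst_balance, amount):
    # Business logic stays untouched by the auditing concern.
    return (src_balance - amount, dst_balance + amount)
```

Retrofitting the same concern without such a seam means editing every call site - exactly the "cost rises with every feature" effect described above.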
This seems awkward to me, because every major school of methodology advocates solving the problem you actually have - ForthValues, etc. Hence, I think it needs to be mentioned, for the benefit of the reader, that programming in the future tense likely isn't intended to be used as a license for such nefarious things.
I think it likely that 'program in the future tense' implies exactly what it says: making provision for the probable needs of tomorrow. I.e. don't "DoTheSimplestThingThatCouldPossiblyWork"; instead, "DoTheSimplestThingThatCouldPossiblyWork that won't need to be completely torn apart and replaced to handle (some probable X) at some point in the future". It is demanding that you use that foresight of yours where you are blessed enough via experience and intelligence to possess it. Resist being so greedy when seeking the 'simplest' thing that the 'simplest thing' tomorrow becomes AddingEpicycles.
But, DTSTTCPW combined with RefactorMercilessly means that no such epicycles would ever be added, at least by integration time.
The problem is that this is an untrue assumption. RefactorMercilessly doesn't remove epicycles; removing epicycles requires a (pardon the pun) 'revolutionary' shift in one's perspective. It is often difficult to recognize the 'epicycles' your own code is forcing upon you until you've added quite a few, at which point the codebase resists those changes, too. Some changes, such as those involving the cross-cutting concerns mentioned earlier, require a near-complete re-engineering of the entire codebase, or at least of a very large chunk of it that touches everything else. Re-engineering != refactoring; if the section of the codebase you touch is large enough, you've essentially reached PlanToThrowOneAway.
... OK, you might have to re-engineer the code backing some feature periodically, but that is the price you pay for having a clean design at all times. Hooks are epicycles, unless made explicit in the application's requirements or introduced via a refactoring operation. But then, if you do that, they're not hooks anymore -- they become intrinsic requirements of the application.
Let the application evolve on its own. It knows what it wants to do.
That's YOUR advice (not that applications know how to evolve outside of GeneticProgramming). And your advice has its flaws, which even you admit to. The advice offered on this page is: ProgramInTheFutureTense. It isn't saying that you need to add features before you need them, just that you utilize what little foresight you possess to make provisions for future features and avoid shortcuts that will make it harder to add probable features later. One way of saying it is: DTSTTCPW is probably a PrematureOptimization - a naive, greedy heuristic with myopic scope that takes just as much effort to recover from as does any other premature optimization. Instead of DTSTTCPW, find something that works that leaves your options open in the future.
My own approach is to always do something that will provably work. I.e. there are always two criteria for me: it works, and I can explain or prove why it works. Being 'simple' is implied only because simple things are much, MUCH easier to explain or prove. I happen to believe programmers have a responsibility to be able to prove their code will work, and that we're little more than a bunch of sputtering poets until we start doing so. This isn't at all the same as ProgramInTheFutureTense, but it does require use of a little foresight and especially resisting the myopic and hackish DTSTTCPW (proofs often require the full context, and doing stuff just because it would 'possibly' work seems like a good way to get a bunch of solutions that quite possibly don't work). Admittedly, I have expensive and heavy machines moving around based on my code...
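As a small-scale sketch of what "it works, and I can explain why" can look like in ordinary code (my illustration, not the contributor's example): state the precondition, the loop invariant, and the termination argument explicitly, and the correctness argument writes itself.

```python
def int_sqrt(n):
    """Return the largest r with r*r <= n, for n >= 0.

    Proof sketch: the invariant lo*lo <= n < hi*hi holds initially
    (0 <= n < (n+1)*(n+1)) and is preserved by each branch; the
    interval (lo, hi) strictly shrinks each pass, so the loop
    terminates; at exit hi == lo + 1, so lo is the answer.
    """
    assert n >= 0
    lo, hi = 0, n + 1          # invariant: lo*lo <= n < hi*hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid           # preserves lo*lo <= n
        else:
            hi = mid           # preserves n < hi*hi
    return lo
```

The code is simple because simple code is provable, not the other way around - which is the distinction drawn above.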
The class of changes requiring 'revolutionary' refactoring bordering on PlanToThrowOneAway depends on the basic infrastructure provided by the programming language and environment. A dynamic environment like SmallTalk, which can even add something like continuations to the system dynamically, can support changes incrementally (like cross-cutting logging of every method call) that would force weaker systems, which didn't take something like this into account, into the rewrite.
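For readers outside Smalltalk, a rough Python analogue of that kind of incremental cross-cutting change (all names hypothetical): wrap every method of a class after the fact, without touching its source - roughly what a live image lets you do to running code.

```python
import functools
import types

def trace_every_method(cls, log):
    """Retroactively wrap every method of cls so each call is recorded
    in log -- a cross-cutting change applied without editing cls."""
    for name, attr in list(vars(cls).items()):
        if isinstance(attr, types.FunctionType):
            def make_wrapper(fn):
                @functools.wraps(fn)
                def wrapper(self, *args, **kwargs):
                    log.append(fn.__name__)
                    return fn(self, *args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(attr))
    return cls

class Account:
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

calls = []
trace_every_method(Account, calls)
acct = Account(10)
acct.deposit(5)
# calls now records the method names, with Account's source untouched
```

A static system that made no provision for interception would instead force this logging concern into every method body - the rewrite described above.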
I won't deny that there are better languages and frameworks/runtime environments for programmers who wish to code in myopic manners. (Trivially, there are languages truly awful for XpSimplicityRules, such as BefungeLanguage, where any non-trivial addition can require restructuring all the code written thus far.) Smalltalk and Ruby and XMF with their MetaObjectProtocol, and languages with AspectOrientedProgramming support, and perhaps SymmetryOfLanguage in general, allow one to stave off the more common causes of revolutionary changes much longer. Albeit, even they rarely achieve resistance to all vectors for such changes; an extra layer of indirection can solve many problems, except those of too much indirection, indirection in the wrong place, and various negative constraints (like security or privacy or optimization). If all your frameworks are already built to be highly dynamic, however, you're in a good position to avoid thinking ahead - it means that your language designer and framework/library writers already did most of that thinking and foresight for you (e.g. playing scenarios and UserStories like: "imagine I had written myself into $(this corner); what language features would allow me to get out without rewriting a bunch of stuff?"); they are the ones who ProgramInTheFutureTense, or at least think about it so you don't have to. But as noted near the top of DoTheSimplestThingThatCouldPossiblyWork, DTSTTCPW doesn't ever give you a new programming language with all these features as something on the path towards completing a program, not even if the net result would be much simpler in the end.
Knowing that change is inevitable, I choose to embrace change rather than fear it. ProgramInTheFutureTense might be common sense for a lot of people, but in my experience it has proven to offer more dead-ends than ways out. Very often, such coding is itself the cause of said dead ends, resulting in the very rewrites you seek to avoid. I find this to be as true for ForthLanguage, etc. It's not an issue of writing one to throw away; that implies that, after having rewritten it, it remains fixed for all time. Thinking of rewriting as "1st edition," "2nd edition," etc. is more valid. You can't defeat the 2nd law of thermodynamics, not even in code. Eventually, you will have to rewrite. --SamuelFalvo?
I don't believe change to be inevitable... except in certain domains. The RelationalModel isn't going to change very rapidly or any time soon. The basics of mathematics haven't changed. The conditions that cause deadlocks, and the fact that great foresight and care are required to avoid them when using lock-based concurrency solutions, will probably never change. The notions behind DisruptionTolerantNetworking and delay tolerant networking will always exist. The ideas for atomic operations and transactions have remained and will likely remain the same. Logic and inference and the notions of axioms, models, and theories have remained and will likely remain consistent. The need to parse formally written text and apply semantics to it isn't going away anytime soon. What do change rapidly are business policies, rules, HCI configurations, and service-requirements. I know how ProgramInTheFutureTense takes this into account; how well does DTSTTCPW do?
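To make the deadlock point concrete, here is a minimal Python sketch (hypothetical names) of the foresight lock-based code demands: impose one global acquisition order on all locks up front. Two threads that each take the same pair of locks in opposite caller-supplied order can deadlock; sorting to a fixed order before acquiring prevents it, and it is exactly the kind of rule that must be designed in from the start rather than refactored in later.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def with_both(first_lock, second_lock, action):
    """Acquire both locks in a fixed global order (by id here; any
    total order over locks works), regardless of caller order."""
    lo, hi = sorted((first_lock, second_lock), key=id)
    with lo, hi:
        return action()

results = []
# Opposite caller orders -- the classic deadlock setup, made safe
# by the global ordering inside with_both.
t1 = threading.Thread(
    target=lambda: results.append(with_both(lock_a, lock_b, lambda: "a->b")))
t2 = threading.Thread(
    target=lambda: results.append(with_both(lock_b, lock_a, lambda: "b->a")))
t1.start(); t2.start()
t1.join(); t2.join()
```

Both threads complete; drop the `sorted` and acquire in caller order instead, and the same two threads can block each other forever.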
How well do you want it to do? It's like the difference between analog and digital: analog is the most precise method of representing a signal, but digital is as precise as you need it to be. Likewise, with DTSTTCPW, your modes of failure are addressed via user stories no differently than the color used to paint the OK buttons. As such, solutions for those stories get unit and integration tested. To ProgramInTheFutureTense, you make assumptions -- which could well be right! -- which inevitably go untested in the greater context of the deliverable. As a result, opportunities for bugs increase.
I can see your POV about being a myopic programmer, but I just don't agree with it. It perhaps is better to think of it more as bureaucratic -- it's all about covering my posterior in a world all too eager to place blame. --SamuelFalvo?
I don't believe DTSTTCPW or XP in general really addresses modes of failure and negative-constraint user-stories of ANY sort (security, optimization, safety, future code-change requirements, et al.). You can address certain specific 'positively' described failures (if X is detected in situation S, then do Y), but you certainly cannot say 'prevent X'. Even ONE negative-constraint user-story has an impact on ALL past and future stories AND how said stories interact.
As far as covering your posterior - yes, DTSTTCPW will often do the trick. ProgramInTheFutureTense is a bit harder to justify. Of course, my own solution of always choosing provable code is better than EITHER of these.
The irony with DTSTTCPW is that I didn't know what that acronym stood for (and had to look it up on this page); therefore it was not the simplest thing that could possibly work. :-) AhSimple was, however, the simplest thing that could possibly work (debatable, though! laugh). DTSTTCPW is such a long, drawn-out phrase that the acronym itself is just as long as the entire phrase AhSimple (8 characters). DTSTTCPW has obviously failed, and I have proven that the phrase "do the simplest thing that could possibly work" is far from the simplest thing that could possibly work! By the way, what is a "thing"? So unspecific, and so oversimplified.