Design In Xp

ExtremeProgramming includes as much design as other processes, or more. It differs in when the design is done, and in which artifacts are produced.

Waterfall processes use BigDesignUpFront, and look kind of like this:

  dddddddddddddddd cccccccccccccccc tttttttttttttttt

Incremental and spiral processes look more like this:

  dddd cccc tttt dddd cccc tttt dddd cccc tttt

ExtremeProgramming looks like this:

  dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc

[Minor edit: I changed dct to dtc in the previous line. My apologies if I am wrong. -- interlock.arinc.com]

[Or, if I understand RefactorMercilessly, it should be "tcd"??? -- JeffGrigg]

dtc is best: a little carding, or a moment of thought, then write the test, then the code. In refactoring there is generally no new test-writing, though of course one runs the existing tests incessantly.

Perhaps the order shouldn't be fixed? Perhaps you design when you need to design, and not when dictated by a heavy methodology.


In ExtremeProgramming, we admit that we aren't smart enough to invent a full-scale design that will actually be optimal. This means that doing a BigDesignUpFront will be wasteful.

We also admit that we aren't smart enough to do a lot of coding and have it all work. This means that doing a lot of coding and only then testing will also be wasteful.

Instead, we do a little bit of design, we code it up, we test it, again and again, many times a day.

In the middle of the project, we're faced with a new UserStory. We need to define the EngineeringTask's that will implement the story. If it's not obvious (and often it is), we will do a quick CRC session to determine what needs to be done. A handful of developers will sit down, usually with a customer, and put down cards for the objects relating to the new story. They'll add cards representing the new inputs and outputs, and cards representing the objects that transform the inputs to the outputs. In a few minutes, the developers will know what is to be done.

The next step is to throw away the CRC cards and implement the tests and the code. Developers step to the machine, write a UnitTest for the method or object they're about to create, run the test (oops, it doesn't work), implement the method or object, run the test, fix the method, add a method, run the test, until done. This should take a few minutes. Then the cycle repeats.

Where's the design? Part of it is in the CRC session, if one was needed. Most of it comes during a "dct" cycle. [Write the test then...] Go to the object that needs a new method. Write the method, giving it a name that expresses your intention. Implement the method by refining it, also by intention:

Add the test first, right? I think it would look something like this... -- TimMackinnon

  testGetTaxDeductions
	"aTaxObject is assumed to be created in this TestCase's #setUp"
	aTaxObject setTaxDeductions: self taxDeductionsTestData.
	aTaxObject getTaxDeductions.
	self assert: aTaxObject getDeductionTotal = self taxDeductionsKnownTotal.
	"...other assertions..."

Then write the missing method...

  getTaxDeductions
	"walk the deductions, adding each one in turn"
	| deduction |
	[taxDeductions atEnd] whileFalse:
		[deduction := taxDeductions next.
		self addDeduction: deduction]

Now either you're done, because you used only methods that already exist, or you need to write another method:

  addDeduction: aDeduction
	taxDeductions add: aDeduction.
	taxDeductionTotal := taxDeductionTotal + aDeduction amount.

Pretty soon you've implemented the feature. Run your tests and make sure it works.
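In SUnit that last step is a one-liner in a workspace; the test-case class name below is just an assumption for this sketch:

  | result |
  result := TaxDeductionTest suite run.
  Transcript show: result printString; cr
  "the TestResult prints a one-line summary of runs, passes, failures, and errors"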

Question: When did a test get written for this second method #addDeduction:? It may be that the initial test correctly covers its scope, but usually more testing is required for it. I've heard Kent say SitOnTheOtherCards; I take this to mean that I should also queue up another card to remind myself to write a test for #addDeduction:. I wonder if SitOnOneCardDiscussion covers this? -- TimMackinnon
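For what it's worth, a separate test for #addDeduction: need not be big. Something like the sketch below would do; aTaxObject is again assumed to come from #setUp, and #aKnownDeduction stands in for whatever fixture the test case provides:

  testAddDeduction
	"hypothetical test for the helper method; #aKnownDeduction is assumed to
	answer a deduction object with a known amount"
	aTaxObject addDeduction: self aKnownDeduction.
	self assert: aTaxObject getDeductionTotal = self aKnownDeduction amount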

Now examine the code you have created. Look for violations of OnceAndOnlyOnce, names that don't communicate, methods that seem to be on the wrong object, and other infelicities. RefactorMercilessly, until the code is as neat as a pin. Run your tests often during this process, and of course, when you're done.
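To pick one small example from the sketch above: the running taxDeductionTotal stores information the deductions already hold, which is just the sort of OnceAndOnlyOnce smell worth hunting. A merciless refactorer might drop the cached total and compute it on demand (assuming each deduction answers #amount):

  getDeductionTotal
	"answer the total by folding over the stored deductions,
	instead of maintaining taxDeductionTotal by hand"
	^ taxDeductions inject: 0 into: [:sum :each | sum + each amount]

Whether that trade is worth making is exactly the kind of judgement this step exists to exercise, and the tests make it cheap to try.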

There's a lot of design in this process. When you express your intention, you're designing how the thing should be done - the meaning of the objects and methods involved. Then when you refactor, you are ensuring that the overall system remains cohesive and that the objects relate to each other well. These are the main things you want to accomplish with design.

If design is done more up front, the discoveries during coding either have to be fed way back into the design, or the design will be allowed to drift from reality, or (god save us) the design will be adhered to even though the code tells us the design is wrong.

If design is done more up front, it becomes harder and harder to keep doing design as the project goes on, so less and less of it is done. But since "design and coding aren't the same thing", the slack will not be picked up during the code/test cycle, so the system diverges from good design.

XP does more design than most processes; it does it longer; and it does it closer to the point where it matters: closer to what goes into the code.


Here is the deal. You get to make the code as good as you can. You will always be able to make the code as good as it can be. You can take as long as you need to do this. It is built into the schedule. As you learn more, you are allowed, even expected, to add that to the code.

But there is a catch. You aren't allowed to write code to solve problems you don't yet have. If you start solving problems that you might have, then you lose the ability to estimate, because it becomes a contest to see who can think up the most hypothetical problems.

This do you/don't you design business is a bunch of bunk. If we get caught up in semantic arguments about what is/isn't design then we will quickly forget what we do know: it takes effort, but not unbounded effort, to make a clear program. -- WardCunningham


This might be a bit off topic... refactor if necessary, and thanks in advance. (Oh, and please correct me if I'm out to lunch; I'm obviously not an engineer. :)

What's wrong with the idea that the blueprints for a bridge match with TheAlmightyThud of paper, and the grunts with the shovels match with the coders with their IDEs?

Where does the compiler/interpreter fit into this? Is a compiler not the grunt with the shovel that does the laborious work of turning a fully specced design into a finished product? Did this not have to be done by hand, years ago? Grunt work is so called because it doesn't require creativity, just a doggedness to finish the job, and the willingness to ask a question if unsure (think compiler errors).

The 'up-front design' therefore seems more like an engineer's notebook, where he scratches out thought experiments, works out the math behind a design, and records why a decision was made. Note that the blueprint he creates can still be understood by both a grunt and another engineer without the aid of that notebook; the notebook is a transient item. The math behind the design, although also stored in that notebook, is akin to the unit tests in XP. If the design is changed so that it no longer fits within its constraints, the unit tests fail. If the grunts build the bridge without due regard to the plans, the unit tests will again fail. -- WilliamUnderwood

''It seems to me that there are a few basic differences between building a bridge and writing a program. They can be pretty easily summed up:

-- DanUznanski (Woohoo! My first edit here! :D)''


See also ExtremeDesignArtifacts and ExtremeHumility

[CategoryExtremeProgramming]

