(Note from AlistairCockburn: I edited down my sections of this page Jan 8, '98 to incorporate later comments, and on Oct 26, 1998 added the link http://alistair.cockburn.us/crystal/articles/ms/methodologyspace.htm - the site exists, but this page hasn't been created yet!)
The reason I like ExtremeProgramming is that it is an example of what I am after in a methodology: ultra-simplicity, or minimalism. It is my reasoned position that every new methodological component adds a large extra cost, because it attempts to replace face-to-face communication with a lower-grade form of communication.
I am working out a series of such methodologies, and suffering for a name. I am currently using the working name Crystal. A CrystalMethodology should use the maximum amount of human potential and the minimum number of methodological components. You will instantly see that ExtremeProgramming fits the bill as a candidate Crystal methodology.
I have divided the methodology space into two dimensions: Potential for Damage (or Criticality), where a mistake results in anything from loss of comfort up through loss of life; and Size (or staffing level), in number of people. Each point in the space has its appropriate methodology characteristics.
I tend to live in the area from loss of comfort up occasionally through loss of essential moneys, but not loss of life, in the Damage direction, and 3-50 developers in the Size direction.
ExtremeProgramming is one valid candidate in the zones from small staff up through loss of discretionary moneys, but it is missing the publicly evidenced correctness needed for a life-critical project. The base methodology in my book Surviving Object Oriented Projects handles damage up through loss of essential moneys and up to 50 people, but it is missing the testing needed for a life-critical project, and it is heavier than needed for smaller-staff, comfort-only projects.
I am trying to find one or more minimal methodologies for each region, or ways to characterize the needs of the regions.
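As a rough sketch of the two-dimensional space described above (the category names follow the text, but the staffing thresholds and weight labels are my own illustrative guesses, not Alistair's actual grid):

```python
# Illustrative sketch of a two-dimensional methodology grid:
# criticality (what a defect can cost) x staffing level.
# The categories come from the text; the pairings are hypothetical.

CRITICALITY = ["comfort", "discretionary_money", "essential_money", "life"]

def methodology_weight(criticality: str, staff: int) -> str:
    """Return a rough 'weight' of methodology suggested for a project
    at the given criticality with the given number of people."""
    if criticality not in CRITICALITY:
        raise ValueError(f"unknown criticality: {criticality}")
    damage_axis = CRITICALITY.index(criticality)  # 0..3
    if staff <= 6:
        size_axis = 0
    elif staff <= 20:
        size_axis = 1
    elif staff <= 50:
        size_axis = 2
    else:
        size_axis = 3
    # The heavier of the two axes dominates: more people or more
    # potential damage both call for more methodological ceremony.
    level = max(damage_axis, size_axis)
    return ["minimal", "light", "medium", "heavy"][level]
```

The point of the sketch is only that the grid takes both axes as input and that the heavier axis drives the ceremony needed.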
I like the space. I'm curious if you divided it into quadrants what you would find for money/year invested and money/year returned.
I'm a bit leery of number of people as a measure, however. Some people stop listening to me when they see that what I'm talking about can't possibly work for teams larger than about 10-12. The question they should be asking is how big a system ExtremeProgramming will work for, measured in functionality, not in how many COBOL programmers it would take to do the same thing.
Ah, maybe the problem is one of scale. Since the incremental productivity of additional programmers drops so rapidly, I'd like red=1, orange=2-3, yellow=4-6, green=7-10, blue=11-100, and violet=100+.
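The bands just proposed can be written down directly; this is simply the mapping from the sentence above (the color names are from the text, the function itself is mine):

```python
def size_color(programmers: int) -> str:
    """Map a team size to the proposed color band:
    red=1, orange=2-3, yellow=4-6, green=7-10, blue=11-100, violet=100+."""
    if programmers < 1:
        raise ValueError("a project needs at least one programmer")
    for color, upper in [("red", 1), ("orange", 3), ("yellow", 6),
                         ("green", 10), ("blue", 100)]:
        if programmers <= upper:
            return color
    return "violet"
```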
I'm curious about your statement that you couldn't use ExtremeProgramming for life-critical projects. I swiped the FunctionalTesting ideas and WorstThingsFirst directly from JonHopkins? (the most underrated object thinker on the planet), who was reporting on a PacemakerProject?.
You couldn't use out-of-the-box ExtremeProgramming for a life-critical project, for several reasons:
- most regulatory agencies want more in the way of requirements analysis and documentation;
- you'd need to add some hazard and safety analysis;
- you need to prove requirements traceability; and
- you need to insert various safety mechanisms to detect and safely handle exceptional conditions.
That said, you could use many of the core practices of ExtremeProgramming in such a project to enhance both quality and maintainability, among other things.
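The last reason in the list above - safety mechanisms that detect and safely handle exceptional conditions - might look something like this toy sketch. The function names and pacing-rate numbers are entirely invented for illustration; they are not from any real device:

```python
# Toy sketch of a "fail to a safe state" wrapper, the kind of safety
# mechanism a life-critical system adds around ordinary logic.
# All numbers here are made up for illustration.

SAFE_DEFAULT_RATE = 60  # beats per minute: a known-safe fallback

def compute_rate(sensor_reading: float) -> int:
    """Ordinary application logic: derive a pacing rate from a sensor."""
    if sensor_reading < 0:
        raise ValueError("sensor out of range")
    return int(60 + sensor_reading * 10)

def safe_rate(sensor_reading: float) -> int:
    """Safety wrapper: any detected fault falls back to a safe default
    instead of letting the failure propagate."""
    try:
        rate = compute_rate(sensor_reading)
    except Exception:
        return SAFE_DEFAULT_RATE
    # Plausibility check: reject values outside the physically sane band.
    if not 30 <= rate <= 180:
        return SAFE_DEFAULT_RATE
    return rate
```

The design choice is that the wrapper never raises: detection plus a guaranteed safe fallback is the extra layer that plain XP practices do not mandate.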
For number of people, you exactly named the methodology trap: thinking that a certain size of problem has to be addressed by a certain number of people. There is NoSuchThingAsSize. I agree with you that people think ExtremeProgramming won't scale, and their mistake is in thinking that a certain problem 'needs' a certain number of people. Don't make that same mistake when staring at a methodology. The grid does not assert how many people you need; it only takes as input how many people you have.
What I am comfortable asserting is that if you have more people, there is going to be an additional communications burden. If you choose to put more people on the project, then you have to take that into account in the methodology. If you are you, then you choose to put fewer people on, in order to lower the communications burden, in order to get more done, in order to deliver a larger project with fewer people. If you are me, then you argue that the same project should use a smaller-staff methodology, because the additional cost burden of a larger-staff methodology is much higher. (It seems clear from the murkiness of those last sentences why I am struggling for a name and a metaphor for the space.)
I think Jon's pacemaker had only 3-4 people on the project, so it would qualify for smallest-staff. I don't know enough about the system to comment on the size of the problem, what actually goes wrong when the thing hits a defect, or what they did to ensure that there really were no defects. I should like to know. I currently think that at the essential-moneys level of potential damage, regression testing and design and code walkthroughs are a minimum, and that at loss-of-life damage more than that is needed: pre- and post-conditions, public inspections, and the like. I have never worked on such a project, so I don't know.
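The pre- and post-conditions mentioned above can be made executable. A minimal sketch, assuming a pacemaker-style calculation; the function, the bounds, and the numbers are my own invention, not from the actual project:

```python
def pulse_interval_ms(rate_bpm: int) -> float:
    """Milliseconds between pulses for a given pacing rate, with
    runtime-checked pre- and post-conditions - the kind of extra
    rigor loss-of-life criticality would demand."""
    # Precondition: rate must be in a clinically plausible band.
    assert 30 <= rate_bpm <= 180, "precondition: rate out of range"
    interval = 60_000.0 / rate_bpm
    # Postcondition: the result must itself be physically sensible.
    assert 333.0 <= interval <= 2000.0, "postcondition: interval out of range"
    return interval
```

In a real life-critical system these checks would feed the safety machinery rather than plain `assert`, but the contract idea is the same.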
I have been finding that there is typically a lot less damage from a software error than people initially think. Also, there are different sorts of failure: e.g., a bad calculation may not be so bad (comfort), but downtime may be intolerable (irreplaceable loss of money).
all comments welcome...
I currently have a GridOfThirtyFive? (35 methodology cells). For each cell there are several possible methodologies, depending on your fears (as Kent correctly points out) and philosophy. I shall probably never address all 35, but perhaps other people will. P.S., Jan 8, 1998: I have bunched cells together, so there would only be perhaps 6 total: 4 running up the diagonal, and one each for the extreme top and extreme right.
Not really a suggestion for a name, but perhaps a suggestion that might inspire a name...
Einstein said something to the effect that "a scientific theory should be as simple as possible, but no simpler." Replace "scientific theory" with "software design" and you have the EinsteinPrinciple of software design, which I quote frequently. Replace it with "methodology" and you have MinimalMethodologies. It nicely covers the goal of simplicity while accommodating the situations where HighCeremony? may be justified.
As for the name, perhaps Einstein carries too much baggage...
The only loss-of-life project I ever worked on had as its purpose the loss of life. I guess that's not in the spirit of Alistair's comments, much less in the spirit of essential humanity.
As for loss of essential moneys (or is it monies), been there, done that. The Oak Tree / Cold River Veritax product hit the wall when the entrepreneur decided he had spent enough. Several of the ExtremeProgramming principles would have helped a lot:
- DoTheSimplestThingThatCouldPossiblyWork would have kept our focus on essential functionality, leading us to a complete, if smaller, product.
- YouArentGonnaNeedIt would have had us refuse to implement features, like networked distribution of updates, that would only become important if the rest of the product became successful.
- Stronger UnitTests and FunctionalTests would have let us move faster, get more done, and have a better shot at commercial success.
- More PairProgramming sooner would have sped us up and increased reliability.
- SpartanUserInterface would have left us a lot more precious time to work on the real problem, computing people's taxes quickly and correctly.
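As one concrete instance of the stronger UnitTests mentioned above, a tax product could have carried tests like these. The function, brackets, and rates are invented purely for illustration, not from the Veritax product:

```python
import unittest

def tax_due(income: float) -> float:
    """Toy two-bracket tax computation: 10% up to 10,000, 25% above."""
    if income < 0:
        raise ValueError("income cannot be negative")
    if income <= 10_000:
        return income * 0.10
    return 10_000 * 0.10 + (income - 10_000) * 0.25

class TaxTests(unittest.TestCase):
    """Regression tests: each tax rule gets a pinned-down example."""

    def test_first_bracket(self):
        self.assertAlmostEqual(tax_due(8_000), 800.0)

    def test_second_bracket(self):
        self.assertAlmostEqual(tax_due(20_000), 3_500.0)

    def test_negative_income_rejected(self):
        with self.assertRaises(ValueError):
            tax_due(-1)
```

Run with `python -m unittest` in the file's directory. The payoff Kent describes is that a suite like this lets the team change the calculation engine quickly without fear of silently breaking "quickly and correctly."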
As I look back on my history of mistakes (and I have made most of them), loss of essential moneys is usually a result of delivering too little, too late. (Technical failure is another possibility, but in my experience it is much rarer.) A lighter-weight methodology, with its concomitant focus on the real point of the project (delivery), would very often have shifted those projects closer to success.
At the same time, I'm not sure that the big problems were in the development process at all. Were the real roots of failure perhaps
- in communication up and down the chain of command, or
- in the willingness to accept a death march mentality, or
- in the desire to be seen as a team player and thus signing up for an impossible deadline?
I think perhaps they were. The software process could have helped, but the real roots probably lay elsewhere.
The other comment that I wanted to make is that scaling, IMHO, is a pseudo-issue. The problem with scaling is almost always due to bad partitioning. The loss of productivity in large projects is due to an ever-increasing communications overload, and this is at least partly due to two factors: 1) complexity and 2) bad partitioning. Complexity is unavoidable; it WILL cause you to expend more time, more inefficiently. Bad partitioning, which usually (but not always) means partitioning along lines with high coupling instead of low or weak coupling, increases bug incidence and communications overhead, both oral and managerial (meetings and whatnot) and written, especially the construction, inspection, and approval of interface documents. -- Ray Schneider
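Ray's point about partitioning along weak-coupling lines can be illustrated with a narrow interface. This is a contrived sketch of my own (the `Ledger` and `BalanceSource` names are hypothetical): one team's subsystem exposes a one-method protocol, so the other team never depends on its internals, and changes inside it generate no cross-team communication.

```python
# Contrived sketch of partitioning along a weak-coupling line.
# Two "teams" share only a one-method protocol; changes inside
# Ledger never force communication with the code that consumes it.

from typing import Protocol

class BalanceSource(Protocol):
    """The entire shared interface between the two subsystems."""
    def balance(self, account: str) -> int: ...

class Ledger:
    """One subsystem's internals; free to change without notice."""
    def __init__(self) -> None:
        self._entries: dict[str, list[int]] = {}

    def post(self, account: str, amount: int) -> None:
        self._entries.setdefault(account, []).append(amount)

    def balance(self, account: str) -> int:
        return sum(self._entries.get(account, []))

def overdrawn(source: BalanceSource, account: str) -> bool:
    """The other subsystem depends only on the narrow interface."""
    return source.balance(account) < 0
```

Partition along a line like `BalanceSource` and the interface document is one method; partition through the middle of `Ledger` and every internal change becomes a meeting.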
This is my first attempt to 'speak' rather than lurk. My boss has said that we will grow our way through the CapabilityMaturityModel. It strikes me that the whole essence of ExtremeProgramming (and what I am reading in Surviving OO Projects) is a solid sense of community with full communication among the members. With regard to the communication burden: is dicing the project into sub-teams who at least once a week show their stuff to all project members (aka CodeReview?) useful? Our problem has been LipService? ... -- Scott Jackson
In brief, it'd be better than nothing. Unless the project is way too large, closer is better. The C3 team of 12 or 14 people is all in one room. The customers are over one divider in the next space. Believe it or not (I didn't), it works wonderfully.
Generally speaking CMM and XP are horses of two different planets. CMM is a very heavy-duty methodology, and XP is the lightest one we could imagine. We can teach a team XP in a week and get them reasonably good at it in a couple of months at most. It takes years to make it through the levels of the CMM, and very few organizations ever make it to the top.
I beg to differ. The CMM is NOT a methodology. It simply describes the practices which need to be present in order for an organization to possess a certain level of capabilities. That said, XP could probably be shown to be compliant with some level of the CMM (perhaps even Level 3) since it implements many of the CMM Key Process Areas (KPAs), albeit in very integrated and mutually-dependent ways. -- KenBoyer
seems to fit into this category of MinimalMethodologies as well. Its externally maintained feature list and other process rules seem to make it more formal than ExtremeProgramming. But it appears it ought to work better when the users are not "just over the wall." -- MarkSwanson
More and more ScrumProcess users are including XP as a practice methodology. Scrum addresses product management, the workings of development teams, and the interactions of the two. XP addresses what the development teams do. -- KenSchwaber
I have a few questions:
- Why is complexity associated with more time and money spent? In fact, what is complexity? IMHO, there is no such thing as a complex system, just a system badly partitioned into subsystems. So the only reasons for costs and time growing with project size are project-management mistakes, caused by the stress generated by higher responsibility (and the lower qualification of the people managing large projects). In a normal world, costs and time would grow linearly with project size.
- OK, XP, but what about the persistence of an organization using XP in an extreme way? What if half of the crew decides to quit, and an entire small-to-average project remains completely uncovered by people who know the source code? Maybe I didn't get it right, but what I read above would seem to eliminate documentation completely, at least for small to medium projects.
If half of the crew decides to quit, the documentation won't help anyway...
- Isn't a team of ten programmers too big? Over time, I have found that the size offering the highest efficiency is about five. Has anybody else had similar experiences?