We cannot build in a jillion features in anticipation of every possible event. The more I think about something, the more what-if scenarios I can usually conjure up (this drives some managers crazy because they have to stop and think). Thus, we must make estimates about how and how often future changes will occur so as to prune our future-proofing features down to a digestible size. Assuming you don't subscribe to pure YagNi, you are essentially gambling, or "making investment decisions". The trick is to minimize, or at least understand, the risks and trade-offs involved. The concepts I learned in investment classes and books are often very appropriate to designing software that is easier to maintain and change down the road. There are few certainties and many probabilistic decisions.
People tend to fall on a spectrum between putting every future-proofing idea into software designs and putting in none (YagNi). Most probably fall in the middle somewhere. The two primary decisions are:
- How much future-proofing should we do?
- Which features should we include (and which should we vote off the island)?
Topic Title Issues
Perhaps "investing" is more accurate than "gambling". Gambling implies addiction. Investment implies managing risk. (See SoftwareDevelopmentIsInvesting
Not to mention, the odds are better. In gambling, the odds always favor the house; with investing the odds are not so stacked against you.
A few do make a living at it, usually at PokerGame.
Of course, poker is unique in that it's one of the few games played in a casino where players play each other and not against the house -- the house makes its money by taking a cut of the pot (or of the ante), but is uninvolved in the outcome of the game. (There's no way for the house to participate in a game of poker other than hiring expert players to represent the house; but an expert player could probably make more money playing against the house than by taking a salary playing for the house.)
I am sure that if we count as "winners" only those projects delivered on time (without overtime), on budget (without underpaying the developers), and with the exact amount of quality needed (for example: high quality if risk of loss of life is part of the problem, and low quality if the impact of failure is almost irrelevant), the statistics will show that the house wins the majority of the time.
Perhaps "investing" is more accurate than "gambling". Gambling implies addiction. Investment implies managing risk.
It's more a continuum of investment: at one extreme is pure and simple gambling, in programming as in finance.
Perhaps we should change the topic name to SoftwareDevelopmentIsInvestmentManagement?, SoftwareDevelopmentIsRiskManagement?, or InvestmentTheoryAppliedToSoftwareDevelopment?.
This is not an issue exclusive to software development. No created work addresses every possible contingency.
Gambling does not imply addiction; there is no essential distinction between gambling and investment. Note that "gambling" is often used in reference to games of chance where the risk is known precisely, while adding to your pension fund is often referred to as investment yet carries an imprecise risk that the government will introduce a new tax specifically on pension funds (or pensions). Also, there are created works that address every possible contingency (mathematical proofs, for example).
I'm not sure where to put this discussion, since the elements of process affect so many things as to remove the unknowns and reduce the "gambling." However, let me say this: an architecture determines what can't change and, by implication, what can. A design fixes one approach to satisfying an architecture. The code ends up implementing one solution for the design.
Thusly, software development need not be a gamble of any kind once the architecture has been fixed, a design selected, and an implementation started. Sure, certain requirements may change, but only those things allowed to change according to the architecture. A good design will accommodate the foreseeable changes that will arise and even allow for some unforeseen changes.
Code will always be changing with the maturation of a product development cycle, but this is strictly implementation. The code isn't changing because the design is changing willy-nilly, nor is the architecture being altered to shoehorn in a newly-requested feature. The code changes to make the product behavior meet the requirements of the design, which evolves to meet the requirements of the architecture, which doesn't change.
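To make that layering concrete, here is a minimal Python sketch (the TaxCalculator names are hypothetical, not from any project discussed here): the abstract interface plays the role of the architecture and is frozen, while the concrete classes are designs that may be swapped without touching client code.

 from abc import ABC, abstractmethod

 class TaxCalculator(ABC):                  # "architecture": this contract does not change
     @abstractmethod
     def tax_for(self, amount: float) -> float: ...

 class FlatRateCalculator(TaxCalculator):   # one "design" that satisfies the architecture
     def tax_for(self, amount: float) -> float:
         return amount * 0.07

 class BracketedCalculator(TaxCalculator):  # a different design, same architecture
     def tax_for(self, amount: float) -> float:
         return amount * (0.05 if amount < 100 else 0.09)

 def checkout(total: float, calc: TaxCalculator) -> float:
     # client code depends only on the frozen contract, so replacing
     # one design with another never forces a change here
     return total + calc.tax_for(total)

Requirements may swap FlatRateCalculator for BracketedCalculator, but checkout -- and everything else written against the contract -- stays put.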
Software development is not gambling. It is applied science as sure as any other form of engineering.
I'm not convinced it is possible, or at least economically feasible, to create software designs that continue to "change well" over the longer term (except in a few niches/domains). At the least, the techniques for doing so are poorly documented, such that when it does happen it's a lucky accident or an undocumented skill that will die when its owner retires.
For example, making software be flexible to change often involves adding more indirection (references that reference references). Initial coding of indirection-heavy apps is often tedious, makes the code harder to read on average, and slows the project down. Thus, the "investors" have to be willing to pay an up-front indirection tax. Because currently-accepted investment theory wants a return on investment within a few years, such is often skipped.
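A toy Python sketch of that indirection tax (all names here are invented for illustration): the direct version is shorter and easier to read today; the indirect version costs three extra layers up front but lets the storage backend change later without touching callers.

 # direct -- cheap now, rigid later
 def save_order_direct(order):
     with open("orders.txt", "a") as f:
         f.write(repr(order) + "\n")

 # indirect -- references that reference references
 class OrderStore:                      # layer 1: an abstraction
     def save(self, order):
         raise NotImplementedError

 class FileOrderStore(OrderStore):      # layer 2: a concrete backend
     def __init__(self, path):
         self.path = path
     def save(self, order):
         with open(self.path, "a") as f:
             f.write(repr(order) + "\n")

 def make_order_store(config):          # layer 3: a factory/lookup
     return FileOrderStore(config.get("path", "orders.txt"))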
And I disagree that it's like other engineering disciplines. The solution paths (the set of all possible implementations that satisfy the requirements) are far, far more numerous in software than in hardware, such that each path cannot be dissected and analyzed with anything near the rigor that, say, a bridge would receive.
Meh. I'm not buying your argument, Topper. First off, change is a necessary and unavoidable part of the development of any product. (Well, almost any.) My point about separating architecture from design is that you know what can change and what can't by the boundaries of the architecture. Creating a design to meet the requirements of the product's architecture isn't magic nor happy circumstance, it is science. Changing that design -- within the bounds of the architecture -- is also a science. The science of engineering design is well known and well documented, so we need not get into that here.
Your point about indirection is a bit stretched, but I follow your reasoning. (Indirection is a technique, not a design component.) The question about what flexibility needs to be dialed into the design up front must be resolved in order to create an adequate design in the first place. This is why ya gotta go through the painful exercise of determining what is architecture and what isn't. If the "investors" don't want to pay for a design that can encompass foreseeable changes then the architecture must be expanded to include rules eliminating the possibility that the design will change in those areas. It's our responsibility to make sure the client understands this and the long term implications of setting a too-rigid or too-loose architecture.
And I am in complete disagreement about software development not being a hard engineering discipline like bridge building. It most certainly is. Who would commission a bridge to be built without knowing the costs involved? Or the schedule? Or the up-front costs? There is no reason that software development, if treated like the real engineering discipline it is, can't produce hard schedules and cost predictions. There are many, many smart people out there who have written on this topic and produced data backing it up. Take a look at Crosby, Fagan, and others.
Finally, I want to point out again that the separation of design-time activities into architecture, design, and implementation is a vital part of the overall discipline of product development. Without knowing what is malleable and what isn't we can't design our way out of a wet paper bag with a sharp knife. I put a little of this discussion at the end of TheSourceCodeAndTheArchitecture, so you can look there for more.
I disagree that there is a clear science behind it. Many of these authors blow hot air. Their claims need to be independently verified in controlled studies to get real science. I welcome you to cite such papers that actually measure and test rather than present the brain drippings of those who sit around and speculate in ivory towers.
As far as cost estimating, I realize there are techniques to guess fairly well, such as plugging in the number of draft entities, table columns, and screens to produce estimates with roughly known probability boundaries, but there is very little on "which architecture is best".
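As a rough sketch of that kind of size-driven estimate (the weights and padding factors below are made up for illustration, not a published calibration):

 def estimate_person_days(entities, table_columns, screens):
     # weight each countable artifact, then bracket with crude probability bounds
     raw = entities * 3.0 + table_columns * 0.25 + screens * 2.0
     return raw * 0.8, raw * 1.6

 print(estimate_person_days(entities=12, table_columns=90, screens=20))
 # -> (78.8, 157.6) person-days, low and high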
W. Edwards Deming, Carnegie Mellon, Michael Fagan, the late Phil Crosby, and lots of others. All with piles of documented case studies. I am not doing your homework for you. Denying the existence of the sun will not stop it from rising.
Cost and schedule estimation are an outgrowth of most of these process disciplines. Quite a few of the regimented process schools even teach how to reduce unreasonable expectations by replacing them with very attainable goals. Once again, I am not looking this stuff up for you.
Naysaying isn't a valid form of argument, tops. Claiming "many" authors blow smoke is the same as Fox News citing "some say" as a news source. My point is that people with lots of experience and much brain power have come up with means by which the risk can be mostly removed from product development. I do not wish to nitpick over semantics, nor quibble over whose approach is "best," nor chase any of the zillion other rabbit holes we could end up down. My point is that the science exists; it is repeatable, reliable, predictable, and available right now. To claim that there is no science and that all software development is a crap shoot is simply keeping one's head in the sand.
I never claimed there was "none". You are pulling your own Fox News move in misrepresenting my position. In general, I'm saying the art still far outweighs the science.
Hmm, again. We seem to be at the level of trivia now. However, your claim that the art outweighs the science still fails to hold water in light of the vast array of science that is available. Perhaps we can agree that the bulk of software development efforts are hampered by a lack of the proper application of science, thusly relegating them to the domain of magic, fortune telling, and other pointy-hat-and-wand stuff. This isn't the fault of the science, but of those professionals whose lack of understanding prevents them from using the powerful tools that are readily available.
Maybe this whole thing is about professionalism. What would you think of a plumber who tries to fix PVC leaks with putty? Or of a surgeon who uses Scotch as a disinfectant? Or of a programmer who creates business applications with assembly? We are responsible for knowing the state of the art in the software development profession just as any other professional is responsible for knowing the current level of technology in his field of endeavor.
Modern plumbing is done with leak-free PVC pipes and joints. Surgeons have a vast array of chemical, ultraviolet, and other means to disinfect and maintain sterility in the OR and recovery areas. Application programmers have development tools and environments out the ying-yang with which to create running applications. It is incumbent upon all professionals to know what tools are applicable to a task and how to use those tools to the best effect.
I think it is past time for software engineering in general to stop hiding behind a cloud of FUD when it comes to issues like quality, schedule certainty, functionality and feature coverage, etc. We have the means by which we can determine what the product will be once we're done creating it, how long it will take to create, and what it will cost to do all that. Let's not pretend that we can't know this stuff. We can.
Let's instead focus on how we can bring this professionalism to the workplace and start applying the knowledge and wisdom that the aforementioned ground breakers -- and lots of others -- have brought to our field of endeavor. We gotta stop cowering behind fear, uncertainty, and doubt. We gotta be bold enough to stick our necks out and promise the client that if he holds up his end then we'll hold up ours.
In my observation, there are TooManyVariablesForScience. We can study a few factors in isolation, but not capture the relationship between them all. See also: AllRoadsLeadToBeeMinus.
- I would hope that somebody tested the various options and that there is empirical evidence backing each choice. Sometimes certain techniques just keep being used because nobody bothers to challenge them. It's a kind of "voting by inaction". It would be interesting to see the full story of those. But if such became an issue, then tests can be devised. A practical example is chiropractic versus a "traditional" doctor; it has turned into a heated debate. (In the last empirical tests I've seen, chiropractic works, but is more expensive than the traditional-doctor approach.) -t
Yeah. In your observation. That's the rub, isn't it? Your conclusions. I've been doing this stuff for almost 35 years now and I wouldn't want to trust my experience as indicative of the state of the art. If I had to do that I'd also conclude that it is impossible to create any product worth spending money on, much less one that encompasses the best of the best.
Fortunately, I have studied the science behind our discipline well enough to know that my own experience is insufficient to describe the best practices of our profession. To insist that such practices are doomed to failure based on your exposure is the height of hubris, top. The fact that you hammer this point about your observations into the ground all over this board despite being rebutted by multiple sources leaves you open to the charge of being closed-minded. Just on this page, your repeated refusal to acknowledge the works of elder statesmen of our craft borders on trolling, a charge I would not want to level at any fellow Wikizen.
Read Fagan. Read Crosby. Read Deming, and the QFD stuff, and whatever else comes up in the Google search for "software development process" or some equivalent. You will see that these people have dedicated their professional careers to gathering the data not from just one aspect of process, but from the entirety of engineering. Crosby in particular points out that a standard of quality has to apply to everything and everybody in the chain, else the chain breaks (weakest link and all that).
Your conclusions that our work efforts will never produce anything but mediocrity are simply wrong. I stand by my statement that software development is not a gamble, but a tightly regimented scientific endeavor that can produce reliable, predictable, repeatable results -- just like any other engineering. The only question is: will we make it do so?
Everybody's looking for the magic way to turn software engineering into a real discipline. If Crosby et al. had truly found it, many would be cheering right now. If Crosby has the elusive magic formula, then I congratulate him. Someday I might check it out. Not now. Others have mentioned Booch, Meyers, etc. as the "true experts" also. I have some of their books, and there are far more claims than true science. Rather than rely on ArgumentFromAuthority, how about a tidbit about how he found the magic formula and factored in all the possible variables?
Okay. At this point I have to assume you are trolling, because you won't even take the time to look up the works of people smarter and more insightful than you and me put together. There's a reason people on this board don't like you, top. If you actually had anything valid to say we could AgreeToDisagree, but you aren't even paying any attention. You're like one of those jerks who never really listens to other people -- you just wait for them to stop talking so that you can inject more of your profundity. Have at it. TenSeven
- [But... He does add color/variety/liveliness/action/controversy/patronage to Wiki...]
I always thought it took less work to be informed than to raise ignorance to a noble art. Top shows this to be an error, I think. I could sign my name, but why bother? It's only top. -- Bottom
If you were as well informed as you imply, you'd provide better evidence for your claims, such as coded examples with clear numeric metrics that demonstrate your pet tools are better, using things like code size and change-impact analysis. You'd be able to RaceTheDamnedCar and win instead of yapping on about how evil and lazy I am. In other words, you are a hypocrite. You are afraid of the full science cycle. -t
Not even I would claim to be as well informed as I think I am, but if I did it would be misinformed narcissism, not hypocrisy. You are not evil. And the comment above actually marvels at your work ethic. Maybe we should AgreeToDisagree about my emotional state. I did wake, in a cold sweat, from a nightmare last night, but it was not about the 'full science cycle'. -- Bottom
With compliments like that, who needs insults? As far as cold-sweat nightmares, usually Microsoft is the cause for me.
OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy discusses existing studies further.
Actually, that page just throws more dirt in the air, but why interrupt Yet Another Topper Rant?
Yes, I'm bad, evil, lazy, and stupid; and you are a Nobel-winning angel who farts cancer-curing glitter. -t
No. We've already established that TopIsNotEvil. Bad, lazy, stupid -- you'll have to deal with those on your own time.
After the glitter study.
If there are actual empirical tests out there somewhere as the above appear to claim, I expect them to generally follow this formula:
- 1. State what is being measured
- 2. State how it's being measured
- 3. State possible outcomes BEFORE the test and the interpretation of the different outcomes, along with the justification for each interpretation.
- 4. Run the test
- 5. Measure
- 6. Describe the results of the measurements as interpreted in #3
While there have been very specific tests for specific aspects, most would agree that they focus on too few practical factors to use for sweeping decisions. An example would be speed. While a fast language is a good thing, it's only one useful factor among many. -t
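Here is a minimal Python sketch of steps 1-3 and 6 of that formula as a pre-registered test plan (the scenario counts, thresholds, and wording are hypothetical):

 test_plan = {
     "measure": "hours to apply 12 change scenarios to each codebase",    # step 1
     "method": "same developer pool, randomized assignment, logged time", # step 2
     "outcomes": {                                # step 3: fixed BEFORE the test runs
         "a_faster": "technique A reduces change effort by >20%",
         "b_faster": "technique B reduces change effort by >20%",
         "tie": "no practical difference detected",
     },
 }

 def interpret(hours_a, hours_b):
     # step 6: read the measurements only against the interpretations fixed in step 3
     if hours_a < hours_b * 0.8:
         return test_plan["outcomes"]["a_faster"]
     if hours_b < hours_a * 0.8:
         return test_plan["outcomes"]["b_faster"]
     return test_plan["outcomes"]["tie"]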
Sadly, topper, you once again prove that you are arguing from a position of profound ignorance. Deming, Crosby, Fagan, and others have built their entire careers on gathering exactly this kind of statistical data to back up their claims. Had you bothered to spend 30 seconds in a Google search you'd see that, but you won't. This is why Wikizens refuse to take you seriously, despite the fact that you are obviously bright and have a keen, if perverted and misdirected, perception into the inner workings of the software development universe. You could be a valuable contributor around here if you weren't such a dweeb. And I mean that with all the respect I can muster. For reals, dude.
I never claimed that metrics don't exist, only that they are not used in practice to back various claims made, such as "thin" or null-free tables being better than "wide" tables. Potential metrics do indeed exist, including CodeChangeImpactAnalysis. If somebody claims a GoldenHammer, I expect them to apply a good selection of metrics to a sample but representative application (or applications) using a dozen or so change scenarios. Instead, some of my harshest critics use "descriptive logic" to justify their GoldenHammer. But that should be rejected for anything beyond "interesting suggestions", because GoodMetricsProduceNumbers. Words cannot replace numbers. It is reasonable and rational for me to request numbers instead of just descriptive logic. If that makes me bad, dumb, evil, lazy, stupid, or whatever, so be it. I'm sticking to my number guns, and other rational people will too.
I realize that applying sufficient metrics is time-consuming and probably not practical for this wiki, but that does not mean that shortcuts such as descriptive logic are a satisfactory replacement. It's the ugly stepdaughter of full numeric metrics. --top
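For what a numeric CodeChangeImpactAnalysis might look like, here is a toy Python sketch (the scenarios, designs, and counts are fabricated purely to show the shape of the metric, not real measurements):

 # for each change scenario, record (files touched, lines touched) per candidate design
 scenarios = {
     "add a column":     {"wide_table": (1, 4),  "thin_tables": (3, 11)},
     "rename a field":   {"wide_table": (2, 6),  "thin_tables": (2, 7)},
     "add a null check": {"wide_table": (5, 18), "thin_tables": (1, 3)},
 }

 def total_impact(design):
     files = sum(counts[design][0] for counts in scenarios.values())
     lines = sum(counts[design][1] for counts in scenarios.values())
     return files, lines

 for design in ("wide_table", "thin_tables"):
     print(design, total_impact(design))   # numbers, not adjectives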
You know, I think we're talking at cross purposes again here. Let's not let the discussion wander off into an analysis of this or that software technique; let's keep the focus on what we've been hashing out -- namely, that software development is a hard science and a rigid engineering discipline the same as bridge building or road construction. We're not looking for golden hammers here, but rather trying to grind a sharp point on the argument that Deming or Fagan type numerics are the basis of the metrics you are looking for.
The primary argument is that software development is not gambling. It is an investment in a regimented, measurable, predictable discipline that will produce a known result when applied properly. The rub lies in using the processes that contain the measuring, what units are measured, how the data is collected and interpreted, yada yada. Now, that we can get into if you want to argue details. But first we have to establish a baseline of tit versus tat.
"Gambling" was a poor naming choice. SoftwareDevelopmentIsInvesting
is better replacement. And it's not at the stage of being comparable to bridge building, in part because there are so many different ways to get to an acceptable result. It has a much higher fan-out of options than bridge building. One can run simulations on bridge construction to get fairly tight time and cost estimates of the open options. There is nothing comparable in software. If you had a tight-enough simulation, you'd already have the result; or at least the effort to build the actual result wouldn't be significantly more than the simulation such that you might as well build it and skip the simulation.