Software quality is a multifaceted thing. Depending on the project and the task, you might be worried about reliability, scalability, security, or usability, for example. How do we balance these facets of quality against one another? How do we guard against GoldPlating on the one hand and sloppiness on the other?
A lot of programmers believe all code has to be bullet-proof. This can lead to a multitude of sins:
- Overemphasis on security: "We have to spend an extra week writing this password system because somebody could crack it with a cross-channel attack."
- Overemphasis on efficiency: "I'm going to write my own garbage collector because I want this to be the fastest bulletin-board system the Web has ever seen."
- Overemphasis on reliability: "Let's make our content-management system load-balanced, because if we got SlashDotted the site would be down for a day or two, which would be pretty catastrophic."
All these things cost something, and sometimes we as programmers are too willing to incur those costs. If you're writing MP3 jukebox software for the next LAN party, a bubble-sort will do just fine. If you're writing a login system for a Web bulletin board, you probably don't need a security audit.
Let's be honest with ourselves: The vast majority of the work we do isn't really that difficult. Sure, you want it to work most of the time. But most of us are not making software for financial transactions or pacemakers or missile guidance. The bread-and-butter of the software industry is harmless stuff: Content-management, marketing reports, you know the drill. If it's a little pokey, if it has a downtime of a few hours once in a great while, it won't be the end of the world.
Perhaps we have a tendency to act as if our work is difficult because we consider difficulty the highest form of programming: the work is only important if it's difficult. Maybe we should think differently. Maybe the work is important if it helps people - and sometimes helping people is deceptively easy.
"most of us are not making software for financial transactions or pacemakers"
This is perhaps a slightly skewed worldview. While I've done my share of working in the e-world building gee-whiz things, I've spent most of my career working in fields where slightly more care is required. I cut my teeth working in a medical educational institution. Among the tools I worked on were some that directly affected patient care - certainly not pacemakers, but things like software to ensure that accurate lab reports reach the physician who requested the lab work in a timely and secure way. Patients and doctors do not appreciate inaccurate or late results on tests for the presence of potentially fatal conditions. Now I work for an insurance company. Claims management, HIPAA, disability payments, retirement plans - all forms of financial transactions. These sorts of things, IMHO, are the bread and butter of software development. -- StevenNewton
Yeah, perhaps I was overgeneralizing from my personal experience. I have yet to code anything of life-and-death importance, though obviously there are people out there who do. I haven't met many of them, and I have an inkling that, numbers-wise, most programming is not actually life-and-death, but I certainly don't have hard numbers to back up that intuition.
Wow. You live in a pretty different world than mine. In my world, software is inefficient, insecure, and breaks down all the time. So I doubt there really is a surfeit of programmers adding too much bulletproofing to everything. Oh, and I believe it helps people when their systems aren't regularly getting hacked and their files aren't regularly getting corrupted. -- AlainPicard
It's not that software is always efficient, secure, and extremely reliable. But in aiming for these goals, programmers often incur other costs, and sometimes those goals aren't that important. Programmers often say things like "That'll be too slow" or "That isn't scalable" or "That won't be five-nines"... This is why some people here say things like YouArentGonnaNeedIt - because there are a lot of people who think you always will.
These are all ideals, but they're not always necessary. I was once at a hacker convention and overheard a conversation with someone who was running his own relatively insecure server, and didn't care. "Sure, someone will hack it one day," he said, "but none of the data is private, and it's regularly backed up, so if something happens, I can just reformat the drive and start from scratch."
Of course, such a lackadaisical approach does not apply in many situations. But there are many where it does apply, and the tendency to take the code overly seriously may blind us to the times when we can do things quick and sloppy.
As if the level of amateurism in our industry is not big enough! Here we even try to make a case for it. "Sloppy is not acceptable under any circumstances when you get paid for it." Go up front and tell your customer or your manager that by normal standards you're going to do a sloppy job. Or deliver a shrink-wrapped software package clearly labelled: "sloppy quality, but it costs you $50 instead of $100". See if you get paid.
The only seeming justification for "sloppiness" is the price. What is contended is that even decent quality has to be obtained, most of the time, by substantial effort, resulting in significantly increased costs. That's a RedHerring. What it really costs is to GiveUpOnAmateurism?, and this price should be paid by the many developers who have barely the minimum education and training for their job, and should be paid with their own private time. The solution to producing better software is, most of the time, not to work more or to work harder, but to work better.
Some of the examples given are all too unfortunate and all too common. Simply said, that guy was probably doing a poor job and, even worse, he was proud of his amateurism. Running a secure server vs. an insecure one is largely a matter of having a well-trained and knowledgeable sysadmin vs. an amateurish one; it is very little a matter of putting substantially more effort into administration tasks. Similarly, "planning" for hours of downtime is just as ridiculous, even if we're talking only about marketing reports. -- CostinCozianu
On rereading the example of the hacker with the poor server, I realized I did not explain it clearly - it was his own server I was talking about, meaning he's effectively the client and is allowed to make the decision to be half-assed about security. (I've amended the description above.)
Let's say, for the sake of discussion, that you could make a strong argument for de-emphasizing things like Security, Efficiency, and Reliability. (I find such an argument ridiculous.) There is still no good purpose in pointing out that argument, because owing to the RampantAmateurism? in our field that Costin points out, we already have such a de-emphasis in place. Most programmers are either too stupid or too unskilled to build Secure, Efficient, Reliable systems. The 5% who can reach a reasonable level on those aspects don't need to spend an inordinate amount of time on them. Those who need high quality hire great programmers. Those who don't, hire the dregs. So to some extent, the programmer market already assigns importance to high quality wherever it's needed. (If the salary gap between the dregs and the best were greater, the market would be much more effective at this assignment task.)
Are you saying that such an idea could be dangerous, and therefore it should be suppressed?
I wasn't thinking in terms of danger, but usefulness. But depending on your audience, yes the idea could be dangerous. To tell the average programmer this, there is no harm. He can't achieve those goals anyway, so he need not waste valuable thought cycles on them. To tell the good programmer this, there isn't any real harm either. She can achieve those goals to a reasonable degree without taking great pains, and she knows when to ignore advice. To tell a manager this could be dangerous if he needlessly prevents the good programmer from attempting those goals. The danger is not tremendous though, because the good programmer can ignore that manager or go work someplace better.
This week (late March 2002), I convinced a system architect for a national organization that having all the branch offices send self-extracting ZIP files (EXEs) was a bad idea - that ordinary ZIP files would be better.
If one hacker got a part-time job at any branch office in the U.S. and took a look around, it wouldn't take them long to recognize the naming standard for EXE files sent up to be executed automatically on the central server with "root" authority. (And given the nature of the business, questionable part-time help is very common!)
Not a good plan. Plain ordinary data files are a much better idea than importing and running suspect code.
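The "data, not code" point can be sketched in a few lines (illustrative Python; `extract_branch_upload` and the file layout are hypothetical, not anything from the actual system described above). The server validates and unpacks a plain ZIP instead of executing whatever the branch office sent:

```python
import os
import zipfile

def extract_branch_upload(zip_path, dest_dir):
    """Treat the upload as data: validate it, then extract it.
    Nothing inside the archive is ever executed."""
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            # Reject path traversal ("../../etc/passwd") and absolute
            # paths before extracting anything at all.
            target = os.path.realpath(os.path.join(dest_dir, member))
            if not target.startswith(os.path.realpath(dest_dir) + os.sep):
                raise ValueError(f"unsafe path in archive: {member}")
        zf.extractall(dest_dir)
```

Contrast this with a self-extracting EXE, which hands the uploader arbitrary code execution on the central server; here the worst a malicious archive can do is fail validation.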
That "architect" is a perfect example of someone too stupid to build secure systems - par for the course. It also serves as an example that holding the title architect does not make one a true architect, nor does mere title imply one has much brain. As often as not, a holder of the title architect serves merely to call out the supreme incompetence of the hiring manager - also par for the course.
deliver a shrink-wrapped software package clearly labelled: "sloppy quality, but it costs you $50 instead of $100". See if you get paid.
Sure you'd get paid. Show me a shrink-wrapped product without one of those weasel-word warranties that say, in effect, "the product you have just paid good money for does not work, and we never said it would. If by some chance the one you have does work, it won't do what you're expecting it to." Show me a company that produces shrink-wrapped (or even bespoke) software with a genuine warranty guaranteeing merchantability, fitness for purpose, and that the product will be fixed at no charge if it doesn't work as advertised, as one would expect with any other complex technological product with no user-serviceable parts inside (we aren't talking about open source here), and then we might have something to talk about.
- This brings up a side point: are such disclaimers in EULAs legal? I have occasionally heard it said that they might not be, but that no case testing their validity has ever come up. IANAL, so I have no basis for agreeing or disagreeing with the claim, though I find it hard to believe that no one has ever contested such a ubiquitous practice, even if only as an arranged test case. Does anyone know if any EULA of that type has stood up to a legal challenge? - JayOsako
The original poster does have a point. It seems that insufficient thought is put into what balance is required, and where the costs and benefits lie. The "e-" world suffers especially from this. There seems to be an idea around that every page on every e-commerce site needs to be up at full effectiveness 24/7, even if it can only be used by people in a small number of time zones. That every page of every corporate website has to have enough capacity behind it to remain available when slashdotted twice over. That every byte of every file of corporate data needs to be protected from the massed onslaught of every hacker in the world.
And maybe yours do. So fire ahead. And maybe they don't. Has anyone bothered to find out, either way? Has anyone worked out if the cost of downtime at 2 am in the only timezone where the site may be used exceeds the cost of making the system be up 24/7? Has anyone worked out how likely it is that the site will be slashdotted? How much it would cost if it was unavailable for a while, once in a blue moon? Whether that cost would exceed the cost of the extra capacity that sits idle almost all the time? (The real villains here, by the way, are not gold-plating developers, but eager consultants from the hardware vendors.)
I'd love to be in a world where I was able to make the kind of decisions that Costin derides. I'd love to know that I was able, as a consumer of software, to balance purchase price against reliability or functionality. It seems, though, that the idea that every piece of software has to be bullet-proof infects the transaction, so that vendors are extremely reluctant to admit that their product has any deficiencies or limitations at all, even though they all do - to the extent that little in the way of comparison can be made. I can't even identify a baseline. Consider motorcycles. I can compare motorcycles by cost and by functionality, and I can see that I can pay extra for, let's say, increased power and torque, rider comfort, and long-range cruising reliability. But that would make no sense if what I'll mostly be using the bike for is hacking down to the shops and back. Even so, the smaller, relatively uncomfortable, relatively unreliable bike I buy for tooling around town will meet certain minimum standards for safety and reliability. You can't do that kind of thinking with software.
I rode a motorcycle around with some friends a while back, meeting the minimum standards. Hit a pothole. Ten minutes of riding later, another rider noticed that a weld had failed, and the pin holding the front wheel to the frame had come out on one side, and was nearly out on the other as well. Decided it would be prudent to stop riding :)
Perhaps the idea needs to be stated more precisely. There are different types of quality, and perhaps some are negotiable while others are not. For example, all code should be well-factored and easy to read, unless technical demands justify difficult-to-read code - for example, if parts have to be optimized. Furthermore, software shouldn't fail under unexceptional circumstances. These would be analogous to the minimal safety and reliability standards for consumer products, whether those are power drills or concentrated orange juice. Your power drill can't fly apart when you're drilling through drywall, and your orange juice can't have botulism in it.
But perhaps there are other types of quality that are not the minimum. Not every website needs SSL-based login. Not every database needs to spend the money to have five-nines uptime. Not every possible function in a piece of software needs to be highly optimized. These things would all be good, but they cost money and time. And any feature, no matter how useful, can have a price that is too high. How much it costs depends on your own engineering ability - which we should always be striving to improve - but how much it's worth depends on what the client needs. The client has a finite amount of money, and if we sell em software features that e doesn't need, then we've scammed em. Even if those features work.
Yes, there's a quality problem in the software world, and it's something we should all strive to eliminate. But GoldPlating is not the solution. There are different sorts of quality; maybe the point of this page is that we should learn to focus on the ones that really matter to the client. (And if we don't know what matters to the client, we should learn how to ask.)
For more on what happens if you don't ask, see TheCustomersAreIdiots
There's often a conflict between two priorities: 1) Overall security / performance / reliability and 2) Making deadlines and including new features. I personally favor 2), as have most of my managers, though my code almost always has good performance and reliability. Many of my coworkers, who strike me as perfectionists, have focused on 1), even if it means missing deadlines or adding fewer valuable features. Customers generally seem pleased with products that have good, though not extraordinary, quality, as long as they receive deliverables, bug fixes, and enhancements promptly. -- JaredLevy
In one of his follow-ups the original poster suggested that sloppy code is sometimes acceptable. I would take issue with this - in my experience sloppy code almost always comes back to bite you.
However, I totally agree with his original point that programmers often overemphasize security, efficiency, and reliability without even realizing they are doing it. These are all forms of PrematureOptimization; better to get the thing working first, and then spend time improving the security, efficiency, or reliability if you need to. -- DavidPeterson
Lacking any other input, programmers tend to emphasize things that were important in their last project and that they now know well. Any dysfunctionality this may induce is easily avoided by paying careful attention to the passage of time and the needs of those served. I think this is what is being asked for at the top of this page. -- WardCunningham
One thing that I continually see is programmers hand-rolling things that aren't important to the customer and for which off-the-shelf components are already available, and often better.
I continually see programmers working extremely hard to shoehorn in some off-the-shelf component that is already available, ostensibly better, but just a little different from what is needed. Quite often it takes far more work to make an off-the-shelf component fit your problem than it takes to write an exact fit yourself. Top programmers are best qualified to guess which would be the better route, so let them decide.
And I often see engineers investigating an off-the-shelf component; finding some trivial and/or stupid excuse why "it won't work" or "it's too complicated", patting themselves on the back, and then ReinventingTheWheel like they planned to do in the first place. -- ScottJohnson
Of course, if the component is truly small, maybe reinventing is better than shoehorning. But if it's that small, it probably isn't worth reusing. Things that are worth reusing are worth adapting (and probably adaptable with a reasonable amount of effort).
If the thing is truly not important to the customer, then you do not need one in the first place.
Only inexperienced developers think that perfect software is (a) always possible and (b) always worth the cost. Anyone who has ever been responsible for meeting a schedule and budget knows better. Responsible professionals should constantly be asking themselves this question, and bringing the issues to their managers when the answers aren't clear. See HelpYourManager
Our code is absolutely important. Period. How about that?
Of course that includes not blowing up the costs or the whole project, among many other things. But it also includes the fact that writing a bubble sort is as inexcusable as writing a garbage collector for the next bulletin board. At a minimum, the code should not be sloppy, even if it's only a report for the marketing department, or whatever else might be considered not that important. The idea that your code isn't that important is also dangerous, because there's no lower bound. And so it is a terribly convenient excuse for the lack of professionalism that is apparently overly abundant (or some see it that way).
In most cases, you should have a sorting library available; even C has one (though given C's lack of any sort of polymorphism other than PointerCastPolymorphism, generic algorithms are a royal pain in C). If, OTOH, the number of things you need to sort is very small, bubble sort is reasonable. I wouldn't bother writing a quicksort routine to sort an eight-element list; that borders on GoldPlating. I would, however, keep the sort code in its own routine, so that if I do need to replace it with quicksort (or something better) later, I can do so with minimal disruption.
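The "keep the sort in its own routine" idea can be sketched like this (illustrative Python; `sort_playlist` and the jukebox framing are invented for the example, echoing the MP3-jukebox scenario at the top of the page):

```python
def sort_playlist(tracks):
    """Single entry point for sorting. Callers never see the
    algorithm, so swapping the implementation later touches
    exactly one function."""
    return _bubble_sort(list(tracks))

def _bubble_sort(items):
    # Perfectly adequate for a handful of elements; O(n^2),
    # so replace with sorted() if the lists ever grow large.
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```

If every caller goes through `sort_playlist`, upgrading to the library sort is a one-line change (`return sorted(tracks)`) - which is the whole point of the isolation.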
Well, other people might think that the problem is in the opposite direction: writing overly complicated code for simple problems. Yes, that happens - IMHO far less often than just plain sloppy code, where "sloppy" is a euphemism. But when it happens, it is (usually) not because developers over-emphasize the "importance" of their code - on the contrary.
Let's pick a hypothetical but very real example (more realistic than the examples at the top of the page): a simple transactional web/database application. A bunch of overhyped developers will create an elaborate MVC/n-layered architecture, with session beans, entity beans, and the whole enchilada (you can throw your favorite O/R mapping tool in there). And in the end the whole damn thing is likely to crash on them: huge costs, poor performance, long development time, etc. If they'd employed my absolute favorite pattern, PutTheDamnDataOnTheDamnScreen?, they'd have finished a lot sooner with guaranteed good performance, maintainable code, and so on. So was it because they considered the code too important? I don't think so. If the quality of the code is really that important, PutTheDamnDataOnTheDamnScreen? is the first thing you need to think about and actually use. -- CostinCozianu
PutTheDamnDataOnTheDamnScreen? has absolutely nothing to do with what the original poster is talking about. IsYourCodeThatImportant is closely related to DoTheSimplestThingThatCouldPossiblyWork, with an emphasis on accepting ugly code that works, and not trying too hard to solve the general cases that the customer doesn't care about.
Well, ugly is one thing (although I have no idea what ugly code means other than a personal de gustibus kind of rambling), but Francis was talking about lowering quality (well, that's what sloppy and pokey apparently mean). So the question is: how low can we go? That has gone unanswered. And there's the interesting relation between the costs, the quality, and the professionalism of the developers involved, which has to be taken into account.
On further reflection: what's in question is not whether the code overall is important - if code is your job, it should be at least somewhat important, or you should find another career. What's important is realizing that of all the different metrics of quality - reliability, scalability, security, efficiency, etc. - some are more important than others, and that 100% quality is probably unattainable. Even if it were attainable, most clients wouldn't want to pay the cost of such perfect software.
To state the problem another way:
- In almost any software project, somebody needs to make decisions to prioritize certain aspects over others. This is because time and money are almost always insufficient to satisfy all demands.
- This process of prioritizing is extremely difficult, for a number of reasons.
- Some of the difficulty in prioritizing comes from the developers themselves, who have various emotional reasons for suggesting GoldPlating, PrematureOptimization, and the like.
Perhaps part of this might be a legacy of organizational problems: in lots of companies, developers are kept cloistered away from the clients. So a developer's sense of what's important grows separately from the clients' sense of what's important. To make things worse, developers often think that clients (and project managers) don't know what they're doing, which makes them more eager, subconsciously, to take more stewardship of the direction of the project.
"Subconsciously"? Just look at pages all over this wiki, many of which assume incompetence or worse in the project (and other) managers...
Well, in my experience that would cover about 40% of projects. YMMV, of course.
There's a valid concept called overtraining, in which you, say, train a muscle by lifting more weight than it will be required to lift in the activity being trained for. I view metaphorical "mission critical" in this light. What happens when you pretend that every mistake matters? I guess that depends on you, but for some people it results in a successful training effect, leaving them able to produce more and better than before. There are at least two reasons why this works when it does. First, it helps isolate and remove lazy or sloppy habits. That's the obvious one. Second, in all crafts (and we're agreed that software is craft, not science, aren't we?), there is value in slow practice. When you do something slowly enough to maintain full mental and physical control, you are also building the capability to do it faster without losing control. When you always do things a little too fast and a little out of control, you plateau rather quickly. I'm thinking about piano playing as I write this, but it applies to almost any physical ability that requires control. This may not apply directly to software development, since SD is not purely a motor skill, but in my mind's eye I see the connection.
We're talking about the quest for perfection. Let's not assume this quest arises from life threatening need to be perfect, at least not all the time. Art is partly about showing off the limits of human ability. The symphony played flawlessly. The high wire act. Fine marksmanship. It's not the code that's so important, it's us. So let's pretend.
Some people seem to be equating sloppiness with failure to indulge in GoldPlating. They are not the same. In fact, GoldPlating often causes defects, rather than fixing them.
One is failing to correctly implement the requirements as specified. The other is manufacturing requirements out of whole cloth, under the arrogant (and sometimes correct) assumption that YouAreGonnaNeedIt (but remember YouArentGonnaNeedIt). In many cases, GoldPlating invites sloppiness (and defects), when the gold is incorrectly implemented and turns to IronPyrite?. To make matters worse, many who are guilty of GoldPlating do so because they find it more interesting than working on (and UnitTesting) the more pedestrian requirements that are in fact in the spec.
A related fallacy is confusing several different types of "quality". One is freedom from defects; in the absence of a better metric, this means compliance with a requirements specification. (Many would say that the requirements spec is the ONLY measure of this form of quality.) Another definition, used by some, is closeness to some idealized set of requirements (which may or may not be useful to the customer). This is the definition invoked by folks who engage in GoldPlating; when confronted with their activities, they loudly insist that they are merely designing in quality, and that anyone who opposes their GoldPlating is thereby promoting development of an inferior product.
See also QualityElbow