Theoretical rigor cannot replace empirical rigor. In other words, "the rocket must fly", not merely possess an elegant design. Both are useful (if used right), but satisfying customer requirements, needs, or desires trumps theoretical purity or thoroughness.
Another way to say it is, "elegant models don't guarantee elegant results".
The reverse is also true: "the rocket must fly" for the right reasons. If you create a massive slingshot with which to launch the 'rocket', it really isn't a rocket.
If it works, the delivery mechanism may not matter to the customer.
Perhaps not. But if they asked only for a 'delivery mechanism', then even flight isn't required. Train, boat, teleporter, magic wand, etc. would do the job - supposing they work.
- I think you are focusing too heavily on the word "fly". It was an informal statement, not a legal document. Let's just say the customer will ask, "How can I get my package to the moon or orbit in the safest, cheapest, etc. way? (Other metrics listed below)."
However, it may fail his/her requirement that it not damage the payload. Sometimes the customer indeed forgets to include important requirements, which makes for a fun day in court. The "fly" description above is merely a shortcut for "efficient", "safe", "accurate", "reliable", "cost effective", etc.
(I know I've given the rocket illustration before somewhere. If and when I find it, I'll try to refactor it all.)
Evolution is probably the most empirically-driven process there is. The "blind watchmaker" of evolution did not give a flying bleep about elegant design. Survival and reproductive success were just about the only factors at play. Any "elegance of design" is purely a side effect of those two factors. However, this does not imply I am necessarily for evolution-driven design. It only illustrates that working and powerful designs can form from processes that have no elegance analysis. --top
The relational model proposed by Codd was a theory on paper. Table oriented programming is heavily based on the theory of tabular organization of data. Theory and practice are not separate the way you think. The relational model, a theory, was thought up both from practical experience of other models failing in business and from theoretical rigor. They are not separate. How can one even start to theorize about anything without practicing thinking? Even theorizing is practicing the mind (the more rigorously, the better). How could one possibly propose a theory that has no merit in practice? A theory with no merit in practice isn't useful - therefore they are not separate entities. The very idea of one "replacing" the other is ludicrous, since theory and practice are tied together, not interchangeable. All good theories have use in practice. Hypocritically, table oriented programming is all theory - not one complete table oriented programming development environment exists, nor does any TQL environment or language really exist. We'll leave Top with a quote:
You are putting words in my mouth. I am not saying they are mutually exclusive. They may indeed be related. But being related is not the same as being a substitute. A rocket designed with bad theory is less likely to be useful than a rocket with good theory. However, that does not mean that a rocket without solid theory is not useful, and it does not mean that theory ALONE is evidence that rocket A is better than rocket B. Also note that it is possible to design a rocket using genetic algorithms etc. where the rocket design "theory" is not even known. My main point is that empirical evidence should still be the final determiner of "good". If theory helps that goal, then good for it.
In relational's case, IBM created pilot projects in the early and mid 70's, and the people who used them liked them. Larry Ellison (Oracle) got word of their enjoyment, and the rest is history. If they hadn't liked it, we perhaps wouldn't be talking about relational now. A lot of pilot projects based on interesting ideas fail to produce enough enthusiasm to change history. Thus, the empirical metric of "customer satisfaction" was key to relational being established in practice. (It is not a very dissect-able empirical metric, but it is one nevertheless.)
A substitute means that you could replace one with the other. You make it sound like empirical rigor could be an excellent single measure of a product. But of course they are complements, not substitutes or replacements. The customer satisfaction of relational was a side effect of a good theory - although today relational is still debated by the Object Database and Tree Database people (who lack theoretical rigor!).
Example: since tree storage does not have ample theory behind it, and since XML seems to be doing excellently in practice today... would you say that XML is excellent, since there is obvious empirical evidence that it is better for customers? Relational is based more on theory than on empirical results and market satisfaction. EmpiricalRigorCantReplaceTheoreticalRigor. The market is indeed very happy with tree-based XML, and if the market and customers are satisfied - hey, who is to stop XML from taking over? It's what they want, and it's what the empirical evidence suggests - let's go for it? One of the other points is that table oriented programming has no empirical rigor - it is only a theory - so should we believe table oriented programming is useless until proven otherwise? Could table oriented programming even have been thought up without any theory? In other words, EmpiricalRigorCantReplaceTheoreticalRigor.
I won't make any claim that XML databases are "objectively worse". I would rather use relational DBs myself, but that may be merely a personal preference. If some people think better in XML, I cannot change that. Their code for handling XML navigation may be "messy" and "goofy" to me, but they may think it's wonderful. I learned the hard way to avoid PersonalChoiceElevatedToMoralImperative. If Martians think better using mass GoTos, that's the way it is. --top
With this sort of attitude, we would still be using GOTOs, and we wouldn't have pushed hard for WHILE/FOR/REPEAT loops, structures, and sane control of our programs. Would we be better off today if we had just let the GOTO psychology of all those GOTO programmers take the lead? There were even emotional articles written at the time about how GOTO was so great, and about how anyone who doesn't like GOTO is a weakling (RealMen?). These psychological and emotional states of mind lack rigor, which is what the page title mentions twice! Taking on a "whatever works best" attitude doesn't imply rigor at all. This page should be called TheoreticalRigorCantReplaceEmpiricalThingsThatWorkOkay? or something less rigorous. You are not defending rigor, Top; you are defending psychology and personal preferences. You are taking on a very lax "whatever works" attitude. This is not rigor.
I'm not sure what your goto comparison is meant to show. Nobody ever proved nested blocks were objectively better. People adopted them largely because they liked them more than gotos, not because math proved them better. And those who resisted may have learned, over years of experience, how to use gotos well enough that other goto fans could read and modify each other's code. We have no hard data either way. I will not judge them in an absolute way without hard facts. This is not "lax", this is rationality. I've seen others read and edit code well that I thought was atrocious (not necessarily from gotos). How the hell they did this, I have no fricken idea. It's amazing how differently people think. You need to make sure you are not guilty of PersonalChoiceElevatedToMoralImperative. I don't see any evidence for that in your writing so far. --top
Gotos instead of while/for/etc., where the latter will serve, are objectively worse under the microscope of PrincipleOfLeastPower. Unrestricted 'goto' also carries considerable security risk when allowed within mobile code, because one can 'goto' places one isn't authorized to enter; additionally, considering that programming errors can and will occur on just about any available degree of freedom (MurphysLaw), errors with 'goto' have strictly greater potential to be difficult to isolate. These are deductive, absolute, logical truths.
- The issue at hand is not jumping outside of a routine, for even many goto-oriented languages didn't allow that.
- Since you apparently didn't catch it, the issue at hand is that you need a set of principles or axioms by which to determine 'better' or 'worse'. Otherwise one could say: "X is better because it creates more work for me and everyone else" and "Y is better because it leads to slower compile-times which means more time to goof off" and "Z is better because most people don't understand it which means better job security for me". But in answer to your comment: many goto-oriented languages didn't have 'routines'. They just have a stack and jumps - 'goto' being a jump. 'goto' can, after all, replace routines.
These are also reasons that people "liked them more". But you do need to choose a microscope - an initial set of axioms or principles for what means 'better' or worse - otherwise you could say that "higher potential for errors implies better" and "slower implies better" and "greater security risk implies better". Every time TopMind starts waving his hands and flapping his jaw on this subject and using big bold words to claim how 'rational' he is being (as though his claiming it makes it true), I consider him somewhat irrational for his failure to do the rational thing: first develop and describe a set of axioms and principles as to what can reasonably (i.e. for good reason) be considered 'better'. TopMind must have such principles: in the discussion on ProgrammingLanguageNeutralGui, TopMind stated that HTML+AJAX aren't "good enough", but they
- I have yet to see a rigorous analysis. Thought experiments are not sufficient. The hardest part is comparing all possible goto-based algorithms to all possible non-goto-based algorithms.
- You claim this, but it isn't true. Degrees of freedom can be determined rigorously by logical case analysis, which is essentially "thought experiment". EmpiricalRigorCantReplaceTheoreticalRigor, especially with programs and programming languages which are, in every sense, applied mathematics. Enumeratively comparing all possible goto-based algorithms to all possible non-goto-based algorithms is a pointless exercise on the same scale as comparing all integers to all rational numbers.
are technically TuringComplete and capable of the necessary communications.
I am not sure what your point is. The implication is that I am a hypocrite. I may have "personal" axioms, but I don't claim them objective. ProgrammingIsInTheMind, and every mind is different. I used to think that people processed programming thoughts similarly to me, but learned I was flat wrong. As for making claims about AJAX, those are more to exchange ideas about the topic than to provide absolute or formal comparisons. It only becomes an issue when somebody claims or implies that technique X is objectively better than technique Y. When I say "better" in the AJAX topic, it is not meant as a claim of objectivity. --top
If you're going to make claims that something isn't "good enough" without any more basis than you complain about in other people, then you are, by definition, a hypocrite. Your hypocrisy is actually very well known on this wiki; you'd be better off just admitting to some hypocrisy than attempting to deny it.
I do flat-out deny it. I do not demand that everyone provide objective proof JUST for using the phrase "better". Rather, it is those people who claim their HobbyHorse is absolutely better. A typical dialog goes like this:
A: X is better.
T: May I request evidence?
A: [evidence given]
T: That seems to depend on subjective assumptions, such as [...]
A: No it doesn't. It is objectively true because [...]
T: That is an "elegant theory" argument, not an empirical argument. Elegant theory is not sufficient.
[typical design-versus-empirical debate ensues...]
I would only be a hypocrite if I demanded objective evidence when people merely used the word "better".
You're a hypocrite because you believe you have the right to say something is better and "not (explicitly) claim it objective", and you don't afford others the same assumption.
But it seems you're attempting to defend your hypocrisy with another of your gross character flaws: your tendency to use hand-waving defenses when ultimately challenged on your apparently baseless opinions. If you are going to claim things are 'better' without proper defense, you can't demand more from others without being a hypocrite AND your opinions have no place on this wiki - they should be deleted. And in these "typical dialogs", it seems you are often mistaken about what constitutes a "subjective assumption" (absence of universal agreement doesn't make an assumption 'subjective').
And 'ProgrammingIsInTheMind' isn't a valuable contribution here - it does nothing to aid in analysis of whether a particular solution is correct or incorrect, better or worse. Mathematics and models are also "in the mind" in the same sense as programming; it is well known that all minds aren't equally 'good' at mathematics, so to imply that all minds have equally 'good' solutions in programming seems utterly fallacious.
With regards to the "I don't claim them objective" - principles and axioms don't need to be universally agreed upon to be 'objective'; they only need to be capable of analysis independent of your (or the observing individual's) personal state-of-mind - so if your claim has more weight than "AJAX+HTML aren't good enough because I don't believe they are good enough", your principles are probably objective whether you bother to claim them so or not. Even "better because it leads to longer compile-times which means more time for me to goof off" is objective. OTOH, I wouldn't be at all surprised if you were just waving your hands and flapping your jaw and making claims that things aren't "good enough" with zero weight behind those words (AdVerecundiam from top = zero weight) - it would neatly fit my mental model of you.
- That is not true. It is only when they push the "universality" of their idea that I demand specifics. I don't claim universality anymore, thus they cannot do the same to me even if they wanted to "catch" me. You have to remember that a lot of these debates started a long time ago. Thus, the context had already been established. If you missed the place where universality was claimed, then it may look like I am a hypocrite.
- I believe you often infer claims of "universality" where none were implied. I recall a large argument regarding this tendency of yours ChallengeSixVersusFpDiscussion.
- I don't remember that being a common complaint. Where is it? Like what percent of the way down? I do remember somebody saying something like, "but I'm not the one who claimed it objective". If that's the case, THEY are off-topic, because the very topic was based on that claim, and thus the fault is not with me. You may have made a mistake in your accusations, in which case I deserve an apology.
- Look for the part where they start accusing you of 'Lies'. The original claimant had a contextually limited claim to looping constructs - your extrapolation to 'universality' was in error from the start.
- If you mean near the string "My evilness has been exposed", that is just as I described it above. --top
- I believe that you believe it is just as you described it above.
- Note that one person was talking about only loops, but the original claim was strong and general (as I remember it), not just about loops. However, I cannot find it currently, thus it's not worth arguing about until found. --top
I will agree that if both parties agree on root idioms, then the results are "objective" based on derivations from those roots. However, in practice such agreement is rarely the case. As for the worth of AdVerecundiam, if you don't want to hear my opinions, then simply ignore them. I'm not forcing you to like my opinions. But I do have a right to state them as much as any wiki participant. (Related: EvidenceTotemPole)
Objectivity is independent of 'agreement' on root idioms. And this wiki isn't supposed to be a place for faith and fallacy or unsubstantiated opinion. If you can't or aren't willing to at least attempt to support an opinion with objective and cogent reasoning, it should be deleted. That goes for you as much as any wiki participant.
If software engineering is mostly about psychology and you wish to avoid talking about psychology due to some wiki rule you envision, then we'd have to delete roughly 95% of this wiki. Think about it. Type theory, relational theory, etc. all depend on base idioms, assumptions, and/or notations that are not necessarily properly tied to physical reality. They are based on simplified models of reality. --top
Objectivity doesn't rely on being "tied to reality", either. It seems you STILL don't comprehend the proper distinction between 'objective' and 'subjective'. A principle or axiom or idiom is 'subjective' only if its application depends upon the state of mind or interpretations of an otherwise omniscient observer of the axiom or principle. All proper mathematics axioms are objective even when they don't have any basis in reality, and mathematical predicates are objective even when they are not 'decidable'.
They are only "objective" within their made-up little universe.

No. "Objective" is not relative to a made-up little universe even if you claim it to be so. I can't think of any defined properties that suddenly become 'subjective' when applied outside the universe for which they were developed, except perhaps when talking about 'strange' or 'excited' or 'happy' or 'angry' particles and similar cases where the words have different meanings due to a change in context.
It just so happens that we prefer to select objective axioms that are 'useful' to us through their being tied in some reasoned way to reality. But the objectivity of any predicate or measurement is independent of its utility and, further, independent of its applicability given limited cognizance and sensory perception. Principles and axioms for what is 'better' that are actually useful, of course, must be tied to reality via some mechanism that can infer, deduce, or (in rare cases) directly measure a property - but even here 'measurement' isn't good by itself: if I were measuring how much time something adds to the compile on the principle that longer compiles are better, that doesn't do much for someone who disagrees with that principle in the first place.
And your strange hypothetical argument doesn't strike me as very cogent. If, as you wave your hands and hypothesize, software engineering were mostly about psychology, then it wouldn't have much at all to do with the problems being solved or the constraints on how one can go about solving them - i.e. you could solve 95% of every software engineering problem without looking at the problem, the environment, etc. Software engineering, however, is not mostly about psychology; it is mostly about ways of correctly and efficiently solving problems under known or reasonably assumed constraints (such as the requirement to be provably correct, or a cost increase by an order-of-magnitude to make changes after deployment, or the need to meet a particular schedule given a certain set of human resources, or the need to scale up to one million concurrent users).
Most of the cost of software engineering is about maintenance of the code, not about "provably correct" output. At least in my domain. If provable correctness were economical, then businesses might buy into your pet contraptions. Until then, your pet contraptions are a solution looking for a problem. Face reality, dude, and realize you are out of touch. --top
If you need to do much maintenance, the costs will be there - fixing something in maintenance is (according to mounds of evidence collected by the CMMI group) about 10x as expensive as fixing it during testing, which is again about 10x as expensive as fixing it in design and coding, which is a couple times again as expensive as fixing it during requirements analysis... and 40% of bugs (again, per the CMMI group) can be traced to design and requirements analysis. It's nice to know your approach (create fast, fix, fix, fix) focuses on milking the most money possible out of a business.
You go on and on about CodeChangeImpactAnalysis because you spend most of your time fixing stuff that wasn't working to start with - your domain as you see it, your approach putting the costs into maintenance. Perhaps your hands are somewhat tied by shifting requirements, but then you can focus on making it cheap to maintain or adjust the behavior - increasing flexibility. Getting it right or flexible the first time are the cheapest ways to write software. These things rely on theory, and a great many businesses ARE buying into that truth.
- For clarification, this is not my technique nor goal. --top
I'm not sure I agree with this. Sometimes people just know they need something, but are not sure of the requirements or how to describe them. They are domain specialists, not systems analysts. And often you find stuff during the programming step that is hard to anticipate during the design phase. When forced to define task details and then test them, you sometimes discover issues that could not easily be found merely by thinking about them. (Related: BigDesignUpFront)
Further, deadlines are deadlines. If they don't give you enough time for thorough analysis, then that's just the environment one has to work with. Politics happen. This is another example of you ignoring the nature of the real world. If your theory assumes an ideal world, it will flop.
Further, the finance *theory* of FutureDiscounting suggests that short turnaround times are objectively better. I tend to somewhat disagree with FD for software engineering, because "infrastructure" needs more thought than, say, store products, in my opinion. But I cannot justify this reasoning in a very clear way so far; it is a gut feeling. FD (finance theory) may be a case where theory bites YOU this time. How ironic. Be careful which weapon you choose, because it may be turned against you. --Top
You make '*theory*' bold as though theorists believe theories and models merely by virtue of them being theories and models. That idea has been contradicted directly by my words and those of others, and it's such a stupid concept that even you should know better than to believe or assume it. Nice hand-waving straw-man approach to argument, though. Even the "if your theory assumes an ideal world" bit - just waving your hands, spouting hypotheticals, and setting up straw men to burn down.
If there is a theory vetting process you can recommend, please state it. Finance theory is well accepted in the business world, I would note. One vetting process I mention in BookStopVetting is to use empirical studies to test theories. In other words, it helps if both kinds of evidence reinforce each other. --top
I have stated it often enough, but obviously your memory is as selective as your arguments are fallacious. Theory vetting process: a theory must make falsifiable predictions; a theory must pass OccamsRazor; a theory must not be falsified within the domain to which it is being applied (which automatically requires internal consistency - inconsistent theories falsify themselves even without observations); and a better-tested theory (one with more data points that has not been falsified) is preferable to speculation. Theories don't need to make useful predictions to be good - only to be useful.
But like I already said, how to apply these to software engineering is a messy art.
Yes, you keep claiming that. Where's your proof? I see a lot of decent and straightforward application of the above in every domain I've ever worked in. I feel you're waving your hands and inventing a problem where none exists, and your only support for doing this is that you seem to think that all forms of science must involve numerical metrics - a notion most often defended with much waving of hands.
You are talking anecdotes. You cannot claim that something is objectively *net* better and then ONLY defend it with anecdotes or focus only on a single factor.
I'm not talking anecdotes.
And "useful" is also relative.
Absolutely. GoedelsIncompletenessTheorem ain't all that useful for screwing in a lightbulb or even predicting when it will burn out. One of the neat things about a theory is that utility can be determined largely independently of figuring out whether the theory is invalid (albeit, both utility and validity must exist within the domain of the theory).
For example, "provably correct" programming may be useful if it were the only criterion, but if it costs 10 times as much as the alternative, many would not consider it "useful" to them. (I know you will disagree with the "10x" figure, so just consider it hypothetical at this point to avoid re-entering that issue.)
Now you're considering something entirely different. Utility != Economy. Provable correctness is still "useful" even if it costs 10x as much. It wouldn't be unreasonable to say that provable correctness is always 'good' if one can achieve it - i.e. if it were always free, you'd always take it. Of course, it isn't free and its total value and its cost both depend on the domain of application.
The implication was that its usage (and heavy typing) was objectively better OVERALL and that those that don't use it are lazy and unprofessional.
Provable correctness is always better (OVERALL). Costs are always worse (OVERALL). Sometimes there's a tradeoff: costs for correctness. Sometimes there is not. You regularly exaggerate the costs of correctness, doing much handwaving and tossing out hypothetical numbers like '10x'. That behavior certainly is lazy and unprofessional.
Of course we can boost specific metrics with specific techniques, but good software is a symphony, not a single golden violin.
[Whoever keeps deleting my replies without asking is being rude. Please stop or I shall consider retaliation. --top]
You keep injecting comments of no value at all (like "where did I say that!" when I didn't imply you said it, or "this doesn't contradict anything I said!" when I wasn't trying to contradict anything you said - just a fool's flailing, looking for something to fight). That is quite rude, too. Please stop, or I shall continue deleting them.
Should I return the favor and delete your comments which are of "no value at all"? If you added information that is not related to something I said, which is what you claim here, then it could be considered "no value" if not related to the topic. "Milking the most money possible out of a business" is out of the blue and off-topic, and should be deleted by your own criteria. If I merely point out that I didn't say something, for clarity's sake, then it still serves the purpose of clarification.
- I find it more likely that the interjection itself DEFEATS any attempt at clarity. It divides paragraphs, breaks up thoughts, and is (in general) a bad thing unless some real value can be gained from it. As for the "milking the most money possible" - you're the one who put forth the notion that most of the costs - i.e. most of what you charge your customers - are in maintenance. Not me. I'm simply pointing out that you can get (according to CMMI group statistics) two orders of magnitude more money by milking that back end than you can by making the system more flexible or more correct from the start. For you, that's a good thing.
- But FutureDiscounting, lack of analysis time granted by owners/managers, and a domain that is just plain poorly understood beyond domain expert intuition are three counter-factors. Sometimes you just have to "do" before the problems are found. Talking and thinking only go so far. If my manager does NOT let me analyze the project for sufficient time, then I don't have a fricken choice. I work within the constraints given.
The way you worded it *implied* it is something I said, because you used the word "your". What other purpose could "your" serve there? Defending against a probable implication that I try to rip businesses off is a WORTHY complaint; it is NOT "flailing, looking for something to fight" - it is a brash and ugly accusation. I wanted to make clear that it was not something I said. To be on the safe side, I generally do not outright delete other people's comments. I find it highly rude and risky. I hope you return the favor. If you really feel strongly about deleting something, ASK the author, along with the reason why. --top
My 'words' only had implication as to your actual behavior - because I don't believe the behavior you profess to and the behavior you actually perform are at all identical (i.e. I consider you a first-class hypocrite). So to complain "where did I say that?!" is completely irrelevant, and, regardless of what you inferred, neither I nor my words implied you ever said it. And anyone who reads looking for 'contradictions' then gets upset enough to interject a comment when they fail to find one is most likely a fool and is almost certainly spoiling for a fight - I believe you're no exception. A wiser person would accept partial agreements for what they are and examine them in the context of the larger argument, rather than break the argument into tiny little chunks as though every single piece needs a reply.
Your justification here is indirect and roundabout (like most of your "logic" - OccamsRazor my ass). The accusative interpretation I provided is very possible given the way it was actually worded. I believe a majority of a jury would agree with me. I'd bet money on that.
You infer. I imply. You don't determine my implication via your interpretation, accusative or otherwise; you can only attempt to figure it out. I believe a jury of English experts would agree with me. I'd bet money on it.
- The audience of this wiki is NOT English experts. If your wording can produce confusion in ordinary readers, then I have a right to point out potential misinterpretations. You are inventing weird, convoluted goalposts again. Anyhow, just don't delete others' replies without asking, and it won't be a goddam issue.
- [IsYourRudenessNecessary? Do you have to flame people and this wiki with words like goddam? It seems you preach this "IsYourRudenessNecessary" advice often, so here it is RightBackAtYou.]
- It was an expression of frustration. Your communication style frustrates me. And it was not directed at you personally. I did NOT say "goddam you"; rather, it was directed at the issue. Thus it's not in the same category as name-calling (at least in my ranking of "sins"). On a communications-sin scale from 0 to 9, I'd rank direct name-calling about 7 and what I did at about 6. Deletion without permission is an 8. --top
You perhaps cannot see that because you are strange and think differently from normal people and are offended in a non-normal way. Anyhow, I have a right to make it clear that something does not represent my opinion and to prevent possible misinterpretations. If you disagree, then we'll just have nasty EditWars when you misbehave like that, and we'll never get to interesting arguments - instead we'll ArgueAboutArguing? all the time. If you don't like the interjection style, then say so without deleting first. I'll try to use a UseNet-like style where the quotes are repeated. Personally, I find that it violates OnceAndOnlyOnce, but am willing to compromise. --top
We've been over this already when we had our fractal, never-ending LaynesLaw fight over "usable objectivity". The practical issue is tying such models to reality, which is usually the sticky part. This brings us right back to the crux of THIS topic: elegance within the model versus elegance for reality. AKA the "ivory tower" fight.
Tying a model to reality isn't difficult. Models make predictions. Scientific models make predictions that have potential for falsification. If a scientific model is applied and its prediction is not falsified, it counts as a datapoint in favor of the model. Ivory tower guys win because their models are tested with every observed application. They choose elegant models because they aren't worse and are simultaneously easier to use - OccamsRazor - from which one can infer that reality itself is largely elegant (albeit admittedly less so when it comes to human policy). The only things to watch out for are models that can't be falsified, and models that have been falsified, and pure speculation - i.e. unsubstantiated opinions that haven't ever been tested. Even ivory tower guys don't like pure speculation (except when brainstorming).
I disagree that being an ivory tower model automatically makes it "better". There are good ivory tower models and bad ivory tower models. The best way to tell which is which is to empirically test them. QED. --top
Since I never implied that an 'ivory tower model' is 'automatically' better, it seems inappropriate for you to 'disagree'. Regarding your "best way to tell which is which", you're wrong (unless you think it is "better" to waste tons of money testing models). The cheapest ways to test models are with models of models (e.g. searching for inconsistencies), with principles (e.g. OccamsRazor relative to existing models), and by seeking known contradictions with past recorded observations. Empirical testing should happen only if a model passes all the cheap test criteria. If you start with empirical testing, you'll just waste a lot of money testing models that most likely won't prove useful.
It is easier to run semi-realistic tests on physical models than on programming-related stuff. This is because ProgrammingIsInTheMind, and we don't have good mind models (yet). This is where the rocket analogy tends to break down.
It is more likely "because" programs are more complicated than most physical things. As for your "we don't have good enough mind models (yet)": can you prove the existing mind models aren't 'good', or are you just blowing more hot air, waving your hands, and declaring it to be true? Or is it because you, who thumbs his nose at academia, have been actively avoiding learning mind models and the observations and logic that accompanied their construction, such that you are ignorant and incapable of judging any 'good'? I, personally, consider the ActorModel, the ChomskyHierarchy, the RelationalModel, many of the AI models of axioms and fact databases (e.g. predicate calculi & logics), design by contract, CapabilityMaturityModelIntegration, Workflow models, SecurityModels such as CapabilitySecurityModel, etc. to be 'good' models. Can you even present what you (in your typical hand-waving, hot-air, speculative and hypothetical fashion) would consider a strong case to convince me that each of these are not good models?
Programming is not for the mind - it is for the customer. Programming is obviously done in the mind - because it is done by humans. That is obvious. What's important is to model the end application on what the customer needs (hopefully not just purely what they want, although that helps, sadly). You can ask a console zealot to build an application for the typical customer, and he'll talk about how his mind prefers the console and that's the way he's doing the application because he loves the console and that is in his mind - but if the customer doesn't want a console, then "programming in the mind" and "programmer psychology" don't help one bit. Maybe the customer wanted a GUI. If you are lucky enough to find a market where all your customers want exactly what you psychologically want, then you are lucky. For example, imagine a customer that requests a bunch of CRUD screens, and nothing else.
Further, often the "inconsistencies" of the domain itself overshadow any theoretical purity inconsistencies. In other words, the "noise" from the domain itself (or in one's understanding of the domain) may overshadow any problems caused by lack of theoretical purity. Idealists have a habit of ignoring this. It's sort of a form of the SovietShoeFactoryPrinciple. They focus so heavily on preventing ANY detectable theoretical inconsistencies that they fail to consider how well a design deals with a messy or imperfect environment.
It's roughly comparable to an audiophile purchasing a $5,000 top-of-the-line car stereo system, but installing it in a Chevy Malibu such that the noise from the car overwhelms the purity gained from getting the top-of-the-line stereo. The audiophile is purchasing purity for purity's sake: for the mere feeling of "having the best". Some of the wiki zealots around here are the software-design equivalent, with the same kind of snooty arrogance. The cheaper stereo is easier to install, saves money, is easier to operate, and is virtually indistinguishable from the expensive one when going 55 MPH on Old Hill Road in the 1998 Malibu. A decent empiricist has a better grasp of reality while the audiophile waves around the frequency graphs from the German lab test.
The materialistic, product-oriented, physically obsessed person will buy the $5,000 stereo, while the person who's done research and theorized will understand that once the stereo is above a certain wattage, the extra watts won't have much use. Most of the time the volume is set on medium or low anyway. The non-theoretical person obsessed with products, peer pressure, etc. will buy the 800-watt stereo and never use more than 120 watts in reality. Hood insulation and interior insulation will help your Malibu fend off engine and road noise. Additionally, and evidently, theoretically smart PhD-professor-grade people, or intelligent programmers, are not too often seen driving around in "boom cars" with stereos worth $5,000. Rather, these snobs you speak of listen to lighter classical music, rock music that isn't "death metal", etc. In fact it is the physically obsessed, unintelligent teenager or "twenty-year-old" crowd (especially rap fans) that purchases these five-thousand-dollar stereos. This humorous stereotype is coming from someone in their twenties - so don't assume it is a biased comment coming from some old man with a PhD looking down on teens and twenty-year-olds.
- This does not diminish my original point. It is difficult to tell those using theory for proper purposes from those using it for questionable purposes such as bragging points or satisfying a personal obsessive disorder. Empirical testing is one of the better techniques for distinguishing between the two. To summarize, here are possible reasons why theory may be insufficient or inappropriate:
- Used for mere "bragging rights" regardless of fitness for task.
- A personal desire or compulsion for fastidiousness or "purity".
- Missing other factors not thought of or difficult to measure that would eclipse the factor measured by the theory.
Perhaps a more relevant analogy for this audience is to buy triple-error-correcting-with-wide-parity RAM when running Microsoft Windows. The errors from Windows will swamp by far any problems caused by cheaper RAM, from a statistical perspective.
Perhaps analogies usually fail and can be shot down fairly easily. It's hard to equate RAM and Microsoft Windows with theoretical or empirical rigor. Besides, if the academic was using such pure technology as this RAM you speak of, he'd probably not be using Microsoft - he'd be using an academic operating system such as Minix, Oberon, or something more secure or robust like OpenBsd. Society is far from pure though - we see all sorts of ranges of people who use all sorts of impure tools throughout the day. Several doctors eat junk food and smoke - yet they are very interested in helping people's health at the hospital. There are clashes.
Nature under heavy empirical pressure often skips ivory tower models also, as in the RelationalEvolutionPuzzle. -- top
I can almost see you waving your hands in the background as you make that claim. Let me know when you have evidence that nature is "skipping" models that isn't predicted by a simpler or better tested theory.
Please clarify. What "simpler or better-tested theory" are you talking about?
Damn this is frustrating - you should already know this stuff.
Any simpler or better-tested theory will do. If you have evidence, it doesn't help you promote your new theories (like "nature skips models") unless it doesn't fit existing models - that's life as a scientist. Additionally, you must pass OccamsRazor - so any simpler theory will do even if it doesn't exist yet. An example simple AND better-tested theory is that involving local maxima preventing most evolutionary divergence - i.e. abductively, we can infer that associative-memory neural networks (what we have today) are already at a local maximum, or we would see a much wider range of brain structures in different animals.
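The local-maxima argument above can be made concrete with a toy hill-climbing sketch. The fitness landscape and all numbers below are invented purely for illustration (not a model of any real organism): a greedy searcher that only accepts improvements gets stuck on the nearest peak even when a taller one exists.

```python
def fitness(x):
    # Invented two-peaked landscape: a local peak at x=2 (height 5)
    # and a taller global peak at x=8 (height 10), with a valley between.
    return max(5 - (x - 2) ** 2, 10 - (x - 8) ** 2)

def hill_climb(fitness, x, steps=100):
    """Greedy ascent over integer positions: move to the fitter neighbor,
    stop when neither neighbor improves. Like selection without large
    mutations, it cannot cross the valley between peaks."""
    for _ in range(steps):
        best = max((x - 1, x + 1), key=fitness)
        if fitness(best) <= fitness(x):
            break  # no uphill neighbor: stuck on a (possibly local) maximum
        x = best
    return x

print(hill_climb(fitness, 0))  # starts near the lower peak and stays there (x=2)
print(hill_climb(fitness, 6))  # same rule, but this basin leads to x=8
```

The same greedy rule produces different outcomes depending only on the starting basin, which is the abductive point being made: staying at a local maximum requires no "elegance analysis" at all.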
- Often the problem is your poor wording, not my knowledge. Reading your convoluted style is also "frustrating". --top
- My wording was quite straightforward: evidence "that isn't predicted by a simpler or better-tested theory". Any scientist knows what that refers to. Hell, it comes with a huge context involving mentions of OccamsRazor and falsification of existing theories - even a non-scientist should be able to get it with contextual clues. I get frustrated, too, when I read stuff beyond my education, but I often feel you remain obstinately ignorant of both your own flaws and of anything you might learn from others.
- I know how science fricken works, but I'm not sure how that relates to my statement. Part of the problem is that it's not clear whether the clause starting with "that" is modifying "evidence" or "model" or something else. What in our context is being "predicted"? You don't generally predict models; it's the other way around. And "predicting evidence" doesn't make sense in our context. I'll show you a better way to word it (at least to me) when I finally figure out what the hell you were trying to say. --top
- You claim you know how science works, but it seems like more hand-waving hot air whenever I read what you have to say about it. And "isn't" can't refer to "models", so the second "that" clause MUST refer to "evidence". And evidence "predicted by" a model is clear: a model makes predictions, and what it predicts is something else that should be evident (should you search for it or run the test) if the model isn't to be falsified - a person educated in the sciences should understand that. Is English a second language for you? Or is it one more thing you thumbed your nose at when receiving an education?
- When somebody has difficulty with a sentence or paragraph I write, I simply find a different way to say the same thing rather than insult them. It usually works. Instead of doing this, you insist it is sufficient as is and call me "dumb". That sentence is goofed up. I'll read it again in a few days to try a fresh angle on it. I don't want a lesson in science, I want to know what your sentence meant to say. I'm asking for an equivalent statement that is worded differently. Here's a suggestion, break it up into two sentences such that each "that" clause is right next to what it is modifying. Then make sure the linkage between the two sentences is clear. --top
- You didn't ask for a clarification of the sentence, and you didn't indicate you had difficulty with it. What you indicated is that you wanted a specific "simpler or better tested theory" ("What 'simpler or better tested' theory are you talking about?") which only indicates you don't know crap about science. If you want to be treated intelligently, ask what you mean and don't pick on correct English (I'll gladly fix incorrect English). As for a rewording, consider: Let me know when you have evidence that nature is "skipping" models where, simultaneously, this evidence isn't predicted by simpler or better-tested models.
- (Continued below)
Well, that's one theory for the RelationalEvolutionPuzzle. I'll leave it at that because it's already discussed in that topic.
But you should get evidence before you worry about simpler or better-tested theories, and I very seriously doubt you have even that. I've never seen TopMind apply theoretical OR empirical rigor. He spends his time, as above, waving his hands, blowing hot air, speculating, hypothesizing,... and all the while complaining about how other people aren't being 'rigorous'.
Perhaps because I value it less than empirical or psychological factors.
I very clearly said I've never seen you "apply theoretical OR empirical rigor" which means, to my observations, you apply neither. Replying that you value it less than empirical factors makes no sense whatsoever.
- I admit I missed the "empirical rigor" portion of your statement. However, like I've said many times, nobody with an aggressive HobbyHorse on this wiki has done such. Thus, it comes across as a double standard. Further, there is a 3rd component: personal psychology. This is generally not subject to "empirical rigor". Psychological tendencies on a large scale (average) are to some extent, but that's still limited. Just because a factor is hard to "rigor-ize" does not mean it's not important. --top
Theorists are often guilty of the 3 things listed above, including the SovietShoeFactoryPrinciple.
Empiricists are the only ones that are even possibly guilty of the SovietShoeFactoryPrinciple, as it involves playing to numbers when real measurements are taken (a sort of HeisenBug in measurements). Theorists are free to take it into account as part of the theory (it is common economic theory that people act based on the motivators). And I've no doubt that there are theorists who are often guilty of anything you might list, but I've equally no doubt that the same applies to the empiricists - they're all people.
You seem to want me to counter your theories with theories of my own. But I am outright complaining that theory is not of much use so far in software engineering. You are creating an artificial obligation for me.
I believe that theories have already proven themselves in software engineering. So does anyone who practices on the PortlandPatternRepository. We find evidence for the theories we support repeatedly in our own work and in observing the work of others - not strong absolute laboratory evidence, true, but mountains of evidence nonetheless. You don't need to measure statistically how many people get killed if they walk blindly across streets to 'prove' the theory that walking across streets blindly is a stupid thing to do. Theory has already proven to produce good real-world results: it was proven to me when I moved from low-level languages to high-level languages. It was proven to me again when I started dealing with security concerns and migrating code. You've got BurdenOfProof to challenge existing beliefs and existing theories. That is a matter of fact. Complaining about "artificial obligations" just proves how out of touch with reality you really are.
When theory proves that it produces significantly better real-world results (more productivity, more profits, etc.), and we find ways to distinguish good theories from bad/impractical theories, then theory will have credit. Right now it doesn't. Being mentally "elegant" is just plain insufficient. That is NOT "hand-waving", it is expecting that a technique for finding practical solutions be proven first. --top
Mental "elegance" was never proof of a theory or model. It's simply a selector for a theory. If two models exist that make the same predictions, choose the more elegant one: OccamsRazor. So stop waving your hands and pretending theorists are using 'elegance' as proof. How many more straw men will you burn in your fallacious arguments?
What is "make predictions" in terms of software engineering? Being able to reason about the outcome of some programming or query code? "Reason about" is a psychological phenomenon.
Ah, you'll burn at least three more straw men. Software engineering is about efficiently building a software product that meets a set of requirements while operating under a given set of constraints. Software engineering models and theories, therefore, are those that "make predictions" as to how certain approaches will meet requirements while working under constraints. Within software, theories and patterns "make predictions" as to the successes and problems you'll encounter tomorrow based on the decisions you're making today. You could probably measure it empirically, but you'll need to wait until you've failed or succeeded... which could be an expensive proposition!
This is incomplete. "Building" is only part of the issue. Maintenance is often more expensive than building. Second, "meets a set of requirements" is not quite the issue when the requirements are fuzzy, which is often the case in projects I work on. If the requirements were documented well up-front, many companies would just ship the job off to a low-wage country.
I suppose you can attempt to fuzzily divide 'maintenance' from 'building', but maintenance is (in every important sense) just late cycle development. And while taking software product A and making source-changes C is often an efficient mechanism of building software product B (and therefore one that is often favored by software engineering practices), there is fundamentally nothing stopping you from starting from scratch on every single software product you build (even for a slight change in requirements). Anyhow, there is little need to sputter about requirements issues - the fact that requirements may shift and aren't always entirely clear is well known in software engineering, and most software engineering models accept as axioms that requirements will shift some and require clarification. It is simply taken into account - one more normal part of building the software product. Most software engineering methodologies even have dedicated phases for requirements clarification and incremental extensions (spiral model, user stories, etc.).
Three Types of Evidence and Their Pros/Cons

Theoretical Evidence
- Pro: Relatively easy to test.
- Con: Often difficult to relate or translate into "real world", AKA "customer-side", metrics.

Empirical Evidence
- Pro: Closer to "customer-side" metrics.
- Con: Resource-intensive to test.

Personal Psychology (Comfort) Evidence
- Pro: Closer to "customer-side" metrics.
- Con: Very difficult to formulate relevant and complete tests.
You got the pros and cons backwards for both Theoretical and Empirical evidence. Theoretical evidence (by which you must be referring to properties inferred inductively, deductively, and abductively by use of models or theories) is usually easy to relate to the real world (you wouldn't bother inferring properties that you don't much care about), and is often quite close to 'customer-side' concerns. The 'con' would be that it's difficult to test or measure directly - generally because things that can be measured directly people, very simply, measure directly rather than infer. Empirical evidence refers to data measured correctly. The 'pro' is that the measurement isn't theoretical (though it may still be in error). The 'con' is that it is rarely close to "customer-side" metrics, because it is rarely possible and sufficiently cheap to measure the exact things the customer cares about - one generally ends up measuring what is easy rather than what is right. When it is possible to easily measure the right thing, of course, one should do so.
Bickery Stuff Continued
Re: If you want to be treated intelligently, ask what you mean and don't pick on correct English (I'll gladly fix incorrect English).
Being correct is not sufficient, any more than a BrainFsck program is acceptable merely because it produces correct output.
Welcome to the Internet. No one here likes you. (http://www.beska.net/welcome/ - especially note the part starting with "How dare you!")
I don't demand you act intelligently, only that you act honestly (take care to say and ask what you mean) doing your best to be correct (you're responsible for all your own homework, all your own footwork, and all your BurdenOfProof for any claim you wish to make) and avoid acting foolishly hostile (i.e. don't pick on something if you can't prove it incorrect, don't use fallacy in arguments) if you wish to be treated with kindness. There is no need to treat kindly those of whom I'd prefer to be rid, so I won't treat kindly on this wiki those who regularly offer the impression of dishonesty or fallacy or vociferous ignorance.
- It's unclear who is replying to whom here. I didn't state any of that above. If "nobody likes you" is directed at me, I disagree. A handful have defended my position at times. And, no, I didn't keep track of them because I didn't know I would be on trial. --top
- People who like your position are only allies of convenience. In any case, they obviously aren't here.
- This is not usable criticism because it is not specific enough. It's like, "stop acting bad and I'll treat you better."
- There were six specifics in the above paragraph, but repeating them won't help you because, although they are clearly listed, you don't want to hear them. If all you can grok is "Stop acting bad and I'll treat you better", that's sufficient. I'll treat you honestly, which means I will not treat you kindly if I believe you've violated a proper code of conduct for reasonable discussions.
Re: You didn't ask for a clarification of the sentence, and you didn't indicate you had difficulty with it.
I reread it, and to me it's clear I did. However, I don't wish to bicker about that today, nor about natural selection anymore.
Maybe you can explain with acceptable reasoning how one gets: "I'm confused, do you mean to say the evidence or nature or model is being predicted?" out of "What 'simpler or better-tested theory' are you talking about?". I'll even apologize if you do. OTOH, maybe you can't and you're just droning on to defend your esteem.
(copied from above for comment embedding)
It is more likely "because" programs are more complicated than most physical things.
- Which is even more reason to test empirically. Too many variables often means there's too much going on to test integration in an effective theoretical way. It becomes too hard to know what you don't know. --top
- Complex systems are much more difficult to test empirically, because there are a great number of things one can go about measuring and almost none of them have any value. You can get a few gross numbers, but those rarely help you much with any decisions you wish to make, except in the rare cases that those numbers are a coalesced emergent property of direct concern (which does happen now and again... 'framerate' for a videogame would be an example). I'd fully encourage measuring or observing the right things directly where it can easily be done, but you immediately become subject to the SovietShoeFactoryPrinciple if you choose to measure the wrong things.
- And as far as the "too hard to know what you don't know", no amount of empirical testing will change that for a complex system... which makes it very much an invalid, fallacious argument in favor of empirical testing. (You've done a lot of that on this page and in EmpiricalRigorCantReplaceTheoreticalRigor.)
As far as your "we don't have good enough mind models (yet)" can you prove the existing mind models aren't 'good', or are you just blowing more hot air, waving your hands, and declaring it to be true?
- When I have a competent artificial maid for my house, I'll consider the technique "good". I'll even cut it slack by giving it more time to perform tasks than an average human would. A great example of empiricism that an IvoryTower claim/model will eventually have to face.
- So by 'mind models' you mean "models of the mind" rather than "models in the mind"? Fine. Not that a mind model would get you a robotic maid... not even if perfect (you'd still need the necessary computational power, a sufficient implementation, and some mechanical miracles). We have some well-proven models of behavior, of percepts, and of autonomic response. In any case, ultimately a claim about 'models of the mind' is meaningless as a claim regarding models in general. We also don't have any proven models of the soul... what are you going to do with that fact?
- There seems to be context confusion somewhere here. Needs review.
Or is it because you, who thumbs his nose at academia, have been actively avoiding learning mind models and the observations and logic that accompanied their construction, such that you are ignorant and incapable of judging any 'good'?
- You make up convoluted metrics for "good", and then if somebody claims it's not a sufficient or significant metric, you accuse them of being dumb.
- No, I've accused you of being "dumb" for your tendency to say things aren't "good enough" without any metric at all. And you deserve it, 100%. Even "this is good because it creates longer compile times which gives me more time to goof off" is more meaningful and more useful.
- A pot-of-gold accusation. Good empirical metrics are usually based on observations that don't need to involve the details of construction. To compare UPS to FedEx, you don't need to know how cars and planes work, for example. Metrics that involve the implementation rather than the results (the "customer's viewpoint") are generally suspect or very limited.
- And? Relevant properties for theoretical inference are also those that are visible on the outer edge of the black box product. You're making incorrect assumptions once again as to the relevant differences between theoretical and empirical 'evidence'. Application of a theory requires comprehension of the underlying construction, but the aim of the inference is for emergent properties (such as safety, correctness, security, survivability, usability, accessibility by handicapped or those using other languages, enjoyable, etc.) that will exist in the product as a whole. True, some are emergent properties you might be able to measure or observe directly when you complete the system (and you should do so) but, in any complex product with highly emergent properties, there is no substitute for the theory at initial design or development time.
- I may disagree with this, but the more important issue is that theory is not sufficient to claim "good" even *if* it was a prerequisite. --top
I, personally, consider the ActorModel, the ChomskyHierarchy, the RelationalModel, many of the AI models of axioms and fact databases (e.g. predicate calculi & logics), design by contract, CapabilityMaturityModelIntegration, Workflow models, SecurityModels? such as CapabilitySecurityModel, etc. to be 'good' models.
- I agree that the RelationalModel is good overall, but its goodness may be an accident rather than a product of its mathematical roots.
- It wasn't an accident. RelationalModel was designed to be good. I've no doubt EfCodd scrapped a great number of models he determined to be "not" good for some theoretical reason or another. RelationalModel would be an 'accident' if it was created by some random or unintentional process, which it was not.
- For every idea that sticks or semi-sticks, a good many more rust in back-room folders. The volume of dead-end papers is large. As an analogy, a mutation that gives an organism an advantage is not necessarily divinely or academically inspired. It may just be lucky chance. We don't have enough info to tell the difference. Many good software ideas also come from people just tinkering around with new ideas or making a digital version of some real-world thing. Database indexes come to mind, as illustrated by the borrowing of the term "index".
- Database indexes were also not accidental or merely from tinkering around. Neither were card catalogs, or book indices. They came about because need existed for them and so we directed efforts to produce them.
- Perhaps "tinkering" is not the right word, but anyhow they came about by practical, hands-on means.
- And since you wish for that to be useful for your argument, can you provide the evidence that nobody applied a theory: "hmm... if I alphabetize/categorize stuff into known buckets, it might let me get faster access!" I think you're trying to 'invent' evidence to support your views here.
- And you have no evidence for the opposite. Thus, the stalemate remains. Note that things like phonetic alphabets are fairly well traced back to practical births. Foreign workers in Egypt (their version of H-1Bs perhaps) started using the starting sound of Egyptian pictographs (say "H" for the icon of "House") to represent things like the pronunciation of their names. This eventually spread to non-Egyptian languages in general because the workers were familiar with it due to it being picture-based (assuming they knew Egyptian pronunciations of the icons), giving us our modern alphabet. Pictographs were thus cross-borrowed gradually for phonetic usage. One practical use spread to another. History generally shows things evolving in steps like this. (The capital letter "A" is an upside-down ox head - see the horns? "Ox" started with the "a" sound in ancient Egyptian.)
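Whatever its historical origin, the "alphabetize/categorize stuff into known buckets for faster access" idea quoted above is easy to demonstrate in miniature. A small Python sketch (table contents invented for illustration) contrasts an unindexed linear scan with a lookup through a sorted index, the same idea behind a card catalog or a database index:

```python
from bisect import bisect_left

# A tiny unindexed "table": (name, age) rows in arbitrary order.
rows = [("carol", 31), ("alice", 25), ("bob", 47), ("dave", 19)]

def scan(name):
    # Without an index: linear scan, O(n) comparisons per lookup.
    for key, value in rows:
        if key == name:
            return value
    return None

# The "alphabetize into known buckets" theory: build a sorted index over
# the key column (key, row position) without reordering the table itself.
index = sorted((key, row_pos) for row_pos, (key, _) in enumerate(rows))
keys = [k for k, _ in index]

def lookup(name):
    # With the index: binary search, O(log n) comparisons per lookup.
    pos = bisect_left(keys, name)
    if pos < len(keys) and keys[pos] == name:
        return rows[index[pos][1]][1]
    return None
```

Both functions return the same answers; the index only changes how much work a lookup does, which is exactly the kind of property one can state as theory first and then confirm empirically.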
Can you even present what you (in your typical hand-waving, hot-air, speculative and hypothetical fashion) would consider a strong case to convince me that each of these are not good models?
- Why is it my burden to show this?
- Because you're the one trying to change the status quo.
- IvoryTower is NOT the status quo. Most people ignore rambling shit like TypeTheory.
- What I said is that you're trying to change the status quo, which is the set of beliefs that already exist. I said nothing about 'ivory towers'. But you're wrong in any case - the ivory tower is part of the status quo too. And the comment here on TypeTheory - yes, most people ignore stuff they don't want to know, and they even call it names like "math junk" or "rambling shit". These are the people that shouldn't get jobs in related fields because they're (by choice) poorly educated ignoramuses when it comes to those fields, who haven't the knowledge to even properly judge whether something is "shit" or not. That includes you.
- So the status-quo is what you think it should be, not what it is, eh?
- Not at all. The status quo IS what is. And 'what is' is this: people including myself believe that various models and theories are 'good' and that sufficient evidence and reasoning has been offered to prove them. That is the status quo. If you want to challenge the status quo, you've got footwork to do... it is not our job to give those who would be HostileStudents (like yourself) an education on the reasons we came to believe in a model or theory; it is YOUR job to educate yourself on those reasons then challenge. That's life as anyone who wishes to promote an idea: you've got to prepare to knock down contradictory ones - pretending they'll go away just because you believe they should is a fine example of arrogance and ignorance interfering with your goals. If you don't feel like doing this work, don't challenge. It's your I'm-an-ignoramus-so-I'm-gonna-shift-BurdenOfProof-plus-demand-application-in-my-niche approach that creates more heat than light when you challenge HobbyHorses. It not only doesn't work; it also makes you look rude and stupid (like a plumber crashing a party of DiningPhilosophers and declaring that GoedelsIncompletenessTheorem is useless and pointless unless they can both explain it to the plumber (who hasn't even completed Algebra) in two pages or less AND provide measurements and evidence that its application will reduce the total amount of pipe he needs to lay to install a new lavatory).
- Now it's my turn to be insulting like you: Academic elitists are often those who are crappy at dealing with the messy, ever-changing real world, so they hide in their pure little made-up worlds.
- I deal with the messy world every day. And every day I gain new evidence to confirm a variety of mental models for how the world works.
- And I'm not your "student" and take offense to being called it. It's not my fault the world does not value your anal-retentive and elitist tendencies and thinking as much as you wish they did. Empiricism is important and valued. Get over it!
- Their objective goodness should be considered "unknown" until proven otherwise. "Null" if you will. If I don't do anything for or against them, they stay that way.
- Incorrect. I consider them proven otherwise, by which I mean I've been convinced by evidence and observations presented and the reasoning that supports the conclusions given the evidence. That you aren't convinced isn't something I consider particularly convincing towards the more neutral cause because I, honestly, consider you a hand-waving ignoramus who'd rather shove his hands over his ears and shout "LALALALALA!" than properly educate himself on the models and the circumstances and observations that led to them. So, as any reasonable person would do in my circumstance, I demand you be the one to present a convincing argument that these models (which are already proven good to me) aren't (by some evidence or some acceptable set of reasons) actually any good.
- Re: "Educate yourself [on the models]" - See BookStop, PageAnchor: Vetting
- Why should I care what you think?
- You shouldn't. You should just plod on your merry way not trying to convince people of stuff when you don't have a valid argument to present. But it seems you care anyway.
- You value theoretical evidence more than empirical evidence and I vice versa. Why should I or anybody change their weights? Maybe we can AgreeToDisagree on the weights and get back to coding. Deal? --top
- I value empirical evidence and theoretical evidence, not 'more than'. I know and appreciate the value of each. A page full of numbers and properties doesn't mean shit without a theory, and a theory isn't worth shit without a page full of numbers and known properties. What I don't see is the value of empirical "rigor" - at least not beyond what is needed to initially test a theory. The reason? Because after the theory has passed its initial tests, every observed application also becomes a test... and so the testing of the theory is, while not well documented, still very thorough.
- That is "traditional" science. It has been difficult to do this with software engineering either because there are too many variables to model well and/or because it's largely a psychological endeavor. We'd have to model human psychology well to test it all as a model. This brings us back to the AI maid comment above. --top
- If you'd put your thinking cap on for a minute, you'd quickly realize that software is near impossible to "test" even half thoroughly or half rigorously using empirical evidence. This is because there are so many variations to test that it simply cannot be done reasonably (even over a period of 100 years - and sorry unit testing advocates, tests are okay but they are never complete). The only thing we can do is come up with strong robust theories that cover a broad area, like the relational model, type systems, etc. Empirically measuring all the uses for tables or types instead of coming up with strong foundations and bases (theories) would be an utter waste of time. Had you done some more self-reflection, you would have seen for a moment that once again your Table Oriented Programming is entirely based on theories, made up examples, and absolutely no rigorous or thorough empirical results or objective table oriented programming evidence at all. What's sad, is that not only does your table oriented programming lack rigorous empirical and objective evidence - it also lacks theory too (you'd basically have to credit Codd for it, meaning you've provided no theory, only others have). So what sort of rigorous evidence do YOU ever offer, top, for any topic on this wiki? And this means rigorous, not just 3 or 4 personal examples on Tuesday evening that you thought up while sipping cherry cola.
- First let me say that if lack of rigorous empirical evidence is a "sin", then almost every WikiZen with an aggressive HobbyHorse is probably going to e-hell. I only claim that TOP fits my head (hence the handle "top" "mind"), not that it is universally better. I shouldn't have to keep stating this.
- Second, you seem to be over-emphasizing "provably correct" and heavy-typing again, your Grand HobbyHorse. Accuracy of output is only one of many factors. In a medical or space-shuttle app it may be the overriding factor, but it depends on the domain. I've already written enough about this such that I'm not going to describe my weightings of it again. If you can maximize accuracy without significantly dinging other factors (such as labor cost and turnaround time), that would be great. Until that time, I'll watch from a distance. --top
- And, I did not say that the models were "no good". You are putting words in my mouth. I merely challenge the claim that they are proven NET objectively better. --top
- The words from your mouth were: "don't have a good mind models". This means we either have no mind models at all, or that the mind models we have are not good. Your words. I did not put them in your mouth.
- In that *context* I meant good enough to be a substitute for actual humans. "Good" is relative to purpose. --top
- Be careful there now. Computers are not replacements for humans, and in many people's opinion should not be replacements for humans. The models that humans should use for computing should be for computing and processing batch tasks and data, not for building humans or models of humans. Those in the Lisp/AI field (isn't that dead now?) can argue all they want - but we have plenty of humans in China and USA (and other countries), so we don't need no effing more selfish pricks on this planet trying to take over the world. What we need is automation, and free slavery, according to AntiCreation and SlaveOrientedProgramming.
- There seems to be some context confusion. If we want to test the value of a technique/tool on *human* programmers, then matching humans does matter. I perfectly agree that future AI may prefer BrainFsck or something else equally goofy (to humans). But the assumption is that we are testing stuff for human developers. I'd expect that a being that lived in a 4D universe (real or emulated) may approach programming and databases very differently from us. They may have 3D printouts and editors for their code. (Although such is technically possible for us, it is not practical because things tend to overlap and get in the way of each other.) --top
- The default is not "good". This should be obvious. I don't know why it trips you. --top
- The default is not "good". I don't accept models merely by virtue of their being models whether they come from you or anyone else. I wasn't convinced of the models and theories I now consider 'good' until I studied them, their application, the reasons for their existence, and the examples and observations associated with them. That should be obvious. I don't know why it trips you.
- By some accounts, nobody really listened to Dr. Codd UNTIL he showed how relational could shorten typical queries compared to navigational query languages of the time. Thus, he found a way to pique interest by turning his mathy gobbledygook into a somewhat quantifiable practical (customer-side) metric, namely CodeSize?. --top
- [Do you have a source for that, and have you actually read Codd's original paper, "A Relational Model of Data for Large Shared Data Banks"? Juvenile comments like "mathy gobbledygook" suggest you haven't, and only serve to further undermine your credibility whilst hinting that your intention here is to belittle, badger and troll rather than engage in meaningful debate.]
- You are letting yourself get distracted from my main point and branching off into side nits. I am not saying his math was "bad", I am only describing what caused people to pay attention: he demonstrated a practical benefit. Otherwise, his paper would end up like any other academic paper: obscurity. And no, I don't have the source for the story. If I encounter it again, I'll link to or reference it.
- [Again, have you read the paper? Sections 1.1 and 1.2 clearly and compellingly describe the practical benefits of the relational model without math, and the paper was the direct inspiration and progenitor of the System R project at IBM.]
- Surely someone in charge of the table oriented programming paradigm such as "Top" need not be familiar with the relational model, nor the history of the relational model, nor the history of tabular data and how and why the table model was founded.
- I never said the paper didn't contain practical claims. You again are putting words in my mouth, Rudeboy. As I remember the articles, relational experiments were adopted by two camps: the IBM camp and the Berkeley camp. The IBM camp became interested in Codd's work when one of the original team members attended a lecture where Codd demonstrated how it shortened queries. IIRC, he saw the original paper before that, but was not impressed at the time. (Implementation independence is another one of Codd's claims, but that was not discussed in the articles as I remember them.) Note that some suggest that the "tabular model" already existed to a large extent at the time, but that Codd merely formalized it, such as requiring that each row be unique and divorcing queries from implementation issues such as indexing and disk blocks. --top
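The query-shortening claim attributed to Codd above can be sketched concretely. The following is a minimal illustration, not Codd's actual demonstration: the schema, data, and variable names are made up, and the "navigational" half is only simulated in Python loops (a real navigational DBMS of the era, like IMS or CODASYL, would chase record pointers instead). The point it shows is the one Top mentions: the relational query states *what* is wanted and leaves access paths (indexing, traversal order) to the engine, while the navigational style bakes the traversal into application code.

```python
import sqlite3

# Hypothetical suppliers/shipments data, loosely in the spirit of
# Codd-era examples. All names and rows here are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supplier (sid INTEGER PRIMARY KEY, city TEXT)")
conn.execute("CREATE TABLE shipment (sid INTEGER, part TEXT)")
conn.executemany("INSERT INTO supplier VALUES (?, ?)",
                 [(1, "London"), (2, "Paris"), (3, "London")])
conn.executemany("INSERT INTO shipment VALUES (?, ?)",
                 [(1, "bolt"), (2, "bolt"), (3, "nut")])

# Navigational style: the program itself walks record by record,
# so the traversal order and access path are fixed in the code.
parts_nav = set()
for sid, city in conn.execute("SELECT sid, city FROM supplier"):
    if city == "London":
        for (part,) in conn.execute(
                "SELECT part FROM shipment WHERE sid = ?", (sid,)):
            parts_nav.add(part)

# Relational style: one declarative query; the engine chooses the
# join order and any indexes. The code says what, not how.
parts_rel = {p for (p,) in conn.execute(
    "SELECT DISTINCT s2.part FROM supplier s1 JOIN shipment s2 "
    "ON s1.sid = s2.sid WHERE s1.city = 'London'")}

print(sorted(parts_rel))  # parts shipped by London suppliers
```

Both versions produce the same answer; the difference is that the relational query is shorter and is untouched if the physical storage (indexes, record ordering) changes, which is the implementation-independence point made above.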
See Also: BookStop