There needs to be a distinction between "macro" and "micro" rigor in "evidence of superiority" debates. Macro-rigor is required to show, in a rigorous way, that a tool has a net advantage. Micro-rigor can be used to show that just about any tool has some potential benefit in some narrow area. Most will probably agree with this. However, micro-rigor cannot substitute for macro-rigor claims about anything non-trivial. For example, BrainFsck may score well on grammatical language simplicity metrics. Language simplicity is a good thing, all else being equal. But it's the "all else" that ends up in macro territory (see IncompatibleGoals).

So far, macro-rigor for just about any tool remains elusive. Measuring the total sum of many micro-rigors and assigning weights to them is so far too complex, and perhaps too subjective, to qualify as "rigor".
I don't get it. What is "micro rigor" and what is "macro rigor"? I am familiar with mathematical rigour, intellectual rigour, experimental rigour, research rigour and so forth, but the above seems to refer to none of these.
Micro rigor is specific advantages in specific areas. Macro rigor is amassing enough rigorous evidence to be able to claim with some confidence that there is a "net advantage" to a tool or technology.
How do you define "net advantage"?
[An example would be proving that hammers are better no matter what you're doing, such as polishing your television. TopMind uses this hand-waving "macro"-rigor as an excuse to feel more comfortable with allowing his mind to stagnate, because nobody can prove he could be doing things better no matter how many pieces of rigorous 'micro' evidence they give him... and no matter how directly applicable this evidence is to whatever 'micro' activity he is performing at the moment, be it polishing televisions or putting nails into drywall.]
If one likes Klingon decor, then the hammer-on-TV finish just may do the trick. But seriously, that's not my problem because I am not claiming rigor. If you want a working target, then focus on "blocks make programmers more productive than goto's". And as far as "doing things better", one approach is to pay attention to human psychology. --top
- There's plenty of rigour -- experimental, research, intellectual, and even mathematical -- in the area of computing that focuses on issues where PsychologyMatters. It's generally called HumanComputerInteraction, and MicroSoft and BigBlue pour significant funds into it. While it's usually viewed as dealing with graphical interfaces, input/output devices and the like, it also encompasses programming language usability issues. Top, I've never seen you mention it. Why?
- I have not seen anything from that which contradicts my stated views so far, I would note. You are welcome to cite it as a source. --top
[Top doesn't claim rigor. He just floats ideas like PsychologyMatters without any rigorous support at all, then complains when other people do the same. As far as working targets go, how about "direct expression of boolean values makes programmers more productive than encouraging them to construct unique RubeGoldberg devices in code to compute every boolean constant at runtime."]
And you don't have any proof that psychology doesn't matter. That's not the default either. Welcome to the boat.
[You do like ShiftingTheBurdenOfProof, don't you? Nobody has any responsibility for proving the opposite of your 'positive' claim. If I said that "The Great Invisible Spaghetti Monster Matters", I'd need to prove it. You said "PsychologyMatters". YOU need to prove it. If I said "you don't have any proof that the Great Invisible Spaghetti Monster doesn't matter" as though it were a valid counterpoint, I'd be a hand-waving fraud... or just stupid. Since you do say "you don't have any proof that psychology doesn't matter" as though it's a valid counterpoint, that makes YOU a hand-waving fraud... or just stupid. BurdenOfProof is a duty that belongs to the person making the positive claim. Attempting to shirk that duty makes you a hand-waving fraud. Being ignorant of that duty just makes you stupid. Your choice: shall I assume malice or stupidity?]
Well, that's off the main point, because I was addressing a side comment about me allegedly not wanting to improve the state of the art. There's no need to thread-mess this over the psychology issue yet. The bigger point is that micro-rigor can probably back just about ANY position. Thus, micro-rigor will not settle anything material.
[I'm of the impression that "macro"-rigor is just a bunch of your normal hand-wavy hullabaloo. Can you provide examples of "rigorous net advantage" provided by individual product components in any hard science? If not, how can you expect anyone to accept your claims that this 'macro'-rigor is actually a valid or relevant concern for 'product components' in ComputerScience (such as languages and language features)? Please, start providing examples. I'll play devil's advocate: I believe that only 'micro'-rigor matters at the component level, and that there is no such thing as 'macro'-rigor or 'net advantage' from specific components or language features. I suspect that your talking about it is just nonsense you're attempting to use to justify other of your nonsense.]
The rocket analogy that's often used provides a pretty good example because multiple factors are weighed from a wide range of perspectives such that almost anything anyone could "gripe about" is addressed. It may not be perfect rigor, but it provides a decent attempt at thoroughness, something we have not even kind-of approached with software engineering.
[The rocket example you provide isn't about components, and thus makes a very poor analogy to the situations where you've been raising this elsewhere, because 'gotos' and 'types' and 'closures' et al. are about 'components' of software and language design. I asked that you provide examples of "rigorous net advantage" provided by individual product components in hard science (because if none exists, then you have no such examples to point to when you wave your hands and attempt to explain why this 'macro' rigor should exist for components in computer science). Please pay attention to such critical modifiers, TopMind.]
[Unless you're making "rigorous" claims about the "net advantage" of, say, the round nose of the shuttle or the particular design of the landing gear...]
[The following rocket properties moved below because they violate every term of the request (which demands examples of "rigorous net advantage" provided by individual product components). In particular: (a) none of the examples indicates a "rigorous" study; (b) none of them are net advantages (how do you measure Safety vs. Accuracy?); (c) none of these properties is shown as being provided by individual product components, making invalid any attempt at analogy to the request for macro-rigor from language features and components.]
- Time before notice (prep-time, if any)
- Time after notice
- Dollar per pound, perhaps in terms of:
- Reuse costs
- Original purchase costs
- Rent costs
- Risk to crew (if any)
- Risk to payload
- Risk to service technicians
- Environmental impact
Huh? The corresponding issues in Computing are frequently studied, with considerable rigour, in SoftwareEngineering and HumanComputerInteraction research. Your lack of knowledge of these (which surprises me -- given your views, I'd expect you to be frequently citing studies) is not a point in your favour.
[Indeed they are studied, albeit SoftwareEngineering focuses more on methodologies and HumanComputerInteraction more on user interfaces, and as such neither really studies language features or the underlying 'product' design; so these also aren't particularly applicable to issues such as the advantages of closures and other language features and product components.]
Studying the properties of language features in terms of SoftwareEngineering or HumanComputerInteraction raises fundamental research methodology issues, i.e., how do you rigorously evaluate these? It's not for lack of desire that they're not frequently studied, but how feasible is: "Given two identical language implementations, one with closures and one without, obtain a sample population of programmers of equal ability and knowledge ..."? Not very. It's not impossible, of course, but the poor cost-benefit ratio of implementing such studies generally propels researchers in other directions.
[But I note that TopMind would just wave his hands and call all of these proposed "'macro'-rigor" emergent rocket properties 'apples and oranges' should they be presented against him in an argument: How do you measure Accuracy vs. Speed vs. Safety? He'd also scoff and demand the "rigorous" studies that prove them better than arbitrary other rocket designs he waves his hands and invents on the fly. It's really hard to argue 'reasonably' with someone who has different rules for himself than for others.]
Always guilty until proven innocent, am I?
[Your record speaks for itself.]
- No it doesn't. Most accusations against me prove time and time again to be either wrong, nitty, or subjective. You are just biased because you are stubborn. --top
- [Project much? People don't come on this wiki with an initial bias against you, Top. YOU are the one who is stubbornly biased to 'see' accusations against you as proven wrong/nitty/subjective. To everyone else, those accusations are just proven time and time again to be correct.]
As far as the weights, I say let the user/customer plug in the weights they want for each metric. For example, shipping water to the moon does not require as much reliability as shipping astronauts to the moon, so they may lower their weighting of that factor to save some money. The "worth" of an astronaut's life is a political/social question, not directly a scientific one. This will almost always be the case for tools. We cannot tell the customer what they should want or do with the tool; at best, we can only tell them the nature of the trade-offs available. A macro-rigor model would be a framework to test and weigh the known trade-offs as best as possible, not answer base political questions. --top
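The "plug in your own weights" idea above can be sketched in a few lines. This is only an illustrative sketch; the metric names, scores, and weights are invented for the example, not measured from anything.

```python
# Hypothetical sketch: each metric gets a normalized score (0.0-1.0)
# from testing, and each customer supplies their own weights.

def weighted_score(scores, weights):
    """Combine per-metric scores using customer-supplied weights."""
    total_weight = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total_weight

# Pretend measured scores for a particular rocket design.
scores = {"cost_per_pound": 0.8, "reliability": 0.6, "prep_time": 0.7}

# Shipping water to the moon: reliability weighted low to save money.
cargo_customer = {"cost_per_pound": 5, "reliability": 1, "prep_time": 2}

# Shipping astronauts: reliability dominates.
crew_customer = {"cost_per_pound": 1, "reliability": 8, "prep_time": 1}

# The same design scores differently per customer: the cheap-but-less-
# reliable rocket rates higher for cargo than for crew.
cargo_rating = weighted_score(scores, cargo_customer)  # 0.75
crew_rating = weighted_score(scores, crew_customer)    # 0.63
```

The interesting part is that the "science" stops at producing the per-metric scores; the weights themselves remain the customer's political/economic choice, which matches the point above.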
[If you're saying that 'macro-"rigor"' reduces to making design decisions based on predicted tradeoffs of guesstimated weights between emergent properties, then ComputerScience already has plenty of it.]
How are those "emergent"? And I don't know of any "computer science" technique that has used a wide variety of metrics comparable to the above list. If there is one, please show us the list.
[Are you kidding me? How can you not know that, for example, speed, accuracy, and safety are emergent properties of every aspect of the construction of the ship, from the shape of the nose-cone down to the total weight of toothbrushes in the storage locker? And when it comes to making design decisions and tradeoffs, we're talking SoftwareEngineering, not ComputerScience. Two different fields, Top. As far as your "I don't know" comment, that's because you've been spending your time, as usual, firing off that mouth of yours in complete and negligent ignorance of the subjects you choose to discuss. Here are a few examples:]
- Mean time between failure
- Mean time to repair a failure
- Uptime Percentage
- Probability of Failure
- Scenario Tests
- Gamble/reward money one is willing to put up for breaking it.
- Scenario Tests
- Cost of Failure
- Operator control over Cost of Success (e.g. special and reversible efforts to fire a weapon)
- Latency (time between issuing task and completing it; 'lag' for networks)
- Throughput (tasks / second; 'bandwidth' for networks)
- Space (cost in memory)
- Utilization (degree to which full hardware can be applied to performing tasks)
- Performance Guarantees:
- Hard Real Time: Temporal guarantees
- Embedded: highly constrained memory
- Safety-related: E.g. Aeronautics software cannot allocate dynamic memory during regular runtime (initial startup excepted)
- Configurability: degree (and granularity) to which users may modify behavior of the system in a persisted manner (not just during runs)
- Flexibility: degree to which the system may be adapted to unforeseen user needs (ability to write new modules helps)
- Modularity: degree to which functionality and features are isolated to a well-defined volume of code, both for adding new functionality and maintaining old functionality
- Portability: across different hardware, operating systems, and operating environments
- Reflection: logging, error messages, access to a debugger; ability to see 'inside' the system to diagnose problems or help repair errors
- Robustness: resistance to failure; extent of damage that can be caused by malformed messages or protocols or security violations
- Resilience: ability to recover from fault after one occurs
- Graceful Degradation: ability to maintain service at diminished capacity while in a failing state
- Accessibility: degree to which service is available to authorized users
- Communications Accessibility: limits on location for access (e.g. at particular machine vs. on LAN vs. on Internet)
- Handicapped Accessibility: ability to service users with limited sensory or feedback abilities
- Multi-language Accessibility: ability to service users familiar with a variety of languages
- Cost & Time to Market: resulting cost for end-user in both time and money
- Freedom: to manipulate the software and use it for your own purpose ('free as in beer' under Cost & Time to Market)
- Grokkability: ease with which project can be understood by someone new to the project (modulo experience with language and domain)
- Total code volume: big projects are more difficult to understand
- AccidentalComplexity vs. EssentialComplexity code volume ratios: AccidentalComplexity interferes with the purpose of code.
- Syntactic Noise: in-code ratio of characters there primarily to help the parser to those that express meaningful program components
- Semantic Noise: fraction of code that expresses intent in terms of strategy to achieve it (when there are a number of strategies that could work); generally indicates that the language is too low-level for the problem domain
- Context/Modularity: degree to which a piece of code can be understood in isolation; negatively impacted by use of global variables or coupling into other pieces of code (including other modules). Related to CouplingAndCohesion.
- Standards Compatibility - ability to interface with hardware or software via well-formatted messages or persisted data (protocol, data formats, etc.)
- Service Requirements - giving correct answers to queries, possibly the reasons for them; completion of UserStories; performing correct function upon receiving command or input; etc. (varies heavily per-application)
Good start. If all those were actually tested, it could be said to be approaching macro-rigor.
[Ah, but they are as good as the "macro"-rigor you claim above, which reduces to making design decisions based on predicted tradeoffs of guesstimated weights between emergent properties.]
However, almost all HobbyHorse claimers conveniently avoid most of them and argue for their hammer via roundabout and indirect reasoning. Further, the list lacks metrics for a key component: code/system grokkability.
[I'll agree that GoldenHammer claimers avoid them. I do not. As a LanguageDesigner and someone interested in building some NewOsFeatures, the above list (and more) interests me greatly. I just happen to find that some things (like types) are absolutely necessary to achieve most of them. They even help with 'grokkability' when it comes to communication between services in an operating system (as they avoid the AccidentalComplexity associated with adding parsers and serializers to every service, and other gross violations of OnceAndOnlyOnce).]
[As far as the list lacking 'grokkability': this list lacks a great number of metrics for components that may or may not be 'key' based on how one chooses to 'weight' them. Those listed above are just what I could think of off the top of my head, reorganized from a list provided to you months ago in AbsolutismHasGreaterBurdenOfProof (which has gone ignored). It doesn't even touch on privacy, secrecy, freedom (open source), modularity (coupling/cohesion; ability to isolate changes for new features and maintenance issues to a small volume of code), resources other than time/space (laptops and robots benefit from software optimized against power consumption, for example), migration (ability for software to move), mobility (ability for the host of the software to move), etc.]
We can generally use the customer's weights as a guide to which metrics are important. If we cover most of the customer's concerns, then we have a relatively high macro-rigor score. But it is again dependent on the customer. For example, if a customer insists on a green rocket because the country's flag is green, but your technology can only produce white rockets due to frost from the cooling system, then the other factors are not the driving concerns for that customer.
[True. Unless the customer 'insists on' a green rocket but isn't willing to pay more for it or sacrifice reliability. Some people just want a pony.]
- To simplify the debate, let's focus on a "typical" customer for now. If we wanted to put a little bit of rigor into it, we could survey the customers and get the average or median weighting of the above factors.
- [Typical customers want that pony. They just know they can't get it.]
Similarly, grokkability is important to most software developers. If the customer ranks that factor as important and your metrics don't cover it, then from the developer's perspective you are not approaching macro-rigor. (I am assuming a capitalistic system where the customer's needs make a difference, rather than the kind of system where a select group of elites dictates the tools based on their calculations or decisions.)
[You say that as if software developers were generally the customers of software. From a SoftwareEngineering perspective, the only reason to support grokkability is if doing so helps keep the Cost & Time to Market down. Often it does. But when applied by such indirect means, it isn't a customer metric... it's more of an 'internal' metric, like 'elegance'.]
That's almost like saying that house buyers should decide which tools the carpenters use. I agree that developers often don't make the entire choice of tools. In essence, there are "layers" of customers for most tools. But either way, all layers want tools that make developers the most productive (including output volume and quality).
Just to clarify, are you saying that grokkability shouldn't be listed because it's too hard to measure, or because it's not important?
[Essentially, by arguing for 'grokkability' as a criterion, you're going back on points you've made earlier (and that I agree with) about dividing internal vs. external evidence. Or perhaps you're just making a category error by assuming that code 'grokkability' is directly relevant to typical customers of software products. Whether the code can be understood by a customer is only directly relevant to a customer who is expecting to directly interact with that code. If not, then 'grokkability' is something inside a black-box application that is not important to customers, who care more about things like cost and time to market and release bugs and feature-addition/bug-killing throughput. Software engineers may care about grokkability and type safety insofar as those have proven useful in achieving these things. But these are indirect. YOU, by presenting 'grokkability' as a criterion, are the person arguing that house buyers should decide which tools the carpenters use.]
I would argue that syntactic grokkability, regardless of its measurability (it's usually intuitive, obvious, or of insufficient distinction between one tool and another to be worthy of debate), is important. For example, assembly language is far less syntactically grokkable than an equivalent C program, as the intent of the program must be laboriously divined from individual assembly statements. Therefore, for a given arbitrary project that does not explicitly require assembly language, it makes sense to choose C (or whatever) over assembly, all other considerations being equal.
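The assembly-vs-C point above can be illustrated without leaving one language: the same computation written in a low-level, statement-by-statement style versus a style that states the intent directly. This is only a toy illustration of the grokkability gap, not code from the discussion.

```python
# The same computation at two abstraction levels.

# Low-level style: the intent must be divined from individual steps,
# much as with assembly language.
def total_low_level(prices):
    i = 0
    acc = 0.0
    while i < len(prices):
        acc = acc + prices[i]
        i = i + 1
    return acc

# High-level style: the intent ("sum the prices") is stated directly.
def total_high_level(prices):
    return sum(prices)
```

Both produce the same answer; the argument is purely about how quickly a reader can recover the intent from the text.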
[For LanguageDesigners, whose customers are programmers, grokkability is really important. For Software Engineers who are representing customers that are not expected to interact with the code, it is not so important... except indirectly, insofar as this syntactic grokkability impacts the introduction of bugs and errors or time to market. They might choose C over assembly for portability, because it is easier to write, because it supports limited type-safety, or for any number of other reasons. For Human Resource people, all that matters is that people with the right skills exist to understand the code... independently of how easy it is to learn the language; their customer is the Program Manager.]
Conceptual grokkability, however, is a different case. Among skilled and educated developers, understanding of abstract high-level conceptual mechanisms like lambdas, closures, object orientation, etc., is generally a given and can be disregarded. Programmers who cannot grasp these are generally reflective of a lack of appropriate skills or ability to appreciate abstractions, rather than an inherent grokkability issue. Programmers lacking the requisite skills and ability are a problem no matter what -- even when the project doesn't necessarily warrant such skills.
Perhaps being productive with the existing pool of talent should also be considered. But in my opinion the proponents of closures etc. have made a poor case for them as significant improvers, and they are often dismissed as MentalMasturbation. Lack of realistic examples is probably the biggest problem. They often use very indirect reasoning or toy lab examples to illustrate their alleged power. That won't work. Practitioners will reject those. Maybe us practitioners are just "too dumb to get it" and will have to live with our Neanderthal techniques. But my opinion is that they are just bullshitting, trying to justify their research grants. If it were real, then they'd be able to produce quality examples and scenarios instead of excuses. Real-world-like scenarios and examples are a powerful tool against bullshitters. True, it's not a perfect tool, but it's one of the better ones for such a job. --top
I use closures only as an example, not a case. However, I think you unfairly deprecate the ability of most practitioners to appreciate the value of high-level mechanisms. The majority of practitioners I've known, in fact, crave such facilities when forced to use languages that do not support them -- myself included. Indeed, my RelProject internally defines lambdas, continuations, closures, etc., and would be considerably more complex, awkward, or require duplication without these. You are free, of course, to download the source code and simplify it by re-writing it in a procedural language of your choice, if you wish to make the case that I'm "just bullshitting."
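As a minimal illustration of the duplication point (not code from the RelProject), a closure lets one factory replace a family of near-identical functions:

```python
# One factory replaces a family of near-identical validator functions.

def make_range_check(low, high):
    # The returned function "closes over" low and high.
    def check(value):
        return low <= value <= high
    return check

valid_month = make_range_check(1, 12)
valid_day = make_range_check(1, 31)

# Without closures, each check would be written out by hand (or
# simulated with an object holding low/high), duplicating the
# comparison logic at every use site.
```

In a language without closures, the equivalent usually means either copy-paste or an extra class per case, which is exactly the complexity/duplication trade-off being debated here.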
That is SystemsSoftware. It may have different needs than direct custom applications. I've agreed that many techniques in OOP may be better suited to systems software (often because it's not practical to use a database for large, interrelated structures). On that note, I will agree that different techniques are appropriate for different niches. It's when universality is implied that I get skeptical. --top
What are "direct custom applications"? Would your response be different if I'd been commissioned to implement the RelProject for a client rather than initiating it myself? If you're arguing that high-level facilities aren't needed to implement simple CrudScreen<->DBMS->Report applications, then you're probably right -- all the heavy lifting has been done for you in implementing the CrudScreen infrastructure, the DBMS, and the report generator -- where such facilities are unquestionably of value. However, it can be argued that any high level configuration task, built on top of powerful infrastructure, is the same. You don't need high-level programming facilities to create a Web page either (HTML will do), but they may be of considerable value in implementing the Web browser or Web server. You don't need high-level programming facilities to create a spreadsheet, but they may be of considerable value in creating the spreadsheet software. And so on. If that's the substance of your argument, then you're comparing apples to oranges.
- [More like comparing apple-skins to apples, as though the former would exist without the latter. His use of programming languages is only skin deep. Top doesn't implement libraries... he only uses them.]
- That's hogwash. I have a lot of libraries to assist with repetitious and common tasks (domain abstraction). See HelpersInsteadOfWrappers. I've even built report writers and graphing utilities when budgets kept me from buying them off-the-shelf (although I don't generally recommend such a practice). --top
- The analogy is excellent. Merely piping the occasional bit of reusable application code into a library has very little to do with creating whole systems.
- Please elaborate. There is a communications gap here.
- The impression I get is that you create libraries as an adjunct to creating specific end-user applications, even if those libraries are for generating reports or graphs. You created those for a specific project, yes? This is very different from creating whole stand-alone systems intended for deployment in a wide variety of projects.
- As explained in HelpersInsteadOfWrappers, attempts at "generic" cross-project abstractions generally either become bloated feature pack-rats and/or fail. Others have noticed this also in ReuseHasFailed. But a lot of code is sharable across projects, just not as-is.
- Essentially non sequitur, that is. Do you agree with my assessment: You do not do systems programming?
- No, I don't do systems programming. And I agree that techniques that don't work well for custom biz apps may work well in systems programming. If that is what you wanted to know, why didn't you ask instead of accusing me of dastardly deeds such as "non sequitur"? Grow some people skills. You find every opportunity to accuse me of something, like a wife during her period. It's a bad and non-productive habit that just generates resentment and retaliation.
- "Non sequitur" isn't a "dastardly deed." I merely indicated that your reply was irrelevant. As for "people skills," your "like a wife during her period" is about the worst example of "people skills" I've seen in a while, even if intended as a (poor taste) joke. Shameful.
- I readily admit that I retaliate after being insulted. I don't do it with the illusion that revenge will or should somehow fix you, but merely because it satisfies some primal sense in me. But I do not normally initiate personal insults and try to focus on complaints against specific issues. I believe that KeepCriticismNarrow reduces flame-wars.
I think your statement that CRUD-based applications are inherently "simple" is false. But if you are claiming that certain techniques embed certain features into base tools such that application-level developers don't really need to use those features to gain a significant advantages, then we are perhaps in agreement. "Certain techniques help certain niches more than others". I don't have to implement B-trees because Oracle did it for me and packaged it behind a query language, for example.
I didn't mean to imply that all CRUD-based applications are inherently "simple". They aren't, as we both know. Those that are complex, however, might well benefit from high-level programming facilities. I only meant that those CRUD-based applications which happen to be simple may not benefit from certain high-level programming facilities. However, given your agreement, I'm curious: Why do you debate against OO, closures, and whatever else at all? If it's clear that you're not talking about systems programming, and you don't do systems programming, then why are you arguing against the choices of those who are clearly doing systems programming? It strikes me, therefore, that you're arguing against experts from a position of ignorance.
I should note that in the future, this may change somewhat. For example, if I was going to implement a web browser, I might be tempted to use a database in place of DOM. The main reason such is not done now is for performance-related issues. But maybe 20 years from now the hardware will be up to snuff such that one could get away with it and produce a very nice browser. C is often used where assembler would have been used in the past for systems software and embedded systems. Thus, we already see a form of this. See AbstractionInversion
I'd be curious to know what benefit you intend to gain by implementing an arbitrary tree structure (the DOM) in (presumably) a relational database which is notoriously unsuited to ad-hoc queries and manipulations of tree structures, but that's probably OffTopic for this page.
I would tend to make it less tree-centric if given a choice. But anyhow, tree-friendly features can potentially be added to a query language. I think we discussed this somewhere already. Will link when found.
[In any case, even though your 'rocket' example was never a valid response to the above request for the three reasons outlined at the top of this section, I'll amuse myself and play Devil's Advocate a bit: I say that the rocket-example properties are all "micro" rigor because you're talking about properties regarding "very specific" applications of the rocket. I.e., you're not considering how quickly, safely, and accurately the rocket travels through water or rock. Because of this, you can't make any claims - certainly not "rigorous" ones - about the "net advantage" of using rockets. Can you even prove that the rocket is "net" better than a spade?]
You are creating silly-season examples. Let's limit the scope to "outer space" if it keeps you from wandering into the petunia patch. If and when a vendor or tool zealot claims a universal vehicle, then and only then does testing in water and rock become an issue.
[I'm only following your example of assuming each claim about software or anything else has universal extent until explicitly stated otherwise.]
I came here as a human being, not as a lawyer. (The implication of mutual exclusion is semi-purposeful ;-)
GrokTheCode? - Types as a SilverBullet
I don't believe in GoldenHammers, but SilverBullets have existed aplenty in the past and I believe more will exist in the future. (The author of NoSilverBullet claims that most AccidentalComplexity has been taken care of by existing language features. I don't believe the guy.)
Measuring that isn't particularly easy. A few possible pseudometrics: At the software level, examining AccidentalComplexity code volume ratios might be a start (including, for example, all parsing and translation just to interface with other software as 'accidental complexity' of communication). At the language level, one can also examine syntactic noise (syntax with no meaning), semantic noise (semantics that don't directly represent programmer intention), and even use of context (degree to which code fragments can or cannot be understood in isolation; global variables, coupling into other components or modules (CouplingAndCohesion), and of course context-sensitive grammars can all affect the ability to understand (and therefore construct or maintain) a piece of code in isolation).
Context-dependent things may increase perceived simplicity when they are examined in isolation, but tend to increase AccidentalComplexity in everything that uses them. Based on past conversations, I feel you focus so much on the 'simplicity' of one format in isolation (e.g. FlirtDataTextFormat) that you forget to examine how it fits in and inflicts costs upon the rest of the system. After concluding the thing is 'simple', you make accolades for it about the value of grokkability and forget to ever examine whether it actually made things easier on you. When you come along later and start saying complex tree-types ought to be broken down in order to make them easier to represent in FlirtDataTextFormat, I say you've got everything backwards: you're inflicting massive extra complexities on both the Database and the Application just to maintain a simpler communications format. I can't offer statistics, but I can say that a significant chunk of all AccidentalComplexity in programs today has to do with parsing, translating, and serializing communications (be it from a GUI or a relational database).
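(An illustrative sketch, not from the original thread: the "use of context" pseudometric can be shown in hypothetical Python, where a fragment that reads hidden module state cannot be understood or tested in isolation, while an explicit-parameter version can. All names here are invented for illustration.)

```python
# Hypothetical sketch of the "use of context" pseudometric:
# code that depends on hidden context is harder to grok in isolation.

tax_rate = 0.07  # module-level state: hidden context


def total_with_context(price):
    # Cannot be understood (or tested) without knowing the global above;
    # moving this fragment elsewhere silently changes its meaning.
    return price * (1 + tax_rate)


def total_in_isolation(price, rate):
    # Every input is explicit; the fragment stands alone.
    return price * (1 + rate)


print(total_with_context(100.0))
print(total_in_isolation(100.0, 0.07))
```

Both calls produce the same number today, but only the second function's behavior is fully determined by its own text - which is the property the pseudometric tries to measure.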
I believe StrongTyping (even if it is DynamicTyping, though I favor SoftTyping) is a much better decision than 'type-free' or 'type-light' for reducing AccidentalComplexity and thereby improving system grokkability (making a tradeoff against communications data-format simplicity). StrongTyping for communications between services immediately avoids the need to perform complex parsing and serializing efforts, leaving only translation. Much of the time even translation can be made easy by defining conversions between types as a OnceAndOnlyOnce effort (and this can be done implicitly, avoiding language clutter or coupling). Even easier is defining automated conversions between values of the same 'type' with differing representations, since one can simply use a high-level encoder/decoder made accessible via a URI.
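(A hedged sketch of the OnceAndOnlyOnce conversion idea in hypothetical Python; the registry and function names are invented for illustration. Conversions between types are defined once in a table, and every service boundary reuses them instead of writing ad-hoc parsing code at each call site.)

```python
# Hypothetical sketch: type-to-type conversions registered OnceAndOnlyOnce,
# so translation at service boundaries needs no per-call-site parsing code.

CONVERSIONS = {}  # (from_type, to_type) -> conversion function


def conversion(src, dst):
    """Decorator registering a conversion rule exactly once."""
    def register(fn):
        CONVERSIONS[(src, dst)] = fn
        return fn
    return register


def convert(value, dst):
    """Implicit-style coercion: look up the single registered rule."""
    if type(value) is dst:
        return value
    fn = CONVERSIONS.get((type(value), dst))
    if fn is None:
        raise TypeError(f"no conversion {type(value).__name__} -> {dst.__name__}")
    return fn(value)


@conversion(str, int)
def str_to_int(s):
    return int(s.strip())


@conversion(int, str)
def int_to_str(n):
    return str(n)


# Any service boundary can now coerce via the shared table:
print(convert("  42 ", int))
print(convert(7, str))
```

The point of the sketch is the shape, not the mechanism: the rule lives in one place, so adding a new pair of representations is one registration rather than edits scattered across every interface.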
It just so happens that TypeSafety (defined not so much in terms of types as a guarantee that no 'undefined' operations occur) is also critical for security (in the sense that you cannot analyze security if your system does undefined things). And it happens to be extremely useful for performance optimizations (thus reducing the AccidentalComplexity of performing many powerful optimizations).
Ideally, individual services can be cut down to performing just essential communications and calculations. However, such things as optimizations and concurrency and modularity and communications management still introduce yet more AccidentalComplexity that better language design can fix. Use of types takes care of one big chunk, but there are plenty of others left. There are plenty of SilverBullets for those who look... not all of them apply to all situations, but many of them apply to reasonably large domains desiring a variety of common features.
Examples with specific UseCases and/or domains would be helpful to illustrate your points. I disagree with most of the claims about strong typing. However, this may not be the appropriate topic to get into such details. May I suggest ExamplesOfStrongTypingImprovingDeveloperProductivity?.
Such examples have been offered to you many times before (e.g. in CrossToolTypeAndObjectSharing). It has never stuck before, and I doubt it would stick if I tried again. Part of the problem, I think, is that whenever it comes to a subject you have a strong opinion on (like types), you fail to keep non-functional but still important requirements in your head due to all the rage you feel. Real productivity must be measured in terms of getting the features correct while also maintaining or improving performance characteristics, reliability, and various other NFR properties. E.g. you make it clear that you're willing to sacrifice performance, sacrifice ease of writing comparisons between matrices or tree-values (or even dates) in relational, sacrifice OnceAndOnlyOnce and grokkability of the code the customers write, and on and on... just to be rid of extra complexities inside the DBMS and ODBC libraries in the form of supporting or sharing datatypes or complex values. Examples of this can be found in DoesRelationalRequireTypes. I don't consider your views on this particular subject, or your disagreements, to be anything near 'rational'. For now, I'll just AgreeThatYoullDisagreeWithMeNoMatterWhatISayOrWhatEvidenceIPresent.
CrossToolTypeAndObjectSharing is still on-going. Do you have something older that can be used as an example, such as PayrollExample or ChallengeSixVersusFunctionalProgramming?? And, I'm not sure what your tree-value comparison claim is about. I don't dispute that dynamic techniques make comparisons a little more verbose, but I'm weighing the whole shebang, and dynamism wins net. Comparisons need a lot of local tuning anyhow, such as case conversion and space trimming. If we provide abstractions to simplify those, we can put base-type issues into them also with little loss in total net verbosity. Plus, different databases are often inconsistent in their typing anyhow, such that we have to translate as needed. We cannot control the world, but can deal with it locally for local needs. Note that many others favor dynamic languages also. You cannot claim I am a loner on that one. Your implication is that us dynamers are dumb or naive. If your type systems are so great, then why didn't you use them to beat PaulGraham and become the rich dude instead of him? --top
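(A hedged sketch of the "abstractions to simplify those" suggestion above, in hypothetical Python with invented helper names: trimming, case conversion, and base-type coercion all folded into one loose-comparison routine, so the dynamic approach costs little extra verbosity per comparison.)

```python
# Hypothetical sketch: one comparison abstraction that absorbs the
# "local tuning" of case, whitespace, and base-type differences.

def normalize(v):
    """Coerce a value to a canonical string form for loose comparison."""
    s = str(v).strip().lower()     # trim spaces, ignore case
    try:
        n = float(s)               # absorb base-type drift across DBs
        if n.is_integer():
            return str(int(n))     # "5.0", 5.0, and 5 all become "5"
        return str(n)
    except ValueError:
        return s                   # non-numeric: compare as cleaned text


def loose_equal(a, b):
    return normalize(a) == normalize(b)


print(loose_equal("  Smith ", "SMITH"))   # True: trimmed, case-folded
print(loose_equal(5, " 5.0 "))            # True: numeric forms unified
```

Whether this wins "net" over declared types is exactly the debate above; the sketch only shows that the per-comparison verbosity can be pushed into one shared helper.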
- Contrary to what you seem to believe, I support dynamic languages. I just happen to believe that types (foregoing ManifestTyping) are ultimately more dynamic, and can be used to avoid such AccidentalComplexity as dealing with 'case conversion and space trimming'. And, be your memory selective or merely shoddy, if you cannot recall the tree-value comparison mentioned in CrossToolTypeAndObjectSharing then please look it up again and work through the examples presented there.
[If you don't understand the tree-value comparison example, which makes the problems of table-based "type" definitions staggeringly clear, then you're obviously not at the level of understanding ComputerScience that is generally expected on this forum. For heaven's sake, if you don't understand the example, do what high-school mathematics students do and work through the example on your own. If you still don't understand, then come back and ask questions. Furthermore, why do you feel the need to switch to PayrollExample and suchlike? As for "many others favor dynamic languages also," that's argumentum ad populum, and your comparison to PaulGraham is yet another tired version of the juvenile, "if you're so smart, why ain't you rich?" Please, give it a rest, get the fundamentals under your belt, and go learn some ComputerScience. Your knowledge of it is sorely lacking, which is the cause of most of these debates.]
Your knowledge of practical evidence and science is sorely lacking. You are misusing ComputerScience. At the time of writing, the tree example was not a realistic example, just a statement of a hypothetical problem with A, B, C instead of real domain variables. Your repeated detachment from real-world requirements is telling. You are detached, and probably swimming in semen from MentalMasturbation.
- [I'm sure the tree example can be trivially extrapolated to a number of practical scenarios, but one that comes immediately to mind is storing pre-existing HTML documents, with the facility to easily extract relevant document components (e.g., document header, title, document body, meta-tags, etc.), while being able to detect duplicates -- as one often needs to do when the same documents are referenced by multiple URLs (hence, the URL itself is not suitable as a primary key). Another example might be storing static source-code snippets, whilst being able to trivially query for relevant elements of the code and (again) detect duplicates. However, I would argue that even simple, well-known, commonly-used examples like complex numbers, times, dates, geometric shapes (often used to represent product dimensions, e.g., windows, doors, storage tanks, roof supports, etc.), and geographical locations are sufficient justification for proper user-defined type support in DBMSes and application languages. It's been amply shown on CrossToolTypeAndObjectSharing (and, no doubt, elsewhere) that support for user-defined types results in simpler, clearer, and less-duplicated code than the equivalent without it.]
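(A hedged sketch of the duplicate-detection use case above, in hypothetical Python; the Node type is invented for illustration. A user-defined structured value with value semantics gives structural equality and de-duplication for free, with no hand-written comparison code per table.)

```python
# Hypothetical sketch: a tiny document-tree value type whose equality
# and hashing are structural, so duplicate documents stored under
# different URLs can be detected without bespoke comparison logic.

from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    tag: str
    text: str = ""
    children: tuple = ()   # tuple, not list, so the value is hashable


doc_a = Node("html", children=(
    Node("head", children=(Node("title", "Report"),)),
    Node("body", "Q3 results"),
))
doc_b = Node("html", children=(
    Node("head", children=(Node("title", "Report"),)),
    Node("body", "Q3 results"),
))

# Value semantics: same structure means equal, regardless of identity.
print(doc_a == doc_b)        # structurally identical -> duplicate found
print(len({doc_a, doc_b}))   # a set collapses the duplicates
```

The analogous DBMS feature would be a user-defined type whose comparison operator is defined once with the type, rather than re-implemented in every query that needs to detect duplicates.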
Your repeated inability to associate abstractions with real world requirements is telling. He simply sees immediate application, whereas you, being largely ignorant of the wider variety of programming domains, do not.
Until the time they can download our brain state into a simulator, we ALL live one and only one life. This naturally limits our experience. Some of us have a lot of experience in one or a few domains, and some of us have a little experience in lots of domains. Nobody has lots of experience in lots of domains because people just don't live that long. You obviously don't have a lot of custom biz-app (CBA) experience, otherwise you'd be able to jack out an example from that domain without the need to beat around the bush and make excuses.
If you claim universal domain application to your HobbyHorses, then you need to explore your techniques in the CBA domain. Otherwise, you'd be skipping one of the largest domains. If you are not experienced enough in CBA to produce a CBA example without the flak, then how do you know it's universal? It appears you are guilty of excessive extrapolation.
This would be a logic error on your part. The ability for a solution to benefit some niches without harming others is universally better - i.e. better overall - for a tool designed for general purpose. This is trivial to prove with, say, types for the CBA domain, because I can very easily prove that all the types you currently have available for CBA today are still available in whatever solution I'm dreaming up. Therefore, all I need to do is show it better for some other domain or niche, and I'm covered: universally better has been proven and I'm not 'guilty' of any logic errors... at least if you agree with the premises that an RDBMS should be domain-generic and should avoid LanguageIdiomClutter. Your belief that I need to prove a solution better in every domain violates at least one of those premises. Which one is it?
I suppose you could argue that if a rocket is tested on every planet except Uranus, then one is safe to extrapolate to Uranus because Uranus is not different enough from the other planets for there to be sufficient unknown or untested factors or conditions. But there are two problems with this. First, you haven't presented any evidence that you've tested a wide enough variety of domains to be equivalent to 7 planets. Second, the domains are probably different enough that the analogy does not apply. For example, CBA is one of the heaviest users of databases, which are an integral part of (good) CBA, but much less so in other domains. (I suppose you could argue that databases are not needed, but for at least the medium term, businesses will continue to store most of their data in them even if you proved your magic bullet can replace them.)
- Technically, you may be correct. But that still does not mean that closures etc. are objectively better for a given niche. You also imply that we should do away with domain-specific languages. Otherwise, if we have them, there is no need to pack them with features that are only net useful in other niches. I do agree that in theory the relational engine should be separate from the "domain math" (see DoesRelationalRequireTypes). However, I expect the demand for such is rather small.
- I wouldn't say we should "do away with" DSLs, but using DSLs in manners that end up requiring duplicating efforts per domain (violations of OnceAndOnlyOnce) and generally requiring greater integration efforts (hurting CrossToolTypeAndObjectSharing) is not a solution path that I can consider wise. DomainSpecificLanguages are for domain-specific functionality. Unless you consider RDBMSs and FileSystems to offer domain-specific functionality, you should go for the more generic solution. . . at least after it has been proven useful in a few domains to avoid premature abstraction (as has the RDBMS, the filesystem, workflow languages, communication in general). As far as "pack them with features", the trick to avoiding LanguageIdiomClutter is to find generic features that can be specialized to individual domains. And as far as "the demand for such is rather small" goes, I can tell you from repeated experience that demand for a feature increases most rapidly after it has been provided.
A large part of CBA being the heavy database user is, I am certain, because businesses have a lot of money and a lot of forms and thus drive the databases to meet their needs (which largely means strings and dates and counts and currency). But there is no small part of it because current RDBMSs suck for systems and science domains, driving them to other solutions. They will, likely, continue to do so until they effectively (without sacrificing too many important NFRs) support a wider variety of types and features. If you wish to avoid LanguageIdiomClutter, the right solution to this is a generic one: to find a feature that can be used to effectively produce the necessary domain features. Also, if you wish to promote sharing or common solutions across RDBMSs, you'll need to embed this in a standard DML itself rather than rely too heavily on embedding procedures and triggers and modules written in some other language. One known generic solution to these problems is support for types and complex values.
Further, if you feel we cannot communicate any further unless I first live a xerox copy of your own life, then this discussion is over.
You don't need to know my life. I'm certain we could communicate if you decided to simply start implementing some of the more complex ideas you have, like TQL, and put forth the necessary effort to learn what you need to know to do it. If you aren't willing to do that much, though, then if you at least achieved the equivalent of a Bachelor's Degree in modern ComputerScience, I'm certain we could communicate more effectively.
I do have a B.S. in ComputerScience and graduated with honors. However, I've forgotten a lot of the theory, and also it was a university that prided itself in practical experience over theory, for good or bad. Further, I specialized in Graphics/CAD instead of compilers. Implementing TQL would not be representative of CBA for the most part. I've already agreed that some of your favorite techniques may shine better for systems software. Why do I have to keep repeating that? If you remembered that, then you wouldn't have made the TQL comment, or at least anticipated my response to it. This after accusing me of a poor memory. Glass houses, dude.
I have to keep repeating it because, despite you occasionally giving lip service to the notion that systems software may require different approaches, you still treat the whole metafield as utterly irrelevant in arguments that involve design criteria for general-purpose programming utilities. When you start giving it proper thought and stop automatically assuming that CBA is the only thing worth representing in an argument, it will stop being repeated to you.
If you wish to show practitioners the benefits of your claimed GoldenHammers, you need to produce practical demos with change-impact analysis etc. In fact, it may not even have to be in CBA. Just select any niche that doesn't require a lot of up-front description of the domain. Thus, you can no longer accuse me of limiting the discussion to CBA. (However, the further the example(s) from CBA, the less practitioners will relate.)
We appear to be at a stalemate here. You demand that practitioners learn more theory and practitioners demand that you produce coded examples in a relevant domain. Road-testers versus eggheads. Or as others have put it, US style technology proving versus Euro-style.
Further, you imply there is a canonical single set of theories that proves you right. The field is actually more diverse and there are competing theories. See "Vetting" in BookStop.
Practitioners that aren't willing to learn PLT or consider solutions relevant outside their little niche and such really have little business voicing their ignorant opinions on the design or implementation of general-purpose language utilities. They can offer some useful UserStories, but that's about it; when it comes to these utilities they're customers, not designers. They have their micro-agendas and can't see the big picture. Your micro-agenda happens to be CBA. I don't demand that practitioners learn more theory. I demand they learn more theory OR get the hell out of the way. Huge difference. Negligent ignorance has no place in rational argument.
If a practitioner in domain X makes claims about their pet tools in domain Y, you are right in that they often need to be corrected or taken with a grain of salt.
And I didn't intend to "imply there is a canonical single set of theories that prove [me] right". I said we could communicate if you knew ComputerScience theory, not that we'd agree on everything.
I could say the same about putting in effort and skill to produce good examples that illustrate your claims. You have not proven them un-illustratable. Just because YOU don't know how (yet) does not mean it cannot be done.
I feel you tend to seek superficial excuses to claim examples aren't 'good' in order to avoid confronting them, as with the tree example in CrossToolTypeAndObjectSharing. I am not under the impression that the problem is on my end.
You just call it "excuses" because you are too lazy to address the issues face on.
I call it "superficial excuses" because they generally aren't relevant to whichever problem the example is intended to demonstrate.
Untrue. For example you keep trying to find every trick in the book to try to get SystemsSoftware admitted as being sufficient to illustrate universality instead of just rolling up your sleeves and doing a biz example instead. YOU are the one making excuses, not me.
That is not a trick or excuse. It is straightforward reason and logic that leads to the conclusion that SystemsSoftware is sufficient to illustrate universality when designing domain-generic tools. I simply operate under the premises that CBA is pretty well covered by existing RDBMSs (because CBA is where the majority of the money came from to develop existing systems) and that to enable data-driven design in other fields the RDBMS shouldn't be domain-specific (are you arguing this point?). These logically lead to the conclusion that my tool doesn't need to prove better than existing systems for CBA; it only needs to be no worse for CBA and better for other fields... such as SystemsSoftware. If the solution would somehow provide a tangible benefit for CBA as well, that's just icing on the cake.
- "Shouldn't be" and "is" are too different things. Existing RDBMS generally don't allow user-defined types and functions. Thus, they tend to be shunned by other niches meaning that stuff done in the DB is instead hand-programmed in such niches.
- True. What should be and what is are two different things, maybe even "too different" as say you. That said, progress is measured in slow plodding steps from "is" to "should be". Other 'niches' shun the DBMS because it doesn't do what they require of it. That should be changed.
[Indeed. Having extensively developed both systems software and custom business applications, I cannot envision any situation where an added advantage to one would be a disadvantage to the other. I suspect that most improvements applicable to either could be a benefit to both; in fact, I see no reason to treat custom business applications and systems software as distinct, except to add clarity to the viewpoints and priorities that operate in some of these debates. Ultimately, software development is software development, regardless of the domain. The same issues apply to all programming, though some may be weighted more highly in one domain than another. However, if you don't like (for example), user-defined type support in your DBMS, or object-oriented programming, or closures, then feel free not to use them. That doesn't mean they're inherently bad, especially for those working in other domains (or even the same one) who clearly think they're good.]
I have to disagree that one-style-fits-all. For one, systems software requires BigDesignUpFront. Often in custom biz apps, the requirements are either poorly defined or likely to change because the customer changes their mind when they actually use the result. Nimbleness and timeliness are usually ranked more important than preventing bugs. Of course, it also depends on the nature of the app. Something that processes money has to be planned and programmed more carefully than, say, an interactive drill-down marketing research report. Further, there is the common biz need to mix different tools and languages but use the same data. You do things differently when there is sharing versus all state birthing and dying in the same EXE. --top
[In my experience, whether BigDesignUpFront is appropriate or not is project-dependent. Some of the cutting-edge thinking in this area is meta-methodological; it chooses the (perceived) best approach depending on the circumstances, and does not treat any approach as The One True Way (including meta-methodological thinking!) I've seen systems software successfully built via ExtremeProgramming, BDUF, and other approaches, and I've seen custom business applications built via ExtremeProgramming, BDUF, and other approaches. New programming languages, for example, are probably best designed via BDUF. However, if I were commissioned to build a Web Server (or a DBMS for an existing language), I would use ExtremeProgramming. In short, development methodology is independent of systems programming vs end-user application programming.]
Without hard data, we'll just have to AgreeToDisagree. Myself, although I like dynamic and type-loose/free languages, I would not feel comfortable creating an air-traffic control or Mars mission life-support system using such, preferring instead an "anal" kind of language. There's a tipping point where reliability outweighs cost and timeliness. --top
[ExtremeProgramming does not demand, nor is it even related to, dynamic and "type-loose/free languages." Certain refactoring tasks may, in some cases, be simplified by dynamically-typed languages compared to manifest-typed languages (which must be strenuously distinguished from statically-typed languages employing type-inference), but absolutely nothing precludes applying ExtremeProgramming practices in statically-typed and even manifest-typed languages.]
If I implied they were related, I apologize.
(Borrowed from ObjectiveEvidenceAgainstGotosDiscussionTwo)
But, unlike you, I believe that macro-rigor doesn't exist in any science, and thus that its non-existence in ComputerScience is a complete non-issue. I think it's just hand-wavy mumbo-jumbo you invented to avoid reasonable debate. The only thing I know that matters is micro-rigor as applied to known and predicted micro-situations.
Proponents of contentious topics such as heavy typing, heavy OOP, thin-tables, etc. generally word their convictions as more or less absolute, not spot-per-spot situational. If you are not one of those absolutionists, then this [issue of "macro"] may not apply to you. --top
(A reminder about not confusing 'absolutist' with 'absolutionist' was given, but turned into a FlameWar, so has been removed.)