Where and how much does psychology matter in software engineering?
(Used to be named "PsychologyMatters".)
This is the view that, outside of issues of machine performance (speed and resource usage), "software engineering" is primarily about psychology and fitting the human mind(s) of the developers. The better the fit, the better the productivity. The implications of this viewpoint include:
- Since every brain is different, the "best solution" may be different for each developer. DevelopersFromAndromeda stretches this idea even further.
- Since psychology is currently a soft science, a rigorous search for the One Truth may be a long-time coming. (Related: DisciplineEnvy)
- Use of theory alone may be of limited value because the theory's impact on psychology becomes the overriding concern, not the "elegance" of the theory itself. Focusing only on math-like rigor (what is easily measurable) risks the SovietShoeFactoryPrinciple, because psychology is not yet a rigorous field.
- The big question still remains: should one change one's mind to fit the machine/theory/model, or change the machine/theory/model to fit minds? (Related: MindOverhaulEconomics.)
Attempting to make this BickerFest? into something more productive...
Hard vs. Soft Psychology
I've noticed several times in this discussion some advantage of 'hard' psychology being pointed out as though it were a point in favor of 'soft' psychology mattering. Doing so is a form of equivocation, and it needs to stop immediately if a discussion on WherePsychologyMatters is to be productive.
One might generally distinguish fields of psychology into two classes: 'hard' and 'soft'. These are perhaps over-general, but they are a well-accepted distinction among psychologists. Hard psychology is concerned with metrics and statistics, and includes studies of behavior, perception and pattern recognition, memory, conditioning and learning, response and reaction times, error rates while under stress or pressure or various drugs, etc. Hard psychology also has some strong relationships with information theory, computation theory, and language theory. Hard psychology does not concern itself with the subjective, which it deems inaccessible. Soft psychology concerns itself with aesthetics, tendencies, preferences, beliefs, feelings, etc. - e.g. making a user interface look 'cool'. It tends to be less formal, and includes a great deal of speculation.
I don't believe anyone will contest that 'hard' psychology matters, and that one should take such things as reaction times and error rates and learning curves and color-blindness into account when designing tools. These are matters of psychology, certainly, and are a place WherePsychologyMatters. But they are also very objective, and shouldn't be conflated with 'psychology' in general if one is to then make conclusions regarding the value of 'soft' psychology.
This page purports that various aspects of what is (at least currently) 'soft' psychology 'matter'. That is, Top proposes that tools aimed at subjective 'preferences' and such provide significant real, objective benefits. Of course, 'soft' psychology lacks the sort of metrics, data, and predictive models possessed by 'hard' psychology that would be necessary to substantiate this belief. Until such are provided (thereby moving 'preferences' et al. firmly into the 'hard' psychology camp), the notion that soft psychology 'matters' is itself a matter of pure speculation.
At the moment, we can't substantiate any claim about where, why, or how much "soft" psychology matters when it comes to tool design... but we also can't substantiate a claim that it doesn't matter. Therefore, whether it is a 'Truth' or a 'Lie' remains an open question.
Hardening "Soft" Psychology:
Part of formulating and verifying any predictive model is controlling a variable and observing consequences. There isn't any reason one cannot do this with 'preferences' or 'tendencies' or anything else that is currently part of 'soft' psychology. One can tweak variables and make predictions as to how the population, subgroups of the population, and even individuals will respond to this tweak. Upon measuring the (observable) response (where stated impressions qualify as 'observable'), one can validate or invalidate the model and potentially tweak it to improve its quality.
Doing so iteratively, one could come up with models that will, with a known accuracy, precision, and confidence, successfully predict what individuals, groups, and populations will find (or at least claim to be) 'cool' or any other desired impression.
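The tweak-predict-measure loop described above can be sketched as a simple controlled experiment. Everything below is a hypothetical illustration: the variants, response rates, and sample sizes are made up, and stated impressions are simulated with random draws rather than gathered from real subjects.

```python
import random
from statistics import NormalDist

def preference_rate(sample):
    """Fraction of respondents reporting a positive impression."""
    return sum(sample) / len(sample)

def confidence_interval(sample, confidence=0.95):
    """Normal-approximation interval for an observed preference rate."""
    p, n = preference_rate(sample), len(sample)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    margin = z * (p * (1 - p) / n) ** 0.5
    return p - margin, p + margin

# Hypothetical experiment: predict that variant B (tweaked styling) will be
# preferred over baseline A, change only that one variable, then measure
# stated impressions.
random.seed(42)
variant_a = [random.random() < 0.55 for _ in range(500)]  # simulated responses
variant_b = [random.random() < 0.65 for _ in range(500)]

lo_a, hi_a = confidence_interval(variant_a)
lo_b, hi_b = confidence_interval(variant_b)
# Clearly separated intervals (in)validate the prediction at known confidence.
print(f"A: {lo_a:.3f}..{hi_a:.3f}  B: {lo_b:.3f}..{hi_b:.3f}")
```

Iterating this (tweak, predict, measure, refine) is exactly how a 'soft' claim gets a known accuracy and confidence attached to it.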
It is worth noting that expert confidence men, magicians, cold readers, martial artists, professional counselors, interior decorators, etc. are already impressively well trained in this sort of prediction... and that even untrained individuals have learned to predict much of this to a lesser degree simply through exposure. This suggests that much of 'soft' psychology can, in fact, readily be 'hardened'. Further, many of these individuals can work with whole audiences and do their work with very little advance knowledge of their audience, which implies that variation among individuals either isn't all that important or that individuals can be readily pigeonholed (e.g. into 16 classes of people, as per MyersBriggsTypes).
Unfortunately, very few of them know the ins and outs of those models they've built in their heads (known from 'hard' psychology: brain structures are based on associative recognition and are awful at recall), and several classes of those that might know (magicians, confidence men, etc.) aren't sharing.
If we were to sufficiently 'harden' the models these users possess, especially those that don't depend heavily on knowledge or models of particular individuals, we could probably create something like a MindControlWithDerrenBrown-influenced HCI. Given a learning machine, we might be able to further usefully keep an ExplicitUserModel, so that the computer knows what you find 'cool', knows what you want to do next, and knows which subtle reminders will influence you into avoiding common classes of errors while programming.
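A minimal sketch of what such an ExplicitUserModel might look like, assuming nothing more than observed action sequences. The class name, actions, and sessions below are all hypothetical; a real system would need far richer features than a first-order habit model.

```python
from collections import Counter, defaultdict

class ExplicitUserModel:
    """Sketch: learn a user's habits by observation, then predict the
    next action from the current one (first-order frequency model)."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev_action, next_action):
        """Record one observed action-to-action transition."""
        self.transitions[prev_action][next_action] += 1

    def predict_next(self, current_action):
        """Most frequently observed follow-up, or None if unseen."""
        follows = self.transitions[current_action]
        return follows.most_common(1)[0][0] if follows else None

model = ExplicitUserModel()
# Observed editing sessions: this user habitually tests after saving.
for session in [["edit", "save", "test"], ["edit", "save", "test"],
                ["edit", "save", "commit"]]:
    for prev, nxt in zip(session, session[1:]):
        model.observe(prev, nxt)

print(model.predict_next("save"))  # most observed follow-up: "test"
```

The prediction could then drive the "subtle reminders" mentioned above, e.g. prompting a test run after a save.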
Until it is sufficiently hardened, it will be difficult to directly take advantage of "soft" psychology by any automated mechanism. But it might be a tad unfair to argue that all approaches then are 'pure' speculation: we can, with some unknown-but-known-to-be-much-better-than-random probability, usefully take advantage of our 'informal' exposure-based training regarding how humans (as individuals, groups, and populations) will respond to a wide variety of stimuli. Such educated guesses are no replacement for metrics (and I 'anticipate' Top would be among the first to agree), but they are better than nothing. I.e. what we imagine people will find cool, people will likely find cool. We might be wrong, or we might fail to be competitive with the 'coolness' in competing products, but even if we're wrong we're probably closer to 'cool' than to 'white noise'.
Not that any of this diminishes the relevance of already 'hard' psychology or non-psychology properties like safety, security, optimization, correctness, etc. - though one might point out that those are in some ways already accounted for: a calculator program that gives bad results, crashes often, or takes several minutes to number-crunch '1+1' is, very decidedly, uncool.
So, for which fields does 'psychology' - especially this 'soft' psychology - matter? One obvious answer is: selling. I.e. 'coolness' sells. So does 'familiarity'. One can validly consider ProgrammingLanguages to be UserInterfaces between a programmer and a computer, and perhaps a 'cool', 'familiar' ProgrammingLanguage would sell better than one that is stuffy but 'better' by most objective measures (more correct, safer, better optimized, more secure, better integrated concurrency and communications management, less verbose, catches more classes of typographical and logic errors, etc.).
While we can agree (assume) that soft psychology will likely influence the popularity of a UserInterface, what we cannot support is what Top proposes at the top of this page and among various others: that soft psychology somehow matters when it comes to its productivity.
Hard psychology is a bit more forthcoming here. E.g. given a 99x99 matrix, we can assert with a great deal of well-researched confidence that the latency between highlighting a point in that matrix and selecting it is much lower when selecting by mouse than when navigating to it by arrow keys or entering the four characters specifying its location. This suggests that for latency-relevant tasks, such as shooting virtual avatars in the head with virtual rocket-propelled grenades, a mouse is a better choice than a keyboard. OTOH, complex mouse gestures and patterns tend to become difficult for humans to get right... and so keyboard macros may be a better decision.
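The mouse-versus-keyboard latency claim is the sort of 'hard' result Fitts's law captures: predicted pointing time grows with the log of distance over target width. A sketch follows; the constants a and b are illustrative placeholders, not experimentally fitted values, and the per-keystroke cost is likewise invented.

```python
from math import log2

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law: predicted time (seconds) to point at a target of a
    given width at a given distance. The device-specific constants a and
    b must be fitted from experiment; these defaults are illustrative."""
    return a + b * log2(distance / width + 1)

# Selecting a cell in a 99x99 matrix: one aimed mouse movement versus a
# long run of arrow-key presses (200 ms per press, also illustrative).
mouse_time = fitts_movement_time(distance=400, width=8)
key_time = 50 * 0.2  # 50 presses to cross half the matrix
print(f"mouse ~{mouse_time:.2f}s, arrow keys ~{key_time:.1f}s")
```

With any plausible constants the single aimed movement wins by a wide margin, which is why this is a well-researched rather than speculative claim.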
'Productivity' is determined based on the need to accomplish particular goals, of course, and it could be that soft psychology is more significant to accomplishing some goals than others. However, determining which goals those might be, and estimates at the degree to which soft psychology is 'more' relevant than the hard-psychology and math/information-theory/computation-theory stuff, seems to be more 'speculation' than anything else. We can't rely on our 'informal' training for this because we are not naturally encouraged to distinguish those difficulties and accomplishments that arise from our subjective nature from those that are either objective or arise due to our own incompetence.
SovietShoeFactoryPrinciple of "Soft" Psychology
Top often harangues on the SovietShoeFactoryPrinciple being a problem associated uniquely with a focus on 'objective' measurements. But consider for a moment what happens where weight is given instead to 'soft' psychology measurements: subjectives, opinions, views of how people 'find' a product. Going far in this direction, it isn't difficult to imagine products that do nothing productive at all, but instead hypnotize, cajole, threaten, deceive, and impassion users into believing and promoting the product as good. Essentially, you'd be encouraging the technical equivalent of a culture of yes-men and sophists: don't be good; it's much easier to spend money on making people think you're good and your competition is worse. A positive impression from the boss is more important than a useful opinion or a valuable warning. Dressing well is more important than doing good work.
Ensuring my flame-proof undies are securely in place, I might go so far as to suggest that 'religion' seems to fall in this category. So would fad, fashion, and various other cult phenomena.
On the other hand, it's difficult to argue with success, even if it is achieved at great cost to society in resources and lives and freedoms all with no measurable benefit whatsoever...
Re: This is the view that, outside of issues of machine performance (speed and resource usage), "software engineering" is primarily about psychology and fitting the human mind(s) of the developers...
I second this. A nice short exposition of items that are correct. I agree that PsychologyMatters a lot, and I wonder how it is possible to make such a fuss out of this (below). Maybe this could even be expanded into FromHardToSoftComputerScience. -- GunnarZarncke
See also MyersBriggsForProgrammers
I guess fuzzy statements without support appeal to people who embrace logically inconsistent concepts like EverythingIsRelative. If you can embrace one contradiction on appeal alone, it is hardly a stretch to accept any idea as 'correct' based only on appeal rather than due consideration or evidence. None of Top's bulleted points makes a falsifiable prediction, and every single one of them starts by assuming the unsubstantiated. The only falsifiable prediction he makes is: "the better the fit, the better the productivity" - which, while appealing to people who want to do things their way, stands completely unproven. In this section, Top is just preaching his religion and wrapping it in slightly more ceremonial dressings than normal.
- And I guess attempts to explore and find ways of working that can't be logically derived in a robotic step-by-step fashion from existing fundamental and universally accepted "truths" are anathema to people without creativity, inspiration, or vision.
- I don't demand truths be universally accepted - there are far too many people who seek comfortable beliefs and call it truth, so universal acceptance is impractical. But, if you can't test the truth of a statement by a direct or indirect but repeatable (meaning logically derived, step-by-step) means, then you really aren't in a position to call it a truth or 'correct'. We don't reject creativity, inspiration, or vision... but if vision and inspiration and creativity aren't then tempered by the real limits of logic and observation, you just end up inventing and exploring fictions. Such an untempered approach is, and should be, anathema to anyone who seeks truth.
- Yes, that was deliberately intended to be harsh, but it's pretty much a restatement of what you said, but from the other point of view. Instead of simply dismissing everything as unscientific, unsupported and unprovable, why not try to suggest experiments and investigations that can improve our understanding of the interaction between psychology and programming?
- Why not design an android while you're at it? Because it takes time and energy to do it right, and the effort is largely futile without a funding source to test and implement the design. I prefer to lay BurdenOfProof where it should be: at the feet of the claimant. If one were to ask for tests with the funding to potentially run the tests, I'd be much more willing to spend the energy to make suggestions.
- I wouldn't mind that much if we changed the topic into a question: "does psychology matter?". Would this make you more comfortable with the topic? Or, think of it as "matters of psychology" where "matters" means "issues" instead of "importance". --top
- Transforming this topic into a question (DoesPsychologyMatter?) or a listing (MattersOfPsychology?, WherePsychologyMatters, etc.) rather than a claim would, I expect, allow for a more productive discussion.
- As a mathematician, programmer, manager and systems designer, I've been forced to accept that psychology is critical to success, whether you deal with it explicitly or implicitly. I don't agree with most of what Top says in general, but I know from personal, first-hand experience that psychological considerations need to be investigated. The blatant dismissal of these things by otherwise incredibly talented people is part of what holds back the subject.
- I disagree. The dismissal of branches of thought that can't bear fruit because they aren't grounded by any roots is what lets incredibly talented people focus on pushing 'the subject' forward. Besides, it isn't as though a conditional rejection is a dismissal - a demand for evidence shouldn't be impossible to meet for any claim that has merit. We all know, for example, that experience and learning - i.e. familiarity with a programming language and model - matter. Learning curves are well studied. That, since it has evidence, is a piece of 'PsychologyMatters' to which I'd readily concede (and I note that Top's Mind failed to even bring that much after requests for evidence to support his claims). But issues of learning and experience are insufficient to support OR test the more generic notion that the learned model is one that would 'fit the mind' better, or that some other model might have provided better productivity.
- Psychology is generally testable in terms of controlled experiments with volunteers. However, it's nearly impossible to perform such on this wiki, for reasons that should be obvious. But your view seems to be to dismiss it until the point it can be subject to rigor. Thus, you seem to want to optimize languages and tools based ONLY on things that can be tested via math-like proofs. But this risks a large SovietShoeFactoryPrinciple kind of error if psychology is actually a big factor. Personal preference is probably the most convenient stand-in we have for psychology testing at this point, and it often contradicts your style of tool testing. --top
- Your view seems to be that you should accept pure speculation about 'psychology' as input to language and tool design. This risks a large GarbageInGarbageOut kind of error whether or not psychology is a big factor. I must say, I think my approach the better one. I would note that observing objective human behavior (e.g. classifying errors committed) would still fall within what I consider valid evidence for decision making. But arguments about "what fit the mind better" - I think you'd be hard-pressed to even run a valid test with volunteers on that one, given that any given audience will be biased by prior exposure to similar tools. I.e. if everyone is used to the ComeFrom statement, trying to introduce the GoTo statement would be fighting 'learned' psychology, but the argument that 'ComeFrom' fits the mind better 'because people are used to it' isn't a valid one when dealing with programmers new to programming.
- Couldn't bias also influence which metrics researchers refine? Those obsessed with error correction/prevention/detection may only find and study metrics related to that, ignoring metrics related to productivity and to fitting the developer's view of the domain to ease "communication" between the developer and the code. --top
- Certainly bias can influence which metrics researchers refine. This is a GOOD thing, since metrics need to be targeted at the goals of the developers lest they be a waste of resources or not applicable to any decisions the developers will be making. (Of course, the goals of the developers might be in turn influenced by the desires of some customer.) It doesn't mean other metrics will be ignored, but they might be given less weight. So long as the metrics are repeatable, valid, relevant, and not based on pure speculation, the researchers have at least some potential to take advantage of them when designing tools.
- This does not really answer the question. How is bias about which metrics academic-type researchers pursue and refine removed? Or is it? Researchers tend to be fad-oriented also.
- Bias isn't removed. Academic researchers pursue metrics relevant to their interests and models. It keeps them motivated. Fortunately, you've got a bunch of different interests represented among academic researchers.
- Claim: Not everything needs to be universally true, falsifiable and explicitly tested by experiment to be of value.
- Agreed. Fiction can be entertaining and valuable in that manner.
- So you think the Soviet shoe factory was "right" in its decision? And Apple shouldn't have designed the iPod and iPhone because there was no rigorous evidence that they would sell at the time? Intuition and anecdotal evidence plays a big part in our industries. If it could have been done via only numerical proofing, then the Soviets would have kicked capitalism's ass.
- Har har. The SovietShoeFactoryPrinciple is about working to the metrics imposed on you by others, and isn't relevant to situations where you're developing your own. And Apple had more than enough evidence that their new tech-toy gadget would sell: they have tons of prior evidence on how past such gadgets have sold in various communities, and on what features people use, and even tons of in-house tests by alpha-users and developers of the phones prior to distribution. Don't wave your hands wildly and make stuff up just to suit your arguments - it makes me think you're arguing in far less than good faith.
- Do you really need to start out with "har har"? That is childish, Lars. Sony has all the same kind of research resources as Apple. Apple's advantages tend not to be objective properties such as feature-list quantity. User interfaces and product packaging are difficult to turn into a science, especially when fashion sense is part of the reason for purchase. If you have THE equation for "cool", please present it.
- I doubt even Apple has 'THE' equation for "cool"; like other companies they are just guessing and getting more misses than hits. But Apple certainly had enough to know that their new gadget would sell - i.e. sell enough to earn back the costs spent on producing it, not that it would sell as wildly as it did. And 'science' (the process of making good predictive models) aside, Apple observed behaviors and statistical impressions of alpha users and developers when crafting the gadget - these things ARE "objective" whether or not you in your hypothetical wisdom 'object' to that particular use of the word.
- If the key to success is so easy to isolate, how come Sony keeps getting beat up by Apple? Or is Apple just lucky?
- I've never claimed the key to 'great' success easy to isolate. Apple can't even isolate the wild success of the iPhone to the phone itself; indeed, marketing was critical, and the relative 'newness' factor of the touchscreen isn't something they can duplicate. But the keys to 'plain old' success have (on a per-field basis) been isolated for quite some time now, and learning it is part of becoming a professional or expert in a field. Don't conflate the two. Apple knew the iPhone would sell because they know how to obtain 'plain old' success. Apple is 'beating up' Sony (in that particular field) because of factors they can't duplicate, reproduce, or readily control. Keeping an eye on the long term, Apple has not proven better at making shots in the dark than has Sony.
- I'll admit that speculation and shots in the dark can sometimes lead to something good... another way to say GarbageInGarbageOut is RandomInputRandomOutput. That's what experimental research is all about. But one can't automatically give credit for such success to the speculative theory... e.g. that 'psychology matters'. Doing so is superstitious - no better than giving credit for such success to crossing one's fingers or prayer. While not "everything" (in the general sense) needs to be "universally true, falsifiable and explicitly tested by experiment to be of value", theories and predictive models certainly need to have those properties - if they don't, they're science 'fiction'... but perhaps entertaining and valuable in that manner. (I did assume implicit to this context that the scope of the above 'Claim:' is limited to theories, of which 'PsychologyMatters' is one example.)
If you cannot objectively prove a benefit, even one supposedly derived of psychology, then you aren't in a position to rationally or reasonably claim that the benefit exists at all. In previous attempts to get Top to substantiate this position, he rails against the notion of "objectively proving psychology" - but that is exactly what he must be able to do if this claim is more than hot air.
By what authority? The psych view is a working hypothesis. It is at least as strong as the alternatives. DontComplainWithoutAlternatives. Where is your evidence for mind-independent betterment?
- "By what authority?" You seriously believe 'authority' is necessary to claim that objective proof is needed to make any hypothesis more than hot air? Maybe you should create another page to distill this piece of irrationality. The alternative is, of course, to say that: it hasn't been proven when, where, how much, or even that PsychologyMatters, so accepting it as a 'working hypothesis' for arguments is not rational.
- Tell me, Sir, what is the default? And besides, what is wrong with stating a hypothesis? And what's the opposite? Some single magic universal equation or type system? Where's the proof for it?
- In DontComplainWithoutAlternatives, it clearly states the default is to ignore what you stated - not the opposite, but rather the annulment. So stop waving your hands about and raising straw men. And there is nothing wrong with stating a hypothesis... but there is a great deal of wrong with making it a 'working' hypothesis prior to having sufficient evidence to substantiate it. You can't reasonably use 'PsychologyMatters' as a point or premise in any argument prior to substantiating it. Yet you do so. Regularly.
- You are projecting again.
- You are talking out your ass without license or qualification. Again.
- Well, you do have one up on me there, because you are a certified asshole.
- So you belong to an asshole certification group? Doesn't surprise me.
- Yes, and the official stamp applicator is on the bottom of my boot.
I find it likely that Top believes PsychologyMatters on faith alone, and arguments made on faith aren't particularly sound. So, Top, exactly how and where does 'Psychology Matter', and exactly which benefits do you believe it provides? And what is your evidence for this belief? Or do you claim PsychologyMatters merely as a way to wave your hands and dismiss arguments - another irrational option in your toolbox to fit alongside the self-contradictory EverythingIsRelative?
We couldn't even find objective evidence that nested blocks are better than goto's (ObjectiveEvidenceAgainstGotos). If there is no universal proof for the simple things, then complicated proofs, like thin tables or type-heaviness being objectively better, are not very likely. And if by chance EverythingIsRelative, and that makes objective proofs impossible, I am just the messenger. I did not build the universe, I only observe its properties. If the truth bothers you, see your therapist; don't whine to me and call me names.
"Better than" is always relative to a set of goals, and there certainly is ObjectiveEvidenceAgainstGotos when a set of non-subjective goals is stated - such as degrees-of-freedom when it comes to error. Your argument regarding 'type-heaviness' is completely non-sequitur, and EverythingIsRelative is absolutely false like any other logical paradox - it can't even be true "by chance". And the truth doesn't bother me... If you got off your high horse and began to methodically obtain and speak truth, I'd have no cause to whine. But you don't speak truth - instead, you attempt to educate others in your fallacy, and THAT bothers me. Your tendency to turn unsubstantiated hypotheses into working premises is a reasonable cause for objection - objection that you rudely call "whining". And I didn't call you any names, except 'Top' - a name you've chosen to be called; implying I did is quite rude.
Further, note that YOU turned this into a personal squabble. I didn't single out individuals nor call them names. You started this one, dude. You are flat guilty this time. I have to point this out because you or somebody like you keeps blaming ThreadMess on me. It could have been a pleasant little exploration of an idea. Instead, you bickerfy it. --top
- I disagree with your "degree of freedom" claim, but let's leave that topic there. In general, everybody will probably agree that a narrow metric can be created or found that shows X better than Y or Y better than X. BrainFsck may win the "fewest base symbols" metric, for example. But obviously it would take a multiple, well-rounded set of metrics before we start to have a somewhat complete picture, not too dissimilar from sports stats.
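The "well-rounded set of metrics" point above can be sketched as a weighted composite score. Everything below is invented for illustration: the languages, metric names, scores, and weights are all hypothetical, and the weights themselves remain a policy choice rather than an objective fact.

```python
def composite_score(metrics, weights):
    """Weighted sum over several normalized metrics (0..1, higher is
    better). A single narrow metric can crown almost any candidate;
    a rounded weight set gives a more complete (sports-stats-like) view."""
    return sum(weights[name] * value for name, value in metrics.items())

# Hypothetical languages with made-up normalized scores.
lang_x = {"fewest_symbols": 1.0, "readability": 0.1, "tooling": 0.2}
lang_y = {"fewest_symbols": 0.3, "readability": 0.8, "tooling": 0.7}

narrow = {"fewest_symbols": 1.0, "readability": 0.0, "tooling": 0.0}
rounded = {"fewest_symbols": 0.2, "readability": 0.5, "tooling": 0.3}

# The narrow metric crowns X; the well-rounded set crowns Y.
print(composite_score(lang_x, narrow) > composite_score(lang_y, narrow))
print(composite_score(lang_x, rounded) < composite_score(lang_y, rounded))
```

Which weight set is "right" is, of course, exactly the contested question on this page.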
I asked on this page for evidence. You've used 'psychology' repeatedly as a foundation for arguments on other pages, and I've asked you before on these other pages for evidence. It seems you pretending I called you names, focusing on guilt and blame, and otherwise "bickerfying it" is your own decision - perhaps a mechanism for avoiding reasonable discussion. And if you want "pleasant little explorations of ideas", then write fiction... but the WikiWiki is not the place for it. Is 'PsychologyMatters' a fiction? Maybe. That's why I ask for evidence.
- No thorough objective technique has been developed to compare paradigms/tools/techniques, let alone widely acknowledged by even just academics.
- This is not evidence for 'PsychologyMatters'. And I'm not even certain it is true, either... not when given logic itself as an objective technique.
- Please clarify. If you can use logic to demonstrate the One True Way, please do (see below).
- Clarification: You claim that no objective technique has been developed to compare paradigms, tools, and techniques. Because it is possible that logic qualifies as such a tool, I'm not inclined to accept that as fact (i.e. "I'm not even certain it is true") without YOU handling BurdenOfProof for your own claim - i.e. YOU need to prove your positive claim by demonstrating that logic and science and such are not "objective techniques developed to compare paradigms/tools/techniques". It further seems you're attempting to shift the burden of proof.
- I perfectly agree that objective techniques can be used to compare specific or narrow factors. However, it appears that psychology plays a major if not overriding role.
- I've yet to see evidence substantiating: "psychology plays a major if not overriding role". What you've been calling evidence thus far certainly isn't.
- People have widely different personal preferences.
- What does "widely different" mean to you? In my observations, which are admittedly limited, most humans prefer the same things with very small variations in the grander scale of all possible variations, though they learn to acquire these things in different ways. In any case, how does varying preferences support the idea that preferences, which I'll accept are part of psychology, in any way 'matter' in programming?
- Programming itself takes place largely in the mind, if not entirely.
- In the same sense, so does mathematics, architecture, and science theory. All of these, including programming, are ultimately applied and proven in a real environment. I find it highly unlikely that application of personal subjective preferences provides significant benefits to the latter three, so I can't reasonably accept that an activity taking place largely or entirely in the mind is evidence for the notion that PsychologyMatters.
- In most those other disciplines, one can test the mental model against the real world, or at least a simulation of it. A bad bridge design will eventually reveal itself outside the head that created it by falling down. (Remember, "wrong output" is not really the topic here.) Even in math, a computer, or colleagues, can be used to verify a proof.
- In programming, one can test the mental model against the real computer, or at least a simulation of it. A bad program design will eventually reveal itself outside the head that created it by failing to solve the problem. (Remember, "wrong output" is a perfectly legit mechanism to disqualify a program.) Even in programming, a computer, or colleagues, can be used to verify a solution.
- I am talking about multiple ways to get the correct output, not wrong output versus right output. In my opinion you are obsessed with that topic and will turn any topic you can into a discussion about "guaranteeing" correct output using formal methods and/or strong typing. Again, it is my opinion that there is much more to programming than guaranteeing the correct output and that you are overmagnifying that issue/factor, at least for many domains. I don't want to get into that again.
- There are multiple ways to get a correct solution in math, architecture, and science too. That's why the analogy holds. The above wasn't about "guaranteeing" a correct solution in programming any more than it was about "guaranteeing" valid solutions in math, architecture, and science. It's about demonstrating that your "in the mind" 'evidence' isn't valid evidence for the point you're attempting to make.
- In engineering and architecture, when there are multiple ways to get to the same solution and cost has been considered, aesthetics then comes into play for the final decision. While not exactly comparable to software engineering, it is still a psychological issue. You are still not escaping psychology.
- You're just equivocating now. Aesthetics of the end-product are part of the solution, not the multiple-ways to get to it.
- Are you claiming that aesthetics are part of the "user requirements"? It's not that simple because limitations of design or expense may reduce choices. It's often an iterative process where designers and engineers try different aesthetics based on technical and funding constraints. Anyhow, it would be interesting to study the HolyWars of other disciplines to see if they are as complex and wide-ranging as software engineering. It appears that S.E. provides a far larger degree of freedom than other disciplines because we are less constrained by the physical world. Our internal models can be just about anything we want as long as they produce correct answers. A bridge engineer cannot change the laws of physics. Software has far fewer "natural laws" it needs to obey. (Related: SoftwareGivesUsGodLikePowers). --top
- All aspects of design are limited by resource constraints. Aesthetics is hardly unique in that manner. And I don't really agree on the "natural laws" issue: all software also must obey the laws of physics in addition to artificial and often arbitrary limitations and interface imposed by the machine.
- Reply moved to PageAnchor "Natural_Laws".
- The computer does not give a shit - Outside of performance and speed, the computer does not "care" how the algorithm is arranged. There are multiple solutions to the same problem.
- This is evidence that psychology DOESN'T matter. The computer doesn't give a shit about subjectives. It follows cold, hard logic based on transformation of symbols in a language. If there are multiple solutions to a problem and a given set of constraints, then the choice DOESN'T matter - all of them are solutions. Same as mathematical proofs. And if it is an optimization problem, then there are clear 'wrong answers', just like in TravelingSalesmanProblem. Psychology doesn't matter there, either.
- It's sort of a side issue, but psychology in TravelingSalesmanProblem may matter to a real salesman, who may care about some locations more than others. The weather, airline brands at given airports, availability of hookers, local architecture, etc. may all play a role if we wanted a more accurate (more real-world) model. Then issues such as whether hookers matter more than weather to the salesman come into play. As stated, TravelingSalesmanProblem is simply an oversimplification (UsefulLie) that produces a nice simple metric for comparison of graph theory techniques. It's a toy problem for a toy world.
- Yeah, that's certainly a "side issue", especially when the domain is mathematics and not real travel. In any case, UPS and U.S. Postal Service aren't going to concern themselves with hookers, though they very well may associate 'cost' (travel time, fuel, $$$) and 'risk' (weather, gang violence, traffic jams, etc.) with certain routes.
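- The point above about associating 'cost' and 'risk' with routes can be made concrete: once the subjective factors are folded into a single cost function, the "best" tour is objective and psychology drops out. A minimal brute-force sketch (the 4-city cost matrix values are made up for illustration):

```python
from itertools import permutations

# Hypothetical 4-city cost matrix: each entry folds travel time, fuel,
# and route risk into one number per leg. Once such a cost function is
# fixed, "best" is a matter of arithmetic, not preference.
COST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def best_tour(cost):
    """Exhaustively find the cheapest round trip starting at city 0."""
    n = len(cost)
    best_route, best_total = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        total = sum(cost[a][b] for a, b in zip(route, route[1:]))
        if total < best_total:
            best_route, best_total = route, total
    return best_route, best_total

print(best_tour(COST))  # -> ((0, 1, 3, 2, 0), 80)
```

Of course, this only pushes the debate back one level: choosing the weights that combine time, fuel, and risk into a single number is exactly where the subjective judgments live.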
I will agree the claim is speculative, but let us see the evidence for the other side, that there is One True Way and that pure logic or math can find it....
I'm calling you on this blatant ShiftingTheBurdenOfProof and ArgumentumAdIgnorantiam. Nobody needs to prove the opposite to call bullshit on your unsubstantiated claims, and demanding evidence of the opposite as a defense of pure speculation is only something a fraud, a charlatan, or a troll would do.
In science the leading theory/hypothesis may be a weak one, but it's still the leading theory. To rank a theory, one has to look at the competitors. So far "psycho theory" may indeed be weak, but its competitors are at least as weak, if not weaker. You appear to want to personalize this into a "burden of proof" issue focusing on individuals so that you can verbally spank them because that gives you the most pleasure. Instead, let's look at the bigger picture and compare the competing theories. --top
Proper scientists don't promote weak theories as truth or assume them as premises in arguments - even leading theories. They seek evidence to confirm or disprove them. You seem to seek excuse and justification for avoiding BurdenOfProof while promoting pure speculation. Despite all your words and claims of favoring evidence, your actual behavior betrays your hypocrisy - you clearly don't concern yourself with evidence for your own views. I'd bet you'd make a better cult leader than a man of science.
Let's delay the issue of how sinister I allegedly am and just present the evidence for alternatives. I want to talk about software engineering, not about me.
I'm willing to delay the issue of how fraudulent you are when YOU present REAL evidence for YOUR CLAIM. Nobody here is claiming alternatives - except the default: "it hasn't been reasonably supported that PsychologyMatters in an objective way so it is irrational to assume it as a point or working premise in any argument". So nobody, except YOU, has a BurdenOfProof. Go find other pages where people are making explicit claims if you wish to demand evidence for them - asking for it here looks, to me, like hedging and sophistry. Asking me not to point out fallacy where I see it isn't a reasonable request.
Are you this annoying in real life, Lars? If you don't want to talk about the alternatives, just say so without turning it into a crime drama.
Check out Lars's personal wiki and make your own conclusions. Are you this much a fraud and troll in your business interactions, Top? When you go to a company with your ideas, do you demand they provide evidence to defend the opposite of what you suggest? That sort of shitty behavior isn't acceptable there OR on WikiWiki.
Where did I "demand"? You appear to be visualizing bad things in my writing that are not really there. Generally people sit down and discuss the pros and cons of the alternative scenarios in a civilized way. This is perfectly normal. I'd like to explore the scenarios and ask that all sides put their evidence on the table so that we are all looking at the same thing and can compare and discuss. If this is being "bad", then I guess I am delusional and evil and brain-damaged all at the same time, like you imply. Comparing and discussing is perfectly normal in my world as I perceive it. If there is a better alternative, I am not knowledgeable in it. From my perspective, you are either being overly sensitive, or have an aggressive chip on your shoulder left over from a past discussion.
Pros and Cons are for plans, not theories. Theories need support absent contradiction. As far as where you keep ShiftingTheBurdenOfProof: "Where is your evidence for mind-independent betterment? And what's the opposite? Some single magic universal equation or type system? Where's the proof for it? let us see the evidence for the other side [...]". Theories don't benefit from compare and contrast sessions unless you're applying something like OccamsRazor to choose between two equally well supported theories... and that is clearly NOT a situation that applies in this PsychologyMatters page. Support your own theory.
- BickerFlag: Asking is not the same as "demanding". You are again exaggerating in order to portray me in a bad light. It is a repeated fault of yours and you should correct it if you want more harmony.
- Asking repeatedly and as a primary point of argument IS the same as demanding (ask: to demand, expect). And you put yourself in a bad light; no help from me is required. It is a repeated fault of yours and you should correct it if you want people to think you're better than a hand-waving quack with an agenda.
- I'm sorry, but I don't see it that way. Evidently you interpret what I say a lot more harshly and sinisterly than what I intended. It is my opinion that the "problem" is between your eyes and your brain, not between my brain and the keyboard. Yeah, I know, you disagree. No need to state it.
- And, for the hell of it, let's do an OccamsRazor.
- Sure. OccamsRazor says: cut the assumptions that aren't necessary to predict the observed evidence. At the moment, you lack objective observed evidence that PsychologyMatters. So OccamsRazor would cut that assumption.
- You have only assumed it's not necessary.
- Incorrect. I have only pointed out that you've yet to provide evidence that it is necessary.
- Aye. But Absence of Evidence IS Absence of Evidence. That's all OccamsRazor needs to shave off an assumption: an absence of evidence requiring it.
- If that's the case, then OccamsRazor results in null for the nature of software engineering because there is so far nothing else to plug into it.
- Untrue. And a blatant lie, too.
- [removed angry words]
- What you said is false, and you know it's false. Even you know evidence supporting such facts as: "In practice, customers rarely can provide complete, clear, and correct requirements up front." Thus, OccamsRazor doesn't result in a 'null' for software engineering. There are hundreds of such well-supported facts both within and regarding the nature of software engineering. And you know it, and you knew it, and that means you lied, quite blatantly. Your indignant, anal-expulsive shouting is just your reaction to being caught.
- Individual metrics for specific things, yes, but obviously that is insufficient to clearly demonstrate a given paradigm/technique/tool is NET superior. That is the scope of this discussion. It would take a series of metrics, not unlike sports stats.
- This discussion is: 'PsychologyMatters' lacks support, not about 'net' superiority of anything. We've already agreed that superiority is always relative to a set of goals or a problem to be solved... that you keep taking up arms against the 'net' superiority is a fool's errand: you're fighting an imaginary opponent.
Anyhow, I'm tired of bickering about bickering with you. You hate me and think I am a bad person. I know you think that already. You don't have to keep stating it over and over.
- Fallacy and sophistry should be pointed out each place it exists. Since you write many pages and repeatedly indulge in fallacy and sophistry, it does indeed need to be stated over and over.
- BickerFlag: You are projecting again. You are a very poor debater.
Now, I am interested in exploring the evidence for the alternative hypotheses. If you don't want to discuss such, please just say so and then we have reached the end of this discussion.
Here's another viewpoint: Software is the developer's "user interface"
into the software. It is analogous to an application user's interface to an application, say a spreadsheet. Most would agree that psychology plays a large role in user interface design. By this analogy (as far as it holds), then psychology matters for the developer's interface also. --top
Do you mean the model/language is the developer's user interface into the software? In any case, you say that psychology plays a role in "user interface" for applications - and it does, insofar as psychology drives user requirements - but I don't believe that's true when it comes to requirements for correctness and such (e.g. if you were writing a desktop calculator, psychology plays no role in deciding whether the numbers should add properly). The model/language must meet real and non-subjective developer requirements to meet user requirements - so, following the analogy, developer psychology would generally matter less than user psychology because there is already a set of requirements in place (driven by the user). That the analogy can go either way makes it much less valuable for exploration of viewpoints.
I thought we weren't considering the issue of wrong output here.
- Only a fool ignores or fails to consider relevant issues. I'm no fool, so there is no "we" on this particular behavior.
- If this is turning into yet another provably-correct or heavy-typing-prevents-errors HobbyHorse debate, then we have come full-circle back to our usual fights. I guess we're done here.
There are multiple ways to produce the same output given the same inputs. Or stated another way, there are multiple ways to satisfy stated requirements of a system. The open question is how do we determine the "best" path/implementation assuming the requirements are met?
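To make the claim concrete: here are two quite different implementations that satisfy the identical stated requirement (sum the squares of 1..n), one iterative and one using the closed-form identity n(n+1)(2n+1)/6. The computer accepts either; any preference between them must come from somewhere other than the machine. (A toy sketch; the function names are invented for illustration.)

```python
def sum_squares_loop(n):
    """Straightforward iterative implementation."""
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def sum_squares_formula(n):
    """Closed-form implementation: n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

# Both meet the same input/output requirement.
for n in (0, 1, 10, 100):
    assert sum_squares_loop(n) == sum_squares_formula(n)
print(sum_squares_loop(10))  # -> 385
```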
You determine the "best" solution relative to a set of cost, risk, and optimization requirements. Without a set of such requirements, there is no 'best'. Are you attempting to imply that PsychologyMatters, in some objective way, to determining these requirements? If so, I'm interested (as I've stated many times) in your evidence.
And similarly, I'm interested in your counter evidence or evidence for alternative explanations.
ShiftingTheBurdenOfProof. Again. I don't need to present counter-evidence for a claim that isn't substantiated. And I don't need to provide alternative explanations for 'observations' that haven't been provided or verified.
Nobody "needs" to do anything. Wiki is not a dictatorship. I'm just asking.
Asking questions in order to avoid providing answers isn't "just" asking. Now, will you provide cold, hard, objective evidence for this as-yet unsupported claim, 'PsychologyMatters', that you keep relying upon in your arguments? I'm just asking.
I don't have "cold hard objective" evidence. It is "soft" evidence. If that still bothers you, then see a therapist.
Speculation and faith doesn't qualify even as "soft" evidence.
This page seems to be yet more evidence that Top's view of software development is fundamentally at odds with that of a number of participants here. The latter seem to be computer scientists of one stripe or another, for whom theoretical and mathematical rigour take precedence. The former and his supporters (who are in short supply here, but no doubt can be found elsewhere) are pure practitioners who can be consistently relied upon to deliver working screens and reports today to solve a problem that was supposed to be finished yesterday -- whilst the theoreticians may still be debating what fundamental model is appropriate -- but the result will typically demonstrate the "kludginess", inefficiency and inflexibility of most ad-hoc implementations.
Not really. This page is yet more evidence that Top doesn't believe he requires evidence for his own claims. The other participant isn't promoting any theory, only demanding practical evidence for Top's pet theory - one that he has consistently applied in arguments but never substantiated.
- Top's views hold little water in pure computer science but may stand up (reasonably) well in some other fields. Archaeology, for example, is fond of constructing extensive speculative frameworks on what other fields, by comparison, would consider to be the flimsiest of empirical evidence. Top's arguments and general position appear to owe far more to management theory and business studies than computer science or mathematics. As such, his standards of what constitutes valid evidence will be very different from yours. Of course, speculation with no evidence at all is unsustainable, but I wouldn't be surprised to see Top argue that the bullet points at the top of this page are reasonable evidence, even if only that supported by informed speculation. However, the important point here is that your views and Top's views are fundamentally -- and perhaps irreconcilably -- incompatible. The chance of either of you convincing the other of anything is likely nil.
- What does stand up to scrutiny and rigor? Even blocks over gotos could not be mathematically proven net better in practice. Life would be simpler if we could just mathematically prove that Foo Oriented Programming was net better, and all the arguments would end. Software Engineering is facing a science crisis. I am just the messenger. --top
A good project manager will attempt to employ both approaches, using the rough just-get-it-done pragmatism of the practitioners to balance the perfection-driven inertia of the theoreticians and vice versa, resulting (hopefully) in products that demonstrate a good balance of theoretical strength and practical effectiveness. However, the most zealous proponents of these diametrically opposed approaches can rarely spend time in the pub after work without chucking beers on each other. This Wiki is a bit like a pub after work. As such, Top and his opponents would probably be happiest if they chose to sit at different tables.
It is indeed an age-old battle. But I do dismiss the notion that my solutions are inherently "kludgey". I strive for reasonably "clean" designs, and whenever they do get messy, I review the design to see if there is not a better solution. Often there is just EssentialComplexity caused by the domain itself. Forcing too much abstraction often makes a system inflexible because changes in requirements may not fit the overly-pruned abstraction. HelpersInsteadOfWrappers is a good example. Finding the right balance requires a "feel" for how things change and why they are the way they are. If the theory purists can show obvious advantages like less code or less change points, not the indirect roundabout obtuse metrics they are fond of, then I'd be happy to take a look. --top
Your solutions may not be "kludgey" at all. Indeed, you may be one of those practitioners for whom programming itself is a form of intuitive mathematical reasoning, subject to its own implicit rules and drives toward elegance. However, among practitioners who openly deprecate theory in favour of empirical results, non-kludgey results are arguably rare. That doesn't mean they don't get the job done, of course, but they'll make the pure theoreticians grind their teeth in frustration.
Perhaps "kludgey" and "elegance" are in the eye of the beholder.
(extracted from above)
Re: ...And I don't really agree on the "natural laws" issue: all software also must obey the laws of physics in addition
to artificial and often arbitrary limitations and interface imposed by the machine.
Outside of performance/resources, I disagree with the "natural laws" part. One could create a fictitious universe in software to achieve the desired goal. The results may have to resemble our universe to some extent in order for the user to relate to them, but the internals don't have to. Same with math.
Software written in such an 'artificial' universe doesn't escape the need to obey the laws of our own universe. We can't create an artificial universe capable of reversing the flow of time, delivering a message before it is sent, or deriving more information out of a signal than allowed by entropy. Instead, software of the sort you describe must simultaneously obey two
sets of laws: the laws of our universe (including those of physics, logic, information, computation, and complexity) PLUS the laws of this artificial universe. I.e. the efforts of constructing an artificial universe have, at best, only introduced extra
constraints and requirements to follow - that is, above and beyond what the user demands.
Constraints are features when well chosen for acceptable reasons (optimization, security, correctness verification, modularity, reflection/debug-ability, protocols and interoperability, flexibility when comes time to make changes, etc.) Or, more accurately, constraints are a necessary price we pay for useful features. So there is merit
in the notion of creating a 'universe' within software that constrains certain behaviors... indeed, programming languages (other than assembler) and especially their runtime, engines, or virtual machines, and frameworks, would all seem to qualify. Of course, poorly chosen constraints can exact a price without
providing any associated benefit, so one ought to choose wisely.
In any case, regardless of the potential benefits of constrained artificial universes, you won't be able to bend or break the laws of ours. You say: "the results may have to resemble our universe to some extent in order for the user to relate to it, but the internals don't have to be", but the truth is that the internals must and will adhere to a subset of what is possible in our universe, and thus will also (in that manner) resemble our universe. A truly fictitious universe rarely has such constraints... magic, time travel, etc. are all possible. Software doesn't give us god-like powers, and doesn't allow us to create fictitious universes. A software universe might be 'artificial', like a toy box, but it is still a real one.
I'm not sure what you mean by the last part. Logic and math may not be inherently tied to our universe. Cosmologists can and do model alternative universes. (True, they usually leave some parts the same as ours.) As far as "can't make time flow backward", if one can create an algorithm for it, it can indeed be done. Hell, I make it move backward by sliding the position bar leftward on some AVI movie players (doesn't work on MS's). An operating system or a database is its own little universe, I would note.
Please don't start waving your hands and inventing false devil's advocate arguments that even you don't agree with. Sure, "if one could create it, it could indeed be done" is (trivially) true (and utterly vacuous). But even you know that you don't make time flow backwards in the program or its model by shifting the position bar on an AVI movie player - you only deliver a signal indicating that you wish to review in the immediate future an audio-visual signal that has played in the immediate past. Those concepts aren't at all the same. You can't
create an algorithm to make time flow backward - laws of our universe prevent it. The closest we get in practice is working with ACID transactions, and those are rather limited in scope.
As far as logic being "tied to our universe": keep in mind that logic, and accepted laws of logic (axioms
, law of excluded middle, etc.) prove themselves for valid use in our universe by the exact same mechanism as every other theory. I.e. under the hypothesis they are true, and given a set of knowns about our universe, these axioms are utilized to make falsifiable predictions, which are then verified or falsified. Given a long period of use, those that survive become 'laws'. So, while it may be the case that logic (in general) is not tied to our universe, our universe certainly seems to be tied to a particular logic. You might consider physics in the same sense: is physics (in general) tied to our universe? alternative rules of physics (within limitations of computation) can certainly be modeled in the same sense as alternative logics. I expect it more accurate to assert that our universe is tied to a particular physics. Based on human observations thus far, the rules of our universe (be they for logic or physics) seem to be particular, not general and encompassing of all possibilities.
Anyhow, it is my impression that you are focused on simulations of alternative universes. Keep in mind that the software
for these simulations is what can't violate the laws of our universe. I.e. the software is written in our universe, runs in our universe, obeys the rules of our universe, and is incapable of violation. Further, software can't
take advantage of alternative 'laws' or magic of any simulated universe because it must still run in our universe. Where you say "one could create a fictitious universe in software to achieve the desired goal," the truth is the opposite: a fictitious universe can only constrain
your approach in achieving a desired goal. Constraints can be good and helpful (buying features, helping programmers stay on task, etc.), but they never achieve a solution on their own... and they are rarely 'good' when created for an artificial universe. The best that can happen is that these constraints help guarantee some features. The worst that can happen is that these constraints are just loose enough that you continue pushing your way forward against continuous resistance until you hit some unforeseen dead-end (this, as anyone who has had a ScreamLoudlyBangHeadRepeatedly
moment with a framework would tell you, being much worse than the constraints being so tight as to force immediate dismissal).
I never meant that simulations could bend the rules of our "real" universe. Thus, a simulation cannot make time flow backward *outside* the simulation.
Correct. But a simulation written in software ALSO cannot make or take advantage of time flowing backward *inside* the simulation. Nor can you do or benefit from other things impossible in our universe, such as deliver a message from one agent (inside the simulation) to another agent (also inside the simulation) before that message is sent.
And as far as "being focused on simulations", I'm not sure it matters. Our own universe may be a simulation and we'd have no way to tell the difference (barring Matrix-movie-like errors). We may experience time only flowing forward in this universe, but the "god" programmer may be able to make the simulated time do other things. And, this "outside" programmer may be in a universe with laws more limiting than ours. You implied that a simulation can only be a subset of the universe it runs in. This is not the case. See SimulationRelationshipToParent. --top
The existential nature of where our universe and its rules come from aren't particularly relevant. And I'll state it again, outright - there is nothing software can do that will break a rule of our universe, and it is impossible to take advantage of
any 'magic' or rules that would violate our own. From our perspective, we can only be constrained. The ability to "create a fictitious universe in software to achieve the desired goal" does not exist
in this universe.
Well, it's a matter of semantics surrounding "achieve the desired goal". But we seem to be wandering away from the relevant issue. To produce a result (computation), we are not restrained by the laws of this universe. One potential example is imaginary numbers used in electronics. They are a UsefulLie, a shortcut to a computation. A slide-rule is another UsefulLie that takes advantage of lower-level operations acting like higher-level operations when used on "compressed" scales (such as logarithms). Or, in a relational database we create "tables" even though those tables may not really be in the real world. They are just an artificial conceptual model even if they can produce results usable in the real world. These conceptual models are potentially wide open. OOP, relational tables, type systems, functions, etc. are all internal conceptual models. They are a "dummy little universe" for systems designers and programmers.
I suspect what you mean regarding "not restrained by the laws of this universe" has almost nothing to do with the ability to violate the laws of this universe. In the important, literal sense we ARE restrained by the natural laws of this universe - we can't write software that violates natural law. This is especially true when comes time "to produce a result" which, from your own SimulationRelationshipToParent
page, would clearly violate the 'Ability Leakage' rule. I.e. if you CAN violate the laws of this universe inside the simulation, then you CAN'T obtain a result from it - you can't take advantage
of it. Your examples of imaginary numbers and slide-rules, and the algebras and mechanisms guiding their use certainly don't violate any natural laws. Oh, and in a relational database, any 'table' we create has a physical representation in the real world - they "really 'be' in the real world".
Our models often have "things" that correspond more or less to the "real world", but that is because modeling the real world with abstractions has generally proven useful, not a necessity. However, that still does not force us to model the real world. And "violating laws" may not necessarily be the goal. Imaginary numbers may not violate any laws of this universe, but they also don't model the real world. "Violating" is not really the important goal here. They are a "clever shortcut". If one finds that making anti-gravity in a model serves a purpose, they may use it similarly to how imaginary numbers are used. (Perhaps our thinking is clouded by trying to stay too close to the "real world" out of comfort or habit.) Related replies in the "to parent" topic. --top
To the contrary, modeling the real-world with abstractions is
a necessity. Our brains lack the computational power and memory, and our senses lack sufficient capability, to process raw sensory data without constructing abstractions atop them. We infer abstractions from sensory inputs, and we call that reality: keyboards, typing, clouds, cars, numbers of cars, numbers, even 'imaginary' numbers are all equally
'real' and objective when derived via induction and abduction from sensory inputs. Fundamentally, they must be equally real because they are all the exact same sort of thing: patterns with predictable properties inferred over sensory inputs and memory. And language, which communicates in abstractions, ties it all together. To say I'm 'typing' this 'reply' on a 'keyboard' is a 'truth', not a UsefulLie
- I would not be a liar for saying so, but I would be a liar for saying otherwise. You seriously overestimate your own capacity for thought and communication if you believe abstractions are unnecessary. Sensory neurons fire in response to light, heat, pressure, agitation, and chemical stimuli. Beyond that, everything our senses tell us is inferred abstraction.
In answer: imaginary numbers model the real world just as much as any other sort of number or color or object or verb. They need only be attached to the real world through our senses via some repeatable derivation.
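The imaginary-numbers example both sides keep invoking is easy to make concrete. In AC circuit analysis, impedance is represented as a complex number; the imaginary part never shows up on any physical meter, yet the measurable quantities (current magnitude and phase) drop straight out of the complex arithmetic. A minimal sketch, with component values chosen purely for illustration:

```python
import cmath
import math

# Series RC circuit driven at 60 Hz (all values are illustrative).
R = 100.0    # resistance, ohms
C = 10e-6    # capacitance, farads
f = 60.0     # frequency, hertz

omega = 2 * math.pi * f
# Impedance as a complex number: Z = R - j/(omega*C).
# The "j" term is the computational shortcut -- nothing physical is imaginary.
Z = complex(R, -1 / (omega * C))

V = 120.0    # RMS source voltage
I = V / Z    # complex current phasor

# Only the magnitude and phase are physically measurable.
print(abs(I), math.degrees(cmath.phase(I)))
```

Whether one calls the complex intermediate step a UsefulLie or simply another real abstraction is exactly the disagreement above; the code is neutral on that point.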
Here's what I'd readily concede to, regarding this "natural laws" discussion:
There isn't even one natural law that software can break, but some natural laws that often constrain physical systems rarely constrain software systems. OTOH, physics laws related to signal and information theory tend to constrain software systems more than physical ones... there aren't very many bridges constrained in their design by the laws observed in the development of information theory.
Some of the problems encountered by physical systems that software neatly avoids are very expensive ones related to fabrication, storage, transport, and duplication. These aren't 'necessary' problems imposed by natural laws (Feynman and other physical scientists well above my caliber insist that there is no known constraint preventing nanolithography or the Star Trek style object synthesizers). But they're real problems in today's world. As such, they're far more analogous to (in software engineering) the arbitrary limitations imposed by the machine, the language, the OperatingSystem
, and other frameworks. One might assume that software developers have an easier time designing around these arbitrary limitations than do mechanical designers, making the internal software be anything they want it to be, and they'd be right - to a degree. Anyone who has wrestled with a language, a framework, or an operating system (all of which are more or less the same thing) and lost won't hesitate to tell you differently, and most software developers won't even try. "Our internal models can be just about anything we want as long as they produce correct answers" applies equally to any design project, be it for software or bridges or electrical transformers... but not just "anything we want" will "produce correct answers".
Physical constraints and logical constraints both can cause headaches, but this does not mean they are the same thing. Creating an internally consistent model that acts how we expect is indeed a difficult task. Related: SimulationRelationshipToParent
Infinitely-Powerful Brain Thought Experiment
An infinitely smart "brain" could write spaghetti machine code, read spaghetti machine code, and change spaghetti machine code to fit new requirements very fast. Computers executing code don't give a flying flip about how it's organized (assuming speed is not the overriding concern). If the machine doesn't care, then what does care, and why? We use higher-level languages and abstraction because we don't have infinitely-powerful brains. The code must "communicate" with our limited brains. That's the main purpose of higher-level code. It has very little to do with making machines happier.
The "power" of a brain is studied as part of "hard" psychology: e.g., how fast one can solve a problem, how many things can be remembered at once, the average time between errors while writing under duress, etc. Appealing to it as part of PsychologyMatters is frivolous because "PsychologyMatters" has only been contested in the context of "soft" psychology.
I also believe your analysis to be naive. From RicesTheorem and GoedelsIncompletenessTheorem, we know that, even with unlimited processing power at our disposal, we cannot (in general) analyze, optimize, or predict any given feature without support from the language or (via protocol restrictions on its usage) from the framework. The undecidability of the HaltingProblem is only a specific instance.
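The diagonal argument behind the HaltingProblem referenced above can be sketched in a few lines (the names naive_halts and diagonal are mine, for illustration): any concrete "oracle" must be wrong about the program built to contradict it, no matter how much processing power backs it.

```python
def naive_halts(func):
    """A stand-in halting 'oracle'. Any real implementation must answer
    True or False; this one always claims 'halts'."""
    return True

def diagonal(oracle):
    """Built to do the opposite of whatever the oracle predicts
    about this very call."""
    if oracle(diagonal):
        return "loops"   # in the real construction: while True: pass
    return "halts"

# The oracle predicts diagonal halts, but diagonal then (conceptually)
# loops - the prediction is wrong, and the same trap catches any other
# oracle one might substitute for naive_halts.
```

This is why support from the language (or protocol restrictions on usage) is needed: unrestricted code admits no general analysis, however smart the analyzer.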
- It could potentially be done that way, but only for specific isolated factors at a time. A fuller survey that studies a sufficiently wide spectrum of factors would be very expensive, and thus won't answer the tough and contentious questions any time soon.
- It is true there are more answers to be had, but that doesn't forgive speculation and HandWaving that various unnamed and unmeasured properties "matter" without the ability to demonstrate, or logically imply, a significant causal correlation.
- It may be that it could be made a "minor factor", thus we don't have to focus on absolutes. The more powerful the brain/analyzer, the less "help" it needs.
- You use many VagueWords when you say "it may be that it could be". You can use them to say just about anything. Example: It may be that it could be that pigs fly and therefore our pretending the economy is recovering will guarantee a rising price of tea in China. Can you provide good reason to believe it to be a "minor factor"? How much more powerful would a brain/analyzer need to be to allow one to analyze for termination without support from the language?
That limitation applies equally to the SufficientlySmartCompiler and the human programmer, and that's even before considering information limits regarding black-box compositions, incomplete requirements, and runtime input. Higher-level languages allow us and our language interpreter/compiler/optimizer/linker to make useful assertions, assumptions, and predictions about properties of code, its composition, and future changes. I'm not going to anthropomorphize and say this makes the machine 'happy', but it certainly helps 'satisfy' some very real limitations.
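A small, hypothetical example of what such "assertions, assumptions, and predictions" can look like at the language level (the function and its contract are mine, not from the discussion): the annotation and the assertion serve the human reader, the IDE, and the optimizer alike.

```python
def mean(xs: list[float]) -> float:
    # The type annotation is an assumption a compiler/IDE can check
    # without running the code; the assertion is a prediction every
    # caller must make true. Both let human and tool reason about
    # composition and future changes.
    assert xs, "caller promises a non-empty list"
    return sum(xs) / len(xs)
```

Neither line makes the machine "happy"; both shrink the space of behaviors that human and analyzer must consider.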
Well, I am going to anthropomorphize, in a way. It could be argued that compilers/interpreters (C/I) "like" certain constructs in order to analyze and optimize more effectively, and the same goes for humans trying to grok code, which is more or less trying to predict how it behaves at run-time, similar to what the C/I is trying to do. Thus, we have a lot in common. The main difference is that C/I's can be completely redone if we don't like the approach we took, but the human mind cannot. We are more or less stuck with primate wiring - the psychology of the human species.
Not really. Humans are perfectly capable of learning skills that weren't wired into them from the start. I was just reading the other day about humans learning to see with their tongues (http://discovermagazine.com/2003/jun/feattongue). And the relative cost of teaching such skills is almost invariably smaller than the cost of changing the languages or frameworks late in a project. You overestimate the flexibility of C/I and underestimate the flexibility of humans.
This brings us back to the question: is it more economical to bend tools to fit the mind, or vice versa (MindOverhaulEconomics)? Compilers/interpreters (C/I) that study code for optimization hints have a "psychology" also, and you seem to be agreeing that this artificial psychology matters. This would imply that natural psychology also matters.
Algorithms and techniques (what you might call artificial skills) matter. Skills matter to humans, too. Processing power, memory, sensory devices, output devices, and information are necessary to learn and utilize these skills. Languages, protocols, and restrictions on behavior are mechanisms for preserving information (in the form of assumptions, assertions, and predictions) in the face of RicesTheorem, GoedelsIncompletenessTheorem, black-box composition, runtime inputs, and incomplete requirements. But your attempt to equate skills with psychology is a fallacy on your part. Psychology is not the study of chess, driving, or the skills for optimizing code. At most, psychology would study what happens in a human brain while learning and applying these skills, and the differences between a neophyte's brain and an expert's.
If we want really fast C/I's, we need to design the language such that the C/I can study it easily. Similarly, if we want humans to be able to study patterns and constructs in order to mentally simulate/predict code run-time behavior (including mental code modification candidates to fit new requirements), we need to design the language to fit the way humans study it most effectively.
Also, "neophyte" versus "expert" is only part of the issue. There are inherent differences between the way different people extract and process information. See PhysiologicalAndPerceptualFactors.
If SapirWhorfHypothesis is valid (and it seems to be holding against tests) then it is more likely that the language itself is the tool with which the humans will be examining the problems to be solved. If so, the goal is not to design a language to "fit" the way humans think, but rather to design a language that helps humans think by allowing them to make useful predictions and assumptions. Further, we wish to embed the language in an IDE that can analyze code and help the humans write. I posit that the exact same features help both C/Is and humans, doubly so in the context of an IDE.
Suppose we designed a language so that it is readily speed-optimizable using the set of techniques known at the time; call it "technology set A". We build the language around technology set A and get a pretty fast language. However, new techniques are discovered or new technologies come along. For example, multiple CPUs become more common, so we can use parallel processing more than before. Call this "technology set B". It may require a very different language organization to take advantage of B than it did A. In A we focused on pipeline management, but now we have more independent pipelines at our disposal. Excess pipeline management may even get in the way of forking portions off, because we had to pre-commit a given pipe to certain predicted behavior, making it more difficult to reassign a task to a different pipe on short notice. The "mechanical psychology" in the C/I that our super-fast language project needs to target has changed. Mechanical psychology mattered here. There is no "one right way" to make a language for C/I optimization speed. It depends on the "personality" of the hardware and the optimization technology available.
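A hedged sketch of the A-versus-B point (the pure function and the thread pool are my stand-ins, not anything from the discussion): code organized around shared-nothing functions can be handed to "technology set B" (many pipelines) without reorganization, whereas code pre-committed to one pipeline's behavior cannot.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure and shared-nothing: the organizational property that lets an
    # optimizer (or a human) reassign the work to any pipeline on short
    # notice.
    return x * x

data = list(range(8))

# Technology set A: one pipeline, sequential execution.
serial = [square(x) for x in data]

# Technology set B: many pipelines; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

# Same answers either way; only the code's organization made B cheap to adopt.
```

Had `square` mutated shared state or depended on execution order (the "pre-committed pipe" of the analogy), moving from the list comprehension to the pool would have required the very reorganization the paragraph describes.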
I have not posited that there is "one right way", but I will say "there are a great many wrong ways" and "there are 'worse' ways". If you don't believe there are 'worse ways', just take any design and add arbitrary AccidentalComplexity (gotchas and restrictions and funky behaviors on Tuesday afternoons) just for the hell of it.
If there are worse ways to do things, there are likely also better ways (unless we have found an optimal 'best' way... but how often does that happen?). These better ways are unlikely to be low-hanging fruit in any mature language or framework. Stronger or more widespread support for CrossCuttingConcerns and NonFunctionalRequirements (introducing concurrency in your example) often requires a different language and code organization exactly because such overhauls haven't yet been performed to reach the 'high-hanging fruit'. And, after doing so, B likely can still do everything A ever did (in terms of functional requirements). The overhaul doesn't even imply a performance loss in the non-concurrent case. If B is not worse than A at performance and other non-functional properties when doing what A did, I'd call B's additional support for concurrency "net better".
And while learning to use this new language for the case solved by "A" may involve some rework for the human that (by happenstance) is skilled in technology A, we have no valid reason to believe 'psychology' will somehow make learning and using technology B just for the case solved by A significantly (in the statistical sense) more difficult than learning and using technology A to solve the problem solved by A. I.e. there is no reason to believe that PsychologyMatters here after the other variables (the problem being solved and the relative quality of the techniques for solving that specific problem) are controlled. Since this example was intended to demonstrate that PsychologyMatters, you need to provide a reason to believe it matters here.
Finally, if B allows programmers solving the A case to "mix" pipelines (as per a sound mixer) when running on A's technology, B might even be easier to learn than A because it requires less learning of hacked in libraries and frameworks for idioms and features not provided natively in A. So, if PsychologyMatters, it may matter in a way that is independent of MindOverhaulEconomics. The poor guy who learned technology A might have wasted more time learning a less valuable and more difficult skill - a bit like spending twenty years learning to use a bow and arrow vs. two weeks learning to use a rifle. To equate MindOverhaulEconomics with PsychologyMatters is another fallacy you've committed above.
- I suspect your favorite approach(es) constitute a TechniqueWithManyPrerequisites - too many.
- Many prerequisites? Perhaps. But I won't delude myself into believing that incremental improvements can get you out of a local maximum. Improvement is change. Significant improvement often requires significant change. Change will be resisted - by MindOverhaulEconomics, by human resources, by existing projects, etc. But there are many beach-head strategies that do not require a full-blown overhaul of existing projects, and that allow a reasonable degree of integration with existing projects to be performed by a single motivated programmer. Most such strategies derive from AlternateHardAndSoftLayers.
- The issue of MindOverhaulEconomics applies to the poor guy who already has skill in technique A and is deciding whether the benefits of gaining skill in competing (and allegedly superior) technique B will overcome the costs of learning.
- The question of PsychologyMatters applies only after controlling for skill (since skill != psychology) and controlling for quality of the technique (since technique != psychology). For a set of experts equally masters of two competing techniques where neither technique is arguably 'better' than the other, one might say that 'PsychologyMatters' when one technique makes the expert more productive than the other in a statistically significant manner and which of the techniques varies from expert to expert. For a set of neophytes in both techniques, one might also observe which technique is easier to learn in the first place.
- But skill and PsychologyMatters may be related. A good WetWare fit could make a given person more skilled more quickly for certain tasks. True, a person who can master many different languages and tools easily is nice to have, but that doesn't mean those who may only effectively master a narrow tool set cannot provide some economic value even if it's less than the multi-master's economic value. Maybe the more narrowly-skilled person has other attributes, such as social skills or domain knowledge that compensate for the lack of multi-tool masterability. There are marathons and decathlons.