ComputerScience has been confused. It has conflated two entirely different ModelsOfComputation and put them both in the lexicon of CS. For evidence of this confusion see TypeTheory. If you can't make sense of it all, you now know why this page exists.
The field got confused when Babbage's DifferenceEngine got lumped together with modern-day computers (i.e. which are TuringMachines). In other words: a conflation between AnalogVsDigital?

- The DifferenceEngine was not an analog computer. -- NickKeighley
- Strangely, one finds this confusion surrounding the venerable DonaldKnuth's idea of exponentiation of the zero; that is, zero to the zeroth power, which he set equal to 1, but it cannot be. Zero times anything is zero. This is a disagreement between basic algebra and limit theory for calculus (which is to say the numeric/symbolic domain vs. a geometric domain). I put this little tidbit here to see if anyone else cares to examine why such a decision affects ComputerScience.
- Your misunderstanding lies in thinking 0 to the zeroth power is zero times anything. A simplistic explanation is that it isn't doing "times" by the time it gets to zero. 2^3 is 2 * 2 * 2; 2^2 is 2 * 2; 2^1 is 2; so 2^0 is ...? In one interpretation, it's 1 by definition rather than by derivation. Are you in fact referring to 2^0 (two to the zeroth power)? The fact that it's 1 makes it trivial to (for example) convert binary to decimal. E.g., 11001 equals 1 * 2^4 + 1 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0 = 1 * 16 + 1 * 8 + 0 * 4 + 0 * 2 + 1 * 1 = 16 + 8 + 1 = 25. (A small sketch of this place-value conversion appears after this thread.)
- Zero to the zeroth power being equal to 1 pre-dates Knuth -- it long pre-dates ComputerScience, in fact -- and see http://www.askamathematician.com/2010/12/q-what-does-00-zero-raised-to-the-zeroth-power-equal-why-do-mathematicians-and-high-school-teachers-disagree/
- Thanks for the reference. I'm sure the question pre-dates Knuth, but it seems that he made the decision that affected Computer Science.
- He documented the definition, but didn't originate it. He did argue -- quite compellingly -- in favour of it (see http://mathscitech.org/articles/zero-to-zero-power). How do you believe 0^0 negatively (I presume) affects computer science?
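As a concrete aside, here is a minimal Haskell sketch of the place-value conversion described above (the helper name is invented for illustration); defining 2^0 = 1 is exactly what lets the last digit be weighted the same way as every other position:

 -- Convert a string of binary digits to its decimal value by weighting each
 -- digit with 2 raised to its position, including position 0 for the last digit.
 binaryToDecimal :: String -> Integer
 binaryToDecimal digits =
   sum [ digitValue d * 2 ^ position
       | (position, d) <- zip [0 ..] (reverse digits) ]
   where
     digitValue '1' = 1
     digitValue _   = 0

 -- ghci> binaryToDecimal "11001"
 -- 25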
Babbage's DifferenceEngine is in the same category as LambdaCalculus - neither uses digital logic nor BinaryArithmetic, yet these are the "must-have", defining features of modern-day computers.
[Note: Originally, the above read "... defining features of modern-day ComputerScience", and this was a response to that.] Neither "digital logic" nor binary arithmetic are "must-have", defining features of modern-day ComputerScience. ComputerScience is, by definition, the study of computation in all its forms. No basis in digital logic or binary arithmetic is required, though obviously both are as important as LambdaCalculus. You may be confusing ComputerScience with computer engineering. In computer engineering, the majority of modern-day computer designs do depend on digital logic and binary arithmetic.
Okay, I think you and I must have a talk. If I turn a gear wheel 'round that turns a second, smaller gear, am I "calculating" or is it physics? Because I think we must draw the line somewhere. They do not have the same behaviors at all. Though they may share the same name "computation", they do not share a common computational domain. One will be constrained by physics, the other is not.
- Computation is constrained by physics. Information does not travel faster than the speed of light. Information always has a physical representation, with a minimum energy dependent on ambient temperature (cf. Landauer's principle). What's with this widespread myth that computation is independent of physics? Computation is mechanical. (related: http://calculemus.org/logsoc03/materialy/ConservativeLogic.pdf)
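For a rough sense of scale, here is a back-of-the-envelope Haskell sketch of the Landauer bound mentioned above (the function name is invented for illustration): erasing one bit dissipates at least k_B * T * ln 2 joules.

 -- Minimum energy, in joules, to erase one bit at a given ambient temperature.
 landauerLimitJoules :: Double -> Double
 landauerLimitJoules temperatureKelvin = boltzmann * temperatureKelvin * log 2
   where
     boltzmann = 1.380649e-23  -- Boltzmann constant, J/K

 -- ghci> landauerLimitJoules 300   -- roughly room temperature
 -- approximately 2.87e-21 joules per bit erased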
IN OTHER WORDS: One
of us here is the pro
grammer and one of us is just a bro
I'm not quite sure what you mean, or how it's relevant -- or why it matters -- to a discussion of what ComputerScience is about. ComputerScience is the study of computation, largely independent of (though sometimes studying as well) the physical mechanisms used to achieve it. Again, I think you're conflating computer engineering with ComputerScience.
Perhaps that is what yours is, along with Harvard's it appears, but it cannot be the case at all. If you are going to call something Computer Science, you're going to have to specify the ModelOfComputation you're using for a Foundation for the science; otherwise, this confusion will continue. (See the top paragraphs on ModelsOfComputation, specifically the counter-argument against the ChurchTuringThesis that begins with "the most important practical aspect of...".)
Actually, it can be the case, because it is the case. ComputerScience does not have to "specify the ModelOfComputation you're using for a Foundation for the science", any more than (or as ridiculous a notion as) biology has to specify a particular nematode to use as its "Foundation". Specifying a ModelOfComputation is a foundation for (say) engineering a new programming language, but ComputerScience studies and embraces all models of computation; it doesn't limit itself to one.
Well then, perhaps biology is behind too, because I'm quite sure the DNA structure of fungi is going to be quite radically different than the plant Kingdom, even though they share the generic name of "life". Frankly, I'm not even sure that fungi have DNA at all. So, yes, a biologist would have to specify the Kingdom of life that they're studying if they want to make sense of it.
Biology is the study of the mechanisms of "life", so fungi, plants, bacteria, viruses, diatoms, protozoa and creatures are all appropriate subjects of study. Fungi have DNA. What breadth of study is appropriate is up to the individual biologist, but it's all equally biology. Likewise, all models of computation are equally ComputerScience.
What has made it even more confusing is the idea (akin to the ChurchTuringThesis) that people keep arguing: that each can be implemented in the other, so that "it's all the same". This is wholly irrelevant except for ToyProblems? and endless argumentation. One can also map the imaginary numbers onto the reals, but this doesn't mean they are the same or interchangeable.
Of course LambdaCalculus and TuringMachines are part of the lexicon of ComputerScience -- they both belong there, and both are valid bases for theoretical study. Both are equivalent in terms of not presenting any particular obstacle (or help, for that matter) to the programmer implementing programming languages, operating systems, or other software.
Stop right there. You are making a claim that is not true. This confusion does present an obstacle, and the two are not equivalent; the exact nature of this is what I'm trying to elucidate. Specifically, it is enormously expensive to design certain kinds of languages on some models of computation. I'm not going to get much done with C on a LISP Machine -- if I could even write a compiler on it -- simply because its design is for recursive models (akin to LambdaCalculus).
I'm not convinced any serious confusion exists (casual pub-squabbles amongst equals don't count), and what is not an obstacle is basing a language on any particular ModelsOfComputation. C++ and Haskell, for example, are based on different ModelsOfComputation and present very different approaches, but both can achieve equivalent ends.
- Turing completeness ensures that you can write any computable function. It says nothing about equivalence with respect to extensibility, modularity, security, performance, complexity, and convenience. (To be equivalent with respect to modularity and extensibility is formally related to homotopy and homeomorphism.) Cellular automata are Turing complete, but it would be extremely difficult to write a usable operating system in one (see the sketch after this exchange). Why do you imply such considerations are not valid obstacles?
- I did not intend to imply that such considerations are not valid practical obstacles, so I'm not sure why you inferred it. My point was that ComputerScience -- in particular, as a theoretical field of study -- is not hindered by "any serious confusion" between LambdaCalculus and TuringMachines, as was alleged by my correspondent.
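To make the cellular-automaton point concrete, here is a minimal Haskell sketch of one step of Rule 110, a TuringComplete rule that is nonetheless plainly awkward as a substrate for everyday software (the names and the dead-cell boundary handling are illustrative choices):

 -- Rule 110: the next state of a cell depends only on its left neighbour,
 -- itself, and its right neighbour.
 rule110 :: Bool -> Bool -> Bool -> Bool
 rule110 l c r = case (l, c, r) of
   (True,  True,  True ) -> False
   (True,  False, False) -> False
   (False, False, False) -> False
   _                     -> True

 -- Advance an entire row one generation, treating cells beyond the ends as dead.
 step :: [Bool] -> [Bool]
 step cells = zipWith3 rule110 (False : cells) cells (drop 1 cells ++ [False])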
I don't think you can do GenericProgramming (in the CeePlusPlus sense) in HaskellLanguage.
You don't have to. You do it in the Haskell sense, which is arguably more powerful.
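For illustration, a minimal sketch of generic programming in the Haskell sense: parametric polymorphism plus a type-class constraint, playing roughly the role that templates play in CeePlusPlus. The example is invented here, not taken from either correspondent.

 -- Works uniformly over any type with an ordering; the Ord constraint is
 -- checked up front rather than at template-instantiation time.
 largest :: Ord a => [a] -> Maybe a
 largest [] = Nothing
 largest xs = Just (maximum xs)

 -- ghci> largest [3, 1, 4, 1, 5 :: Int]
 -- Just 5
 -- ghci> largest ["lambda", "turing"]
 -- Just "turing"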
The other thing is that this kind of argument leads to a TuringTarpit: just because you can do it (in "theory") doesn't mean it is a viable alternative.
A TuringTarpit is a language that is TuringEquivalent but, usually for illustration or amusement, so awkward as to be effectively unusable. Haskell, for example, is obviously not such a language; it can be used wherever you'd use C++.
ComputerScience has no difficulty reconciling theoretical ModelsOfComputation with applied SoftwareEngineering. In short, there is no confusion.
Sorry, you just made a claim that does not have consensus. There doesn't seem to be any confusion, because where is the "software engineering" in Babbage's Computer? In any case, I will try to find the references that show the CS field has been fragmented and does not have consensus. Where you might be confused is that it has, instead, a long-standing tolerable disagreement which has gone silent; but this should not be confused with consensus.
Among computer scientists and experienced software engineers, I think my claim does have consensus. I work with computer scientists and experienced software engineers on a daily basis. Whilst there is good-natured ribbing between (and about) the imperative programming proponents, the logic programming proponents, and the functional programming proponents, we all get the job done using the tools we prefer and we recognize the equal validity of the ModelsOfComputation that underpin them.
- I think the problem is between a TypeSystem model (the view from above) vs a Hardware view (a view from below). The latter is tied to DigitalLogic. See also ClosuresConsideredHarmful.
- For most programming -- to the extent that this is a concern at all -- it's only a significant issue for the compiler or interpreter developer and certainly not a difficult or controversial one.
"engine" (a mirror in some way to the JavaVirtualMachine
..?) were honed in the presence
of these contradictions - they did not solve them. In any case, my consideration is for a UnifiedDataModel
, and these old domain (mal)adapatations are no longer tolerable and can't simply be waved-away to "get the job done". Come to think of it, the entire confusion only came after the web created its own little domain of "web programming" and, though slow attrition, usurped the lexicon of ComputerScience
. Were they not once called ScriptKiddie
I think your problem of creating a UnifiedDataModel, in light of contradictory views and redundant terminology, is trivially solved by implementing a notion of synonyms (to take care of redundant terminology) and microtheories, which are collections of knowledge that are not contradictory within themselves, but which may contradict other microtheories. The OpenCyc people have already explored some UnifiedDataModel notions, and it may be worth examining their work. That will certainly be easier than trying to reconcile (and then legislate?) terminology throughout the entirety of the computing field.
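As a rough illustration of the synonyms-plus-microtheories idea (all names here are invented; this is not OpenCyc's actual API), one might sketch it like this:

 import           Data.Map (Map)
 import qualified Data.Map as Map

 type Term        = String
 type Synonyms    = Map Term Term   -- alias mapped to its canonical term
 type Microtheory = Map Term Bool   -- propositions held true or false

 -- Collapse redundant terminology to a single canonical term.
 canonical :: Synonyms -> Term -> Term
 canonical synonyms term = Map.findWithDefault term term synonyms

 -- Asserting a proposition fails (Nothing) only when it contradicts something
 -- already in the *same* microtheory; other microtheories are free to disagree.
 assert :: Synonyms -> Term -> Bool -> Microtheory -> Maybe Microtheory
 assert synonyms term value mt =
   case Map.lookup key mt of
     Just existing | existing /= value -> Nothing
     _                                 -> Just (Map.insert key value mt)
   where
     key = canonical synonyms term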
- [No, not really... unless you are intentionally conflating developing websites with breaking into websites.]
What you are suggesting is more like a distributed SemanticNetwork?. I actually want to create an ecosystem of working objects, like the DNA snippets inside an organism.
For the record, I did not create this topic nor the introduction. --TopMind
I think it's clear where the confusion about computer science lies here. And indeed about other fields of mathematics as well (for example, somehow blaming the behavior of exponentiation on Knuth). Computer science is the study of computation. Period. Lambda calculus, Turing machines, cellular automatons, electrical, optical, mechanical, von Neumann cores and memory, FPGA computational fabric, neural networks... everything. MarkJanssen, your view of the field is cripplingly narrow and limited.
- Quite the opposite, sir. It's actually very broad. Do you know what you even mean when you say "computation"? Do you mean adding numbers? Do you mean interfacing with a user? That's what computers do, right? What does it mean to add a number? Is it a quantity being increased? I'm arguing that you do not know as much as the field would confer to you.
See also ComputerScienceVersionTwo