ModelsOfComputation are abstract specifications of how a computation can progress, and they are often expressed as the description of some kind of conceptual automaton. ModelsOfComputation are very distinct from DecompositionParadigms, which are strategies for structuring programs, not the computations that executing those programs may occasion.
The single most important theoretical aspect of a model of computation is its power, which is the class of (mathematical) functions it can compute. Another important pragmatic, qualitative aspect is its expressiveness, which is related to the ease with which computations can be expressed in it.
A classic example of a model of computation is the TuringMachine model, which has great power even though its expressiveness is very limited, because of how awkward it is to express computations in it.
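To make the awkwardness concrete, here is a minimal sketch (in Python, with a made-up rule format, not any standard notation) of a Turing machine that merely adds one to a binary number. Even this trivial computation needs a hand-crafted table of (state, symbol) transitions:

```python
# A minimal Turing machine simulator, plus a machine that increments a
# binary number: scan right to the end, then propagate the carry leftward.
def run_tm(rules, tape, state="right", head=0):
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(head, "_")              # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

INC = {
    ("right", "0"): ("0", "R", "right"),   # scan right over the digits
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),   # hit the end; turn around
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry on
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # carry past the leftmost digit
}

print(run_tm(INC, "1011"))  # 1011 (eleven) + 1 = 1100 (twelve)
```

Six rules and an explicit head position, just to add one.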
Church's hypothesis, the ChurchTuringThesis, is a mathematically unprovable belief that any reasonable intuitive definition of "computable" is equivalent to the class of provably equivalent formal models of computation.
For a contrary view suggesting that interactive computation models, such as the ActorsModel, are more powerful than Turing a-machines and the pure LambdaCalculus, see InteractiveComputationIsMorePowerfulThanNonInteractive. [Much content moved there]
Even for sequential computation, a TuringMachine is considerably "lower-level" than the LambdaCalculus, which explains why there are programming languages based almost directly on the latter, but not the former.
With regards to expressiveness: nobody ever claims TuringMachines are an expressive model. There isn't any modern, useful high-level language which limits itself to "turing machine" features; even modern procedural languages - arguably the lowest-"level" type - borrow much from the LambdaCalculus (they have functions, etc., even if they aren't full FunctionalProgrammingLanguages). In fact, we have a rather derogatory term for languages whose chief selling point is that they are TuringComplete - we call them TuringTarpits. The raw LambdaCalculus isn't a particularly programmer-friendly model either (though some functional languages provide little more than a thin veneer on top of it).
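As a taste of just how raw the raw LambdaCalculus is, here is a small sketch using Python lambdas as a stand-in: Church numerals encode even small numbers as higher-order functions, addition must be built out of application, and decoding a numeral for display already requires stepping outside the calculus. (The helper names here are conventional but the rendering is an illustration, not any particular language's encoding.)

```python
# Church numerals: a number n is "apply f n times". Even the number three
# is a higher-order function in this model.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by applying "+1" that many times to 0.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))   # 6
```

Powerful, yes; programmer-friendly, no.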
TuringMachines are not useful as high-level programming models.
Right; that was all I was trying to say with the 'a TuringMachine is considerably "lower-level"' comment. Note that it is independent of the arguments about interaction and encapsulation given in InteractiveComputationIsMorePowerfulThanNonInteractive, which deal with power, not expressiveness.
They are useful for two reasons: 1) they are easy to reason about mathematically; much research and knowledge in computational theory is based on the concept, and 2) virtually all computing hardware that is built resembles TuringMachines. Building hardware that directly evaluates the LambdaCalculus isn't economical; it's more feasible to emulate such on a von-Neumann architecture (which, thanks to years of R&D and numerous economies of scale, is both fast and cheap these days).
Virtually all computing hardware that is built has considerable support for interactive computation, and that support does not resemble anything provided by a TuringMachine.
But discussing the "expressiveness" of a low-level computational model is a RedHerring anyway. This is why programming languages exist: to provide reasonable abstractions on top of the models that are easier for humans to program and reason about. Expressiveness is more important as a figure of merit of a language (or other tool used by programmers). And as all modern languages are Turing-equivalent, expressiveness has almost nothing to do with computational power.
And keep in mind: unless you have very esoteric hardware, any program you write is ultimately getting run on a TuringMachine - more specifically, on a finite-memory approximation of one.
And I seem to recall that ConcurrentSystemsTheory? (maybe in RobinMilner's work) does not necessarily rely on any of the automata classes above, but generalizes them in the sense that everything/anything can happen at once.
See also http://www.cs.brown.edu/people/mph/HerlihyRu00/focs.pdf - in the introduction, they say: "Unlike sequential notions of computability, in which all reasonable models are essentially equivalent, concurrent computability is highly sensitive to the underlying model." and then go on to build a hierarchy of models that can be built on top of others.
The paper is based on an approach whose basis was laid in http://www.cs.brown.edu/people/mph/HerlihyS99/p858-herlihy.pdf (which seems to be later than the PiCalculus).
Note that reasonable here is related to how powerful a language is, not whether anyone can actually use it. Some commonly used languages are not in fact TuringComplete, and some TuringComplete languages are far from what any sane person would call "reasonable". See EsotericProgrammingLanguage.
SlashDot recently ran an article (http://developers.slashdot.org/article.pl?sid=03/06/18/127214) about a book which describes several models of computation. See ConceptsTechniquesAndModelsOfComputerProgramming.
- TheOzBook primarily discusses programming paradigms (dataflow, OO, functional, logic, constraint-based, etc.) and not theoretical models of computation (TuringMachines, LambdaCalculus). While it may touch on the ChurchTuringThesis, it is a minor topic. I highly recommend TheOzBook, but not for the topic discussed on this page.
- But dataflow is a model of computation, because it is about how things happen at runtime, not how programs are structured into parts.
Other models of computation, weaker than those given above, include (from weaker to stronger):
- Deterministic stack automata
- Non-deterministic stack automata
Note that deterministic and nondeterministic TuringMachines are equally powerful, and so are deterministic and nondeterministic FiniteAutomata. But nondeterministic stack automata are more powerful than deterministic ones.
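The classic witness for that last gap is the language of even-length palindromes: a nondeterministic stack automaton accepts it by guessing the midpoint, while a deterministic one cannot know where the middle is. A rough Python sketch, with the nondeterministic choice simulated by backtracking (the function and its tuple-stack representation are inventions for illustration, not a standard construction):

```python
# Even-length palindromes over any alphabet: push symbols until a
# guessed midpoint, then pop and match. The "guess" is tried at every
# position via backtracking.
def accepts(s, stack=(), pushing=True):
    if pushing:
        # nondeterministic choice 1: declare this the midpoint, start popping
        if accepts(s, stack, pushing=False):
            return True
        # nondeterministic choice 2: push the next input symbol
        return bool(s) and accepts(s[1:], stack + (s[0],), True)
    if not s:
        return not stack            # accept iff input and stack are both empty
    return bool(stack) and stack[-1] == s[0] and accepts(s[1:], stack[:-1], False)

print(accepts("abba"))   # True
print(accepts("abab"))   # False
```

A deterministic machine gets one shot at the midpoint and no way to pick it, which is (informally) why determinism loses power at this level of the hierarchy.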
I seem to remember from University that PrimitiveRecursionTheory? is also one of the ModelsOfComputation.
Correct. Primitive recursion is recursion where the number of recursive calls is bounded a priori. Therefore, every primitive recursive function is total: its termination can be established by a TuringMachine. However, there are things (such as AckermannsFunction?) which can be computed by a TuringMachine, but not by PrimitiveRecursion? (Ackermann's is recursive, but not PrimitiveRecursive). Primitive recursion is less powerful than a Turing machine.
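A direct (and deliberately naive) Python rendering of AckermannsFunction? makes the point tangible: the definition is a perfectly ordinary, TuringMachine-computable recursion, yet its recursion depth cannot be bounded in advance, which is exactly what primitive recursion requires:

```python
# Ackermann's function: total and computable, but not primitive recursive,
# because the depth of recursion depends on intermediate results.
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))  # recursion depth driven by the result

print(ack(2, 3))   # 9
print(ack(3, 3))   # 61
```

Beware that even slightly larger arguments explode: ack(4, 2) is a number with thousands of digits, and a naive implementation like this one blows the call stack long before that.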
Let us look at the models of computation that have been influential in the design of programming languages. The most influential is the von-Neumann machine: a conventional CPU with only one register (called the accumulator) and load and store instructions. At most one memory cell changes during any one machine cycle, but machine cycles are fast.
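A toy illustration of that accumulator model (not any real instruction set; the mnemonics and program encoding are made up for this sketch) might look like this:

```python
# A one-register von-Neumann machine: a single accumulator, load/store
# instructions, and at most one memory write per cycle.
def run(program, memory):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]        # accumulator <- memory cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc        # the one memory change this cycle
        elif op == "HALT":
            return memory

# memory[2] := memory[0] + memory[1]
mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)], [5, 7, 0])
print(mem[2])   # 12
```

Every computation, however abstract in the source language, is eventually flattened into streams of such cell-at-a-time updates.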
- Except perhaps for assemblers, I can't imagine any popular programming language that is based on a CPU/accumulator/load/store model of computation. Essentially all popular programming languages are based on the environment/references model of computation, and most are based on the contour-model specialization of the environment model. Admittedly, very few people seem to have ever heard of either. Compilers and virtual machines map the environment model and the contour model, in terms of which (most popular) programming language computations are defined, onto the (usually extended) von-Neumann model in terms of which modern hardware systems are defined.
- Once again, we are discussing models of abstract computation, not programming models. The Random Access Machine model (the formalism which underlies the Von Neumann architecture) is a way of expressing computation as an abstraction; its usefulness as a model for physical hardware is more or less coincidental. I doubt you'll find many programming languages based on Post Formal Systems as a computation model, but they too are thought to be equivalent to the UTM model. Programming languages are not mathematical formalisms - they are transformation systems for conveying concepts from an informal system (the thoughts of the programmer) into an engineering formalism (the machine code). It is useful to base programming languages on mathematical formalisms, but most programming languages historically have not been. -- JayOsako
- It is possible to interpret programming languages as models of computation. It's not a guiding principle in designing the language (unless it's a proof-of-concept exercise), but beyond the obvious fact that the syntax of a programming language is necessarily a formal language, there's the usable interpretation that a programming language together with its semantics (constraints that aren't expressed explicitly but would get trapped by a compiler) can be regarded as a model of computation. In this interpretation, a program becomes a proof that a certain computation is possible in the given model. (The fun begins when you have a proof but it's for the wrong theorem.)
The second most influential model of computation underlying programming languages is the one where state is sequestered in objects. If you consider a variable as "in reality" a simple object that responds to get and set messages, you are definitely using the ObjectOriented model (or the ActorsModel).
- The ObjectOriented style is a decomposition paradigm, and not a model of computation. Languages like C++, Java, or SIMULA are definitely OO, but certainly their model of computation is not the ActorsModel, whose main characteristic is the absence of shared state.
The third most influential model is the lambda calculus, the conceptual foundation of FunctionalProgramming. The fourth most influential has been the PredicateCalculus?, aka predicate logic. Expressions in predicate calculus are closer to NaturalLanguage than expressions in the other models are, which is an advantage, but IMO we do not yet know how to implement PLs based on the PredicateCalculus?.
I will claim that there are two primary models of computation: something akin to the Babbage DifferenceEngine, and TuringMachines. They are not equivalent, except in the mostly-irrelevant truth that one can map from one domain into the other. One deals with binary logic, the other does not. Much ThreadMess is due to the confusion of these two. Practically, one only deals in one realm at a time, but terms have been thrown around to the point of making magic happen.
I'd like to see where you feel ThreadMess is specifically due to "confusion" between "something akin to the Babbage DifferentialEngine?, and TuringMachines", and I'm curious what you think the ideal solution to that confusion should be. The usual solution to misunderstanding of or confusion about models is, and can only be, documentation and education.
To show evidence is not simple, because I only have words to work with, and it is words that comprise the problem (the other problem is energy depletion). Try TypeTheory and the many derivative pages. Unless you grok it yourself, you won't see the differing semantics underneath all the words being bantered around. I might just have to wait until the community catches up. It's taken me a long, painstaking analysis to figure it out, bordering on otherworldly. But yeah, there's been arguing around in circles because everyone's been sharing the syntax of a lexicon without actually sharing the semantics. The community has (mal)adapted to this problem by using the troublesome words with presumed authority, apparently in an attempt to win converts to their preferred semantics. The ideal solution is to 1) see the profound difference between the two domains, almost always unnoticed because the discussion never gets to the exact details of the implementation where the problems arise, then 2) start separating the terminology, so that words aren't shared where semantics aren't readily apparent. There's apparently about 50+ years of this shared lexicon, not high on the radar because the field also (and more positively) adapted by creating specializations. A starting point at revamping it is at ComputerScienceVersionTwo. Thanks for the question.
I think you must be seeing something that I am not, especially if your analysis requires something "bordering on otherworldly". What I see is a community on this Wiki that is happily bantering and arguing about topics they understand to a greater or lesser degree, in a manner akin to a gathering of students and professors in a university pub. Here, it's all about education and fun. Outside of this "pub", in the broader ComputerScience and SoftwareEngineering communities -- and by that I mean the professional world, not simply other on-line pubs -- there is no general confusion.
Ah, yes, that is all fine when dealing with "armchair ComputerScience", but it is not sufficient for PhD-level work. At that level, one must nail down the confusion, so that a foundation of knowledge and mastery can even begin. It's akin to mixing the complex domain with the reals and going along as if it's the same, because they both use the same operators and seem to have the same results (i(1+5) = i(6) :: 1+5 = 6), so should be interchangeable, but resoundingly: NO. --MarkJanssen
Indeed, academic work goes to considerable effort to clarify the definitions of terms it uses and clearly identify its scope and domain. WardsWiki isn't academic work. WardsWiki is banter.
Sure, but let us try to accomplish something shall we? Otherwise we can just throw pieces of paper at each other and accomplish the same thing: nothing.
If you're looking for collaborators, I suggest you make an explicit request for collaborators. It's not reasonable to expect all of us to change what we like about WardsWiki, or why we're here, or how we interact here, to suit you. Some of us like bantering, and it's what we're here to do.
The single most important practical aspect of a model is that it is enormously "expensive" to implement another model of computation on top of hardware that already embodies its own (a decent counter-argument to the ChurchTuringThesis).
This depends on what hardware model of computation you're pairing with which software model of computation. It's straightforward to implement, say, a LambdaCalculus-derived model on top of conventional TuringMachine-derived VonNeumannArchitecture processors, and FunctionalProgramming languages like HaskellLanguage are the result.
See also: ModelsOfComputationAndFormalLanguages