# Multi Valued Logic

MultiValuedLogic is the most general extension of ThreeValuedLogic: logic with N truth values, where N is potentially infinite.

A proposition, informally, is a statement that you can make about a world that possesses a truth value in that world. E.g. "Snow is white", "It is raining outside", etc. are propositions. The use of TwoValuedLogic to reason over worlds operates within the ClosedWorldAssumption; every proposition about a closed world is, by nature, either true or false (or some other duality). Under the ClosedWorldAssumption, you either know or can go measure any particular detail you need; you effectively have omniscient access to every detail about the world. If you can't prove a particular proposition is true, then that proposition must be false.

The use of ThreeValuedLogic to reason over worlds operates within the OpenWorldAssumption. By definition, there are some details about an open world that you cannot know (i.e. you don't know them, and aren't permitted to measure them in the performance of reasoning). Thus, if a particular proposition cannot be proven true or false, it is unknown (or unknowable). (The "cannot be proven" is literal, and different from "will not be proven". You may assume you have a processor that is infinitely fast and has infinite space, so it can solve even undecidable questions, operate with impossibly large numbers, etc.)

```
--(propositions require DependentTyping in a system that carries both open and closed worlds)
type proposition = (closedworld -> true|false) & (openworld -> true|false|unknowable)
```
All worlds are either closed or open; the two cases are exhaustive. Ultimately, any proposition asked of a world will only ever need to return true, false, or (for truly open worlds) unknowable.

Multi-valued logics, thus, are not necessary for reasoning over worlds. Or, at least they are not necessary for reasoning over worlds directly. Instead, these logics are generally utilized for indirectly reasoning about other worlds within systems that often contain incomplete, fuzzy, and potentially incorrect information. In a sense, these systems are epistemic worlds, where the objects consist of knowledge and facts held with varying degrees of confidence. This epistemic world may, itself, be closed or open... e.g. it might be considered closed while reasoning within one's own mind (since you supposedly know what you believe), and open while reasoning about someone else's (since you don't usually know everything someone else knows). Thus, questions of the epistemic world return that some proposition is true, false, or unknown. However, those propositions themselves are generally asking: is <some fact> accepted/known/believed/necessary/possible/etc. in <some other world>. All propositions are indirect.
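
The indirect, epistemic query described above can be sketched in Python. This is only an illustration under stated assumptions: the belief set, the fact names, and the `believed` helper are all hypothetical, and `"unknown"` is returned whenever the belief set is silent, matching the open-world reading of someone else's mind.

```python
# A hypothetical belief set for some other mind: facts held true or false.
alices_beliefs = {"snow_is_white": True, "pigs_fly": False}

def believed(beliefs, fact):
    """Indirect proposition: is <fact> believed in <this mind>?"""
    if fact in beliefs:
        return beliefs[fact]    # held true or held false
    return "unknown"            # the belief set is silent on this fact

print(believed(alices_beliefs, "snow_is_white"))  # True
print(believed(alices_beliefs, "it_is_raining"))  # unknown
```

Note the shape of the answer set: true, false, or unknown, exactly as the text argues, even though the proposition is itself about another world.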

The most common of such systems is the brain. As such, MultiValuedLogic has a very important place in ArtificialIntelligence.

The following are in common use across various domains:
```
type truth_primitive = true         --(boolean logic)
                     | false        --(boolean logic)
                     | believed with <confidence> to be <truth_primitive>  --(epistemic logic)
                     | necessary    --(modal logic)
                     | possible     --(modal logic; see many other modal logics)
                     | <probability> likely  --(bayesian logic)
                     | <percent> true        --(fuzzy logic)
                     | unknown      --(open-world inherent logic value)
                     | unknowable   --(open-world theoretic logic)
                     | undecided    --(computation-limited logic)
                     | undecidable  --(computation-theoretic logic)

```
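
The three-valued fragment of the table above (true, false, unknown) can be sketched with the strong Kleene connectives. This is a sketch, not from the original text: using Python's `None` to stand in for 'unknown' is an implementation choice.

```python
# Strong Kleene three-valued connectives.
# None stands in for the 'unknown' truth value.

def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False            # a definite false annihilates, even beside unknown
    if a is None or b is None:
        return None             # otherwise unknown propagates
    return True

def k_or(a, b):
    if a is True or b is True:
        return True             # a definite true dominates, even beside unknown
    if a is None or b is None:
        return None
    return False

print(k_or(True, None))   # True: true regardless of the unknown operand
print(k_and(True, None))  # None: still unknown
```

Restricting the inputs to `True`/`False` recovers ordinary BooleanLogic, which is one way to see three-valued logic as a conservative extension of two-valued logic.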

The original three-valued logic was invented (by Łukasiewicz around 1920) to accommodate the idea that some assertions are neither true nor false. Not that their truth or falsehood is unknown (to someone or something); that they literally, objectively *are* neither true nor false. Several three-valued logics are based on this idea; some are based on the notion that assertions can be *both* true and false. Some four-valued logics assume that assertions can be true, false, neither, or both. (Why make such apparently strange assumptions? Usually to try to avoid (or to embrace!) some of the paradoxes that afflict ClassicalLogic?, or to capture the idea that statements about future events can be neither true nor false.) Conversely, the logics that philosophers *do* generally use to reason about things like knowledge and provability are things like IntuitionisticLogic? and flavours of ModalLogic?. These are by and large two-valued logics which are provably not equivalent to any finitely-many-valued logic. (Equivalence to infinitely-many-valued logic(s) is a messier story.) Source: An Introduction to Non-Classical Logic by GrahamPriest?.
• These are by and large two-valued logics which are provably not equivalent to any finitely-many-valued logic -- that's not really possible, since any two-valued logic must be equivalent to itself, and every two-valued logic is a finitely-many-valued-logic. In any case, multi-valued logics (including three-valued logics) have existed since long before the 1920s. Aristotle, for example, was dealing with contingent truths, necessary truths, possible truths, etc. And he's hardly the only reasoning mind in the last 2000 years that has needed more than 'true' and 'false' to tackle logic problems.
• People have been thinking about modality for a very long time. But in general *they haven't been using many-valued logic to do it*, which was pretty much the point.
• I dunno about that; "True", "False", and "I haven't a friggin' clue", has probably been around far longer than any formalizations of them.
• A many-valued logic has more than two truth values by definition. (Read my "finitely-many-valued logic" as "finitely many-valued logic" if that avoids confusion.) It's still true that you could bolt extra truth values into intuitionistic logic, say, but that's missing the point. On the one hand we have the modal logics, intuitionistic logic and so on, which have funky modal operators and the like but only two truth values. On the other hand we have the many-valued logics, which have multiple truth values but no special operators that don't appear in ClassicalLogic? (classical FirstOrderLogic, at least). It's impossible to take (for example) a standard modal logic and find an equivalent (first-order) logic which has a finite number of truth values and no special operators. So not only do you *not have to* use a finitely many-valued logic to reason about knowledge, belief, provability, etc., you *can't* use a finitely many-valued logic to do the sorts of reasoning you do with epistemic logic, intuitionistic logic and their ilk.
• "Read my "finitely-many-valued logic" as "finitely many-valued logic" if that avoids confusion." -- oh! That makes much more sense, when read that way. Really, though, no one logic should be fully equivalent to another logic. Where that is the case, you really don't actually have two different logics; you just have one logic, likely with two different paradigms. This often happens with Bayesian vs. Fuzzy logics, when the operators are defined in the same manner. (IMO, one should probably figure out what's wrong with the operators if two different concepts have the same mechanism for reasoning, especially if one is looking to combine the logics into one working whole.)
• "you *can't* use a finitely many-valued logic to do the sorts of reasoning you do with epistemic logic, intuitionistic logic and their ilk." -- I feel there may be a bit of a LaynesLaw issue on how we're counting logical primitives. I count each primitive 'truth' that can be returned from a proposition (even if a set is returned, like 'necessary' and 'necessarily necessary', I just count each value once): Provable, Not Provable; Necessary, Possible, Impossible, Necessarily Necessary, Possibly Impossible, etc.; Accepted, Unaccepted; Likely, Unlikely, or something in-between; Decidable, Undecidable; True, False; Unknown, etc. I count each 'primitive' return value, and this is completely independent of the nature of the reasoning required to find that value. There are many ways to get infinitely-valued logics (e.g. most modal logics, bayesian probabilities, fuzziness, etc.). While I count answers, I don't count the reasoning to get those answers; that doesn't change the 'valued' level of the logic. E.g. I don't count whether ~~p reduces to p, which does happen in BooleanLogic but not in IntuitionisticLogic?; in either, if you ask a proposition, you will only receive true or false (or 'unknown' in an open world). I also don't count questions asked in return: if the model answers a question with a question, e.g. answering a query for "what is <proposition>" with "tell me <p, q, and r>", that is 'unknown' with window-dressing. The set of answers for a proposition is at least somewhat independent of the existence of first-order or higher-order predicates, axioms, etc. Predicates and abstractions mostly determine the nature of the propositions you may ask, and should be based upon the nature of the world over which you are reasoning. Ultimately, there is nothing preventing the use of N-valued logics for the tasks described, so long as the operators between propositions of different nature can be reconciled (e.g. what is 'possible' AND 'undecidable'?).

BooleanLogic operates within a ClosedWorldAssumption (if you can't prove it's true, then it's false).

BooleanLogic does no such thing. Adding that assumption to BooleanLogic would make incomplete systems inconsistent.

Your objection is unfounded. Incomplete systems are inconsistent with BooleanLogic. For example, proposition 'p', all by itself, has no boolean logic value. Period. It's fundamentally inconsistent with boolean logic. However, by representing 'p' in a ClosedWorldContext?, then you can assume either: "if 'p' is not provable as true, then 'p' is false", or "if 'p' is not provable as false, then 'p' is true". Now 'p' can be given a BooleanLogic value (either true or false). To do this, however, you had to "operate within the ClosedWorldAssumption".
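
The closed-world move described above (give 'p' a boolean value by declaring unprovable things false) is essentially negation as failure, and can be sketched in a few lines of Python. The fact store and the fact names are hypothetical illustrations, not part of the original.

```python
# A tiny closed-world knowledge store: the set of provable facts.
facts = {"it_is_raining", "snow_is_white"}

def holds(proposition):
    """ClosedWorldAssumption: if a proposition is not provable
    from the facts, it is taken to be false."""
    return proposition in facts

print(holds("it_is_raining"))  # True: provable
print(holds("p"))              # False: not provable, therefore false
```

On its own, 'p' has no truth value here; it acquires one (false) only by virtue of the closed-world rule, which is the point being argued.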

• In vanilla ClassicalLogic? it is absolutely not true that if you assert neither a proposition P nor any other set of propositions which entail P, then you are effectively asserting ¬P. If that's what the ClosedWorldAssumption means, then it doesn't hold in vanilla ClassicalLogic?. If it did hold, then P ⊨ ¬Q and P ⊨ P & ¬Q would be valid inferences (again, under vanilla ClassicalLogic?). They aren't. More on the standard semantics of ClassicalLogic? when I have time.
• The only things that are held to be true in a world of pure logic are those things that are necessarily true: tautologies. The only things that are held to be false in such a world are those things that are necessarily false: contradictions. The former you can prove to be true, the latter you can prove to be false. Other propositions are generally held to be variable objects with the value unknown. As such, it'd be an error to call a world of pure logic a 'closed world', or to assert that the ClosedWorldAssumption actually applies.
• What do you mean by a "world of pure logic"? The empty language? An interpretation which makes no sentences true? An empty set of premises? It's not relevant in any case: the (putative) inferences I gave above have a non-empty set of premises, imply a non-empty language (containing at least P and Q) and can be given interpretations in which at least one sentence, P, is true. (In fact the empty language is the only language of ClassicalLogic? which doesn't defy the ClosedWorldAssumption, in that there is no set of sentences in the language which does not imply the truth or falsity of every sentence in the language. - Actually, no, I was completely wrong there, because the empty language has contingent propositions too, like ∃x∃y¬(x = y). So it defies the ClosedWorldAssumption as well.)
• By a "world of pure logic" I mean a world in which nothing (no thing) exists. Because nothing exists, you can't answer questions about things. You cannot answer "Is it raining?" because there is no 'it', implicit or otherwise. However, that doesn't mean you cannot formulate the question. Worlds do not impose restrictions on the language used to discuss them. If you ask bad questions, you can't answer them... but that doesn't change the world. Worlds are, quite fundamentally, separate from the logics used to reason about them. If you use the wrong logic, you'll end up with bad conclusions, but a failure of reasoning does not change the world. (That is excepting where reasoning is performed within the world, and the very act of reasoning and computation thus changes the world).
• "However, that doesn't mean you cannot formulate the question. Worlds do not impose restrictions on the language used to discuss them." An interpretation must give a denotation to every constant in the language (and every relation except =). If there are no things in the universe of discourse to denote, then you can't use any language with more than zero constants to talk about it, as there is no interpretation which applies that language to that universe of discourse. So yes, your choice of universe of discourse does impose some restrictions on the language you use to discuss it.
• As a language theorist, I must heartily disagree. Choosing the universe of Integers as my universe of discourse does not prevent me from choosing a language to communicate and reason about this universe that includes direct references to each of the 101 Dalmatians, while still managing to exclude direct references to most integers in the range around and above 3^3^3^3^3^3 simply because they are not physically expressible. (Any direct physical expression of an arbitrary such integer requires more atoms than there are in the universe, and expressions in a language must be expressible). It is not an error to choose a language that is more powerful than the problem domain requires, and yet remains unable to express some things in its universe. Instead, rather than choosing a weaker language, we simply limit our use of a more powerful language within the context of a particular problem domain.
• In ClassicalLogic? (at least!), every constant in an interpretation denotes something inside that interpretation's universe of discourse. "For a constant c, the denotation c^M is to be some individual in the domain |M|." - /Computability and Logic/, 4th ed., Boolos, Burgess and Jeffrey, p. 104. -- Every formable and meaningful expression in a language is, rather fundamentally, within that language's widest potential universe of discourse. This does not mean that all formable expressions are meaningful; many complex languages can easily formulate paradoxes. This also does not mean that all meaningful things are formable -- there may be a gap between what one intends to discuss and what one can discuss with a particular language. Finally, it does not mean that the widest possible universe of discourse for the language equates to the universe of discourse for the problem domain. Even when dealing with logic languages, all the above is true. It is useful to recognize the difference between what one can discuss (which is based on language limits) and what one intends to discuss (the universe of discourse, based on the problem domain), because the two are often not entirely matched. Where there is no universe to discourse, you have such a case -- you can't say or know anything meaningful about a 'Mu' world except its 'Mu'-ness. However, that doesn't mean you can't say or know anything meaningful.
• Of course, even without a world to reason about, you're still capable of performing some rather complex reasoning. Absent a real world, you're free to imagine one up and ask questions about it, instead. I.e. in place of "Is it raining?", you may ask, "Supposing that a world exists, and the world has <these many, many properties>. Is it raining in this world?". On the other hand, you can only answer affirmatively if the properties you defined about the world necessitate that it is raining. (There is "no thing" to measure in a "world of pure logic".) If you're able to answer 'yes, it is raining', you must have a tautology. The trivial example is: "Suppose that a world exists, and that it is raining in this world. Is it raining in this world?". The ability to describe a world and formulate questions about it, of course, requires a more sophisticated language... not a less sophisticated one.
• "If you're able to answer 'yes, it is raining', you must have a tautology. The trivial example is: "Suppose that a world exists, and that it is raining in this world. Is it raining in this world?"." From the statement that it is raining it is valid to infer that it is raining: P ⊨ P. That is not a tautology. A tautology of ClassicalLogic? is a valid inference from the empty set of sentences, for example ⊨ P → P.
• "From the statement that it is raining it is valid to infer that it is raining: P ⊨ P. That is not a tautology." -- Indeed. And, yet, you do not HAVE the statement "It is raining", and you can't even look out the window to check because there is no 'it' and there is no 'window'. You need to hypothesize the existence of 'it' and the property that 'it is raining' prior to indicating that this would imply that 'it is raining' is, indeed, true. ClassicalLogic? isn't ideal for hypothesizing the existence of worlds (and thus expressing that certain propositions can hold meaning in a particular world). However, this task is conceptually closer to '⊨ P → P'. You can prove "If it is raining, then it is raining". It remains a tautology. It is also similar to ('⊨ (P ⊨ P)') (~You can prove "If you can prove that it is raining, then it is raining".) This is also a tautology in any consistent logic, though it requires something of a MetaLogic? to determine this fact (along with access to the inferencing rules, to prove consistency). I do not believe the latter is a legal expression in ClassicalLogic?. I do know that ClassicalLogic? is unable to directly express worlds, which is what I stated is really necessary for arbitrary study and language in a world of pure logic. One modal logic notation I've seen for this task is [P]Q, which essentially reads "In a world defined by property P, test Q". [R]R is a tautology; (⊨ [R]R) -- from within a world of pure logic, you can prove that, in a world defined by the property 'it is raining', it is raining.
• You made a statement on the nature of an empty language (in which no propositions can even be formulated). ("In fact the empty language is the only language of ClassicalLogic? which doesn't defy the ClosedWorldAssumption") It is certainly true that you cannot formulate any bad conclusions if you cannot formulate any conclusions. However, to be clear, I'll reiterate: Languages are not worlds. Changing the language does not change the world. If something exists, but you are unable to formulate questions about it, that something still exists. If something does not exist, but you can formulate questions about it, that something still doesn't exist. ("What is the height of your pet Unicorn?").
• "You made a statement on the nature of an empty language (in which no propositions can even be formulated)." In fact the empty language has infinitely many sentences. Can you think of one? Nonetheless there is no set of sentences in the language which does not imply the truth or falsity of every sentence in the language. Why? - Actually, my second statement is false, see above.
• In answer to your first question: sure. 'true or false' is such a sentence; it would be trivial to create a generator for all such sentences. I'm not sure I understand your second question, but if you are referencing the fact that each possible sentence in the language possesses the value of either true or false, I agree that this is true and may be proven inductively. I should amend my original statement to properly express my original meaning: You made a statement on the nature of an empty language of classical logic (in which no propositions about a world can even be formulated). It is certainly true that you cannot formulate any bad conclusions about a world if you cannot formulate any conclusions about a world. Of course, you don't need the ClosedWorldAssumption if you can't express anything about a world; in such a situation, both the ClosedWorldAssumption and OpenWorldAssumption are vacuously consistent and compatible.
• It is not sane to discuss whether a "language" does or does not violate a ClosedWorldAssumption. However, it is sane to discuss whether a particular logic system (including its language, axioms, inference rules, etc.) is compatible or consistent with a particular world. Informally, the logic system is inconsistent with the world if it can produce erroneous conclusions about the world, and the logic system is incompatible with the world if it cannot even operate within that world. BooleanLogic is incompatible with any open world because, by definition, the correct answer to at least one proposition is 'unknown' in an open world, and BooleanLogic cannot operate with 'unknown'. You can force compatibility by applying the closed world assumption, but the closed world assumption is inconsistent with an open world for the obvious reasons. You cannot fix this inconsistency. However, in the more practical sense, you still have a world you wish to reason about. You are free to either change logics (to a ThreeValuedLogic, for example) or to change worlds (to an epistemic world, for example).
• The general problem here (as elsewhere) is that you're apparently confusing the question of whether a sentence is true under some interpretation with the question of whether a sentence is a valid inference of some given set of sentences (such as a knowledge base). In ClassicalLogic? P is true iff ¬P is false, but it is not the case that Γ ⊭ P iff Γ ⊨ ¬P .
• When you say "under some interpretation", you mean something operationally similar to (but not conceptually equal to) what I mean when I say "in some world". 'Under an interpretation', P might be true and Q might not be given any meaning at all, which causes no problems so long as Q doesn't appear in your sentences. Worlds are objects in reference to which variable propositions may be given truth values, so an 'interpretation' imposes a world. Entailment and such work from sets of known facts about the world to produce more facts ('a entails b' is about the same as 'if you know a then you can prove b'.) In most logics ((A → B) ⊨ (A ⊨ B)) is something of an axiom or inference rule over all variables A, B... though in some logics it is expressed directly, and in others it is meta to the logic. I don't believe there is any confusion, here.
• More complex discussions of worlds (in general) are possible than can be expressed with ClassicalLogic? (which lacks any real ability to reference objects, including the world). In ClassicalLogic?, the expression 'P' can only ever be true or false while "under some interpretation." ClassicalLogic? can only ever 'operate' (be utilized to 'evaluate' arbitrary, legal propositions) while under some interpretive context... i.e. while within an imposed world. As you say, without a context, 'P' is neither true nor false by itself ((⊭ P) and (⊭ ¬P)). ClassicalLogic? fails to operate upon propositions such as 'P' when these aren't included in the interpretation. This issue can be overcome by stipulating that P be given an interpretation prior to its use in a logical sentence -- to assert that it always be known whether P is true or P is false prior to evaluating a sentence that expresses 'P'. (In your words: "An interpretation must give a denotation to every constant in the language.") However, by such a stipulation, you've asserted that the interpretation impose a closed world - everything is known a priori. You are operating by your own stipulations within a closed world, so the ClosedWorldAssumption trivially, yet necessarily, applies. If you ever do move to an open world context for reasoning (which isn't necessarily appropriate for ClassicalLogic?, but still appropriate for many two-valued logics) you'll need to handle the situation... and some sort of ClosedWorldAssumption will actually come into play.
• ClassicalLogic? assumes that every sentence in a language under an interpretation is true xor false. There are two parts to that. First, it demands that all the non-logical symbols have a denotation under the interpretation ('a' might stand for New York, Rxy might stand for the relation expressed by 'x is near y', etc.). Second, it assumes that all the sentences you can make up using the symbols of the language (such as Raa and ∀x((x = x) → Rxx) and Rab ∨ Raa) are true xor false under the interpretation. But that's not the ClosedWorldAssumption. It says nothing about knowing which sentences are true or which are false. The truth value of a sentence under an interpretation might be unknown. It might be profoundly unknowable. All ClassicalLogic? assumes is that in objective reality it has a truth value of true xor false, regardless of who knows or could know which. -- The ClassicalLogic? is operating under the ClosedWorldAssumption. In order to operate under some interpretation, it assumes that every sentence in the language is true xor false... that there are no sentences that, literally, possess the value 'unknown'. ClassicalLogic? does not operate under the assumption that 'unknown' is a correct value for a proposition. It only operates under the ClosedWorldAssumption. If it makes you feel better, though, a great deal of empirical evidence indicates that the real world is, indeed, a Closed World, even if our mental model of it is incomplete. Open worlds yet abound. A trivial such world is the one defined in an example, below, in the world described: 'there exists a reference to integer, 'y', with unknown value.' It seems to be more that you are mentally blurring the distinction between language (which limits expressiveness) and operation (which involves computation and evaluation). Any TwoValuedLogic operates under the CWA. This is a necessary truth; it is not possible for a TwoValuedLogic to operate with propositions or sentences that correctly evaluate to a third value, so the logic must either insist upon operation within a closed world, or accept some inconsistency in order to operate within an open world.
• "In order to operate under some interpretation, it assumes that every sentence in the language is true xor false... that there are no sentences that, literally, possess the value 'unknown'. ClassicalLogic? does not operate under the assumption that 'unknown' is a correct value for a proposition." It assumes that none of its sentences have a truth value other than true xor false under any interpretation. That is not the ClosedWorldAssumption. Correct. The logic and the language used to express it are not the CWA, yet still require operating under the CWA. Insistence that any given interpretation have all the answers (a priori) is insistence that you operate within a closed world.
• Actually ClassicalLogic? doesn't really talk directly about knowledge or knowability at all. But given a set of sentences which are all true under some chosen interpretation, you can deem a sentence to be known to be true under the interpretation iff it is a valid consequence of that set of sentences, the knowledge base, and deem it to be known to be false iff its negation is known to be true. (That's a very generous way to define knowledge, as you know, for reasons of performance and decidability which don't matter just now.) But if the ClosedWorldAssumption is the assumption that anything not known to be true is known to be false, then it obviously doesn't hold (for most knowledge bases) under this definition of knowledge. If the language is the zero-place predicate symbols P, Q, R and S, and the knowledge base is the set of sentences {P ∨ Q, R}, then Q is not known to be true (since P ∨ Q, R ⊭ Q) and ¬Q is not known to be true (since P ∨ Q, R ⊭ ¬Q). (If the CWA is the assumption that anything not known to be true is simply false, then it equally obviously doesn't hold here, since P ∨ Q, R ⊭ Q: there are interpretations that make P ∨ Q and R true and Q false.) -- You have some confusion about what, exactly, the ClosedWorldAssumption is. It is, again, the assumption that you know everything a priori. That means in the 'P ∨ Q' instance, you'd know whether P is true or not, and whether Q is true or not. This is very useful: A knowledge base about a world is not that world. In this sense, while P ∨ Q might be true, P is unknown (and potentially unknowable), and Q is unknown (and potentially unknowable). We do not know whether the other world is closed or not... but we do know that, in this sense, ClassicalLogic? cannot operate if asked the question: 'S'?. That said, a knowledge base is also a world of sorts... one containing 'knowledge' objects. It is 'true' that 'P ∨ Q' is known, and it is 'false' that 'P' is known, and the questions you can ask of a proper knowledge base are always of the form: is <some proposition> known? Is 'S' known? Negative. Is '¬S' known? Negative. It's easy to prove that this provides a Closed World in which ClassicalLogic? can properly operate. It gives an interpretation to every sentence that is true xor false. (I don't advertise this as the only possibility... just a possibility.)
• "in this sense, ClassicalLogic? cannot operate if asked the question: 'S'?" In what sense? P ∨ Q, R ⊭ S and P ∨ Q, R ⊭ ¬S, just like Q. Any well-formed sentence of the language is xor isn't a valid consequence of any set of well-formed sentences of the language. -- What you discuss here is the evaluation of whole inference sentences: P ∨ Q, R ⊨ S evaluates to invalid, as does P ∨ Q, R ⊨ ¬S. Logic over inference sentences is a useful one; you don't need to know 'S' to know 'S ⊨ S'. All you need to evaluate propositions of this form are the inference rules for the logic you are utilizing... which ultimately forms a logical 'world'. If you know ALL the correct inference rules, then the world in which you evaluate inference sentences is a closed world by definition. The ClosedWorldAssumption applies in that if, under the rules provided by the logic, "P ∨ Q, R ⊨ S" isn't provably valid, then it must be invalid (ignoring computability). On the other hand, if you don't have all the inference rules for some logic, then you have an open world. Such is the case with a finite set of axioms for reasoning about arithmetic over natural numbers, as proven by Goedel. In such a logic, if an inference sentence isn't accepted as valid, it cannot necessarily be rejected as invalid; you'd need to first prove it invalid. Studying logics can be interesting because different logics accept and reject different inference sentences as valid, just like different worlds accept and reject different propositions as true. The ClassicalLogic? is a closed logic. However, this is a step removed from (and meta to) the discussion of whether the ClassicalLanguage? can operate with the plain and simple proposition 'S'. 'S' isn't true, and 'S' isn't false; 'S' is unknown. To work with 'S', ClassicalLogic? requires that you first impose an interpretation on it, which involves assigning 'S' to either 'true' or 'false', bringing you back into the closed world.
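
The {P ∨ Q, R} example in this thread can be checked mechanically. Below is a sketch (not from the original) of brute-force model checking over zero-place predicates: a sentence is entailed by a knowledge base iff it is true in every truth assignment that makes every knowledge-base sentence true.

```python
from itertools import product

def entails(kb, sentence, symbols):
    """KB ⊨ sentence iff sentence is true in every model of the KB.
    Sentences are predicates over a dict of truth assignments."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not sentence(model):
            return False        # found a counter-model
    return True

symbols = ["P", "Q", "R", "S"]
kb = [lambda m: m["P"] or m["Q"],   # P ∨ Q
      lambda m: m["R"]]             # R

print(entails(kb, lambda m: m["Q"], symbols))              # False: P ∨ Q, R ⊭ Q
print(entails(kb, lambda m: not m["Q"], symbols))          # False: P ∨ Q, R ⊭ ¬Q
print(entails(kb, lambda m: m["P"] or m["Q"], symbols))    # True
```

Since neither Q nor ¬Q is entailed, a reasoner restricted to this knowledge base must either answer 'unknown' or invoke some closed-world rule, which is exactly the disagreement above.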

According to what you've just said here, arithmetic on natural numbers is inconsistent. It uses BooleanLogic, and it's provably incomplete.
• The set of natural numbers is not incomplete. It's infinite. There's a huge difference.

I'm not talking about the set of natural numbers, I'm talking about arithmetic on natural numbers. It is provably incomplete (see GoedelsTheorem).
• GoedelsTheorem is not about arithmetic upon natural numbers. It is about axioms over arithmetic upon natural numbers. That's a different set of things -- a different world. Further, the theorem doesn't state that the set of truths involving such arithmetic is incomplete... only that any particular finite set of such axioms (things accepted as true) is incomplete. He proves that there are truths about the world of numbers and operators that cannot be proven from within the world containing the finite set of axioms.
• I guess I wasn't clear enough. When I referred to arithmetic upon natural numbers, I was referring to the standard axiomatization of it: BooleanLogic coupled with Peano's Axioms. GoedelsTheorem proves that that is incomplete, inconsistent, or both. If the ClosedWorldAssumption is part of BooleanLogic, we could rule out the first and third options.
• Goedel proved that the ClosedWorldAssumption is violated when operating with a finite set of axioms about the natural numbers. This shows the standard axiomatization of arithmetic upon natural numbers to be incomplete with respect to the actual natural numbers and arithmetic. Such a finite set of axioms is provably not a closed world. Thus, it is in reasoning as though this set of axioms were a closed world (by using BooleanLogic) that inconsistency is introduced -- you're choosing the wrong tool for the job. Why? Because BooleanLogic is inconsistent with an open world, and when you treat an open world as a closed world, you'll have inconsistency.
• Under the ClosedWorldAssumption, where the world contains only a finite set of axioms, you'd reject as accepted those things that you cannot prove from your axioms. According to Goedel, among the things you'd reject would be things that are true when applied to the world of natural numbers and arithmetic. Technically, it is correct to reject those truths as 'accepted', because you defined your world to be a finite set of axioms. If, instead, you define your world to be the world containing all truths regarding arithmetic over natural numbers, then any finite set of axioms is incomplete. This is what Goedel's incompleteness theorem makes clear. It would be wrong to consider this world 'closed', because you don't know everything in it.
• If the ClosedWorldAssumption is part of BooleanLogic, then that world has to be closed. BooleanLogic is included in the set of axioms.
• The BooleanLogic included in the set of axioms is for reasoning about the arithmetic and natural numbers, not for reasoning about the axioms. Goedel's theorem is about the axioms, not about the arithmetic and natural numbers. This causes no contradiction; these are two different conceptual worlds: the set of numerical objects and operators, and the set of axioms.
• More worrisome is the nature of your comment. ... Are you trying to say that the nature of the world changes simply because you decide to use BooleanLogic to reason about it? That is a rather dubious position. The world is the world is the world; if you make a bad assumption about it, then you're the one introducing inconsistency.

You need multi-place predicates (and identity) to model arithmetic. BooleanLogic is ZeroOrderLogic?. If it has predicates (with more than zero places) it's at least FirstOrderLogic. Call it FregeanLogic? if you have to.

In the representation of incomplete systems, where you wish to have consistency, you must use ThreeValuedLogic... which allows a proposition 'p' to return the value 'unknown'. In this case, however, there is no excluded middle; BooleanLogic no longer applies.
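The failure of excluded middle under a third truth value can be made concrete with a small sketch of Kleene's strong three-valued connectives, using Python's None as a stand-in for 'unknown' (an encoding chosen here for illustration only):

```python
# Kleene's strong three-valued logic: True, False, None ('unknown').
def k_not(a):
    """Negation: unknown stays unknown."""
    return None if a is None else not a

def k_or(a, b):
    """Disjunction: True dominates; False only when both are False."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None  # at least one operand is unknown

# Excluded middle (p ∨ ¬p) holds for the known values...
assert k_or(True, k_not(True)) is True
assert k_or(False, k_not(False)) is True

# ...but once 'unknown' enters, p ∨ ¬p is itself unknown, not True:
p = None
print(k_or(p, k_not(p)))  # None
```

Restricted to {True, False}, these connectives coincide with the Boolean ones, which is the sense in which BooleanLogic is subsumed rather than contradicted.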

Now you've switched from the system itself to representing that system. I agree that unless you are dealing only with complete information, you will need some method of handling the incompleteness.
• Any utilization of an incomplete world requires the method of handling the incompleteness. I should have used a better word.

This causes less inconsistency than you might believe. If you can't prove p, then p is false. Thus, Not(p) is true, because p is false. Etc.

It's not that simple. For example, let's define (over the natural numbers):

Odd(x) ::= There exists a natural number, n, such that 2*n + 1 = x.
Even(x) ::= There exists a natural number, n, such that 2*n = x.

Let y be some unknown value. Which of Odd(y) and Even(y) is true? According to the closed world assumption, both are false. However, we also know that at least one is true.
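The claimed inconsistency can be sketched directly. Assume (hypothetically) a knowledge base in which the disjunction Even(y) ∨ Odd(y) is provable but neither disjunct is, and evaluate propositions by the naive rule "not provable means false":

```python
# Hypothetical set of provable sentences about the example world.
provable = {"NaturalNumber(y)", "Even(y) or Odd(y)"}

def cwa(prop):
    """Naive closed-world evaluation: true iff provable, else false."""
    return prop in provable

even_y = cwa("Even(y)")                  # False: not provable
odd_y = cwa("Odd(y)")                    # False: not provable
disjunction = cwa("Even(y) or Odd(y)")   # True: provable directly

# Boolean composition of the parts disagrees with the direct query:
print(even_y or odd_y)  # False
print(disjunction)      # True
```

This is exactly the tension the thread is arguing about: whether the fault lies with the ClosedWorldAssumption itself, or with applying it to a world that was never closed.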

You need to UnaskTheQuestion, here. You've made a conceptual error in assuming you have a closed world after you say "let y be some unknown value." You don't. 'y' isn't a proposition asked of the world; it is, instead, an object within the world. In a closed world, all such things are known a priori. By saying you do not know what 'y' is, you've clearly violated your ClosedWorldAssumption. You may no longer use BooleanLogic to answer questions about that world; to do so would be fallacy.

If you wish, you may wrap an open world within a closed world; to do this, you simply reason about what you know about the open world. The set of things you know is, after all, a closed set of objects -- a closed world. (In particular, it is a closed set of data objects.) For example:
```    knowledge :: data = { NaturalNumber(y) is True in theExampleWorld }
```
Now you may form propositions to ask of your knowledge regarding theExampleWorld. Known(Even(y) or Odd(y)) would be true, and you may ask Known(Even(y)) and receive false, and Not(Known(Even(y))) will properly return True. All of these are correct. All may be calculated with Boolean logic within the ClosedWorldAssumption, given the proper inferencing rules. Everything about the world of reason is in that simple, closed set of data.

Asking Odd(y) within this epistemic world isn't even a valid question, because 'y' is not an object of the world. However, when reasoning -within- the associated open world (theExampleWorld, above), a question such as Odd(y) may be asked directly... it's just that the answer is 'Unknown'.
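The epistemic wrapper described above can be sketched as follows. The knowledge set and the single inference rule ("every natural number is even or odd") are hypothetical stand-ins for the "proper inferencing rules" the text mentions:

```python
# Closed epistemic world: everything we know about theExampleWorld.
knowledge = {"NaturalNumber(y) is True in theExampleWorld"}

def derive(facts):
    """Close the fact set under one hypothetical inference rule:
    every natural number is even or odd."""
    derived = set(facts)
    if "NaturalNumber(y) is True in theExampleWorld" in derived:
        derived.add("Even(y) or Odd(y)")
    return derived

def known(prop):
    # Known(p) is true iff p is in, or derivable from, the knowledge
    # set -- and false otherwise. Plain Boolean answers, no 'unknown'.
    return prop in derive(knowledge)

print(known("Even(y) or Odd(y)"))  # True
print(known("Even(y)"))            # False
print(not known("Even(y)"))        # True -- Not(Known(Even(y)))
```

Note that known("Even(y)") returning False is correct and consistent here: it reports only that we don't know Even(y), saying nothing about whether y is in fact even.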

I thought we were discussing situations where we had incomplete information, so I don't think I'll UnaskTheQuestion. The closed world assumption doesn't even come into play when you have complete information.

The ClosedWorldAssumption IS the assumption that you have complete information regarding the world, even if you don't. It is the assumption that everything that is true is either represented directly or can be derived from the set of things that are in the world. The ClosedWorldAssumption comes into play when you ask a proposition of the world that cannot be proven. The ClosedWorldAssumption insists that it is false. And such a proposition is provably false under this assumption. The ClosedWorldAssumption is violated when you introduce to the world something that you don't have represented. The world you presented ('y' is unknown) is NOT a closed world. This makes your claims that the ClosedWorldAssumption fails to handle this situation rather meaningless. If the assumption is violated, then of course it shouldn't hold.
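This "unprovable means false" reading of the ClosedWorldAssumption is essentially negation as failure, and can be sketched with a tiny forward-chaining prover. The facts and rule are invented for illustration:

```python
# Hypothetical knowledge base: one fact and one rule (birds fly).
facts = {"bird(tweety)"}
rules = [({"bird(tweety)"}, "flies(tweety)")]  # (body, head) pairs

def derivable(query):
    """Forward-chain to a fixed point, then test membership."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return query in known

print(derivable("flies(tweety)"))  # True: derived from the rule
print(derivable("swims(tweety)"))  # False: not derivable, so under the
                                   # ClosedWorldAssumption it is false
```

The second query is where the assumption does its work: failure to derive is treated as falsity, which is sound only if every relevant truth really is represented or derivable.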

Which is pretty much the point. Including the ClosedWorldAssumption in BooleanLogic breaks just about every theory we use BooleanLogic in. BooleanLogic (without that assumption) doesn't have a problem with the situation in question; it simply doesn't prove any of these statements: Even(y), Not(Even(y)), Odd(y), and Not(Odd(y)). But if we include the closed world assumption, we have to prove two of those, and I don't see any obvious way of picking which ones. (The only consistent choices are Even(y) and Not(Odd(y)), or Not(Even(y)) and Odd(y), but how do you pick which one of those two?)
• BooleanLogic doesn't provide any value with which to answer those propositions in the context of the example world. Thus you have a problem -- a proposition upon a world must come back with an answer. The correct answer is 'unknown'. This means that you definitely have an open world, and that you should not be using BooleanLogic to reason about it. Your proposed inconsistency arises only from stubborn insistence that you somehow must use BooleanLogic to reason about a clearly open world. This is not the case. You should not look at a situation and say: "Well, this assumption is clearly and irrefutably violated. Let's use it anyway." If you do, then you are the one introducing inconsistency.

In any case, I know what 'y' is. 'y' is a variable that ranges over objects in the world. y is an object in the world. (To be clear, y is an object, not a variable.) Even(y) and Odd(y) are statements about the world. We cannot prove Even(y). We cannot prove Odd(y). According to the closed world assumption, both Even(y) and Odd(y) are false, but we can prove Even(y) v Odd(y). This is a problem, and it's caused by adding the closed world assumption to BooleanLogic.

Your proposed Even(y) and Odd(y) are not statements about a closed world. I've explained this well enough above. Their use does not provide a relevant argument, and I'll let you be the one to scroll upwards and study why. Here's a short mental challenge to you: What is the correct value of Even(y), given only BooleanLogic (true/false)?

Whatever the model says it is. It's the model that assigns the truth values, not BooleanLogic.

It is BooleanLogic that carries the truth values. When you ask a proposition of a world, you receive an answer in the form of a truth primitive. For a closed world, this truth primitive must be from BooleanLogic. No other answer is legal. For other models, you don't need to answer with BooleanLogic... but, in those cases, you are simply no longer using BooleanLogic (well... no more than ThreeValuedLogic "isn't using BooleanLogic"; BooleanLogic may be subsumed by a wider logic). Anyhow, you're the one claiming you have a closed world, and you're the one asking a proposition of it. You can't dodge this question and claim you have a closed world. So, what. is. your. answer?

(See also BooleanLogic, ThreeValuedLogic, TwoValuedLogic, BooleanAlgebra, DataBase, WhatIsData)
CategoryLogic

Last edit November 15, 2006