Types Are Side Flags

Summary of extraordinary kind at TypesAreTypes.

As a compromise, I have agreed to drop the issue of a standard definition, and instead now sell the "side flag" idea as a "conceptual model of type implementation" to be moved to TypesAndSideFlags. (New topic titles are suggested below.) I would also note that almost every "type system" ultimately involves "side flags" paired or closely associated with "values" in its implementation. This pairing tends to be a "universal characteristic", even if not a defining one of most typing systems. (An interesting test would be to study typing systems without side-flags, if there is such a thing.) Enjoy. -- top

How about TypesArePilonsLikeOnTheStreet? or TypesAreMarkers? or TypesAreLabels? or TypesAreFlagsNotOnTheSideButOnTop?, or TypesAreKindOfLikeSectionsOfStuff? or TypesAreWhatTheDictionaryAlreadySaid?

If it helps with grokkability in the field, go for it. If a mental tool works for you, good. Just be careful not to force it on others if they can't make use of it.

Those pages were suggested as a joke. If types are side flags then they must be pylons too. But they are not pylons. Pylons are pylons. Let's be reasonable. Types are a classification system, a way to categorize a domain, a way to group items. If someone in a soccer game throws a side flag out and says that the soccer ball is out of bounds - this is not a type. It's an act of throwing out a side flag. The side flags themselves are not types, just as pylons or hand wavings are not types. See TypesAreNotTypesTheyAreTypes.

I don't see how this relates to the above. It talks about mental models to help one work with an idea, not a universal declaration about what something "is".

Gee, what is the synonym for "are"? One synonym is "is". Yes, the word "is" is really a synonym for "are". Now, let's logically deduce: what is the title of this page? TypesAreSideFlags. TypeIsSideFlag?. It's a cute, silly, funny metaphor - but even as a metaphor it stinks. It's wrongo bongo.

Please clarify.


Is equals Are. I use logical proofs when I write and argue.

Try adding a sprinkling of clarity into it also. Your is/are rant makes no sense.

I would not be against the idea of changing the title to SideFlagTypeModel? and dropping the issue of using it as a definition. The main purpose is to provide a useful mental model of "types", and the definition issue seems to generate controversy that is mostly irrelevant to the modeling ability of the concept. In other words, "types can be viewed/modelled as flags" is what is important, not so much "types are flags". [Might I suggest SideFlagTypeNotion?, as it isn't really a model. Or, perhaps TypeTagNotion?, to be in keeping with the usual term for implementations of systems (e.g., dynamically-typed languages) that associate type identifiers with values.]

So that we don't have to hard-wire a prejudgement into the name, how about the original suggestion, "TypesAndSideFlags"?

[I could live with that, but how about TypesAndTags?, which has a nice rhythm to it and uses a familiar term for a related (if not the same) idea.]

The downside is that "tag" may be mistaken for mark-up tags. I feel "type flag" is a bit closer to the notion.

["Tag" is the familiar term for the concept. In context, it is no more likely to be mistaken for mark-up tags than "dog" is likely to be mistaken for a canine mammal when talking about fireplace fixtures.]

Can anyone suggest a less-enlightening page on this wiki? I think I'd like to read something that makes this appear interesting and attractive. There must be something here worth reading. After all, it's seen a lot of activity ...

Curiously, almost the entire content of this page is debate over whether its essential notion is worth anything, let alone reading. My recommendation is not to bother...

[As one of the participants, I can confirm that this page contains nothing of interest to me, and I'm rather ashamed to have taken part in it, especially the insults. Unfortunately, I find few things more frustrating than a ChronicallyRight HostileStudent, and I reacted accordingly. I think this page should be deleted. If top thinks it is of value, he can write up "Flag Theory" as he sees fit and put it on his Web site.]

I didn't start the insults. I only responded in kind. First Blood is not on my hands. The (self-elected) teacher insulted the student first. YOU are the HostileStudent because you won't listen. "Type Theory" is your bible and you defend it like any other zealot. It's too convoluted to be useful for the vast majority of IT workers and you admittedly don't give a shit. There's a big gaping flaw and you try to plug it by stuffing the body of the messenger in the hole.

Oh, you certainly did start the insults, top. You did so by saying that "all the competitors" were no better than this pile of crap.

You just don't understand how insulting that is, seeing as you don't comprehend that your pet-theory is a pile of crap. There're many big, gaping flaws in your model that you've been stuffing with your ego alone. When we point them out, you attempt to distract us by waving your hands, calling them implementation details, or blustering about how accepted type theory couldn't possibly be better. We might attempt to explain to you, quite thoroughly, why you're wrong to do these things. THAT is when you become the HostileStudent.

And TypeTheory isn't convoluted, it's just deep and broad. Understanding its depth and breadth is Masters and PhD level work. IT workers can stick with the shallow-waters stuff like simple primitive types and generic types, and basic uses (type safety and dispatching). That's perfectly fine. But no one with even a shred of honesty would try to dumb it down (to 'TypesAreSideFlags') and leave IT workers with some misbegotten delusion that they possess an understanding of TypeTheory that they do not. However, I can see how a true charlatan might create some shitty theory that he can sell, that IT workers would accept because it makes them feel better; people feel empowered by a delusion of knowledge. Charlatans do such things because it feeds their ego, sometimes even feeds their belly. I expect that this page was written by a charlatan.

I'd be okay with deleting the page, but I have a feeling that top would just take advantage to further his own agendas by writing it again, this time without the criticism, and that if we provided criticism again, he'd get all blustery and hand-wavy and we'd end up with the same monstrosity we just deleted. That sounds costly to me.


Insult Pledge: Can we agree to at least mutually stop the personal insults? We both think the other side is highly biased and/or manipulative. We don't need to keep saying this over and over.

This is a viewpoint on WhatAreTypes in which "types" are defined as "side flags" or dual-cell variable nodes where one side of the node is the value or value-container and the other is a type indicator or list/set/bag of type indicators (depending on language or run-time engine). The flag affects the interpretation or usage of the value. The key points are:

This flag(s) is intended to indicate the usage, interpretation[1], or processing of the variable/value; and can be used by the interpreter or compiler (I/C) to select which operation to apply to the value. The I/C can also follow a substitution tree or DAG, per language, to find the proper type. See RunTimeEngineSchema for a simplified example of how substitution may occur. ("Variable" means like X in "X = 7", not "flexible sized". However, variable nodes may have more than just values and type flags. This definition only requires the minimum of the 2 items and permits other items, such as length, scope indicators, etc. as needed per language/engine.)

This definition does not dictate how the flags are actually used[2], only that they exist and make a difference in the results. That is, if the values of the flags are changed, the output can also potentially change. Also, there must be some establishable relationship between the flag(s) and their corresponding value(s). (It is undecided whether such flags used merely as documentation are still "types". The human is "processing" them even if the computer does not, so perhaps it also counts, depending on the scope of execution we are considering.)
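The dual-cell node described above can be sketched in a few lines of Python. This is purely an illustrative toy (the names Node and interpret are made up for this sketch, not taken from any real language or engine); the point is that if the flag changes, the output for the very same stored value can change.

```python
# Hypothetical sketch of a "dual-cell" variable node: one side holds the
# value, the other holds a type flag that steers interpretation.

class Node:
    def __init__(self, value, flag):
        self.value = value  # the raw stored representation
        self.flag = flag    # the "side flag" consulted by the interpreter

def interpret(node):
    """Dispatch on the side flag, not on the value itself."""
    if node.flag == "number":
        return float(node.value) * 2    # numeric doubling
    elif node.flag == "string":
        return node.value + node.value  # string concatenation
    raise TypeError("unknown flag: " + node.flag)

n = Node("12", "number")
print(interpret(n))   # 24.0
n.flag = "string"     # same value, different flag...
print(interpret(n))   # '1212' -- the output changes
```

Flipping the flag from "number" to "string" is the whole demonstration: the stored bytes never changed, yet the result did.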

This view frames types in terms of data structures, unlike competitor definitions, which are tied to "syntax", allegedly making them over-focus on human-read symbols and characters which, according to TypesAreSideFlags, are not necessary to have types. Types can still exist if the source code is lost or burns up in a water heater explosion. I/C's usually turn syntax into data structures before processing, and these structures almost always have some form of "side flag" nodes if they have types. An exception has yet to be found.

[1] See PageAnchor "spreadsheet_scenario" in AreTypesTiedToSyntax.

[2] Explicit techniques should have explicit names, such as "DAG-oriented Type Substitution Theory".

The Scope of the Definition

The flag model is intended to provide a predictive "thought model" to understand how (and if) languages treat types in various situations. It is not necessarily an implementation model because compilers/interpreters do not have to use flags to provide a type system, even if it behaves equivalently to a flag-based model. If the behavior of a language/tool matches a flag-based model, the model can still be useful even if it is not the actual model used under the hood. In this sense, it may serve as a UsefulLie until the point a better model is discovered or learned. To my knowledge, flag-theory has no simple competitors, only complex ones.

Also, this definition does not attempt to explain in any detail the philosophy of types. This is largely because philosophy is often based on psychology, which is more art than science when it comes to computers; and because everybody's psychology varies such that a viewpoint of one mind may not be compatible with another. It is hard to test for "wrong philosophy", especially if there are multiple viewpoints which appear valid (lack obvious internal contradictions). A definition that involves heavy use of psychology and psychological assumptions is likely to be a problematic model. Thus, the flag model generally avoids it, focusing on predicting behavior instead.

Parsing versus Types

Sometimes values are "parsed" to see, or "guess" what type they are. Before such parsing takes place, the item has no "type", according to the above definition. However, a type can exist after the parsing is done. Example:

  x = readItemFromFile(...);
  r = guessType(x);
Here, "x" has no type after the first statement, but does after the second statement because "r" becomes the "side flag" for "x". (This is assuming that readItemFromFile does not include a side flag of its own.) It is true that some languages always have a type flag for all variables whether it is used by the program or not. But in the case above, it may be set to some default, but useless value, such as "variant" or "object" or "auto" if not explicitly assigned by the application or API. (This example is of "roll-your-own" types and not an example of using the built-in language type system, assuming such exists.)
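The "roll-your-own" scenario above can be sketched in Python (guess_type and the literal input string are hypothetical stand-ins for readItemFromFile and its result): the raw text has no flag at all until the guessing routine parses it and returns one.

```python
# Hypothetical roll-your-own typing, following the example above:
# "x" arrives as raw text with no flag; guess_type parses it and only
# then does a "side flag" (r) come into existence.

def guess_type(raw):
    """Parse the raw text and return a type flag for it."""
    try:
        int(raw)
        return "integer"
    except ValueError:
        pass
    try:
        float(raw)
        return "real"
    except ValueError:
        return "string"

x = "37.5"         # stand-in for readItemFromFile(...): untyped text
r = guess_type(x)  # the flag exists only after this point
print(r)           # real
```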

If "guessType" is functional, it's quite reasonable to say that 'x' had type 'r' before the guessing ever occurred. Your statement, and the 'above definition' to the contrary, is not defensible.

But there could be a gazillion other versions of guessType, which may all return different answers. Thus, if we open the door to "ghost types", then things are no longer deterministic, or at least no longer useful in any way.

  r1 = bobsGuessType(x);   // returns "foo"
  r2 = bettysGuessType(x);  // returns "bar"
  r3 = mikesGuessType(x);  // returns "date"
  r[infinity] = infinityGuessType(x);
You can say that "types" have to be an observable property, not a potential property, in this definition.

Nonetheless, you can encode implicitly (in zero bits) that 'guessType' is the correct one to use. If 'guessType' itself calls a bunch of other 'guessType's, then that's fine. Further, I have trouble seeing why you object: according even to your own statements, r1 and r2 and r3 and r[infinity] all now constitute "side-flags" and therefore types for "x", the moment you choose to utilize them.

A given value can indeed have multiple types. The definition does not exclude that. Unless you know (implicitly) that exactly one choice is the 'correct' one, would it be more correct to say that x has ALL these types or NONE of these types? And, regarding "observable property", yes, types are defined by their properties - those you can prove to be present or absent. Dynamic typing allows you to wait until you actually have the value-instance (for testing difficult properties like isprime) and static-typing focuses on what can be proven based entirely upon the structure of the program. Observation of a property is just one sort of proof.

(moved discussion below under "Programming versus Math".)

In general, if you have to do complex calculations to determine "type", then it is not a flag. Flags are "pre-packaged" and easily digestible. That's the main difference between "parsing" and "flags". Types don't exist until the knowledge of types is readily determinable without complex computation. (There may indeed be a gray area where the flag is somewhat obfuscated, and thus may be a half-ass flag, in which case we get half-ass types.)

Contrary Viewpoint: Types can often be determined by complex calculations. Types are not flags. You can, if you wish, cache the result of this complex calculation, and call this cached result a "flag". Philosophically, the types must exist before they are determined by complex calculations - properties and names of things that don't exist cannot, in any sense, be determined. Thus, types exist, can often be determined by complex calculations, and the result of said calculation can optionally be cached.
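The "cached result" idea in the contrary viewpoint can be sketched as follows (the prime/nonprime classification and all names here are invented for illustration): the type is determined by a calculation, and the result may optionally be cached as a flag on the value.

```python
# Sketch of the "cached flag" idea from the contrary viewpoint above: a
# type is determined by a (potentially expensive) calculation, and the
# result may optionally be cached as a flag next to the value.

def is_prime(n):
    """Expensive-ish membership test for the type 'prime'."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

_flag_cache = {}  # value -> cached type flag

def type_of(n):
    if n not in _flag_cache:  # compute once...
        _flag_cache[n] = "prime" if is_prime(n) else "nonprime"
    return _flag_cache[n]     # ...then read the cached flag

print(type_of(97))  # prime  (computed, then cached)
print(type_of(97))  # prime  (read straight from the cached flag)
```

Whether the cached entry "is" the type or merely records it is exactly the disagreement on this page; the sketch only shows that the two mechanisms can coexist.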

In theory it may be possible to know the results of every possible function or program to ever exist and thus dispense with the program. If we want to take an existential trip down somebody's EverythingIsa world, then indeed everything is everything and nothing is nothing. Do variables still exist if a given program is rewritten such that no variables are used, only functions feeding functions? One could perhaps argue that such is a black box such that one couldn't experiment with the innards to see if something has a type flag or not, and thus is unknowable. You can't test something you can't open. You are drifting away from the practical issues and into existential philosophies of what programs are.

You're wrong. It is not, even theoretically, possible to know the results of every program because it is not, theoretically, possible to determine ahead of runtime which values will be received from sensors or communications.

When Do Types Exist

This describes where, when, and why type flags exist in various "flavors" of typing systems in common programming languages.

In statically typed languages like C, the flags exist at the compiler stage. It could be argued that they also exist at the text stage when there is an explicit declaration such as "int x;". The compiler keeps a type flag(s) for each variable in its variable table to track the type of each variable and uses this for validation, such as making sure one is not accidentally multiplying characters together (or dispatches the appropriate operation when overloading is allowed). When the compiler is done, it discards type information (type flags); and the result, the executable file, no longer contains type information. Machine instructions in the executable that operate on bytes do so dutifully without ever "caring" about the "type" of the bytes. The bytes are not "marked" to indicate which operations are allowed or appropriate. Machine instructions are thus operating on "raw" values. Types don't exist when something can only look at the value and only the value.

In dynamically-typed languages, the variables keep their type flags throughout execution, and the flags can usually be changed programmatically when needed. To truly know what a variable "contains", you have to inspect both the value and the type.

In type-free languages (other terms are sometimes used), there are no type flags. Usually the variables are represented as strings or a series of bytes. How these strings or bytes are interpreted is usually up to each operator/function as encountered. These operations may have to do explicit parsing to check whether a value can satisfactorily be used in the given operation. There is no flag to check. As mentioned below, a language can be "type free" on one level but not another. For example, it may not track types at the scalar level (integer, string, date, etc.), but do so at the scalar-versus-structure level.
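The contrast between the dynamic and type-free regimes just described can be sketched in Python (the operator names and behavior are illustrative inventions, not drawn from any particular language):

```python
# Sketch contrasting the two run-time regimes described above.

# Dynamically typed: the variable carries a flag; the operator checks it.
def add_dynamic(a_val, a_flag, b_val, b_flag):
    if a_flag == "number" and b_flag == "number":
        return a_val + b_val
    raise TypeError("flags forbid numeric addition")

# "Type-free": everything is a string; each operator parses on its own,
# every time, because there is no flag to consult.
def add_typefree(a, b):
    return str(float(a) + float(b))  # parse, compute, re-stringify

print(add_dynamic(2, "number", 3, "number"))  # 5
print(add_typefree("2", "3"))                 # 5.0
```

The flagged version fails fast by checking a stored indicator; the type-free version must re-parse its operands at every operation, which is the repeated-parsing cost mentioned above.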


This is a very informal model for a typing system, and an even worse mechanism for defining types. You might as well have titled this: "TypesAreTypeFlags". Of course, that wouldn't obscure one of the basic problems with the theory - ultimately the "types" are still out there, undefined.

Consider, for a moment, that some agent must ultimately determine which "side-flags" a cell/value/object/variable receives. Under the definition proposed above, all variables are untyped (there are no flags) until an agent assigns at least one flag, so under this proposed definition, the annotation of type information actually changes the types rather than indicating truths implicit or explicit in the code.

Further, most definitions of types are not tied to syntax, despite the ridiculous claim above.

Describing that a particular program structure is intended to possess a particular type is tied to syntax because descriptions are always tied to syntax, thus syntax is a necessary component of any type system.

However, the types themselves are values, standing well above the syntax that represents them.

Even in such primitive cases as 'int' and 'float' in C programs, the type is not the physical, syntactic name "int" or "float"; rather, the type is the concept these descriptors represent.

Types, themselves, are based most often in one of SetTheory, CategoryTheory, or PredicateLogic.

You just can't use them without a corresponding syntax. Period.

As to the final claims above: (1) Types can still exist if the source code is lost or burns up in water heater explosion. Of course. Types are values, and thus exist in concept regardless of representation, much like you can't destroy the value '1' simply by shredding a piece of paper with '1' represented upon it.

(2) [Interpreters and Compilers] usually turn syntax into data structures before processing, and these structures almost always have some form of "side flag" nodes if they have types. An exception has yet to be found. Firstly, it's foolish to think that one could write down or otherwise store "data structures" anywhere without some sort of syntax, and "anywhere" includes writing it in memory. An AbstractSyntaxTree is a transformation of the source-code and is, itself, more source-code... simply of the sort that abstracts away any sort of syntactic sugar.

Secondly, use of "flags" to represent calculated information as to types is an implementation detail - an optional optimization; the only reason one might not find exceptions is that one is cherry-picking what one chooses to look at.

This idea that one can define types as 'side-flags', AKA top's "flag theory", is naive and not very deeply considered. The above author hasn't even begun to give serious consideration to some of the most fundamental questions behind the theory. This author suggests starting with: how do you determine the flags in the first place?

So is source code. In fact, the declarations are a kind of "side flag":

   function myfunc() {
     int foo;
     double bar;
   }
"int" and "double" can be viewed as "side flags". Thus, the definition holds up regardless of the form (source code, parse tree, etc.). The parse-tree example was only brought up to illustrate a flaw in the competition, not say it is the ultimate format. One should be able to take all possible formats/representations of a program or thing with types, and mark with a yellow pen what parts of it are "types" and which are not.
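One way to see the declarations-as-flags claim is to sketch the compiler's variable table for the snippet above (the table layout and function names here are hypothetical, meant only to mirror the "int foo; double bar;" example):

```python
# Sketch of a compiler-style variable table: each declaration deposits a
# type flag next to the name, and the checker consults the flag. A real
# compiler would discard this table once compilation is done.

symbol_table = {}

def declare(name, type_flag):
    symbol_table[name] = type_flag  # "int foo;" -> ("foo", "int")

def check_multiply(lhs, rhs):
    """Reject multiplying anything whose flag is not numeric."""
    numeric = {"int", "double"}
    return symbol_table[lhs] in numeric and symbol_table[rhs] in numeric

declare("foo", "int")
declare("bar", "double")
declare("name", "char*")
print(check_multiply("foo", "bar"))   # True
print(check_multiply("foo", "name"))  # False -- caught before run time
```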

Specific trials are welcome. The competition is welcome to show up also.

A good definition of "types" ideally should at least:

I'll agree with these... except for the psychology one. I'll admit to that one only for typing of language expressions because languages are a shared medium between agents and, thus, cannot be subject to the psychology of any particular agent (psychology cannot be -correctly- imposed upon the representation or interpretation even if it is attempted by emotional humans in practice). Of course, the context for this discussion IS programming-languages despite the fact that you brought up identifying an object as a chair (which is object classification, thus subject to personal psychology).

"Side flags", more commonly known as tags, are often used to decorate values with type information in dynamically-typed languages. It is an implementation strategy - one that can be used to implement one of the elements of a type system. "Side flags" are not, however, either a theoretical system or the types themselves, because types are an abstract concept and are only meaningful in the context of a type system. "Side flags" are an implementation detail.

By way of analogy, note that relations and tuples are theoretical elements of the relational model. A common implementation strategy is to construct tuples as arrays of values, and construct relations using file-based B-tree indexes and file-based arrays of tuples. However, it would not be accurate to say that relations are, at an abstract and theoretical level, file-based B-tree indexed file-based arrays of tuple-arrays, because that's only one of many possible implementation strategies. It's also an exceedingly awkward way to perform reasoning about relations.

That is a bad analogy for three reasons. First, we have a fairly simple, clean relational model to work off of. It can be described on a page or two in fairly simple language. Second, tables don't have to have indexes, and small tables often don't. Third, in many ways "arrays of tuples" is close to "map of maps", which is almost what a table is in an abstract sense (if we make assumptions about what missing explicit columns mean). Thus, it wouldn't actually be far off. Thus, observing implementation *does work* to a large extent. Also, observing implementation would probably result in the observation that a primary key of some kind is needed (DBs that lack one are not fully relational). An alien dissecting a variety of RDBMS's would come close to identifying the correct commonalities.

Similarly, one wouldn't say that types are side flags, because that's only one way to implement them; it says nothing about the types themselves, and it provides no way to perform reasoning about types. It merely provides a way of identifying the type(s) to which a value belongs in a particular implementation. At best, you could say that "side flags" may be used to identify types, but not that "side flags" are types.

I think you may be confusing type theory with particular implementations of type systems - which is a bit like confusing the field of automotive engineering with a particular model of car.

By focusing on objective reality instead of what happens in the writer's head, it may indeed look like "implementation". However, if we want at least a somewhat objective definition of "types", we *must* focus on implementation, otherwise the definition is not testable against instances to see if it matches since we cannot rip neurons out of people's heads to put them under microscopes (in most countries). I am looking for what is common in implementations of types, and I see flags. If there is some observable reality above implementation, please tell us what this is.

It's called an "abstraction", which is the observable - via symbols and/or diagrams and/or imagination - "reality above implementation" that is revealed via logic and intuition, and that allows us to perform provable reasoning on mathematical or empirical (e.g., implemented) structures.

Assuming "flag theory" is theory and not mere implementation detail, it would be helpful if you could indicate how "flag theory" works. In particular, what predictions or deductions does it allow us to make? As it stands, it would appear that "flag theory" is no more and no less than the following statement: Given a set of side flags T and a set of values V, for every v that is an element of V there exists an ordered pair (v, t) such that t is an element of T. Such a statement does not a theory make! At best, it could be an axiomatic starting point for exploring a type theory, but in itself it provides no predictive or deductive power. It tells us nothing about the types (oops, I mean, "side flags", because they're the same, aren't they?) themselves; how they may be constructed, distinguished, substituted, etc. So far, all "flag theory" says is that values have types. Well, we knew that anyway.

By the way, Top, please don't take this the wrong way, but would I be correct in assuming you've had little or no formal university-level education (or self-teaching thereof) in mathematics or computer science? It strikes me that many of these debates result from a fundamental gulf between you and your opponents' understanding of and appreciation for the role of mathematics and theory. I'm sure you're a kick-ass programmer - the issues you debate wouldn't be important to you if you weren't - but I get the impression that you turn your nose up at formal theory and mathematics as a defensive reaction to not having learned it. Reminds me a bit of some "old skool" auto mechanics I knew - sharp as tacks otherwise, and talented as hell in all the pragmatic spaces - who frequently hurled scathing abuse at mechanical engineering but would sometimes make fundamental blunders because of their lack of theoretical knowledge.

I have a degree in Computer Science and graduated at the top of my class. That being said, it was a regular State-U and they intentionally focused on practical issues instead of lots of theory. Also, I forgot most of the shit I learned there; and my minor was graphics/CADD and not compiler design (or "systems programming" as they called it). I pride myself on being a practical-minded person, and demand that academic approaches prove their worth on the street or at least in semi-realistic simulations. Playing with symbols is not necessarily the same as making the world function better, and some people hide behind them because they are not practical minded. DrCodd's work is an example of an idea that can be expressed mostly in simple, straightforward terms, but he was unfortunately not gifted in such presentation. It took others to do that part. -- top

Your degree may have been titled Computer Science, but it sounds like what has, in the past few years, been more commonly (and correctly) re-badged (at least in the UK) as Computing or Software Engineering. That explains much. Now then: I would appreciate an answer to the paragraph where I wrote, "Assuming "flag theory" is theory and not mere implementation detail, it would be helpful if you could indicate how "flag theory" works. ..."?

You seem to have a personal desire to paint me or view me in the worst possible light. If that is your goal, it is probably not possible for anything I do or say to change it. As far as "how it works", it is simple: it uses side flags. Types are side flags.

How is that painting or viewing you in the worst possible light??? You seem to do that all on your own. I'm simply, though indirectly, pointing out that you obviously don't have the background in TypeTheory to comment on it, let alone offer up a replacement. I'm not deprecating your Computing or Software Engineering background - these are legitimate academic subjects. Indeed, my own professional background is more in these than in formal computer science. Furthermore, I am not a type theorist. However, I have a great deal of respect for the work of type theorists and other theoreticians in computer science, and I respect the process that produces and validates such work because some of my work depends on it. You, in one sweep, try to offer up a supposed alternative with obviously not one iota of understanding of types, type systems, type theory, computer science, or the way theories are validated in general. The scary (and sad) thing is this: It's obvious, I am sure, to everyone reading this that you don't have a clue what you're on about. It's obvious to everyone, except, of course, you. The vast majority of computing specialists or software engineers, when encountering a field they don't yet understand, would make an attempt to understand it - at least enough to critique it at some appropriate level. It's abundantly obvious that you've made no such effort at all. To use an analogy: If we were all medical doctors instead of software engineers, programmers, and computer scientists, you'd be the guy who tries to convince us that muscles move not because of the formation of crossbridges between myosin and actin fibers etc., but because fleshy things pull on our bones. That would be the, uh, "Fleshy Things Theory," based on the observation that "Muscles are Fleshy Things". That's how ridiculous "types = side flags" looks to anyone with even a vague understanding of computer science.

Almost every other useful computer-science idea can be reduced to a simpler explanation. Perhaps the simpler explanation is not quite as thorough or lawyer-tight as the formal version, but at least it is reducible. That has yet to happen for types and it is not my fault. You admitted that "type theorists don't give a damn about regular IT joes" (paraphrased). I don't have to accuse you of ignoring a large audience of actual type users, you flat admitted it. Your "sin" is there for all to see. You are chewing me out for trying to remedy that without offering a remedy of your own. It makes you come across as a do-nothing arrogant prick (although I've encountered much much ruder people here, so I have to give you credit for only being mildly rude). You complain that I am trying to help a dying patient without having a medical degree, but you yourself would let the patient die. That isn't logical.

Yes, of course every useful computer science idea can be reduced to a simpler explanation. The accepted "simpler explanation" for "type" is that a type is a specification of a set of values and the operations that can be performed on those values. Your "flag theory" simpler explanation is that a "type" is a "side flag", with no further explanation. Thus, "type" and "side flag" are mere synonyms. It's not a simpler explanation, it's a non-explanation. By the way, there are at least two unsigned authors at work here - I did not admit that "type theorists don't give a damn about regular IT joes." That was someone else. And I'm not being rude in pointing out that as a software engineer and programmer who is not a type theorist - i.e., your stated audience - your "theory" is useless. Instead of reacting defensively, perhaps it would be worth reconsidering your theory, or at least expanding on it, for the sake of people like me in your intended audience.

As for your "how it works," it suggests that "flag theory" is as useless as my "diesel theory": Q: How does a Diesel engine work? A: It uses diesel fuel. Such an answer is utterly trivial, incomplete, unsatisfying, non-explanatory, and useless for any practical or theoretical purpose. So is "flag theory."

This is a poor analogy because it does NOT say "types are things that use types".

You're right, it is a poor analogy. I shall make a better one: Your explanation for "how it works" is as follows - "It is simple: it uses side flags. Types are side flags." Thus, a better analogy is to say "Diesel engine theory is simple: it uses a motor. Diesel engines are motors."

Inferred Types

Some suggest that flag theory does not take into account inferred types. For example, in a dynamic language we may do something like this:

   var a, b;
   a = 32.8;
   b = 'my fleas have dogs';
Here the program "knows" that 32.8 is a number by using a parse tree. However, actually the quotes are the "side flag" here, and lack of quotes (lack of explicit flag) is also a kind of "flag", sort of like a Null. Lack of quotes *indicates* something. Similarly, usually there are other indicators that indicate octal, hex, etc. Internally, a number will still have a flag. The lack of an indicator in the language text for decimal numbers is merely a shorthand for a flag to save the humans time.

Programming languages are generally a hybrid between natural (spoken) languages and formal or computer languages. Thus, they inherit some of the obfuscation of natural languages. But for the most part, type indicators are there, just not in a "clean" way. If it starts with digits 1-9 it is assumed to be an integer, unless it has a decimal point, in which case it is a Real. If it starts with zero it is assumed to be octal; if it starts with an 'x' it is assumed to be a hex number; if it starts with a quote, assume string; if it starts with A-to-Z, assume it is a variable name, etc. The first non-blank character and potential decimal point serve as the "side flags". A clean/formal version of the same thing would have explicit flags, and the run-time engine or compiler usually keeps an explicit flag so that it doesn't have to keep parsing over and over (which type-free languages often have to do).
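The lexical "side flags" described above can be sketched as a toy classifier. This is a Python sketch with made-up rules for illustration; no particular language's lexer works exactly this way:

```python
def classify_literal(token):
    """Toy classifier: guess a token's type from lexical 'side flags' alone.

    The rules mirror the prose above (quote -> string, 'x' prefix -> hex,
    leading zero -> octal, etc.) and are illustrative only.
    """
    if token.startswith(("'", '"')):
        return "string"           # the quote acts as an explicit flag
    if token.startswith(("0x", "0X")):
        return "hex integer"      # the 'x' prefix flags hexadecimal
    if token.startswith("0") and len(token) > 1 and "." not in token:
        return "octal integer"    # a leading zero flags octal
    if token[0].isdigit():
        return "real" if "." in token else "integer"
    if token[0].isalpha():
        return "identifier"       # assume a variable name
    return "unknown"

print(classify_literal("32.8"))                  # real
print(classify_literal("'my fleas have dogs'"))  # string
print(classify_literal("0x1F"))                  # hex integer
```

The "flag" doing the work is always a syntactic marker (or the absence of one), which is the point the paragraph above is making.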

There are languages that don't have flags and use situational context. But basically the operations are "on their own" in those languages. They parse to validate "types" for local usage (if we can even use the word "type"). An operator is more or less saying, "I've been requested to treat this as a number, let's parse it to make sure I can." They are called type-free languages because they don't have flags. Sometimes there are flags to indicate the difference between scalars and structures, but not between different kinds of scalars. Such languages are type-free at the scalar level, but not at the variable level. IINM, Perl is like this. So is ColdFusion. The difference between ColdFusion (CF) and PHP can be felt because in PHP sometimes the "type flag" ends up producing unexpected results and you have to keep it in mind. In CF, if you do a "print(x)" (or its equiv.) and the result is "123", you always know that any other scalar that also produces "123" will be equivalent to x at that point. However, this is not the case with PHP because its internal type flag may say it's a string or a number or whatnot. To test for equivalency, you have to also check its type in PHP. There is no test for scalar types in CF, except parsability. For example, isNumber() will tell you if something passes parsing as a number. It tells you how it can be interpreted, not what it "is".
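The claimed difference between flagged scalars (PHP-style) and flagless scalars (CF-style) might be modeled like this. This Python sketch is a hypothetical model of the behavior described above, not actual PHP or ColdFusion internals:

```python
# 'FlaggedScalar' carries a type tag alongside its value (PHP-like);
# a flagless scalar is just its printable text (CF-like).
# Both classes of behavior are illustrative models, nothing more.

class FlaggedScalar:
    def __init__(self, value, flag):
        self.value = value   # the raw value
        self.flag = flag     # the "side flag": 'string', 'number', ...

    def __eq__(self, other):
        # Equivalence must consult the flag as well as the printed value.
        return self.flag == other.flag and str(self.value) == str(other.value)

a = FlaggedScalar(123, "number")
b = FlaggedScalar("123", "string")
print(str(a.value) == str(b.value))  # True  - both print as "123"
print(a == b)                        # False - the flags differ

# Flagless scalars: printing the same means being the same.
x, y = "123", "123"
print(x == y)                        # True
```

This captures the complaint in the paragraph above: with a hidden flag, two scalars that print identically may still compare unequal.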

Your example is hardly an interesting one. You didn't even touch such things as functions with implicit input and return types as you'd typically find in Haskell or ML. You focused 100% on the use of syntactic clues in identifying types of tokens, and didn't even start expanding into how the token affects the apparent type of 'a'. You should experience some implicitly and -strongly- typed languages (which are certainly NOT "called type-free languages"). You probably need more experience with a wide variety of -statically- typed languages, period, before you start coming up with your own pet theories. The most advanced research in typing is all in statically typed languages.

If it works well for languages that people actually use for pay instead of MentalMasturbation languages, then I have achieved a worthy goal: finding a coherent definition of "types" for the IT masses, and have no gaddam reason to be ashamed of that! Thank You, Thank You. -- top [Aye, but therein lies the problem... As I wrote above, I can't see how your "side flag theory" works at all. A theory is a predictive or deductive tool. What can we deduce from your theory? What does it predict? Your definition of types aka "side flag theory" appears to say nothing more than "values have types." If types are side flags, and every value has a side flag, then every value has a type. This is a trivial result, and it does nothing to explain what a type is or how types work in a given type system. This is your side flag theory: Given a set of side flags T and a set of values V, for every v that is an element of V there exists an ordered pair (v, t) such that t is an element of T. What can we use it for, so that we can say it "works well"?]

I've given examples of languages that don't have type flags and examples where type-flags are discarded by the compiler. Things with type flags behave differently from things without. By thy fruits ye shall judge thee. The difference between PHP and CF's scalars is an example. As far as "works well", that is not the goal of this or most other comp.sci definitions, as already stated. I am attempting to define the existence of types, not necessarily quality. Perhaps the prior attempts bit off too much, and that is why they created a cryptic mess that most practitioners ignore and only a mathematician's mother can love. It makes sense to break stuff into pieces when it becomes too complex, and I am attempting to do this by separating existence and quality. If you want to continue on the quality path, then please justify it in a separate section first.

Nobody is arguing that type-flags don't allow for easy runtime type dispatching, and that this is a "different behavior"... but if that is your justification that there are different types, then I'd say we're closer to an agreement. After all, your words here can be restated as defining types as being that which elicits different behavior. You're thinking too narrowly with type-flags in particular. It'd be just as correct in CF to describe types in terms of: "can this parse as an integer or not?" or "can this parse as a list or not?", etc. As far as "works well", your theory is ultimately valueless if it doesn't, much like any other crackpot theory. You keep demanding others "justify their paths" without first making a convincing justification of your own. Written that essay yet?

There seems to be some confusion here. CF does not have "types", at least on the scalar level, because it has no scalar-level flag. Scalar variables act as if they are stored as strings and are parsed as integers or dates if operators require that. It does, however, act as if it has a type flag to distinguish between scalars, arrays, etc. Thus, variables do have a type flag in CF, but it cannot (without hacking) be used for scalar identification (number, date, string, etc.)

The type of scalars in CF is "string". The type of value the scalar represents depends on what sorts of things you can parse from it. If CF expects a string carrying a date, but gets a string carrying a bunch of garbage, this would be a typing error. No confusion here.

I would label that a "parsing error", not necessarily a "typing error". If all scalars are (or act like) "strings", then the type flag for scalars is useless beyond knowing that something is a scalar. It no longer affects the outcome at the scalar level and thus might as well not exist (and indeed may not). The interpreter does not need to know that info and makes no use of it. Observers of the language could at best conclude that "scalar variables act like a list of bytes on which the built-in operators for dates and numbers parse as strings of digits and dividers (punctuation)". The built-in operators do not appear to act on anything resembling a side type flag, and thus types don't exist in the language on the scalar level. No Flags == No Types. One is looking at the value and only the value.

If the intent of the programmer is that a certain function or procedure receive a string that represents a date, and said function or procedure receives a string that cannot be parsed as a date, that is a TypeSafety error, especially so if the function or procedure becomes undefined at that point. It is also a 'Type' error in the sense that you sent the wrong 'type' of scalar to the function or procedure. Whether the scalar parses as a date certainly does affect the outcome. It might do you well to recall that I reject your theory and definition that TypesAreSideFlags; that this makes the "flag for scalar useless beyond knowing whether something is a scalar" is something I believe of no relevance at all; you cannot use your definition to defend your definition, so stop trying. A sufficiently advanced TypeChecker? could guarantee the scalar is a string representing a date before passing it to the function... be it statically or dynamically... and it could probably do so without modifying the language.

Just because YOU call it a "type safety" error does not make it so. I call it a validation or parsing error. So there! (Although any error can perhaps be called a type-safety error if you convert or view EverythingIsa "type", which is certainly possible, but one can do that with many of the EverythingIsa items. Perhaps your definition is too wide to be useful.) Tell me, what is the precise difference between "validation" and "parsing" and type safety? Flag theory has an easier answer: if it involves checking a flag, then it is a type-related error; if it involves only parsing/analyzing the value itself, then it is not. Utility in simplicity.
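The proposed dividing line (a flag check is type-related; a value-only parse is validation) can be sketched as follows. The functions and the dict-based "scalar" here are hypothetical, purely for illustration:

```python
# Two kinds of check, per the flag model: one consults a stored side
# flag, the other only inspects (parses) the value itself.

def flag_check(scalar, expected_flag):
    """Type-related check under the flag model: consult the side flag."""
    if scalar["flag"] != expected_flag:
        raise TypeError("expected flag %r, got %r"
                        % (expected_flag, scalar["flag"]))

def parse_check(text):
    """Validation/parsing check: inspect only the value itself."""
    try:
        float(text)
        return True
    except ValueError:
        return False

s = {"value": "123", "flag": "string"}
print(parse_check(s["value"]))   # True: the value parses as a number
try:
    flag_check(s, "number")      # but the side flag says "string"
except TypeError as err:
    print("flag check failed:", err)
```

Under this sketch, the same scalar passes validation yet fails the flag check, which is exactly the distinction the paragraph above draws.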

And, I don't see any circular defining on my part.

It's a TypeSafety error by the definition of TypeSafety: any operation that results in an 'undefined' operation is a TypeSafety error. If you have a "parse error" and you don't know what to do next... i.e. the 'next' thing you do is undefined... then you've just experienced a human TypeSafety error. Period. It is possible to prevent all TypeSafety errors by making sure that every operation is well defined, but this doesn't make sense in all domains (e.g. assignment of a public-encryption-key to a variable intended to hold a date/time). Dynamically typed runtimes provide TypeSafety by aborting gracefully (e.g. with runtime exceptions) before performing undefined operations. Statically typed compilers provide TypeSafety by mechanism of proof. TypeSafety is one form of validation, but can only be applied to invariant properties. TypeSafety CANNOT be applied to inputs (since you can't guarantee that input will be correctly typed... that's completely outside the control of the local program).

The definition of "type safety" should depend on the definition of "type", which we are not done with yet. Otherwise, you are attempting to use circular reasoning. Any "undefined error" is too wide to be useful anyhow. Any look-up process in any structure that cannot find any match could be called an "undefined error" and thus a type safety error by that reckoning. "Employee number 1234 not found in database" is a type safety error? Not! As far as "what to do next", handling such situations should be up to the language or app. It may crash, display an error message, log the problem to a file and move on to the next input record, etc.

And "Flag Theory" may provide easy answers, but it is also snake-oil... it doesn't do anything for you... it just says: "flags appear! Hallelujah! It's a miracle! flags came from God and they just happen to represent something, but definitely not this 'Properties' concept that the heretics say, and you can check these flags, and supposedly they hold meaning (but that meaning definitely isn't necessarily properties, nosiree!)." It's utterly useless. Utility in simplicity, indeed.

It's been a useful mental model for me and has predictive value. If you don't find it useful, then so be it. You appear to be trying to stuff too much into the definition of "types", and are surprised or disappointed when others don't do the same. Your stuffage has made a bloated mess and greatly narrowed the audience. I bring K.I.S.S. And as far as "they appear!", the definition of "rocket" does NOT depend on the origin of the rocket, and it would be dumb to do otherwise. Flag theory does not need to put the source of the flags in the definition. It does not describe how to build a type engine any more than the definition of "rocket" tells you how to build a useful rocket. Those are specific techniques, and should be below the level of definition.

I don't put much at all into the definition of "types": just the idea of 'properties', both intrinsic and extrinsic. That isn't very much for me at least... I don't see how YOU find it so damn overwhelming. Etymology and origin of 'rocket' are, I agree, an independent issue from the concept represented in the definition of 'rocket'. However, your Flag Theory doesn't even make this 'definition' matter. One can flag ANYTHING with 'rocket' and it's okay in your Flag Theory BECAUSE you made it so damn 'simple' that 'rocket' isn't even required to refer to a definition. Flag theory is naive, simplistic, and useless, not 'simple'. As Simple As Possible, But No Simpler.... you've violated that last clause.

You keep saying it doesn't matter, but that does not make it so. Perhaps you just don't "get it". I don't know yet where you are getting mentally stuck. You seem to require that the definition define how types are actually mechanically checked, which I still claim is outside the scope of most definitions.

My statements make no claim about how types are mechanically checked. Types and TypeChecking are two different things. It'd be more accurate to say that types must be capable of being checked. How or whether an implementation bothers checking them is up to the implementation, and which types the implementation can recognize and check is dependent upon the language and type-system. Those are implementation details.

Programming Versus Math (from "Parsing versus Types")

Your functional viewpoint brings about some sticky philosophical observer problems in which one may know by mathematical proof some calculation results ahead of actually "running" the program. However, in practice even functional programs must be "run", and at the start, "r" has not been calculated yet. For now, I will limit the definition scope to programming and *exclude* math. "r" may always have a value in a mathematical sense, but not in a programming sense.

You can't exclude math from programming. Half of programming is describing calculation, and the other half is describing communication. That's what computation is all about. And, by my own thoughts at least, 'x' has types long before the program is ever run (the type of item read from the file), and 'x' must have some fixed properties at least if 'guessType' is to be able to distinguish one 'x' from another. 'r' also has an invariant type defined by its properties (the type of 'r' being describable as 'a type-descriptor for 'x' based on its intrinsic properties, represented as a string'.)
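For reference, a functional 'guessType' of the kind discussed here might look like the following Python sketch (hypothetical; the original function is not shown in this section). The point is that, being functional, the descriptor 'r' is determined by 'x' alone:

```python
def guess_type(x):
    """Hypothetical functional 'guessType': return a type-descriptor
    string for x based only on x's intrinsic properties. Because it is
    functional (no side effects, no hidden inputs), the same x always
    yields the same r."""
    text = str(x)
    if text.lstrip("-").replace(".", "", 1).isdigit():
        return "number"
    return "string"

x = "32.8"         # e.g. an item read from a file
r = guess_type(x)  # r is fixed by x alone, before anyone inspects it
print(r)           # number
```

Whether 'r' is "determined" before the machine computes it, or merely determinable, is exactly what the surrounding debate is about; the sketch only shows that a functional 'guessType' leaves no room for the answer to vary.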

Perhaps, but that does not necessarily mean they are the same thing.

They aren't the same thing. But they aren't divisible.

If they are not the same thing, then you cannot automatically assume that every property of math also applies to programming, even if they are related in some ways.

I do not make such an assumption. My statements on the deterministic nature of types do not rely upon programming having "every property of math". They rely only upon programming having certain properties of math. In the case of functional 'guessType', said "functional" property is the only property that is required... and it need only apply in exactly the case of the 'guessType' function. No other property is needed.

If and when "r" has a specific value, then a type flag can be said to exist. Make from that what you want. (Although, it is not part of the compiler/interpreter type system, but is rather a roll-your-own type system.)

It doesn't matter whether a 'type flag can be said to exist'. The value 'r' will possess is determined before 'r' is ever calculated. That's what I make of it. And since it doesn't matter, one can reject the notion that TypesAreSideFlags.

Huh? How is it determined before it is "calculated"? What is doing the determination, and how is it avoiding calculation? Prayer?

The value 'r' will possess is determined the moment 'x' was set. It just isn't known. You're confusing the words.

Determinable or determined? Those are two different issues. Every fricken program may be pre-determinable if something solves hard enough. But that is not useful information. If it's determinable to God but not mortals, then it matters not to mortals, and I am looking for a def useful to mortals. To me, you almost seem to be playing games, moving back and forth from concrete to philosophical as needed to dodge the issue. From a practical programmer's standpoint, until the MACHINE calculates "r", it is not considered "known".

Determined, adjective-form. Similar to 'determinable', except 'determined' is 100% and not dependent upon any contingency (like future action or sensory input) - we tend to call something 'determinable' even if it involves going out with a tape-measure. And you're wrong: not every program is determinable, even theoretically. I mentioned this above. There is a theoretical constraint: if you describe a big black box around a program, and draw lines going in for each of the inputs and lines going out for each of the outputs, it is not theoretically possible to determine the outputs before we first determine all the inputs. Now, God might or might not be able to determine the inputs. More relevantly, type-checking necessarily involves a computation that is limited over some finite 'box', cannot see into other 'black-boxes', and cannot (even theoretically) determine inputs or their timings.

And I do agree: until the machine calculates 'r', it is not "known". It's just determined. It would not necessarily be determined if 'guessType' was not functional, though. There is that caveat.

The existence of possible alternative special-case shortcuts/shorthand shouldn't really matter if it's only special cases (incidental or coincidental).

I'm missing your context, here. Special cases for what? There are a lot of possible referents in the above two paragraphs.

Classification Versus Types (continued from above discussion)

And, as to your final statements: you don't know jack about the "true essence of types". The real essence of types is about distinguishing things, which is necessarily done based on properties. One can distinguish integers from real numbers, natural numbers from integers, prime numbers from natural numbers, and integers between 1 and 10 from the set of all integers. One can distinguish apples from bananas, fruits from vegetables, plants from animals, rockets from chairs. ALL of these things are 'types', and different programming languages provide for different subsets of this set of all possible types. Anyhow, the only reason to distinguish things is so you can act differently based on what you've distinguished, whether that be to throw an error, dispatch to different functions, choose a different branch, alter plans, etc. (Thus: types are that which an agent must relevantly distinguish to correctly fulfill its role). Perhaps you can't be convinced of this because you hate to be wrong and you hate even more for others to be right, so you keep repeating crap that essentially says: "well, if I'm wrong, then you must be too." But this is truth. If you don't want to accept it, I can't make you. Nonetheless, there it is.

Please tell me what the precise difference is between "types" and "classifications and categories"? What people call "types" and what they call categories tend to be different. If they are the same, then let's roll them all up together. My observation is that what is called "types" tend to be "closer" to the variables or values they are used with, while classification systems focus more on labeling real-world nouns with limited regard to how they affect computation or programs. Classification is more of a global endeavor, while types are more of a programming aide, a programming tool, and/or a temporary situational aspect. This is one of the reasons I focus on I/C's with the def. (True, sometimes they are inter-related, and OO has sparked interest in dispatching based on global classifications {often a bad idea in my opinion}, but in general they are viewed differently.)

'classifications' and 'kinds' tend to refer to types of objects and types of types respectively, but they are still simply distinguishing 'things' based on arbitrary properties, and thus have all the properties of themselves being 'types'. When I speak of 'Category' in particular, I'm usually restricting the word to mean exactly 'Category' as it applies in 'CategoryTheory'. One can represent types in terms of categories much like one can represent types with sets or predicates, but that's a representation layer... the thing represented, the 'type', is not the representation.

On the apparent association between types and program-'variables': TypeTheory is not tied to programming except in the sense that programming is its #1 best application. The word 'type' doesn't merely refer to a programming aide. Rather it is the inverse: programming languages oft tie themselves to TypeTheory by implementing certain type systems in order to provide certain guarantees of 'safety' or 'progress' or 'correctness', or even just a more advanced syntax (e.g. with polymorphic functions and type-dispatching and implicit conversions). One mechanism programs use to associate components with types is to ascribe a type to a cell or variable. The manifest ascription becomes a contract between the programmer and the language-compiler/interpreter saying that any value or object placed into that cell or variable must and will possess the ascribed type (possibly forcing an implicit conversion) and thus all the properties associated with that type. Knowing these properties, having this guarantee, makes reasoning about program correctness much easier. (It's useful to recognize for your modeling that it is the value or object referenced by the variable that actually meets or 'possesses' the type, not the variable itself. The variable itself already has a type: 'variable' or 'cell'.) Types of variables aren't quite so useful in languages that attempt to KillMutableState, or at least marginalize it, like Haskell; there is less need to make the guarantees at that level when nothing is going to ever change after the initial assignment. One could, for example, use assertions to represent types in such a language as an alternative to even once attaching a type to a variable.

Programs can also utilize TypeTheory to reason about program correctness, often representing types as Hoare-Logic style contracts that guarantee that a particular block of code has side-effects with particular properties. This sort of technique exists, for example, in Erlang, though it hasn't been explored deeply. Many of the type-theory books I'm reading focus on such things as structural types, which deal with resource management: guaranteeing that a file-object meets the extrinsic property of never being used after a 'close' operation, and seeking properties over regions of memory to learn such information as whether memory can be freed immediately rather than waiting on the garbage-collector, etc. Suffice to say: the use of Types in programming already extends far beyond the narrow application of 'variable', and TypeTheory attempts to push it further yet.

But TypeTheory itself is more than just 'Type', which is the relatively simple idea over which 'TypeTheory' chooses to reason. And the concept represented by 'type' IS simple. Types are described in terms of their necessary properties. TypesAreAbstractionsOfReality?. Types in any system are, in the end, those things capable of being distinguished which one chooses to relevantly distinguish.

This does not really address the observations of "in practice" usage raised above. You could argue that the "in practice" usage is "wrong", but definitions are shaped by actual usage at least as much as an authority, for good or bad.

Concepts transcend definitions. A definition attempts to capture a concept, but doesn't always succeed. If you wish to understand types, you must understand them as a concept, by their form and function and nature... not merely as a sequence of English words glued together. You seem to think that operational definitions for a concept ARE the concept. For better or worse, this isn't true.

And "in practice" isn't a foundation for theory. "In practice", there are always engineering constraints. For example, "in practice", Relational in its defined form is barely alive: I've rarely seen a so-called RDBMS that enforces finite sets, much less supports transfinite and uncountable sets. Sort of hurts the whole 'a relation is a set' thing. Nonetheless, it would not be correct to say that "Relational" is not capable of or does not encompass these things simply because you don't see them "in practice".

Similarly, the fruits of the frontiers in TypeTheory mentioned above can currently be found in various toy/research languages. Admittedly they haven't entered mainstream languages or common practice, and probably will not for very many years - due to codebase inertia, it's very difficult for new languages to compete with the behemoths, much less gain the needed man-hours on optimizing compilers. The fact that most of the advanced type-systems are computationally undecidable in some cases doesn't help either. Nonetheless, it would not be correct to say that types do not envelop these things, that they are strictly limited to their apparent use "in practice".

They are decent *implementation* techniques. No argument there. But implementation is not necessarily definition.

Anyhow, while not everything I listed directly pertained to practice, much of it still applies even in common languages (e.g. distinguishing type ascribed to a reference (e.g. variable name or identifier) from the type of the referent (the object or value so referenced)). Don't dismiss it all simply because some things are more airy than you'd prefer.

It does not fit common usage of "types". It is a legitimate complaint. Volume of writing and volume of implementation details do not by themselves qualify something as a definition. It's not a good definition. "Types" usually refer to something narrow or context-specific, and are thus NOT interchangeable with "classification system", which often can and should handle multiple overlapping views or classification systems.

One can talk of types in Haskell, or types in C++, or dynamic types in Ruby. However, if one is to well define 'type' in the context of programming languages, one needs to speak of the word's usage both across the programming languages and within the field of type theory, not just regarding programming languages popular among programmers who each know only a handful, generally the most popular ones. Your assertion about the limited extent of types comes largely from your lack of personal, anecdotal, and vicarious experience with the less popular programming languages possessing type-systems that provide greater extent.

The type-flag model works on them also. The difference is implementation details. Further, definitions *are* shaped by common experience. A weird niche or lab language may do interesting things with types, but that does not mean it gets to influence a definition.

[Ah yes, the "type-flag model," that comprehensive and eloquent description of type systems that explains everything in a single sentence: "Values have types." Brilliant insight, that.]

Incorrect. Values do not necessarily have "types". Types are separate from the value(s). And, the alternative definitions stink even more, partly because they resemble implementation techniques. In other words, they focus on "how" instead of "what". DontComplainWithoutAlternatives.

Technically, you can't bridge the concepts of signal and value (fundamentals of communication and calculation) without the concepts of types and representation. And values do have types... e.g. three is odd, prime, integral, natural, and between one and ten. However, one might reasonably argue that these types aren't relevant or 'extant' until they can be found as represented within a type-system.
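The claim that a single value possesses many types at once (three being odd, prime, integral, and between one and ten) can be illustrated by representing types as predicates over values. This is a common formal device; the sketch below is illustrative rather than a full type system:

```python
# Types as predicates: a value "has" a type when the predicate holds.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

types = {
    "odd":                 lambda v: isinstance(v, int) and v % 2 == 1,
    "prime":               lambda v: isinstance(v, int) and is_prime(v),
    "integral":            lambda v: isinstance(v, int),
    "between one and ten": lambda v: 1 <= v <= 10,
}

value = 3
print(sorted(name for name, pred in types.items() if pred(value)))
# ['between one and ten', 'integral', 'odd', 'prime']
```

Note that no "side flag" is stored anywhere in this sketch: every type the value has is recovered by testing the value itself, which is the alternative view being argued here.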

As far as alternatives: stop saying shit that ain't true, top. You don't understand the alternatives, you don't WANT to understand the alternatives, and your personal 'alternative' is the one associated far more closely with 'implementation' details than any formal type model I've ever worked with. You've blinded yourself into thinking it actually constitutes a type theory - that you're doing something right, therefore everyone else must be wrong. I've pointed you towards real theory, but you don't care to look. I've offered you explanation, but you don't want to hear. OF COURSE your mind won't be changed. I can't help you anymore. Your theory or definition of types is completely unconvincing to an expert in this field, and I doubt that any incremental modifications you make will change that.

The flag model works in most cases that one actually encounters via more or less random sampling. Academic elitists don't always get to control definitions. I'm just the messenger.

[How can the flag model be claimed to work "in most cases" when the "flag model" has no theorems or methods by which we can test it on even a single case? It consists of a single definition, that types are side flags. Then it ends. We apply this so-called model... How???]

[The "side flag" theory or model or whatever is completely unconvincing to a non-expert in the field, too. If it helps the author of "side flags" to personally understand some aspect of type systems, that's all well and good, but don't confuse that with a theory or a model. "Side flags" are to type theory what bushels of apples (two apples plus two apples are one, two, three... FOUR apples!) are to number theory. (Please do not follow this up with a NumbersAreApples page.)]

Those in glass houses should not throw rocks. "Guaranteeing proper type substitution" is an *implementation* concern. Types don't have to be bulletproof any more than a "rocket" has to reach space to be called a "rocket", as described above.

[What glass house? I don't have to be a type theory expert to know when a so-called "theory" is utterly bogus, and I certainly know enough about the subject to get my work done. Now then: How, pray tell, does the phrase "values have side flags" - which is the sum and total of your theory - guarantee proper type substitution? What do you do with the "side flags"? Pass them through a Side Flag Processor that works like this: Input -> *magic happens here* -> Output?]

Again, you are flat wrong. Values are NOT required to have side-flags. As far as "guarantee proper type substitution", as already stated with the rocket analogy, quality control is NOT the purpose of most definitions. A bad type system is still a type system. You've been brainwashed by MisuseOfMath to mix QC in with it. If a poorly-designed language truncates a Real to make an Integer when parameter passing, it may be bad practice, but it is still a type system. I've asked for justification for mixing QC into the def, and I get shit from you guys. Your argument-from-authority is getting tiring. I find it strange and frustrating that one would militantly hold such an unconventional and unjustified position.

You're blind. Over and over and over you keep implying that quality is intrinsic to the nature of types, but you don't even notice yourself doing it. Open your eyes. Tell me: what properties must a thing possess to correctly be called a "rocket", given that reaching space is not among them? Are 'Real's always things that a language can 'truncate' to make an 'Integer'? What properties must something have to correctly be called an "Integer"? If a type-system applies the side-flag "Real" (which is intended to mean the math Real type) to a Relation-variable, is the type-system "wrong" or "bad"? These are all qualitative questions.

Re: "Are 'Real's always things that a language can 'truncate' to make an 'Integer'?" - Again, that's a language or domain-specific question. Those issues are below the level of a definition of types. This should be clear to any rational person. (I know common tools that do truncate, by the way.) Rocket Stabilization Theory is NOT a definition of rockets.

Sure, the issue of 'Real' in particular is a language-specific question. However, the meta-issue is NOT below the level of a definition of types. Any type in any type-system will itself be characterized by a definition. This definition will identify instances of that type by their properties and allow modeling of those instances. A definition of 'types' needs to allow one to both identify and model instances of types, e.g. things like 'Real' and 'Integer' are instances of 'types'. Since every type in a type-system will always come with a definition that allows one to both identify and know what one can do with an instance of that type, this common property should be implied by any definition of 'types'. (Oh, and stop putting up straw-men. I've said nothing that implies rocket stabilization theory is part of the definition of rockets. To the contrary, I'd expect you have a definition of 'rocket' before you can even tell me what you're trying to stabilize.)

Exactly, and "type theory" fails to provide that, focusing on stabilization first. You keep accusing the flag model of looking like implementation, but type theory is doing just that. Types are NOT about formal and provably correct classification/categorization systems. It may be a good model to *make* clean type systems, but it ignores the whole space of possible dirty but usable type systems that most IT professionals would agree are type systems even if they have rough spots. It focuses on classification techniques and backs into types instead of starting with the essence of types themselves.

Saying that 'type theory' is about stabilization is like saying that 'relational' is about storing business information. Type safety is one application of type theory - not the whole, conceptual "focus" of it, but a rather profitable use for it. Type theory focuses on learning what can be learned about types; each individual type-theory only starts by providing an operational definition for 'type' (the most common being representation as sets, categories, or predicates). From there, it moves forward into studying how types can be applied, properties of types with certain properties, etc. A type-theory is a theory in the mathematics sense. There are type-theories that are more based in natural language and linguistics and all the fuzzy stuff that comes with that... but, beyond use in AIs, you just won't see much of these in the domain of modern computer programming because these 'fuzzy' type-theories have poor properties for computation - it's hard enough to perform fuzzy calculations; when you still can't be sure of anything even after performing them, it's even harder to justify having done them.

And while types aren't required to be formal, they ARE about 'correct' classification. No matter what you try, top, you cannot name a single type that is NOT about a classification of objects or behaviors or values or phenomena.

Since "classification" is just shy of TuringComplete, it can be just about anything you want it to be, like a Star Trek monster that reads minds and shape-shifts to fit your thoughts. Plus, it begs the messy question, "what is classification"?

Stop pretending you have knowledge that you so very obviously lack. General problems in types and classification are not "just shy of TuringComplete". These problems are fully TuringComplete (and undecidable). Programming language designers often go to great effort to ensure that the type-inference for their language is among the set of decidable ones (since it's nice to know your type checker won't start looping forever), but there are undecidable, TuringComplete classification systems in common practice even today (e.g. the C++ templates system).

Which are implementation details and quality issues, which should not be part of the definition. (Why do I need to repeat this? You should have anticipated this complaint.) You have even strengthened my point if TT is TC because it means it can be everything and anything and thus a useless tautology definition.

You feel the need to repeat falsehoods because it boosts your ego to make a stand on hot air alone. Being 'TuringComplete' is not part of the definition of classification problems any more than being 'NP-Complete' is part of the definition for the Traveling Salesman problem. These are derived truths. And you still have an effin' shitty understanding of computation and expression if you think simply being TuringComplete "means it can be anything and everything". That must derive from your foolish hand-waving notion that TuringEquivalency is all there is. There are many classes of problems that are fully TuringComplete, in the sense that the computation to solve them has a correspondence/translation one-to-one with Turing Machines, but that doesn't mean the problem-classes "are tautologies"; any given problem certainly is not. Go drown your naive, simplistic views in a TuringTarpit. If you wish for me to 'anticipate your complaints', why don't you start by doing it yourself? Start explaining those statements you KNOW I'll reject BEFORE they are challenged, you hypocritical bastard.

Re: [I'm the one who wrote "What I think you've been doing is successfully using the flag model to explain how type-related mechanisms in certain languages are implemented. You've done nothing to explain what types are." What I meant is that you've successfully duplicated a common mechanism used to implement systems for clearly identifying types, but let me be perfectly clear: I do not mean that you've explained types, or even that it is a useful model. It isn't. I merely recognise that it may be useful to you, personally, as an aid to comprehension - much like thinking of quantities of apples may be useful to a child learning addition - but there is absolutely and categorically no aspect of the "model", and I use the term loosely, that serves any useful purpose whatsoever among actual software developers. It is rubbish, and the sooner you recognise the rubbish for what it is, the better. No one benefits from your arrogant, ignorant, unqualified, unconsidered, ego-centric, nonsensical spew, that is at best embarrassing, and at worst a waste of bytes on Ward's hard drive. Anyone with even an ounce of understanding of computer science can see that. Unfortunately, I don't think you have even an ounce of understanding of computer science; hence you cannot recognise the sheer folly of your "contribution" here.]

So you agree it may be useful to me, but not anybody else? Am I the only developer out of millions that thinks in such a way that the model may be useful to me but ONLY me? You are making huge assumptions about IT psychology. (I've already described why it is not an implementation model.) It is not a perfect model, but the alternatives suck far worse as far as usability to real-world practitioners.

He said it may be useful to you as a comprehension aid, NOT as a model. Even that was very clearly written, above. How could you not see it? Are you really so damn arrogant that you think you can just read a small fraction of a statement and wholly understand it? You're making some huge assumptions about your own psychology. However, there is (as stated above) "absolutely, and categorically, no aspect of the 'model' (and he uses the term loosely) that serves any useful purpose whatsoever among actual software developers", including YOU.

Let's return to the practical scenario mentioned before: the description of the differences in treatment of scalars between ColdFusion and PHP. I can explain the difference using the flag model by drawing boxes on a blackboard showing when and where the boxes get filled in and how they affect the various error messages etc. that may arise. I don't need goofy foreign notation nor a thesis on classification systems, just boxes and arrows. Can you do the same without sounding like a befuddled quantum physicist?...

Sure. But I'd make it quite clear that I'm describing a particular implementation of the type-system as used for the purpose of type-based dispatch. And, unlike you, I'd actually know what I was talking about.

And would remain the only one after your presentation.

Or at least I wouldn't be inappropriately throwing about words like "model" and "theory" like a befuddled layman.

There can be a lot of overlap between "model" and "theory". String Theory is also a model.

[String theory has structure - the definitions, processes and theorems by which it may be used to make predictions about the real world, and by which it can (eventually, it is hoped) be objectively tested against the real world. Your "theory" contains no structure, hence any predictive value derives from your personal, subjective interpretation of its single element, a terminology equivalence of "type" = "side flag". There is, in particular, no content sufficient to automate deductions, or to perform objective reasoning about the "model."]

Wrong again, as usual. One can do experiments to see the difference between a type flag affecting something and not affecting something. For example, both print(a) and print(b) may produce the same output, but print(a + 1) and print(b + 1) may not. Flag theory can explain why this is. (It may not be the only way to explain it, but it is one of the simpler ones.)
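For instance, here is a minimal JavaScript sketch of that experiment (the variable names are just illustrative). Two values display identically, yet behave differently once an operation consults their type flags:

```javascript
// Two values that print the same but carry different type flags.
var a = "4";  // flag: string
var b = 4;    // flag: number

console.log(a);      // prints 4
console.log(b);      // prints 4 - output is indistinguishable
console.log(a + 1);  // prints 41 - the string flag selects concatenation
console.log(b + 1);  // prints 5  - the number flag selects addition
```

The same printed value, fed through the same operator, yields different results; the only hidden difference is the flag.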

[Really? I don't recall reading anything you wrote to the effect that f(v1) = f(v2) but f(g(v1)) != f(g(v2)), given (v1, t1), (v2, t2) where t1 != t2, etc., or some such. If you believe "flag theory" explains this, then the theorem must exist in your mind, because it didn't make it to this page until you wrote the informal description above. Do you expect users to apply your theory via imagination, or clairvoyance? What, in your "theory", guides the construction of experiments? Or is it exception-based, in which you only reveal informal tid-bits in response to comments that you consider incorrect? So far, that seems to be how it works.]

[Your cognitive processes in applying the definition might, if articulated and expanded appropriately, constitute a theory or a model or both. However, as it stands on the page, it constitutes nothing but an empty substitution of terms. By way of analogy, it is equivalent to a "model" consisting entirely of a single definition: "spaghetti" = "pasta". It tells us nothing about spaghetti itself, and says (for example) nothing about the significant properties of said noodles when subject to moisture, heat, and tasty sauces. These things need to reside in your mind beforehand in order to derive any use from the "model." Similarly, in order to be used, "type" = "side flag" requires a mental model of the behaviour and application of side flags because they are not described by a substitution of terms. In other words, the "model" is left entirely to the reader's imagination. If there is to be a proper model there, at the very least it needs to be explicit rather than implicit.]

So the definition of "pasta" should include noodle texture? You're rambling.

[No, but the definition of "pasta" must include *something* in order to be a model or theory. Otherwise, it's just a substitution of terms, something that can be done with a thesaurus. At present, "spaghetti theory" includes nothing at all, just like "type" = "side flag" includes nothing at all. At the very least, it must describe what to do with these "side flags." You can't rely on the reader's psychic abilities in order to divine what you have in mind.]

I disagree with that assessment. It can be used to explain things such as print difference scenario above and the difference in behavior between CF and PHP scalars, as already described. You couldn't do that without a goddam BookStop because "type theory" is a Rube Goldberg model. It would take you at least 10 times the quantity of pages to explain those behaviors to a regular IT joe. If you disagree, then let's test it. With "type theory", Parsimony is murdered, put into a blender, and dumped into the deepest section of the ocean during a hurricane. At best it is a UselessTruth. At worst it is not even a definition.

[What is being used to explain the "print difference scenario" is your MentalModel, not the "model" written here. Your "model" as it is written has no content that can successfully predict the results of the "print difference scenario." A sound model should be automatable, such that I can encode the "print difference scenario" at the input end and out pops the result from the output. If this process requires human intervention, and especially additional interpretation and prior high-level knowledge, then the model is incomplete.]

To get an executable model, one would have to build an interpreter/compiler model (like drafted in RunTimeEngineSchema).

[Not necessarily - by "automatable" I mean that it should be possible to automate without requiring additional special knowledge or invention. That doesn't necessarily mean there's a reason, or that it's reasonable, to automate it. So far, your model is based entirely on special knowledge and/or invention, as it relies wholly on an implicit MentalModel of what is meant by the term "side flag." To be a good model, you need to turn the MentalModel into something external and explicit.]

See above about definitions not usually being specific enough to implement.

[If your definitions aren't specific enough to implement in your model, then you need to refine your model.]

RunTimeEngineSchema has identifiable values and identifiable side flags (columns holding values and types). As far as an algorithm to detect such, has anybody ever made an algorithm to detect *non-disputed* features such as "variables" or "subroutines"? If it cannot be done for those two, then asking me to do it for "types" is unrealistic. Type Theory cannot do it either without lots of hand-holding that requires human interpretation, which is essentially a translation of an algorithm from one TC language to another (TT being a TautologyMachine).

[I'm afraid the above is unclear. I do not follow. What, for example, is a "TC language", and what does "(TT being a TautologyMachine)" mean? Is that a fancy way of saying "I don't understand type theory"?]

 TT = Type Theory
 TC = Turing Complete
 POS = piece of [bleep]
[Actually, never mind. This is an increasingly pointless debate. I don't know whether you're desperately hanging onto your odd notions because you're trying to avoid an ego-beating, or because you genuinely don't recognise the degree to which you're spouting shite. I don't suppose it matters, because either one has reached, for me, the point of being a little bit... Scary. I've got better things to do. I wish you did, too. In particular, I'd encourage you to spend some time learning about computer science, mathematics, and type theory. You'd discover a whole new world of interesting topics, and you could genuinely contribute to the field - perhaps even create a simplified type theory, should that be your goal - instead of merely engaging in endless quibbling over nonsense.]

I suggest you try to teach a newbie about types in common languages and try to find a concrete way to describe what is going on and why they behave as they do. Then you'd realize what a POS TT is.

[I'm a university lecturer. I teach programming, in common languages, to newbies on a daily basis. Many beginners develop an informal notion of types immediately and intuitively. Others benefit from analogy and examples from programming languages and real life, and soon understand. Certain weak students never grasp types or programming of any kind. They frequently find success in concrete, non-abstract subjects, like history, geology, or geography. Some people have unusual difficulty with abstractions of any kind. A few of these become excellent programmers, but at an entirely concrete level.[1] I suspect you may be one of them.]

 [1] I sometimes suspect they are the sole reason COBOL vendors remain in business. :)

I explained why I avoided excess psychological abstractions above under "The Scope of the Definition". The abstract/philosophical issues of "types" may be important, but I stated I feel they do not belong in the definition, regardless of importance. Substitutability of types may be a useful tool, but that is a "purpose" or "techniques" issue, not a description of what types are. And, finally, you can take your insults and shove them where the sun don't shine, you arrogant b8stards. You didn't explain how you would explain those type issues to them; instead you insulted me. Perhaps you are not capable of such and are only hot air. Perhaps your students who don't seem to "get types" are having problems because your lecture sucks because your model sucks. Perhaps try the flag model on those that are slow. The flag model can be used to lay out the steps of what happens when these type-related oddities are occurring. One can number the steps and show stuff being put into and taken out of hypothetical boxes (variable slots and flags). It answers questions and one can then say, "Oh, so that's why the language does such and such".

[I'm not being insulting, I'm describing a characteristic of certain programmers. (A characteristic by which, OMG!!!, they can start to be classified into types!) You'll note I used the phrase "excellent programmers". If you find that insulting, perhaps you might cogitate on why you find it insulting. As for the claim that "the flag model can be used to lay out the steps of what happens when ... type-related oddities are occurring," I must ask what steps? That's what I've been asking you for all along. It would go a long way toward turning your term-substitution into the model you claim it is.]

Re: "The abstract/philosophical issues of "types" may be important, but I stated I feel they do not belong in the definition, regardless of importance." - And you're wrong, regardless of your obstinacy.

Re: "You didn't explain how you would explain those type issues to them; instead you insulted me" You speak of type-system implementation-issues as though they're generic 'type issues', you loudly proclaim that the real type issues are 'implementation-issues', and you expect us to bow to your pretended understanding of the field. Of course you deserve some insult. If I were to explain some mere implementation issue, such as how types are processed in ColdFusion vs. PHP and why they react differently when utilized within expressions, I'd do so directly. But, unlike you, I wouldn't pretend that working with types is easy... the idea of types (which IS founded in the 'abstract/philosophical' issues of classification) is as simple as the idea of water, but the broader field of type-theory is as deep as the ocean. Most useful mathematical and logical theories are that way: simple but deep, and taking more than a little effort to fathom.

You want to know what I'd do if teaching the idea of types to newbies? I'd start with the shallow waters: simple types like integers and strings. Then I'd describe application: use with type-safety in variables, and type-ascription in ML. Then I'd carefully move towards the deeper waters and teach them to swim - simple generic types of one variable, generic types of two type-variables, draw back and toe the waters of dependent types and constraint-types, explore a bit deeper on generics and study meta-types, or shift gears and start looking at how types are utilized in the lexing and parsing process, or focus on implementation and actually performing type-safety checks. I don't doubt for a second that one could get six full University classes out of type-theory alone, with more than half of them being masters-level. Of course, one would need to sell the classes, which means exploring them while also exploring something like AI, compilers, automated theorem proving, etc.

But for a smug prick like you, who thinks he knows all he needs to know? who thinks he knows enough to create his own silly little 'model' and teach others? Well, I'd probably try to shove someone like that directly into the deep water and laugh as he flails and drowns (screaming all the while: "this isn't types! this isn't types! I saw types from the shore!"), then, when I had my fun and felt he was suitably chastised, I'd teach him a bit and point him towards the real knowledge. You know, like an initiation or something.

Unfortunately for that tactic, I seriously underestimated your ability to support yourself by inflating your ego with your own hot air. You haven't worked with types... you've hardly even gotten your toes wet. You, quite frankly, will never learn anything significant about types - not unless life forces it on you (or that ego of yours deflates and you decide to be a student of life). You won't learn anything as a HostileStudent. Until you change (if ever), you'll let your own agendas interfere with your comprehension... e.g. your desire to bring K.I.S.S. to the people who, like you, wish to pretend they understand.

You are just as afraid that your hard-gained knowledge has been rendered a bloated wasted dinosaur by a practical K.I.S.S. competitor. The vacuum tube has been replaced by the transistor, and vac makers are trying to justify themselves via patronization. KISS to the Masses! Make the yapping ivory-heads earn their keep. -- top

No, actually I'm "just afraid" you'll actually succeed at teaching your crap to other people. You're a charlatan and a salesman of snake-oil who displays no sense of professional ethics whatsoever, and your attempt to 'teach' this is more to boost your own ego and thumb your nose at the establishment than it is to provide anything of value. If you actually desired your spew to be useful, you'd actually do what [Mr.E. Brackets] asks and complete the model to the point one can (a) systematically plug in type-systems and expressions and (b) derive real conclusions. Further, you'd reject Keep It Stupid for the Simpletons stuff like 'TypesAreSideFlags', because, quite frankly, there is NOTHING in your model that provides for What types ARE; the closest you come is leveraging another 'real' parser and 'real' type-system (with their own operational definitions of 'type') as *implementation details* to give you the 'flags' - about which your model currently makes NO conclusions whatsoever. Clean up your act, top. Get yourself a real education in the subject before you try to simplify and teach it. And deflate that ego of yours. It's not doing you or anyone else any favors. Your excitement at the irritation of those with expert and [moderate to advanced] knowledge of the subject is just one more pound of proof that your motives are entirely impure.

I feel the same about you, and "type theory" is meandering garbage that nobody cares about or ever will except a few self-elected academics who like to play with pasta on paper. As bad as you claim flag theory is, TT is worse.

I challenge you to try to explain the print example to a newbie using a chalkboard. The only ways I think it being done will involve a type flag *somewhere*. You can't escape it. Any concrete or mechanical illustration of what is going on will have a fricken flag in it. You can say that types are substitutions or classifications, but you cannot draw those; they are too hazy a mental notion. "Type theory" fails the chalkboard test (barring the use of 2,000 chalkboards).

In explaining the implementation details of PHP's function-dispatching and operation, I'd be sure to describe that there are bits in the value representation (not the variable) dedicated to indicating type and describing how the remaining bits in that value representation are to be interpreted. Then I'd describe how CF operates with its one-representation-fits-all approach and even looser typing (by language design) and its lack of type-based dispatching. What I wouldn't do is start going on about these indicator-bits as though they're proof of some great, useful truth about 'types' as a whole. These are, after all, implementation details, and thus should be well below the level of any definition of 'type'.

And as far as your informal 'chalkboard test': I don't believe I'd have too much difficulty describing the basic idea of types on just one board, at least before I started providing multitudes of examples. Most people won't have difficulty with the notion that 'types' are whatever properties they're looking at when they classify different things into different buckets, and that the same thing is true for computers (though computers are a whole lot more objective about it).

They'd have a bit more trouble applying typing to behaviors and phenomena than to objects, but that can wait until a later date. As far as Substitution and classification: these are hazy to you largely because you're a HostileStudent with your own agenda and refuse to comprehend (or at least acknowledge comprehension of) these ideas.

Most people understand quite intuitively that 'dogs' are 'animals' and that one can offer a dog whenever asked for an animal, and most people understand quite easily that some things are dogs, and that some things are not dogs, and that you shouldn't offer a 'not dog' when asked for a 'dog' (and vice versa). That summarily describes substitution, classification, and type-safety all in one small, chalkboard-sized, easily digested example. Even dealing with multiply-classified objects wouldn't be difficult; the majority of people spend their childhood reading about 'red dogs, brown dogs, thin dogs, round dogs'... they've plenty of experience to draw upon.

Of course, your informal 'chalkboard test' is largely pointless, anyway; it doesn't meaningfully distinguish between the useful and the useless theories. How many of these could you teach on fewer than even 100 chalkboards (counting erasing the chalkboard and rewriting): Computational complexity of algorithms, impedance in electronic systems, behaviorism, Kennedy conspiracies, etc? These are all quite specific areas of 'Math', 'Physics', 'Psychology', and 'History'... more equivalent to adding 'dependent types' to the list than 'TypeTheory' as a whole. If I were to judge based on sample sets of all theories I've heard of and managed to understand and find use for, I'd conclude: "any theory you can both describe AND explain to a newbie with just one chalkboard is probably useless." I agree that parsimony is a wonderful trait for a model to possess, but it's rather pointless to possess when you've traded out all the axiomatic rules for making conclusions in order to obtain it.

If I ever have an urge to challenge a definition again, slap me unconscious with a cubicle wall. It ain't worth it, regardless of whether I'm right or not. I'll stick to arguments of how to make better things instead of what to call them. One can't fix the world by renaming it.

If "types" are "side flags", what "type" is the function \x.x*(x-1)? Using Python notation, but not asking about Python specifically, what type is lambda(x:x*x)? I can see what the original might mean in the case of obvious built-in things like ints and strings, but what about functions?

That doesn't answer the question of what one puts as the side-flags for that expression. Specifically, if we set f=lambda(x:x*x), we have the variable f. So what side-flags do you put against f? It sounds to me like you claim f doesn't have a type.

Again, it largely depends on the language. I would reckon that most dynamic languages defer the answer to run-time. The variable "f" would have an empty (or default to "object", "variant", etc.) type-flag associated with it (let's ignore type-free languages here), and the result of the evaluation would populate the type flag. In dynamically typed languages:

  f = <expression>;
Is really:

   f<value><flag> = <expression>
The language syntax just does not directly show the flag. Every expression and function returns a type flag along with a value. The "circuit board" for the interpreter would have a dual-channel bus that carries both value and flag. If the "wrong" type flag is received by an operation, such as multiplication, then most such langs seem to try to parse the operand as the needed type. This double-try approach is one reason why I prefer type-free languages: the flag doesn't help much and just adds another dimension to the mix.
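The "double-try" behavior is easy to observe in JavaScript (a sketch, assuming an ordinary interpreter; the variable names are illustrative):

```javascript
// Multiplication expects numbers, so a value flagged "string"
// is first re-parsed as a number before the operation proceeds.
var s = "5";
console.log(typeof s);   // prints string
var p = s * 2;           // the engine coerces "5" to 5, then multiplies
console.log(p);          // prints 10
console.log(typeof p);   // prints number - the result carries a new flag
```

Note that JavaScript's `+` does not behave this way (a string flag makes it concatenate instead), which is exactly the kind of per-operator quirk the flag-and-retry description is meant to cover.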

Well, firstly, I don't understand what you mean when you say that f = <expression>; is really f<value><flag> = <expression>. f is a variable that contains a value. I'm asking you what type that value has. The value is, of course, a function. What is its type?

Secondly, maybe I'm just being dim, but I still can't see that you've answered the question. Obviously if you know what type is fed to f as a parameter you can work out what type the result is. That's not the issue. I'm not interested in the type of the answer for a specific parameter. You are completely ignoring the question of the type of f itself. Which of these are you claiming? I'm not asking what languages, dynamic or otherwise, do. I'm asking what the type is of f.

So you are talking about at run-time when the interpreter encounters such a statement? Without testing them, most languages would make "f" a non-scalar type of "function". It is kind of like:

  f = array();
Here, the type-flag of "f" is typically "array".

I just did a test in JS, which confirms this.

  <script language="JavaScript">
  var square = function(x) {return x*x;}
  alert(square(4));  // result is 16
  alert(typeof(square));  // result is "function"
  </script>
No, I'm not talking about what interpreters do. That's a question of implementation, and is not a question of what types really are. If you claim that types are something, then you should be able to say what the type is of f. If you simply say, "function" then I guess I misunderstood what you were trying to accomplish. It's a bit like saying 32 is a Number, without saying whether it's an Integer, Float, Fraction, Complex, Quaternion, or FixedPoint.

I think it's becoming clear that you are talking about one specific implementation method for types in dynamic languages. Equally, I think it's a bit bold to claim that "types are side-flags". Possibly the renamings now being discussed are overdue.

It is not clear to me whether you are asking a philosophical question or a technical one. A philosophical answer will probably be of very little use in settling definition issues because everyone's psychology is different. I doubt you would consider my answer the baseline for all humans and vice versa. This is a problem ANY definition of "types" will probably face.

As far as what I am thinking, at first it is just syntax. After inspection, I would deduce that "f" is likely to be of type "function" after execution. This is because my brain can forecast some of what the interpreter will do. My brain acts as a semi-interpreter.

I may be being a bit dim, but this second paragraph carries no meaning for me. I can't understand what you're trying to say at all, so I'll ignore it, and perhaps you'll remove it and replace it with something that does mean something to me.

However, addressing your first paragraph, I thought it was clear that I'm asking a technical question. You've claimed "types are side-flags". I'm asking what the type is of a particular, quite specific value. You seem to be saying "function". It sounds like your side-flags give you no way to distinguish between the types of, say, lambda(x:x*x) and lambda(x:x[0]). As I say, it seems that you are describing an implementation technique for dynamic languages.
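The indistinguishability complained about here is easy to demonstrate in JavaScript: the two lambdas have different argument and result types, yet carry the identical coarse flag.

```javascript
// Both functions carry the same coarse flag, "function",
// even though their argument and result types differ.
var square = function(x) { return x * x; };  // number -> number
var first  = function(x) { return x[0]; };   // array  -> element
console.log(typeof square); // "function"
console.log(typeof first);  // "function"
```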

If that's what you're doing then this page should probably be deleted and you should salvage what you want and start again. A more accurate title might be "TypesInDynamicLanguagesOftenHaveTagsOrSideFlags", but then what you're saying becomes pretty self-evident. I'd be interested to find out what new and useful things you think you're saying.

I give up. I don't know what you are asking. I tried my best to interpret your question and answer it. If it is a technical question, then the nature of the language and interpreter/compiler makes a huge difference. How about we stick with JavaScript for now, since any reader with a web browser likely has access to an interpreter. The type flag is "filled" or "set" when the expression after the assignment operator (=) is evaluated. As far as I know, all variables in JavaScript have a type-flag. One can test the value of the type-flag by using the "typeof" operator. There are also more indirect methods.
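For concreteness, here is `typeof` reading the flag directly, plus one of the "more indirect methods" mentioned above, the internal class tag exposed via `Object.prototype.toString`:

```javascript
// Reading the type flag directly with the typeof operator:
console.log(typeof "abc");  // "string"
console.log(typeof 42);     // "number"
console.log(typeof true);   // "boolean"
// An indirect method: the internal class tag via Object.prototype.toString,
// which is finer-grained than typeof in some cases.
console.log(Object.prototype.toString.call([]));   // "[object Array]"
console.log(Object.prototype.toString.call(null)); // "[object Null]"
```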

I was looking for an answer along the lines of "f has type A->A, where A supports the operation *" (or something similar). Or, in the second case, "lambda(x:x[0]) has type AxB->A". I don't see how these sorts of things can be represented as "side-flags", and I was hoping you could explain your ideas.

Sounds more like a specification or an intermediate parse-assist form. At this point, I don't see any need for JavaScript to do such a thing (although other languages might). As far as I can tell, it is not an objective or necessary thing. I suspect JS simply generates an internally-named function and returns a reference to that function. "f" is merely an ID number or name. If we look at it internally, it might be a variable with an ID number as a value and "function" as the type flag. Whether the referenced function itself is valid or not depends on the usual function rules. As far as how YOU mentally read the lambda, I have no control over that.
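The internal picture described here, a value slot paired with its type flag, can be sketched as a plain tagged record. This is a toy model only, and the field names `tag` and `value` are invented for illustration, not taken from any real engine:

```javascript
// A toy model of a "side-flagged" variable slot: the payload and its
// type flag travel together. Field names here are hypothetical.
function makeSlot(value) {
  return { tag: typeof value, value: value };
}

var f = makeSlot(function(x) { return x * x; });
console.log(f.tag);      // "function"
console.log(f.value(4)); // 16
```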

So it's true. Your "side-flags" are nothing more than a specific implementation technique for some dynamic languages, apparently specifically JavaScript. You originally seemed to have been claiming more, and I was trying to see what, and how. Now it is clear.

I doubt ANY definition of "types" can be clear without referencing some concrete *representation* (ARGUMENT FROM EGO). TypeTheory uses a concrete notation also, but assigning actual items to those symbols is subjective. If I am not mistaken, it is also DAG-centric (YOU'RE MISTAKEN), and types don't have to use DAGs. This page really, really needs renaming. I'm no longer convinced it says anything beyond what the suggested new title already states.

Is there any content beyond that summary?

TypesAndSideFlagsDiscussion (note "and") has a list of candidate names.

Moved further discussion to TypesAndSideFlagsDiscussion (note the "and") because this topic is TooBigToEdit.

See Also: QuestionsForTypeDefinitions, TypesAreNotTypesTheyAreTypes, TagFreeTypingRoadMap

CategoryLanguageTyping, CategorySimplification

Last edited January 8, 2014