Liskov Substitution Principle

Barbara Liskov stated the principle in 1988:
"What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T." -- BarbaraLiskov, Data Abstraction and Hierarchy, SIGPLAN Notices, 23(5) (May, 1988).

See also: People are very confused below about what "behavior" means. Here is one reasonable definition (from the C++ FAQ, but the definition is language independent):

"derived class objects must be substitutable for the base class objects. That means objects of the derived class must behave in a manner consistent with the promises made in the base class' contract."

http://www.parashift.com/c++-faq-lite/circle-ellipse-nonintuitive.html

Fuller but still brief discussion: http://www.parashift.com/c++-faq-lite/proper-inheritance.html

So if you substitute an instance of the derived class in a call to a base class method, nothing should break that always works for the base class, as specified by the base class interface.

The discussion there (spread over multiple related pages) shows why it is problematic to make either Circle or Ellipse a derived class of the other, no matter what you can prove about mathematical geometry. Any such subclassing will break a promise in the base class's interface.
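The mutable case can be sketched in a few lines (Python here for brevity; the class and method names are illustrative, not taken from the FAQ):

```python
class Rectangle:
    """Contract: set_width changes the width only; the height is untouched."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_width(self, w):
        self.width = w

class Square(Rectangle):
    """Must keep width == height, so it breaks the base-class promise."""
    def set_width(self, w):
        self.width = self.height = w

def stretch(r):
    # Client code that relies on the Rectangle contract.
    old_height = r.height
    r.set_width(10)
    assert r.height == old_height, "base-class promise broken"

stretch(Rectangle(2, 3))       # fine: the contract holds
try:
    stretch(Square(2, 2))
    broke = False
except AssertionError:
    broke = True               # Square violated the inherited contract
print(broke)                   # True
```

The same failure appears with Circle/Ellipse whenever the base class promises an independently settable axis.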

Not everyone cares, but still, that is what the LiskovSubstitutionPrinciple is about.
EditHint: This page is a bit of a ThreadMess and TooBigToEdit. It's certainly too big for me to get a handle on what's being discussed here. RefactorMe, anyone?


"Why is the LiskovSubstitutionPrinciple important?"

1. Because if it didn't hold, class hierarchies would be a mess: whenever a subclass instance was passed as a parameter to a method expecting the superclass, strange behavior could occur.

2. Because if it didn't hold, unit tests written for the superclass could not be relied on to succeed for the subclass.
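Point 2 can be made concrete by parameterising the superclass's test suite over the concrete class (a common pattern; the Stack example and names here are illustrative):

```python
import unittest

class Stack:
    """Base type: push then pop returns the pushed item."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class CountingStack(Stack):
    """A subtype that adds capability without breaking the contract."""
    def __init__(self):
        super().__init__()
        self.pushes = 0
    def push(self, x):
        self.pushes += 1
        super().push(x)

class StackContractTests(unittest.TestCase):
    cls = Stack                      # overridden below to retest the subtype
    def test_push_then_pop(self):
        s = self.cls()
        s.push(42)
        self.assertEqual(s.pop(), 42)

class CountingStackTests(StackContractTests):
    cls = CountingStack              # the superclass's tests rerun unchanged

suite = unittest.TestLoader().loadTestsFromTestCase(CountingStackTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())        # True: the subtype passes the base tests
```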


LSP is claimed by some to be one of the PrinciplesOfObjectOrientedDesign.

See "Constructive Deconstruction of Subtyping" at http://alistair.cockburn.us/Constructive+deconstruction+of+subtyping for a counterexample and discussion. -- AlistairCockburn


Circle/ellipse explanation. Necessary for understanding the argument below.

For those unaware of the nuances of the circle/ellipse problem (also referred to as the square/rectangle problem), and who want a simple explanation, please read

Section 21.6 of http://www.parashift.com/c++-faq-lite/proper-inheritance.html

before continuing. Note that the circle/ellipse problem deals with interface substitutability. Liskov substitutability requires behaviour equivalence from a subtype. This is an extremely important difference.

I think the difference is something like the difference between top-built objects (like the PythonLanguage) which are rooted in a very general, very abstract "Object" type, and bottom-up directed ObjectArchitectures?, like CeePlusPlus, rooted in a machine type. In other words, in the C++ model you build larger, more abstract objects from smaller types, whereas with the Python model you start with a high-level abstract Object type and build downwards into more specialized types. Very different. (Is this distinction recognized anywhere in the literature?) Confer with point 6 of AlanKaysDefinitionOfObjectOriented and a review at ObjectOrientedRefactored. -- MarkJanssen

{C++ and Python support composition to an equivalent degree, and permit composing machine types, language-defined primitive types, higher-level types/classes, or user-defined types/classes in an identical fashion. OO developers typically define from top-down or bottom-up depending on their own inclinations; neither C++ nor Python have any particular influence on this. In languages with an automatic base "Object" (or equivalent) class (like Python, Java, etc.), it ensures that every class automatically has certain useful methods to facilitate synchronisation, object cloning, testing for equality and so on. In other words, under the LiskovSubstitutionPrinciple, anything that every object might need to do -- like be tested for equality -- every object can do, because every object is an Object. C++'s philosophy is to avoid imposing any more predefined constructs than are strictly necessary, so creating a singly-rooted hierarchy -- if required or desired -- is left to the developer. In short, C++ lets you to create an object hierarchy if you need it; Python assumes you need it -- that's the only meaningful difference.}

I believe that LSP is falsely believed by some few to be a principle of OO design, when really it isn't.

It is. -- RobertCecilMartin

Beautifully ambiguous, Robert... I fully agree that it is falsely believed that... although I suspect you intend to mean that it is not falsely believed that...because it truly is a principle of OO design. :-) But you'd better read my paper before you run the same old tired arguments supporting LSP. http://alistair.cockburn.us/crystal/articles/cdos/constructivedesconstructionofsubtyping.htm. p.s. it is relevant to note that this article was rejected from OOPSLA because it was considered "too obviously true" and "already known to the experts in the field" (even if not to the merely well-read), hence not novel enough to publish. -- AlistairCockburn

Here is my point. I've seen papers where Barbara Liskov makes statements like the SIGPLAN quote above. The statement defines what behavioral subtyping is, and many people call that LSP. But, there is a corollary, which I first saw in JimCoplien's C++ book which is that all of your subclasses should be behavioral subtypes. Many people have taken that implication to be LSP also, but I don't know whether Dr. Liskov ever made that strong statement. I see value in defining what behavioral subtypes are. If we are going to talk about LSP, should we decide which definition we'll use? -- MichaelFeathers


I've never agreed with this principle. It's an obvious thing to request, but it is not a sensible thing to request. You subtype to change behavior. The old quadrilateral-trapezoid-parallelogram-rectangle-square-point problem illustrates it. A square is not always substitutable for a rectangle, because a rectangle usually has behavior to change its aspect ratio. But a rectangle sure isn't substitutable for a square. Likewise for the other shapes in the chain. According to the LiskovSubstitutionPrinciple (LSP) they aren't subtypes. The definition then becomes vacuous, which is not desired.

Come now Alistair. We don't have to make such a big deal about LSP. If two types are substitutable for an interface, they are subtypes. This has nothing to do with how much we would like Squares to be subtypes of Rectangles. They simply are not. -- RobertCecilMartin

Your definition of the LSP is inconsistent with Liskov's definition. Interface substitution is not sufficient.

Oh, I think it is. Imagine that you are debugging program P. You restrict your view only to that which is known by P. You look only at the variables that P knows about. You single step through P as it runs with type T, noting how all the variables change, and how the control flows. Now see what happens when P runs with type S. If you can't tell the difference, then S is a subtype of T. Oh, you could possibly tell the difference if you measured timing; but I'll counter that by saying that every time you hit the single step button the next line of code will run on a different computer with a different clock rate.

So, my humble view of Liskov's definition is simply that the behavior of P remains unchanged <<From P's point of view>>. Clearly the behavior of the system changes. It would be stupid if the behavior of the system didn't change. But P's own intrinsic behavior does not.

That's the power of the principle. If the behavior of P is not going to change when you give it a subtype, then you don't have to worry about changing the code in P.

Mathematicians apparently know this sort of thing... I saw a long note by a mathematician about mathematical properties that went on about this in a much better way than I do. But the point is, I can't see that LSP works, except InTheory. -- AlistairCockburn ('97)

Yeah, that's the thing. LSP is important in some alternate reality where programming languages are purely abstractions. In our world, we deal with machines. Trying to talk with the people who have drunk the koolaid of that alternate world results in being called a nutjob. Proceed with care. --MarkJanssen

Part of the difference in viewpoint here has to do with the difference between languages with strong typing, and those with weak types. In languages with weak types (smalltalk, e.g.) you will find many examples of subclasses that aren't subtypes. On the other hand, there are quite clear examples of subtypes that match Liskov's definition. Nearly all the users of Numbers don't care whether the numbers are represented as LargeIntegers?, Floats, or fractions. -- ChrisHibbert

Smalltalk is strongly typed. SelfLanguage is weakly typed. In Smalltalk, subclasses are frequently not subtypes because of the overriding of superclass methods with self messageNotUnderstood. This is incorrect design from an object purism point of view, but it has made such things as the Smalltalk-80 class hierarchy efficient and minimal. The lesson to learn is that striving for object purism is ridiculous. -- SunirShah


I think we may have a terminology problem. You state that, "..The reason you subtype is to change behavior" - I would have used the term subclass instead, since I take LSP as defining the term subtype. That is, subclassing is a software construction principle, used to create a new class which has similar behavior to another class, while a subtype is a class which can be substituted for another class and still be guaranteed to behave properly, albeit differently.

The example you cite (squares and rectangles) has long been discussed for exactly this issue. I think however, that you have encountered a common fallacy when you state that a "a square is not always substitutable for a rectangle, because a rectangle usually has behavior to change its aspect ratio." The fallacy I refer to is confusing a class with the concept or entity which it represents. While mathematically, a square *is* a kind of rectangle, the class Square may or may not be a subtype of Rectangle depending on the behavior defined for the latter class. If the class represents an object whose aspect ratio is explicitly changeable, then the former is not a subtype. On the other hand, if the Rectangle class is immutable, Square would make a fine subtype, and could in fact, be used wherever Rectangle is used. -- RussellGold
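A minimal sketch of RussellGold's immutable case (Python; names illustrative):

```python
class Rectangle:
    """Immutable: no mutators, so no aspect-ratio promise to break."""
    def __init__(self, width, height):
        self._w, self._h = width, height

    def area(self):
        return self._w * self._h

class Square(Rectangle):
    """With an immutable base, Square is a perfectly good subtype."""
    def __init__(self, side):
        super().__init__(side, side)

def report(r):
    # Any code written against the immutable Rectangle works unchanged.
    return r.area()

print(report(Rectangle(3, 4)))  # 12
print(report(Square(5)))        # 25
```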
It seems to me that the fallacy RussellGold mentions as key to many covariance problems. I would go so far as to say that it is representational modeling itself that causes trouble, although that trouble may be far better than the cure. In OO, we have traditionally adopted metaphors to get our work going. This tends to be okay until the fallacy pops up. Then we jump through hoops to evade it.

I don't think that LSP is academic. Take a look at JUnit. The TestCase and TestSuite classes do not violate any expectations of their superclass (Test). I tend to think that LSP is actually a special case of a more general principle of engineering: "you can't do with less than what you need." If you are going to substitute this bit in for another bit, it had better satisfy all contextual expectations. It can do more, but it can never break the expectations of the context, or else the context must be modified. This is true, in mechanical, electrical, whatever. The fact is, there can be a disconnect between LSP and representational modeling. The reason is that our cognitive processes throw a lot more baggage into a concept than what we can place in an artifact that must deal with contextual expectations. -- MichaelFeathers
"The behavior of P is unchanged" - that's not meant literally, is it? If a program draws (immutable) rectangles and we substitute squares, then we want it to draw squares, right? We want P to change.

Maybe Liskov is assuming a certain level of abstraction when we describe the behaviour of P. If we say that "P draws shapes" rather than "P draws rectangles", its behaviour doesn't change. Since Liskov's is the definition that everyone quotes, I'd prefer it to be more explicit about this. It seems core.

See also DesignByContract. The idea of DBC is to specify that part of the behaviour which must remain unchanged (by means of assertions and suchlike). This leads to a more precise notion of subtype. -- DaveHarris
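A tiny DesignByContract sketch along these lines (Python assertions standing in for Eiffel-style require/ensure; the function is illustrative):

```python
def sqrt_contract(f):
    """Wrap a square-root implementation with its contract: the part of
    the behaviour that must remain unchanged under substitution."""
    def wrapped(x):
        assert x >= 0, "precondition: non-negative input"
        r = f(x)
        assert abs(r * r - x) < 1e-9, "postcondition: r*r == x"
        return r
    return wrapped

@sqrt_contract
def my_sqrt(x):
    # Any substitute implementation must honour the same assertions,
    # however differently it computes the result.
    return x ** 0.5

print(my_sqrt(9.0))  # 3.0
```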

The LiskovSubstitutionPrinciple speaks from the point of view of the clients of the interface. When she says "the behavior of P is unchanged", she doesn't mean that the program has the same effect on the world, she means that the program doesn't have to condition its interaction with the objects on whether they are o1 or o2. To the client program, both objects fulfill the contract, so the client can ignore the differences between them. -- ChrisHibbert


Yes. LSP as stated above seems to be about replacing one concrete type with another, and the conditions under which a concrete subtype can be used in every context that another type was used. DBC is less restrictive because you pick and choose what you want to be invariant. LSP is tight, and it points towards things like blackbox reuse. -- MichaelFeathers
I am not referring to the topic of "typing", which much of the above deals with. I am referring to "subtyping". I think that the LSP is the obvious property to ask for. However, I am not at all sure it is the correct or appropriate property to ask for. There are lots of obvious but inappropriate things to ask for in this world. So my tack is to back up and ask: what makes us think we want this thing called subtyping? I think I know why we want typing, but what do we care about (using phrases that don't include the word 'subtyping') that causes us to think that any phrase approximately matching the word subtyping is the right thing to care about? -- AlistairCockburn
To add new functionality to an existing program without mysteriously breaking other parts of it. To be able to just subclass, use the new class in all the contexts where you would use the superclass without making invasive changes. -- MichaelFeathers
LSP as quoted above seems to me a rather simplistic application of set theory to data types and their behaviours. The principle as given above, if you take 'the behavior of P is unchanged' literally (and there is nothing in the quotation to suggest any other interpretation), suggests the following kind of scenario: If we take T to be a 32 bit integer and S to be a byte, then, if for all behaviours of P, 0 <= T <= 255, then S can indeed be substituted for T and 'the behavior of P is unchanged'. This is a trivial principle. Far more interesting is the language of contracts, protocols and interfaces where S can appear to be a T and behave appropriately in all contexts defined for T while at no point in any way being a 'subtype' or even subclass of T. -- SeanOhalpin

No, S cannot be substituted for T without taking the context P into account. Consider

 u:S
 u <-- 255
 u <-- u + 1

 v:T
 v <-- 255
 v <-- v + 1

u does not equal v. While this may seem like a petty objection, it isn't. Provided that T and S share a mutator that could in some event change the state of a T to a value outside of the range of S, and that mutator has the potential to be used in such a way, then S is not substitutable for T. This point alone generally breaks any value-only definitions of subtyping. -- SunirShah
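SunirShah's overflow example can be sketched concretely (Python, with an explicit 8-bit wraparound standing in for the byte type; the class names are illustrative):

```python
MASK = 0xFF  # 8-bit range

class Int32:
    """The wider type T: incrementing 255 gives 256."""
    def __init__(self, v):
        self.v = v
    def inc(self):
        self.v += 1

class Byte(Int32):
    """Same interface, smaller range: inc wraps at 256."""
    def inc(self):
        self.v = (self.v + 1) & MASK

u, v = Byte(255), Int32(255)
u.inc()
v.inc()
print(u.v, v.v)  # 0 256 -- the shared mutator exposes the difference
```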


I don't know if the first example that you gave is really full LSP. In the definition above, she is saying any context that uses T. Your last sentence sounds like full LSP to me. -- MichaelFeathers


In the context of P, where 0 <= T <= 255, T does not have "behaviour" that S doesn't - it has unused capacity. To grab a concept from linguistics, where T may have greater competence than S, its performance in the context of P is within the bounds of S's competence. Perhaps I'm getting confused by the use of 'sub' in subtype. To me this implies a hierarchy of types. Does LSP imply that if S is a subtype of T then T is a subtype of S? Then why use 'sub'? I just think this language is confusing. I could understand it more easily if it said 'something like':

If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is congruent with T.

Also, I think the propositional style ('object of type T') too simplistic to be of any use. I'd be more interested in a definition of substitutability which was in terms of the invariants governing overlapping sets of roles, capabilities or behaviours within certain domains of use (c.f. InterfacesShouldBeAdjectives, DesignByContract). -- SeanOhalpin

1. Excess capacity does not change a program's behaviour in the computational sense, but only in the reflective or external sense. Alistair has shown that reflection breaks Liskov substitutability. In cases like a real-time system where the timing of the operation is important, an embedded system where the code footprint is important, or self-modifying code, logically equivalent symbol strings may be internally behaviourally substitutable, but aren't Liskov substitutable from the external frame. (I.e. their differences do in fact matter, if not from a mathematician's point of view.)

2. S being a subtype of T does not imply that T is a subtype of S. Consider that if, all else being equal, S and T were subtypes of each other, except that S has a magic method called noop which does nothing, then a program P that attempted (v:T)->noop() would fail, as T does not have noop.

3. Alistair's paper contains many definitions of types, including Lano's. Lano's definition considers invariants (albeit not naming them directly). Search Alistair's paper for the word "preconditions". You'll find it. -- SunirShah
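Point 2 above can be sketched minimally (Python; the noop example follows the text, names are illustrative):

```python
class T:
    def work(self):
        return "work"

class S(T):
    """S adds a capability that T lacks."""
    def noop(self):
        pass

# S substitutes for T: every program written in terms of T still runs.
print(S().work())            # work

# But a program written in terms of S may call noop(), which a T lacks,
# so T is not a subtype of S. The relation is not symmetric.
print(hasattr(T(), "noop"))  # False
```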


I tend to think that it all depends on what you are after. I'm not very interested in the instantiability thing that the def. of LSP above implies either. To keep myself happy, I just substitute the word "context" for "program" in her definition. It does seem that hierarchy is implied. Note that if S is a subtype of T, S could be a proper subtype (in the set theory sense) or not, in which case S's type would be the same as T.

True enough, we have been talking about typing rather than subtyping. A subtype just may add extra capability, and that implies that there is a client for it.

By the way, I've just cut out a lot of my byzantine remarks on this page and modified some of the ones that weren't mine when they were in response to my oh so unclear writing. I attempted to be as careful as possible. -- MichaelFeathers
I'd like to point out that the type system of Java, for example, is compliant with the LSP. If I have an object o1 of type S, and T is a supertype according to the Java rules (superclass or extended interface), then there is a second object of type T that behaves exactly the same in all programs. Why, we can just instantiate a second object with the same state. What I'm saying is that if two types are Java subtypes, then they are subtypes according to the LSP. Or for the mathematicians:

S instanceof T => S lspsubtypeof T

Of course, Java subtyping is more restrictive than LSP. The problem with mathematical definitions like the LSP is that they are minimal. Any statement I can make about a subtyping relationship according to the LSP will also be true for the subtyping relationship in Java. -- ThomasMaeder

I believe that this is wrong. The LSP says that the program will behave the same after substitution, which means that the sub type must not only present the same api, but also the same behavior. Java (along with most commercial programming languages) only imposes the api. So, for example, two classes might implement methods like open, readRecord, close, but only one would allow open to be called twice without an intervening close. -- DaveCleal
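A sketch of DaveCleal's open/readRecord/close example (Python; the classes are illustrative, not a real file API):

```python
class RecordFile:
    """Informal contract: open() and close() toggle a session.
    This implementation tolerates a second open()."""
    def __init__(self):
        self._open = False
    def open(self):
        self._open = True
    def close(self):
        self._open = False

class StrictRecordFile(RecordFile):
    """Identical API, narrower behaviour: double open is an error.
    The type system sees no difference; the client program does."""
    def open(self):
        if self._open:
            raise RuntimeError("already open")
        super().open()

f = RecordFile()
f.open()
f.open()                 # tolerated by the base class
g = StrictRecordFile()
g.open()
try:
    g.open()
    ok = True
except RuntimeError:
    ok = False
print(ok)                # False -- same signatures, different behaviour
```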

For an actual example, see java.sql.Date, which extends(sic) java.util.Date by making time-related methods throw IllegalArgumentException. It also doesn't have the same constructors. -- interjection from DanBarlow

The only language that really attempts to allow behavior specification is Eiffel, with pre and post conditions, but even that is rather limited. -- DaveCleal again


Yes. In fact, BarbaraLiskov and JeannetteWing? called their work BehaviouralSubtyping?. There are some good papers at http://www.dstc.edu.au/AU/research_news/odp/typemgmt/related_work/behav_subtype.html. -- MichaelFeathers


Doesn't dynamic Java mess this up? Can't the client function ask for the metainformation, then fish around in the protocol offered by the argument object until it finds something interesting to call? That seems to me to mess up all the substitution propositions. p.s. I added a new link at the top of this page to Jeannette Wing's recent description of typing and subtyping. -- AlistairCockburn
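The "fishing around via metainformation" worry can be sketched like this (Python's hasattr standing in for Java reflection; names illustrative):

```python
class Base:
    def run(self):
        return "base"

class Derived(Base):
    def extra(self):
        return "surprise"

def client(obj):
    # The client inspects the object's protocol at runtime, so its
    # behaviour now depends on which subtype it was given.
    if hasattr(obj, "extra"):
        return obj.extra()
    return obj.run()

print(client(Base()))     # base
print(client(Derived()))  # surprise -- substitution changed P's behaviour
```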
LiskovSubstitutionPrinciple defines a type hierarchy. You may not find type hierarchies in your programs in all the places you might expect them. In particular, not all class hierarchies (see the examples using quadrilaterals above) are type hierarchies.

LiskovSubstitutionPrinciple is about subtyping, which relates strongly to polymorphism. To my way of thinking, polymorphism is the source of most of the power of OO. I think there must be people who believe the power comes from the re-use that comes from subclassing. Polymorphism is what makes it possible for the clients of an interface (those dependent on a type) to not care which implementation is used at any point in the code. Subclassing makes it possible to re-use the methods provided by a superclass to provide different behavior.

LiskovSubstitutionPrinciple defines the subtype relationship in a type hierarchy. Some of the objections above seem to be of the form "I look at my system, and find subclasses that don't match this definition." This is a fine situation in many systems. (especially those without strong typing.) In systems and languages with strong typing, the type hierarchy implies substitutability. One of the strengths of Java is that it makes it clear that the type hierarchy and the implementation hierarchy can be separate.
-- ChrisHibbert


I think that ChrisHibbert said this just about right. I equate LSP with PolymorphicSubstitutability, which does not just pertain to Objects, Classes, and Types - but to other sorts of entities as well. Use Cases, for example. Or relationships.

When we say that a Composition <is a> Aggregation <is a> Association, aren't we invoking the LSP? At least in spirit? OPEN is a method that formally separates the notions of SubTyping? and SubClassing? - SubTyping? implies LSP, SubClassing? does not - it implies reuse of code. The UML has a stereotype of inheritance called <implementation>, which implies that the difference is acknowledged there, as well.

Perhaps we have troubles with this because our languages do. In practice, I believe that SubTyping? is very important in Domain Models, or models that describe what a System ought to do. When we have a Design Model, or model of the code, we usually need SubClassing?.

-- DanRawsthorne
I finally took the time to sit down and figure out just what bothers me about LSP. I think the goal is simply impossible, as stated. I wrote the result in http://members.aol.com/humansandt/papers/subtyping.htm, so I won't repeat it all here. But the short of it is that there are at least 4 different execution environments, and LSP makes sense in two of them and not in the other two. The question "Is a square a subtype of a rectangle?" is not even a meaningful question unless you (a) define the particular operations of the square and the rectangle you are discussing, and (b) say which of the 4 execution environments you intend for your answer to apply to. The answers are Yes, Yes, No, No, No. I started ContextSensitiveSubtyping to continue the conversation.

More recently, I ran across the notion of a collection of letters and sounds that one of course thinks points to a meaning, but actually doesn't point to anything. LSP is the first of these I conjured up. The word "why" is the second. -- AlistairCockburn
I remember being surprised by all the apparent confusion over subtypes expressed by various posts in comp.objects years ago. The simplest reason I can give for my surprise is that I learned OO from the Smalltalk point of view.

Smalltalk is best described as strongly but dynamically typed. The Smalltalk programming environment, replete with examples of reflection and of non-inherited polymorphism, rapidly trains the practitioner as to the distinction between subclass and subtype; declaration and behavior.

The notion of subtype in most Smalltalk dialects is informal, perhaps subconsciously related to "the set of messages which will not result in a doesNotUnderstand exception". The recent addition of protocols to DolphinSmalltalk and BennySadeh's SmallInterfaces? for VisualWorks each begin to formalize the notion, in a manner similar to the specification of interface in Java.

If one augments the LSP phrase "across all Programs" with the implied "(in a given execution context)" the desired result is obtained. The problem appears to stem from a practice we all too often engage in, that of extending our Universe of Discourse without consciously considering the implications. By including new operations such as access and mutation via reflection, or by modifying constraints w.r.t. simultaneous access, we extend that universe, and must revisit our definitions before resuming the process of reasoning about them with the tools of the previous universe.

One can transform the definition of program, or the definitions of type and subtype, and the meaning of LSP is preserved. Smalltalkers are forced to confront this transformation up front, having no types to mislead them, whereas those versed in the wherefores of static types are sometimes coerced into a subtly different interpretation, relating subtypes and declared types more closely.

Subtype is perhaps an unfortunate term, as the concept it represents differs from many other notions of type. Nor does it define a hierarchy of types.

Subtypes classify behavior, specifically distinguishing between those named groupings of behavior that are polymorphic with respect to one another, from those that are not. -- JimSawyer?

Perhaps it would have been more interesting and correct to say that 'types and subtypes classify behavior'. I think that DesignByContract is much more applicable here than MichaelFeathers gave it credit for above. Were it up to me I'd define a "type" as a set of signatures along with the preconditions and UnitTests (postconditions) for each of the methods and the invariants for the entire interface. Subtypes, then, would be types that have the same signatures as their supertypes but which may have removed preconditions or added postconditions. Having defined said "types" we can write programs that rely on the behavior of those types to operate correctly and be assured that we may freely substitute different objects of the same types (or subtypes of those types) and still have programs that behave correctly -- although perhaps differently. This may or may not be what BarbaraLiskov meant, but it is certainly the sort of substitutability we rely on for purposes of introducing flexibility into our programs (which is not to say that it is the only form of flexibility available). -- PhilGoodwin
Neatly stated and very close, Phil. I like the weakened preconditions and strengthened postcondition. But it still doesn't protect you from reflection, and is stronger than necessary for subtyping in a database, where only the space requirements are of interest, there being no behavior of interest. I reiterate, at the risk of repeating myself, that the discussion of subtyping cannot take place without stating the category of programs being addressed. Subtyping is not a binary relationship subtype(S,T) but a ternary one subtype(S,T,P). -- AlistairCockburn
I don't see why an object's reflective properties or size cannot be defined as part of its type. Type says as much about the kind of program that can successfully use an object of the type as it does about the object itself. Here's a broader view of what I'm saying:

Type: To me, a type is a contract. It has two parts: what the client supplies (preconditions) and what the server delivers (postconditions). I think that it is fair to include anything which is programmatically detectable in the contract (including object size and reflective properties). In DesignByContract terms, "invariants" are postconditions with the special property of always being true between state changes of the object. In order to be useful, a type must be defined using a test suite. These suites are often generated and run on the fly by compilers, which, I think, has led to the false assumption that the definition of a "type" can only include those things which a compiler can automatically detect.

Substitutability: If a program supplies all of the things that are required of it by the contract, and depends only on those properties that are guaranteed by the contract in order to function correctly, then any object of the type defined by the contract can be used by the program and such use will result in correct execution of the program. Such objects can vary in any way that is not constrained by the type contract and still be used in the program. This statement can only be true if the contract is allowed to include any programmatically detectable property of the object or program. The correctness of such a combination can only be guaranteed if tests exist to demonstrate that both the object and the program conform to the contract.

Subtypes: One interesting property of types defined in this way, and the reason I keep using the word "type" instead of the otherwise more descriptive "contract", is that certain features of the definition can be manipulated to create families of types. If an object doesn't require all that the contract says will be supplied to it, or if it delivers more than the contract specifies, or both, then it still satisfies the contract. In this case we can define a new contract and write new programs against it that can use the new object and still function correctly. The type defined by such a contract is said to be a subtype of the original type, and all objects that satisfy this contract can be substituted into any program that is written against the original type.

It is my understanding that there have been wars fought over the definition of the word "type". I have no interest in fighting such a war. I only intend to point out that this particular (expansive) definition of "type" allows the LiskovSubstitutionPrinciple to be correct and forms the basis of the EiffelLanguage's inheritance-based type system. This system even allows for the removal of preconditions and addition of postconditions in subtypes. So if "type" is the wrong word, I apologize for using it, but the concept remains both useful and in use. -- PhilGoodwin
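Phil's rule (same signatures, possibly weakened preconditions and strengthened postconditions) can be sketched as follows; the classes are illustrative:

```python
class Finder:
    """Contract: find(xs, x) requires x in xs; returns a valid index."""
    def find(self, xs, x):
        assert x in xs               # precondition
        return xs.index(x)

class LenientFinder(Finder):
    """Weakened precondition (x may be absent) and strengthened
    postcondition (always returns an int, -1 when absent): a legal
    subtype move. Every correct client of Finder still works."""
    def find(self, xs, x):
        return xs.index(x) if x in xs else -1

print(Finder().find([1, 2, 3], 2))         # 1
print(LenientFinder().find([1, 2, 3], 2))  # 1: accepted everywhere Finder is
print(LenientFinder().find([1, 2, 3], 9))  # -1: extra capability, no broken promise
```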
I just want to second (and connect) two important points I wish I had read up front:

1) As Alistair points out, "subtype" (as defined in the LSP) is not a binary relation: you need to know something about the programs that use (accept?) T.

2) As Phil points out, you also need to know something about what the types mean semantically: looking at the syntax of their signatures isn't going to cut it. And he's right: we really, really want it to be a matter of syntax, because then we can get our compilers to check it.

It seems to me that if you accept Phil's tighter notion of type (using tests to ensure semantic properties, such as preconditions being honored by the programs and postconditions being honored by the types), you've said all you need to about the programs, and "subtype" becomes a binary relation on (much more thoroughly described) S and T again.

Going semantic also patches the great big hole in the LSP as stated above: "What do you mean, 'the same behavior'?" (To be fair to Liskov, the statement seems to be presented as an informal starting point, rather than anything that's supposed to be a precise and generally useful answer to the question.)

Finally, a question: WhatIsCovariance? (Please answer there.) -- GeorgePaci

No, it's far worse than that. The representation of P to the external frame(s) Meta[P] (maybe Quality[P]?) may also be considered interesting. For real world examples, the execution speed, the code footprint, the memory consumption, the level of bytecode obfuscation, the digital signature, etc. all matter when distributing Java applets to embedded devices. Suddenly, for all but the trivial subtype S of T (i.e. S === T), there is no valid subtyping relationship. -- SunirShah


I fail to see where reflection breaks the LSP. On the other hand I find it dangerous to introduce the client program in the subtype-of relation, making it a ternary relation. In any case I fail to find a useful application of such a ternary relation.

Alistair's basic example is

 class Client {
   // ...
   public boolean usable(Ellipse1 ellipse) {
     return ellipse.getClass().getName().equals("Ellipse1");
   }
 }
Well, there's something wrong with this particular example. While stricto sensu the behaviour of the program will be modified when a subclass of Ellipse1 is passed in, I think we are using too strong a definition of the program's behaviour, namely that it is the exact binary output.

Which, by the way, can be broken even if we use a SecureRandom? or an external source of non-determinism, not only by reflection.

But let's assume that the behaviour of a function (and, generalizing, of a program) is specified only by the valid constraints (equations) that we can justifiably define linking input values and output values, using function composition in general, even across several types.

For example, for an Ellipse type we can have a constraint approximately formulated as 'for any e that is an Ellipse, area(e) > 0'; the constraint should hold for all subtypes of Ellipse. The constraint is independent of any client program, and the breaking of this constraint breaks the idea of a subtype of Ellipse.
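Such a client-independent constraint can be written down as a minimal Java sketch (EllipseV, CircleV and Invariants are invented names, and the semi-axes are assumed positive): the check mentions no client program, so it applies uniformly to the base type and every subtype.

```java
// A value-style ellipse with positive semi-axes (assumption of this sketch).
class EllipseV {
    final double a, b;                 // semi-axes
    EllipseV(double a, double b) { this.a = a; this.b = b; }
    double area() { return Math.PI * a * b; }
}

// A circle is an ellipse with equal semi-axes.
class CircleV extends EllipseV {
    CircleV(double r) { super(r, r); }
}

class Invariants {
    // The constraint from the text: for any e that is an Ellipse, area(e) > 0.
    // No client program appears here, so it can be checked for any subtype.
    static boolean holds(EllipseV e) { return e.area() > 0; }
}
```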

In the code above, everyone should agree that "the actual ellipse parameter is exactly an instance of class Ellipse1", or "the method should always return true whenever a value of an Ellipse subtype is passed in", is not such a valid constraint, and consequently we can blame the lousy programmer who made the assumption here, not the LSP.

On the other hand, if we define subtype-of as a ternary relation involving a particular program, as Alistair proposes, we are in for trouble, because it will be very difficult to reason about type relations and behaviour before we know the client program, and we can't easily reason about the program itself until we have some basic axioms about the types it is using. Alistair doesn't specify how we should break this recursion, so my idea is that we should strive to keep the definition of the subtype relation outside the client program. -- CostinCozianu

The other thing that I take issue with is the formulation. I believe that LSP is falsely believed by some few to be a principle of OO design, when really it isn't.

Well, I believe it is a good principle for a particular theoretical model, and as far as I can tell, there are several valid OO models and theories. So how can I falsely believe? What do the other many put in its place?
This is basically faulty reasoning: "and consequently we can blame the lousy programmer that made the assumption here, not the LSP....because it will be very difficult to do a reasoning..."

That's like saying you don't wish to accept quantum mechanics or nonlinear dynamics because they are too hard to compute. Nature doesn't ask if you think it is easy to compute or not. It just acts.

I didn't invent LSP (and I wouldn't). What I am saying is that evaluated according to its own definition, it is not true. Anyone who builds an argument based on its truth is building an argument on false premises. It doesn't matter if, as a corollary, it turns out that the meaningful version of the theorem is hard to compute.

LSP started out by saying, "what we are after is something of the form" (or similar, I don't have it here). Well, 'something of the form' needs the ternary relation. If it did that, then research would proceed in a much more profitable direction. Also, if people said, "LSP is a not-bad heuristic for getting started with OO designs", then I also would have no trouble, and also that would be very useful.

"Falsely believe" means believe in something that is incorrect. There are lots of incorrect things you can believe in. LSP seems to be one of them. Why do you insist on something to put in its place? I could believe that wearing string around my wrist will guarantee project success. What belief should I put in its place? Perhaps none. -- AlistairCockburn

You didn't address the issue at hand. If you define the behaviour of a program as strictly its raw output, then yes, reflection breaks LSP.

If you define the behaviour of a program as a set of constraints that relate its inputs to its outputs, then LSP is not broken in any way.

What's the flaw here? -- CostinCozianu
And the other logical flaw is comparing LSP and OO with quantum mechanics and facts.

The underlying reality of the physical universe is widely accepted as fact. So you can say Newton's physics principles are not quite close to reality, while quantum mechanics appears to be closer.

In Object-Oriented Software Development we're not dealing with the underlying reality of a one and only true Object-Oriented Model / Object-Oriented System.

The systems are constructed by us, and there have been several formal systems proposed. A system can be valid or invalid according to its own set of rules, and not according to an external reality; that's also the difference between physics and mathematics. So if you find that OO theories based on LSP don't cover reflective languages, then you can't draw the conclusion that LSP is broken. -- CostinCozianu

Costin, I refer you to the paper I referenced at the top of this page, and the fact that it is continuously considered by the experts as being "too obviously true" to be published as a research contribution. Substitutability is demonstrably not a binary relation, it is a ternary relation. Wishing it otherwise won't make it otherwise. LSP therefore cannot be considered any founding principle for OO, whether we ever find a replacement or not. -- AlistairCockburn

Alistair, you can credit me that I studied the paper before jumping into the discussion. I can give you lots of links to research documents on this matter, some of which I haven't had enough time to study with paper and pencil in hand, but believe me, I did some research. It may be regrettable that experts consider it "too obviously true" and do not clarify it further.

To cut it short, I'll give you a reference to a very practical and clear approach, IsaCircleAnEllipse, where C.J. Date makes a simple and clear distinction between objects as values (ValueObject in wiki terms) and object variables (the general-case object instance as we know it).

Though what I find fundamentally wrong in your approach, as well as in Date's approach exposed in detail in TheThirdManifesto, and in many other people's approaches, is the presupposition that there can be only one true OO theory, and each considers it has to be his way!

There are several theories, each of them valid with respect to its own axioms. I wasn't quite aware of that until recently, when I wanted to trace the roots of any OO theory at all (puzzled by Date's affirmation in one of his books that OO has no theoretical foundations at all - as opposed to the well-known mathematical foundation of the relational model), and the book that I discovered recently, PrinciplesOfObjectOrientedSoftwareDevelopment, gave me all the links I needed, and a very good synthetic treatment of the major directions.

Therefore, I strongly contend that LSP may not be valid within your favourite theory of OO, but it is very sound in other systems. Usually an argument may erupt here that a particular theory is closer to modelling the real world in a natural and useful way... However, we first have to acknowledge the plurality of valid approaches to OO. -- CostinCozianu

Costin. You embody what you criticize. Alistair is specifically saying that LSP is dependent on the context, thus creating a ternary relationship. You say you are disagreeing but then go on about the same thing, that there are many versions of OO, i.e. one must deal with a variable context. So you are contradicting yourself. Alistair is not saying there is only one OO. You made that up. Alistair is not saying LSP cannot work. You made that up too. He is simply saying that it cannot be considered axiomatic to all OO approaches. Which you disagree with, then restate. Why must you disagree?

Well, because he says something very different than what I say: that no matter what the OO theory, LSP is always wrong unless one defines "subtype-of" as a ternary relation (T1 subtype-of T2 only in the context of a program P). You can stretch my position to generate a ternary relation; what I'm basically saying is that within a particular OO theory "subtype-of" can be a binary relationship, and there are several systems that make subtype-of valid as a binary relation, but I'm afraid you cannot define it as a ternary relation. -- CostinCozianu

(Thanks for doing your research, I appreciate that.) The burden of proof is now on you. In my paper, I give a program P that will invalidate any non-degenerate definition of subtyping. That is, for any even reasonably intuitive and useful definition of "S is a valid subtype of T", and a way to prove S is such a subtype of T, and any such S and T, S is in fact not substitutable for T in P. There is no definition of subtyping in the literature that does not include substitutability. QED. The result has nothing to do with the behavioral properties of S and T, and everything to do with the behavior of P. If you believe there is any other definition of subtype that will survive that proof by contradiction, please supply it. Otherwise all is rhetoric. Here is the test program:
 class P {
   // ...
   public boolean broken(T inParam) {
     return inParam.getClass().getName().equals("T");
   }
 }
(as for "theory of OO", I do not subscribe to any... I care more for good software than theory, and have yet to see any OO theory that is both sound and produces good software. That perhaps is why I object to the great acclaim given LSP - it is unsound but produces pretty good software, I don't mind it as a first-cut heuristic, but it is bad mathematics.) -- AlistairCockburn
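A runnable Java version of Alistair's test program might look as follows. T1, S1 and P1 are invented stand-ins for T, a would-be subtype S, and the client P; the exact-type check is written as a Class comparison rather than a name string, which makes the same point without depending on package naming.

```java
class T1 { }                 // the base type T
class S1 extends T1 { }      // a candidate subtype S

class P1 {
    // Returns true only when the dynamic class is exactly T1. Whatever
    // definition of subtyping is used, substituting any proper subtype
    // changes the result of this program.
    public boolean broken(T1 inParam) {
        return inParam.getClass() == T1.class;
    }
}
```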

Alistair, a simple example of an alternative inheritance model is given in TheThirdManifesto.

Before we continue this discussion I'd like to ask you what exactly is the most current reference on Barbara Liskov's position on LSP. Is what you quoted in your paper from ("Barbara Liskov, "Data Abstraction and Hierarchy", SIGPLAN Notices, 23,5 (May, 1988).") the current widely accepted definition of LSP?

I also ask you that because I've seen several authors (Rumbaugh, Meyer) giving several very informal definitions of LSP (probably the way they understand LSP), while Elien in PrinciplesOfObjectOrientedSoftwareDevelopment gives a quite extensive formal treatment of LSP without actually quoting the original definition - see chapter 10, Behavioral refinement.

Much like you, I don't really care about OO theory for the practical programming I'm doing, because theoretically sound OO languages have not gained industry acceptance.

However, I do hope that we are witnessing a situation analogous to what happened to those programming database applications in the 70's. The relational theory was in place and was debated and developed, but practically no application development conforming to the relational model could have been done. Nowadays, I enjoy programming against almost theoretically sound relational databases. And because I'm quite young, I hope that someday I'll be able to program in an almost theoretically sound OO language. -- CostinCozianu


I used Chris Date's material from Date, C., Darwen, H, Foundations for Object/Relational Databases, Addison-Wesley, 1998. In that, you could watch his definitions change as he went from considering strictly data storage to programs. This is inexplicable and insulting unless you have in your mind the third term in the ternary relation - then it becomes clear and natural. Recall, my point is that you need that third term: S is a subtype of T with respect to program set P. If you include that third term, life becomes easy. If you drop that third term, it is just like watching people argue over A is Better than B without saying "with respect to evaluator C." You still haven't shown me a binary definition of subtyping that withstands my program above. -- Alistair

Chris Date evolved his typing model quite a bit since the first edition of his book. In the second edition of his book (2000) he presents some explanations about what he changed and why, I wasn't paying attention to that subject though.

However, his point of view is quite simple: you have values and variables corresponding to a type. A value is immutable; it pre-exists its creation, as a mathematical abstraction. A variable is what we usually know, very much like an instance: it can contain distinct values at different moments in time.

A value always has a definitive type, as it is immutable. A type is defined as the mathematical set of all values it contains. Consequently, the subtype relation is just set inclusion. So a circle value is always an ellipse value. In order to have a consistent type system, he then goes on to require that the set of all types together with the subtype-of relation should be a lattice; that is, for any two types T1 and T2 there should be a type inf(T1, T2) such that for any T' having T' <= T1 and T' <= T2, it follows that T' <= inf(T1, T2), and symmetrically for sup(T1, T2). There should be two distinguished types, analogous to the total set and the empty set: the total set is the root of the lattice, every type is a subtype of the total set, and the empty set is a subtype of all the types in the lattice. Those two distinguished elements have some funny names in Date's book that I don't recall exactly right now, and I don't have the book nearby.
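These definitions can be sketched by modeling a type literally as its set of values, with subtype-of as set inclusion and inf/sup as intersection and union (in the powerset lattice both bounds always exist, so the lattice requirement holds by construction). This is a toy illustration of the stated definitions, not code from Date's book; SetType and the value strings are invented.

```java
import java.util.HashSet;
import java.util.Set;

// A type as the set of its values; subtype-of is set inclusion.
class SetType {
    final Set<String> values;
    SetType(Set<String> values) { this.values = values; }

    boolean subtypeOf(SetType other) {        // T1 <= T2
        return other.values.containsAll(this.values);
    }
    SetType inf(SetType other) {              // greatest lower bound: intersection
        Set<String> s = new HashSet<>(values);
        s.retainAll(other.values);
        return new SetType(s);
    }
    SetType sup(SetType other) {              // least upper bound: union
        Set<String> s = new HashSet<>(values);
        s.addAll(other.values);
        return new SetType(s);
    }
}
```

With Circle's value set a subset of Ellipse's, `circle.subtypeOf(ellipse)` holds, inf(Circle, Ellipse) is Circle, and sup(Circle, Ellipse) is Ellipse.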

I apologize if this is already familiar to you, but just in case this has been drastically changed from the first edition, or in case somebody who doesn't have the book wants to watch or intervene in the discussion.

Please remember that even if this model goes against the current OO trends and common practices, the type and inheritance model was conceived by Mr. Date in order to function within a "data language" for an ideal theoretical database, not as a general purpose programming language. For me, it seems quite logical that the relational model should forego the instance notion as we have it when programming in Java, C++, Smalltalk, because the whole relational model was conceived and evolved based on the fundamental notion of value: primitive value, tuple value, relation value. Date extended the set of primitive values with the notion of object value, in order to support objects as valid attributes within a tuple, and thus provide extensibility to a database engine.

After this general overview, I can show you that the principle of substitutability as stated by Date for values really holds: an object value of a type T can always be replaced with a value of a derived type. Your assertion that reflection breaks the subtype-of definition doesn't hold here, first of all because the language doesn't support reflection, and second because values are immutable and any program conceived for a value input of declared type T really is conceived for the whole set of values, thus necessarily including subtypes as subsets of values of the parent set of values.

Since you cannot modify values, your counter-example where you modify a circle and make it an ellipse doesn't hold in this case. You just can't modify it.

At this point I assert that Date probably should have stopped there, having a perfectly consistent and simple OO and functional (side-effect free) language.

But he adds variables in the sense we all know. A variable has a declared type, and it may contain at different points in time any value belonging to a subtype of its declared type (of course, this is also necessarily a value in the declared type as a set). The exact type of a variable's value at a given moment is the Most Specific Type of the variable.

Now the substitution principle for variables follows logically as a contravariant counterpart to value substitutability. A variable v1 of declared type T1 can always be substituted with a variable v2 of declared type T2 IF T1 <= T2 (T1 is subtype-of T2), which works beautifully even if it is contrary to what we are used to in C++, Java and the like, though a contravariant principle is present in the latest version of C++ on almost the same logical grounds as Date's principle for variables.

In order to visualize what happens we need three types, since circle and ellipse are no longer relevant (a valid program will not be able to modify a circle variable so that it contains an ellipse value; this will generate either a compile-time or a run-time error). But consider we have Circle <= Ellipse <= GeneralCurve?. And we have the following variables, using an approximate C++-like notation, since Date's object variables are pretty close to C++ references:

 GeneralCurve& var1;
 ...
 void transform(Ellipse& var2)
 {
   var2 = function(<valueOf> var2); // the function has to return a value of type at least Ellipse
 }
 ...
 main(...)
 {
   var1 = Circle(0, 5);
   transform(var1);
 }
You can see now that it works well because of the contravariance. Whatever value is assignable to the formal parameter var2, an object variable of declared type Ellipse, has to be at least an Ellipse value, and therefore it is necessarily assignable to a variable of the declared supertype, such as var1.
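Java generics happen to offer a way to sketch this contravariant substitution: a consumer of a supertype can stand in wherever a consumer of the subtype is expected. The class names below are invented, and this is only an analogy to Date's variables, not a transcription of his model.

```java
import java.util.function.Consumer;

class GeneralCurve2 { }
class Ellipse3 extends GeneralCurve2 { }
class Circle3 extends Ellipse3 { }

class Transformer {
    // Every value handed to 'sink' is at least an Ellipse3, so any consumer
    // of Ellipse3 or of any of its supertypes is acceptable (contravariance).
    static void transform(Consumer<? super Ellipse3> sink) {
        sink.accept(new Circle3());   // a Circle3 is also an Ellipse3
    }
}
```

A `Consumer<GeneralCurve2>` (the analogue of var1 above) can be passed where a `Consumer<Ellipse3>` is expected, mirroring the T1 <= T2 rule for variables.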

Again, Date's proposed OO extension for a database language does not contain reflection.

I'll come back later on the problems of reflection and your approach with subtype-of as a ternary relation. -- Costin

And I'll come back when I have time to discuss your presentation of Date's ideas. They are handled in my paper. Roughly, he introduces a pure data storage environment with properties P1, then presents a subtype relation that works in P1 but not some other P2. Then he moves to computational environments P2 and introduces another subtype relation that works there. Neither of these works in P4, which allows reflection. All of which supports my claim that "subtype" (more exactly, substitutability) is only a meaningful relation when described - as you just did - in the context of an environment of use. Something they don't highlight. But once you notice it, you notice that they make use of it all the time. -- Alistair

Alistair, maybe I can see your idea better now. Either it was confusing in your paper, or I don't know enough English, or it was just too subtle for me. You're saying now that subtype-of can only be defined within an environment, which I interpret as an OO system (language, theoretical model, database, runtime, etc.). What I initially understood from your paper was that subtype-of was a ternary relation defined on Types x Types x Programs. Therefore you asserted that subtype-of changes (and probably needs to be verified again) for each and every client program, which would be too painful in my view.

Now I interpret what you are saying as: subtype-of is defined on Environments, which I can interpret as classes of programs; therefore you need to verify only once for each class of programs, where a class of programs means programs compiled or running in a specific environment (OO system). For example, one class of programs can be programs that are developed and run in environments without support for reflection.

Did I get your idea right this time? -- CostinCozianu

Yep. Now you're onto it. To test whether we're still in sync, let's move from the smallest to largest 'environments.' Suppose some / any definition of subtype that you like, and suppose a type (not merely class, but anything that has a type) S and a type T. You want to go to the bother of proving that S < T (meaning S is a subtype of T).

To make your own life easy, you only have this one program 'p' (I'll use lower case for an individual program and upper case for sets of programs). Now, your only job is to prove that S is substitutable for T in 'p'. That's easy, if p is nice and small. You look at the code in 'p', and say, "Sure thing, S is substitutable for T in p." Done.

I actually think there is an application for this sort of proof in certain situations with certain programs, and it could make someone's life easier.

Next, let's move to a class of environments P1. Let's pick simplistic databases. They don't execute code, they only store data. So "substitutable" or even "subtype" is pretty easy to arrange: just make sure the subtype occupies less space than the supertype. (point < circle < ellipse < closed curve). For this class of environments, proving S < T is simple, and making use of that is simple.

Next, let's move to a class of program environments with no subclassing, P2. Now the definition of subtype has to pay attention to function signatures, but it can be done to prove that some S < T.

FORTRAN programs are in the P2 category, so the assertions "Subtype means ... in P2 environments, and S is ..., and T is ..., and provably S < T in P2 environments, and my program p is a P2 category program (therefore S<T in my program)" is a useful construction.

Next, let's move to a class of programs and environments in which subclasses can override superclass methods (call it P3). Bad stuff starts to happen to the subtype definition, because of the overriding, and the multiple-client problem. From reading the literature, I don't think this problem is soluble, i.e., I don't think there is a definition of subtype that is sound and usable in this environment.

Finally, let's add reflection (P4). This breaks all the formal definitions of subtype, because of the reflective program I gave earlier. I don't think there is any S that is a subtype of any T in the reflective environment.

Java programs are in the P4 category, so no mathematical-strength subtyping assertions can be made. Which makes sense. (LSP is still a handy starting heuristic, but you can't take it to the bank.)

Are we still together, or did we split apart somewhere in the middle? -- AlistairCockburn

Yes, now we understand each other. I have some comments however.

First, I don't understand what the purpose is of defining LSP outside a specific OO system. There can be several systems, each with its own set of axioms. In many of those systems LSP will hold, either as an axiom (definition) or as a theorem if another definition of "subtype-of" is given. In other systems it will be broken; no problem with that.

Second, I still don't see how reflective programs break the LSP, even with the program that you exemplified. I claim that the LSP definition as you cited it in your paper (and I verified with other independent citations) is very open to interpretation. Not having access to the original paper as yet, I'll abstain from claiming that I know what exactly Barbara Liskov meant, or what her position is now (I think she can be trusted to know what exactly a principle that bears her name is supposed to mean). However, I also noted that several authors took the matter of interpretation into their own hands, so maybe the principle was fuzzy from the very beginning after all.

The key to interpreting the principle is in how we define the behavior of a program P. PrinciplesOfObjectOrientedSoftwareDevelopment has a whole chapter on this, and from there I traced down probably thousands of pages of hairy, scary theoretical stuff (even for a theory enthusiast like me). I'll probably try to select some of them and study them with pen and paper if I have the time.

However, the general idea is that the behavior of a program should be understood as a set of constraints (logical formulae defined in an assertion language) that relate the inputs to the outputs. As long as the constraints (equations, if you want) are verified, the behaviour is unchanged.

Let's take for example a system that deals with numbers, where we have the obvious N < Z < Q < R < C for natural, integer, rational, real and complex numbers. And let's have it support reflection.

Thus if we have a program
 p1 : C -> C
 p1(C c)
 {
   return c + 1;
 }
then we can formulate a clear equation o = i + 1 (o being the output and i being the input),

but if we have
 p2 : C -> boolean
 p2(C c)
 {
   return c.getClass().getName().equals("C");
 }
Then the LSP is broken ONLY if we define its behavior as the equation o = true.

So we'd be back to square one: again we would have that "subtype-of" is a ternary relation, related not to classes of programs but to each program (piece of code) individually, PLUS the specification of its behaviour.

In this case I contend that it is more beneficial to judge it as follows: the subtyping is really well defined and N < Z < Q < R < C always, while the programmer who defined the program p2 made a logical mistake. Either (a) his definition of the behaviour of p2 is wrong, OR (b) the behavior is well defined but his implementation is bad - after all, if he had the behaviour specification, he could just return true. :)
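To make option (b) concrete: if the intended behaviour of p2 is "answer whether the argument is a C value", a substitutable implementation tests type membership rather than the exact dynamic class. ComplexNum and RationalNum below are invented Java stand-ins for C and Q.

```java
class ComplexNum { }                       // stand-in for C
class RationalNum extends ComplexNum { }   // stand-in for Q, Q < C

class P2 {
    // The buggy version: true only for the exact class, so substituting
    // a subtype value changes the observable result.
    static boolean exactClass(ComplexNum c) {
        return c.getClass() == ComplexNum.class;
    }
    // The repaired version: membership in the type, true for all subtypes,
    // so substitution leaves the behaviour unchanged.
    static boolean isComplex(ComplexNum c) {
        return c instanceof ComplexNum;
    }
}
```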

Sorry, Costin, you lose on this one. There is an entire research community out there whose research activity is in coming up with a definition of "subtype" such that N is substitutable for C in ALL programs. No blame is permitted to be placed on the programmer. Read LSP, above. It says "such that for all programs P defined in terms of T, the behavior of P is unchanged." Her words, not mine.

As you notice, she says the behavior of P is unchanged. I really need to see her definition of what behavior is. The behavior of a buggy program remains buggy no matter what theory we use. :)

Therefore my tentative conclusion is that LSP should still hold in reflective languages (maybe it can be broken for other reasons). But in any language or any system in this world you can have either bad specifications or bad implementations, or both, and we can use LSP to detect such bugs. -- CostinCozianu

Let's reduce the definition of behaviour to the set of all programs that halt (because I don't know how to express it for the programs that don't halt. ;)

The behaviour of a halting program P is the function that maps the set of all input tuples I to the set of all output tuples O.

So, for instance, a simple program P like

 Circle/Ellipse v
 print v.type.name
 halt
would print

 Circle
in the Circle case and

 Ellipse
in the Ellipse case. Thus, the behaviour of P has changed, showing that neither Circle nor Ellipse is Liskov substitutable in terms of P. Thus, reflection breaks the LiskovSubstitutionPrinciple. -- SunirShah

You broke it on purpose :). We can also use implicit reflection by overloading a function. -- RichardHenderson.

(Great! Thanks, Sunir, for shortening the program to two lines. Much easier. -- AlistairCockburn)

Sunir, you go straight into my argument. You say that the behavior of a program is a function, but in the above example you want it to be a constant function. Should we expect then, that return x+1 will always return 1? -- CostinCozianu


The example does seem flawed. It relies on a limitation of the class query mechanism, doesn't it? We are specifying a function that returns the exact type of the argument, which is not what we are using it for. We want it to understand subtypes too, a much more complex functionality. So this does look like a bug well found. Or am I totally missing the point :) ? This doesn't contradict Alistair's view. It just means we have to watch reflective/environmental logic very carefully. The implied context can still be made explicit, though, leading to an augmented binary relationship. Assertions would work in the type domain - probably passing in all possible subtypes to a function to see if it behaves itself. An error means the implied type identity and the implemented type identity are at variance in the context of usage. This may be the desired behaviour, however. We use type-variant code a lot and use overloading to do it. So an extended type has more power than an LSP subtype, and introduces a new class of 'subtype' bugs/behaviours. -- RichardHenderson

Richard, you're right that an extended notion of subtype (as a matter of fact, we are really talking about subclassing here) gives you more power, which you can use to create a whole new class of bugs. At this point I think both Alistair and I agree, although Alistair prefers to blame those bugs on LSP, while I contend that programmers should be more careful. Passing all the subclass types to a function is a good heuristic start, and I totally agree with test-infecting the code. However, it makes your hair stand on end to think of both the theory of detecting errors and the real practice, because when you write a UnitTest you don't have all the classes that a potential client might use to extend your framework and pass to your function. Therefore TestInfect?-ing the code is a very good heuristic, but it has to be coupled with inspecting the code. This inspection is in principle not bad if it can be facilitated by a compiler, given a sound formal approach to typing in an OO language, which we don't have today. Some people claim that such a language might be ML.

Smalltalk believers claim, contrary to the obvious, that a type system will not do you much good. It is true that a not-so-carefully-thought-out type system stands in your way quite annoyingly (I don't want to start counting how many times I've used the Java cast operator this week), but even considering only this discussion about some very basic examples, you'll have to admit that a strong and sound typing system might be desirable, somehow. -- CostinCozianu


At some point, I'm going to lose track of this conversation... It may be good to state at this point that my own view is this: I sort of hope that the first three of those four are within the realm of our agreement, and that you'll permit me the pleasure of the fourth if you don't personally agree with it - and I won't mind recanting on the fourth if you come up with something. (I'll just let you work on it for the next N years :-) -- AlistairCockburn

Definitely we have a wide fuzzy area of agreement :) Here's what I think.

On (1), that's absolutely de gustibus and I respect everyone's right to his own choice; see what I replied to Richard above. But even if you try to suggest that it is a researchers-only concept, I want to say that Ada programmers are quite happy about Ada's strong typing.

On (2), you can't blame LSP for not supporting is-kind-of. After all, each theory is free to choose its goals. Is-kind-of is a good engineering heuristic that I also use every day. You can't expect that in my Java programs Circles are really Ellipses :) even if I think that theoretically they should be.

On (3), you have to be careful, because if you replace substitutability as a formal approach, you may further hinder reusability. My conclusion above is that for some environments, you'll have to check the is-kind-of relation for each client program and not merely for classes of programs, as you want to.

Yes, those environments are in wide use today, and you're quite right about the classes of bugs that derive from the belief that LSP applies to languages like C++, Java, Smalltalk, or Eiffel automatically and by default, if we just choose to derive from a base class or implement an interface. This is the essential point where I agree with you: the LSP does not apply by default to the OO languages in common use today (a kind of reformulation of your conclusion, if you allow this bastardization).

But on the other hand, how do we know if the above mentioned languages are not going to be the Cobols of ObjectOrientedProgramming over the next X years? That's a really tough question and I don't think we have an answer as yet.

On (4), I really like to think of myself as capable of working for the next N years, but today I already lost 8 hours in traffic school to make up for a speeding ticket, and it really cut off my momentum :) However, I'll probably be able to research it if others haven't done it already. -- CostinCozianu


While my comments elsewhere on the page imply that I think that generic definitions of subtyping are hopeless, I'd like to object to a line of reasoning that both Alistair and I have been applying. Reflection is not a valid rejection of subtyping in general.

While logical types aren't strictly equivalent to the notion in programming, the mathematical arguments applied here, as in Liskov, certainly encompass both camps. Ironically, though, the theory of types was invented in the context of logic precisely to avoid reflective situations like the Russell paradox. The reason the project never worked out is that reflective situations are inherently unresolvable. This shouldn't be surprising; Goedel knew it.

It follows that the only meaningful definition of a type and subtype in logic can come only within the internal frame of the type system. That is, in a situation without recursive, reflective sets. This is still useful practically, such as for the common case of interface substitutability, or the Eiffel/UnitTest case of behaviour substitutability.

Nonetheless, these situations don't characterize software completely, as Alistair had only begun to notice. Due to the very nature of the UniversalTuringMachine, software exists in a hierarchy of frames. That is, I can simulate a complete environment for a program inside my program. Further, software provides the ability for an inner frame to communicate with an outer frame (similar to our universe, if you believe in praying to god). Thus, programs are inherently reflective and recursive, invalidating the type/subtype definitions above.

For instance, C++ provides some information about types at runtime. The programmer has the sizeof operator, the offsetof() macro, RTTI, etc. Thus, a type is not only a mathematical construct created for the program's internal semantic model; it's also a live data structure inside the program, accessible at compile and run time.

We can take this concept much further, though. Not only are types runtime data, they are also data structures inside the compiler/interpreter/environment. Some languages, especially interpreted languages, give the programmer direct access to the internal data structures of the outer frame instead of marshalling and duplicating the information as C++ does. Self allows you to manipulate the slot hash of the prototypes. Perl gives you direct access to almost anything.
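In Java terms, the point that a type is a live data structure the running program can query looks like the following sketch (the Point class is invented for illustration):

```java
public class TypesAsData {
    static class Point { int x; int y; }

    public static void main(String[] args) {
        Object p = new Point();
        // The type is not just a compile-time notion: it is a live
        // object (java.lang.Class) that the running program can inspect.
        Class<?> c = p.getClass();
        System.out.println(c.getSimpleName());              // the type's name
        System.out.println(c.getDeclaredFields().length);   // its layout, as data
        System.out.println(c.getSuperclass().getSimpleName()); // its place in the hierarchy
    }
}
```
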

So, an advanced programmer is fully aware that types are merely interfaces to the outer frame. You declare types to instruct the outer frame how to manipulate this "variable" (for there are no variables) on the underlying TuringMachine, e.g. what machine instructions to output, how much space to allocate on the stack for it, etc. You can also use the types to ask the outer frame about yourself. You can also "hack" the outer frame's manipulation of the TuringMachine by ignoring the so-called semantic role the types are meant to have. Indeed, types by themselves don't do anything. Only the basic manipulation of the underlying TuringMachine does something meaningful. And by meaningful, I mean meaningful to the end user (a human).

I am suggesting the complete theoretic annihilation of the type/subtype exercise. I am further suggesting the theoretic annihilation of PrinciplesOfObjectOrientedDesign as artificial and excessive restrictions. None have any real purpose. Certainly not enough to enter into "object purism" arguments, or type/subtype debates, because it's always legal (if not moral) to thwart the mathematical models to make your programs do what you need to do. Indeed, ease of programming is the only possible avenue for meaning. Just don't let that ever confuse the fact that types are just data structures between you and the TuringMachine. -- SunirShah


Ooh, '...napalm in the morning ;).' I still think that subtyping assertions can be useful. The ternary relationship can be made binary by explicitly accounting for it. We should at least test for behavioral integrity under type substitution where we are expecting it to be true, or not. As for the logical notion of subtype, burn it. Why not? If it looks like a circle and it acts like a circle, hell, it's a circle. When it's squashed, it's not a circle any more. There might be things you can do with both, but let's not pretend they are related by more than convention. Substitutability implies type identity over the range of usage. Calling it a subtype confuses the issue. -- RichardHenderson.


Ok, I may be way off base on this discussion, but something did strike me as odd in Alistair's code example above. I'm going to repeat it here:

 class Client { ...
   public boolean usable (Ellipse1 ellipse) { 
     return (ellipse.getClass ().getName ().equals ("Ellipse1")); 
   }
 }
Now, let's change this code (I realize this may be cheating, but I want to know what Alistair thinks):

 class AnotherClient? { ...
   public boolean usable (Ellipse1 ellipse) { 
     return (ellipse instanceof Ellipse1); 
   }
 }
Now, here's my sticking point: In Alistair's original example, every class type passed into the method usable would fail, unless it was of type Ellipse1. However, in my example, any class type derived from Ellipse1 would return true. Now, what I'm trying to say is that this was a choice of implementation within the service provider. And, as it so happens, could either be correct or incorrect. However, I question Alistair's assertion in his paper that this would be a worthwhile or even desirable implementation technique. He argues to this point in his paper, but I fail to be convinced. In fact, I tend to believe the opposite: Run-time type inspection is to be avoided at all costs. But, more to the point, he simply chose a technique that was flawed. I believe Costin argued this point above, and I guess I'm beating a dead horse, but there is, obviously, a way to implement this method so that it behaves properly to "all programs" that might use it. -- JeffPanici
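For the record, here is a runnable sketch of the difference between the two checks. (The classes are nested here to keep it self-contained, so endsWith stands in for the exact-name comparison, since nested-class names come back qualified from getName().)

```java
public class UsableDemo {
    static class Ellipse1 { }
    static class Circle1 extends Ellipse1 { }

    // Alistair's original check: an exact-class comparison rejects subclasses.
    static boolean usableExact(Ellipse1 e) {
        return e.getClass().getName().endsWith("Ellipse1");
    }

    // Jeff's variant: instanceof accepts subclasses as well.
    static boolean usableInstanceOf(Ellipse1 e) {
        return e instanceof Ellipse1;
    }

    public static void main(String[] args) {
        Ellipse1 e = new Ellipse1();
        Circle1 c = new Circle1();
        System.out.println(usableExact(e));      // true
        System.out.println(usableExact(c));      // false: subclass rejected
        System.out.println(usableInstanceOf(e)); // true
        System.out.println(usableInstanceOf(c)); // true: subclass accepted
    }
}
```
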

Hi, Jeff - - - The question isn't "can we write a program that can wriggle its way around the flaws in LSP?" The question is whether LSP could be true. We only need one counterexample and it becomes not true. I like SunirShah's counterexample best, so far:

 program(v){
   print v.type.name
 }

That cute little program demonstrates non-substitutability. Ergo, LSP is not a valid research goal. Did I get your question right? -- AlistairCockburn
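A Java rendering of that counterexample, with hypothetical types T and S, might look like this: even an empty subclass changes the program's observable behavior.

```java
public class ReflectionBreaksLsp {
    static class T { }
    static class S extends T { } // S adds nothing, yet...

    // Sunir's counterexample: the output depends on the runtime
    // type name, so no proper subtype can ever substitute for T.
    static String program(T v) {
        return v.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(program(new T())); // prints T
        System.out.println(program(new S())); // prints S: behavior changed
    }
}
```
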

Yes, you did. I do understand your point that, mathematically, LSP doesn't hold up. I still tend to think that one can use LSP, however, as a guideline when trying to use base-class inheritance (in any modern OO language). I find that many people are surprised by how limited base-class polymorphism can be, and I think that LSP only helps to highlight this weakness. I can accept that LSP falls along the same lines as OnceAndOnlyOnce: a principle, which can be difficult (if not impossible) to follow, but is there for guidance. This was a great discussion, and I'm glad I found it. -- JeffPanici

Actually, the only thing it demonstrates is that the type systems of some languages cannot be used to represent "subtyping by Liskov". This doesn't mean that we must forget about LSP in these languages, or even that we must forget about LSP for our types described in terms of these type systems. It means only that if for whatever reason we want to consider LSP for our objects, we should check that our program contains no LSP-incompatible places that interfere with our reasoning. The words "LSP-incompatible places" here imply not that "LSP is not true", but that there can be programs for which our mapping from "subtyping by Liskov" to language-based subtyping is incorrect.

For example, the even integer value can be a pretty valid LSP-subtype of the integer value, as long as both are mapped into the same language-based type int, with the int.name=="whatever-constant-the-language-runtime-thinks-it-is". If we want to implement the same LSP-types as two distinct language-based types... well, we are probably going to have some headaches, but we cannot blame Barbara for that.

I don't like Liskov's definition of subtyping (I find it too limiting for practical purposes), but that's another story. -- NikitaBelenki

And this program begs for another comment. There are not only languages (most of them) that support unnamed objects; there are also languages (C++, for example) that support unnamed types as well. Imagine that we have such a language, and that it also supports reflection. What should this program print for an object of an unnamed type? ;) -- NikitaBelenki



First, LSP does not define subtyping; it defines what subtyping should be if you wish to be type safe. It is a formal notion that says 'if your language system follows these rules, then you can use the type system to avoid behavior-related bugs in the face of implementation changes to your base class'. It also means that if your language system does not enforce these rules, then you as a programmer will need to do some thinking on your own, or live with the consequences. What we call subclassing is not subtyping as defined by LSP; but the type system acts as if it can reason in a type-safe manner, and the result is that the type system misses bugs that a properly defined and implemented type system could catch. This is the point of the research project as I understand it.

Now it may well be that such a language system can never be built and therefore programmers must take due care to follow the meta principle of the LSP -- but that is in fact the whole point of the LSP and its cousin design by contract.

-- Marc Grundfest
Alistair, I find it disappointing that after all the discussions on this page, you still come up with assertions like "Liskov doesn't hold true". And your example doesn't prove anything; or at least, if you'd bothered to express it with the least bit of mathematical rigour, you'd see that it doesn't. Just saying that print(v.type.name) disproves LSP doesn't hold any water. How exactly does it do that?

I created LiskovWingSubtyping, which is at least a more correct title and the one used in research papers, and this page is already an unrefactorable mess. Please define an LspCounterExample? if you may. -- CostinCozianu

Thanks, JeffPanici, for saying, "I do understand your point that, mathematically, LSP doesn't hold up. I still tend to think that one can use LSP, however, as a guideline when trying to use base-class inheritance (in any modern OO language)." That is exactly the point I wish to make.

Costin, I'm not worried about you being disappointed. I have given you two programs for which LSP does not hold, and you have given me zero definitions of subtyping which hold for those programs. It's your turn to put up. -- AlistairCockburn

More than me being disappointed, the problem is that you, consciously or not, mislead people. You exemplified two programs but you didn't even present a shadow of a proof (you know, like we used to do in math classes). If you bother to do that, you'll see where your judgement is flawed. I'm not going to do it for you, sorry. But I can help you with another "counterexample", which doesn't even use reflection to distinguish between the type and the proper subtype:

Let R be the type Real, I the type Integer, and floor : Real -> Integer the usual function. Obviously we have Integer <= Real. But consider the following program:

 p(v:Real )
 {
  if (v-floor(v) == 0)
     return true;
  else 
    return false;
 }
The program will return true for Integers and false for proper non-integer Reals. If Alistair's reasoning were right, it would follow that LSP is broken even without reflection. Still, LSP is not broken, because the program by itself doesn't prove anything. -- CostinCozianu
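Costin's program, written as runnable Java (with double standing in for Real):

```java
public class FloorDemo {
    // Costin's program p: true for mathematical integers,
    // false for proper non-integer reals.
    static boolean p(double v) {
        return v - Math.floor(v) == 0;
    }

    public static void main(String[] args) {
        System.out.println(p(4.0)); // true: an integer value
        System.out.println(p(4.5)); // false: a proper real
    }
}
```
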

No, Costin. What you have demonstrated is that Integer is not a subtype of Real in the execution environment P2 (i.e., that there is one program that breaks one asserted subtype relation). But that prediction follows naturally from standard definitions of subtype. What I'm saying is that NO useful definition of subtype exists that can handle reflection. -- AlistairCockburn

Well, the program doesn't prove anything, and Integer is a subtype of Real for absolutely obvious reasons. The assertion that the program above disproves "Integer is a subtype of Real" is flawed for the same reasons as your assertion that reflection breaks subtyping.

The program was really an example that you don't necessarily need reflection to distinguish the elements of a proper subtype. The fact that you can distinguish the proper subtype of an element of a declared base type, which is sometimes possible without reflection, doesn't mean that the element is not of the base type also.

And of course the behaviour of the program p above cannot be defined as "always returns true". -- Costin


I feel that the concept missing from this discussion is the mathematical concept of equivalence classes. It is trivially obvious that for two different programs, there exists a mechanism that distinguishes them. For example, just do a diff of the source code.

For any discussion of LSP, we need to define the equivalence relation for the concept of "behaviour of P is unchanged". For example, let's say we have 2 array-based implementations of a stack. The only difference will be that I add some additional padding into each entry, so that one will exhaust the memory before the other.

I can find a number of scenarios that could distinguish these implementations. One is obviously to see how many items can be pushed onto the stack before it fails. Another is to use a wallclock timer to observe the performance (the values of time will be different, so the behaviour of the client is changed by the substitution of different implementations). I'll just concentrate on the out-of-memory behaviour.

Let's try a few predicates:

  (1): pre-push1 => post-push1 and pre-push2 => post-push2

These are necessary for correctness, but don't say anything about substitutability. So let's try:

  (2): (post-push1 <=> post-push2) and (1)

This is a very strong definition of equivalence, but it's not very useful. We still need a way to express the fact that our 2 stacks are equivalent, even though they have different memory failure points. We need to weaken the equivalence. We need to look at relations between preconditions:

  (3): (pre-push2 => pre-push1) and (2)

Now we're getting somewhere. If pre-push2 is stronger (more restrictive) than pre-push1, then the expression will evaluate to true (if (2) is true).

For subtype substitutability we can define an equivalence relationship over a specific set of operations such that the expression analogous to (3) is true for each operation (and interface substitutability holds).

If we require symmetrical substitution, then we need to define a contract that both honor. We then simply require that:

  (pre-contract => pre-impl and pre-contract => post-contract) => post-impl

This implies that the contract would include a common-denominator resource requirement. If we don't want to do that, then we might permit:

  (pre-contract => pre-impl and pre-contract => post-contract) => post-impl or out-of-memory-exception

This final term is the one that's important for weakening the equivalence relation. It's a sledgehammer that overrides the contract. If we are using exceptions correctly, then we could tack on a generic "or exception" to all post-conditions. If there are specific areas of variability you wish to allow, then you can fiddle with the pre/post conditions of the contract to permit them.

Thus the contract defines the basic equivalence relation, and we can weaken it later if we want to ignore factors that are outside the scope of the contract. -- DaveWhipp
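Dave's weakened post-condition ("post-impl or out-of-memory-exception") can be sketched in code. This is only an illustration: BoundedStack, its capacity limit, and the exception type are invented stand-ins for a real memory bound.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ContractDemo {
    // A bounded stack whose push honours the weakened contract:
    // either the post-condition holds (the item is on top), or an
    // out-of-memory-style exception is raised. Capacity stands in
    // for the memory limit here.
    static class BoundedStack {
        private final Deque<Integer> items = new ArrayDeque<>();
        private final int capacity;
        BoundedStack(int capacity) { this.capacity = capacity; }

        void push(int x) {
            if (items.size() >= capacity) {
                throw new IllegalStateException("out of memory"); // the weakening term
            }
            items.push(x);
            // post-push: the item is now on top
        }
    }

    public static void main(String[] args) {
        BoundedStack s = new BoundedStack(2);
        s.push(1);
        s.push(2);
        try {
            s.push(3); // exceeds the "memory" bound
            System.out.println("post-condition branch");
        } catch (IllegalStateException e) {
            System.out.println("exception branch: " + e.getMessage());
        }
    }
}
```

A stack with more padding per entry simply has a smaller capacity, so it takes the exception branch earlier; under this contract the two implementations remain equivalent.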


The discussion of Real and Int above seems to have a rather large mistake. Before I discuss it, I'm going to rephrase the LWS principle in slightly different terms. Here is the original again:

If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.

Rephrased, for any 2 types S and T we have S is a subtype of T if and only if for each object o_s of type S we can find an object o_t of type T such that for every program P defined in terms of T we must have P(o_s) = P(o_t).

Rephrased again (last time) we have S is a subtype of T iff there is some f:(objects of type S)->(objects of type T) such that for any program P defined in terms of T, we have P(o_s) = P(f(o_s)) for all o_s of type S. Let's call f the LWS function.

These ignore the possibility that P takes more than 1 argument of type T but that makes no real difference.

So, leaving reflection out of things for a moment let's look at Real and Int. Is Int a subtype of Real? That's the same as asking: can we find the LWS function f:Int->Real? The answer is yes. f(x) := x + 0.0 does the trick. If we have a program P(v:Real) then P(f(x)) = P(x + 0.0) = P(x) for all integers x.

This includes the example given above

 P(v:Real )
 {
  if (v-floor(v) == 0)
     return true;
  else
    return false;
 }

from which Alistair concluded "that Integer is not a subtype of Real in the execution environment P2". I'm pretty sure Alistair and Costin are mistaken on that since P(f(v)) = P(v+0.0) = P(v) for all integer values of v (both sides evaluate to true). This program does not contradict "Int is a subtype of Real" in any environment.
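A quick check of that argument: with the morphism f(x) = x + 0.0, the floor program cannot tell an integer from its real partner.

```java
public class MorphismDemo {
    // The floor program from the discussion, with double standing in for Real.
    static boolean p(double v) {
        return v - Math.floor(v) == 0;
    }

    public static void main(String[] args) {
        // Fergal's LWS morphism f : Int -> Real, f(x) = x + 0.0.
        int i = 4;
        double fi = i + 0.0;
        // P(i) and P(f(i)) agree, so this program cannot
        // distinguish the integer from its real partner.
        System.out.println(p(i) == p(fi));
    }
}
```
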

Now to consider a certain style of reflection - call it "bad reflection" (I'm not trying to pick a fight, I just couldn't think of a better name). The example given to break LWS was pretty much P:T->String

P(T:v){
   return v.type.name
}

but there is something wrong here. A type is something like an interface in Java (except it includes behaviour as well as function signatures), an object is "of type T" if the object's class implements the necessary methods that allow it to behave like a T. (If someone does not accept this but rather insists that "type" simply means "class" then what could "subtype" mean besides "subclass" and then what is the point of this whole discussion?)

So you cannot ask for v.type. v could have many types, in fact you already know one of them, namely T. It only makes sense to ask for v.class, so the program should be

P(T:v){
   return v.class.name
}

Of course this is even worse, because now we can have 2 objects, v1 and v2 say, both of type T but of different classes. Then P(v1) != P(v2) even though they are both of type T. So actually, with "bad" reflection, not even T is a subtype of T! Each class defines its own type, disjoint from all other types. So not only is there "no useful definition of subtype", there's no useful definition of type when "bad" reflection is around; "type" simply means "class".

However all is not lost, Alistair mentions "local subtypes" at the end of his paper but I think he doesn't take the idea as far as it could go. The idea is that you can say "for this particular program, S is a subtype of T" and that may be good enough but if you change the program you have to verify your subtype relations again. Rather than saying "for this particular program" why not say "for the entire class of programs that don't use 'bad' reflection" then, as long as you don't use 'bad' reflection, you don't have to verify your subtype relations every time you change the program. You could also say "for the entire class of programs using X" for any other X.

What exactly is 'bad' reflection good for anyway? The examples given in Alistair's paper were for finding the test methods in a class. This may be a common idiom, but what are the test methods doing in the class in the first place? Is there any good reason for them to be there? It's good to have them in the same file, but in the same class? They are not part of the class and, assuming everything is correct, all they are is a bunch of methods which return True. How about we put each test in its own class (if your favourite language does not allow you to have multiple classes in the same file, then that is a deficiency)? Each of these classes is of type Test. You could then do something like

test_everything {
  foreach Class (AllClasses?()) {
    if (Class.implements("Test")) {
      if (Class.test()) {
        print Class.name + "OK";
      }
      else {
        print Class.name + "NOT OK";
      }
    }
  }
}

Notice that this is a form of reflection that doesn't break LWS because it does not examine the class of an object.

Maybe there is a really good example where "bad" reflection is the best solution. I'd be interested to see an example that isn't just a deficiency in the language or imposed by an already existing poorly designed class. If there is one I'd imagine it will be "meta programming" like Alistair's examples - testing, version migration etc. Which means that the actual meat of the project is still free to derive whatever benefit it can from the LW definition of subtype in an environment without "bad" reflection. -- FergalDaly?

Please note that I was not necessarily wrong, inasmuch as I was replying to Alistair's assumptions, not to your modified theory. Delete this comment as well.

Now if S <: T, why would we need a function from S to T? The natural idea of subtyping implies that every element of type S is-a element of type T, so the natural inclusion morphism should suffice, shouldn't it?

I guess we need to address the distinction between "types" and "classes", or more precisely between abstract data types and their realizations. Local subtypes are kind of nice in theory, but they break modularity and locality of reasoning and introduce quite a host of problems, so we need to see a compelling argument for their introduction. -- CostinCozianu

Hi, Fergal... Your example: P(T:v){ return v.type.name } was sufficient. Liskov carefully doesn't introduce classes, and nor do I, since I'm responding only to her math. Once you introduce classes, you are in a different discussion, and people will rightly jump in with "classes are not synonymous with types". Now, you classify the interesting reflection as "bad" reflection, but I might equally name it "useful" reflection, or even "rampant" reflection. It doesn't matter which, because LSP contains the word "all", which is the undoing of the thing. All anyone has to do is find a single counterexample, no matter how trivial or useless, and the math unravels. (It happens, though, that "rampant" reflection is often really useful, as in JUnit.) With respect to the function type.name, that is discussed on LiskovWingSubtyping, where we see that if type.name is a valid operation, then no type S is ever a subtype of any other type T, and LWS/LSP vanish in a twinkling; but if type.name is not considered part of the type signature, being a meta-construct provided by the environment, LWS/LSP becomes inapplicable when working in reflective environments such as Java/Ruby/Smalltalk/C++/etc. Q.E.D. on both sides. -- AlistairCockburn

Of course there's no QED, because the way it is explained on LiskovWingSubtyping is that all properties provable about program P with elements of type T should be provable with elements of type S.

So let me see, what can you prove about
 return v."type".name
and how is it broken when v has actual type S <: T, in say Java.

Costin, which comment are you asking me to delete? In the analysis of the floor() example, I totally agree with your statement "LSP is not broken because the program by itself doesn't prove anything" however the floor() example does not put a hole in Alistair's reasoning either.

Forget about my restatement and consider only the original version. Fill in the placeholders in the original LSP by making the following substitutions "o1"->"i", "o2"->"r", "S"->"Int", "T"->"Real" and you get:

If for each object i of type Integer there is an object r of type Real such that for all programs P defined in terms of Real, the behavior of P is unchanged when i is substituted for r then Int is a subtype of Real.

So Int <: Real if we can find an r for every i which is basically indistinguishable from i. In most languages we can easily find the r, namely r = i + 0.0 or r = (cast to Real) i or some other way of expressing it in your language of choice, and so Int is an LW subtype of Real. (See the note in italics below for an important point.)

The floor() example distinguishes between numbers which have a fractional part and numbers which don't but that's not a difference that the LSP cares about. What LSP cares about is whether you can find some Int that has no "partner" in Real. So if you could find a program P such that P(4) != P(x) for any real number x then 4 would have no "partner" so LSP would say that Int is not a subtype of Real. Of course if you set x=4.0, you cannot find a P which can tell them apart (without reflection) so 4.0 is the "partner" of 4. So, the floor() example does not show anything, as you said yourself.

I'm glad you said "the natural inclusion morphism". The use of a morphism is implicit in the LSP when you say "for each o1 of type S there is an o2 of type T" you are defining a function from (all things of type S) to (things of type T). My restatement doesn't modify anything, it just gives the morphism/function a name, f.

The natural inclusion morphism does suffice for Int and Real (in fact it's the only possibility) but the power of Liskov is that you're not restricted to the natural inclusion morphism. I was trying to emphasize this by making the morphism explicit. For instance, to prove that ColouredCircle? is a Liskov subtype of UncolouredCircle?, take the morphism which maps all coloured circles of radius r to the uncoloured circle of radius r. Note: I am not saying that the actual objects should be mapped like this at runtime, this morphism is only used in the proof that S is a subtype of T, once that proof is established we know that we can safely call methods from T on an object of type S and therefore it's safe to pass an object of type S into any program designed to take an object of type T.

Note that ColouredCircle? is bigger than UncolouredCircle? but, by allowing other morphisms, Liskov makes it a subtype. This is nice because then we don't have to deal with lots of abstract supertypes. For instance if we write lots of routines that deal with UncolouredCircles?, we can suddenly start putting coloured, odourless, flavoured circles into these routines without having to alter the routines or the definition of UncolouredCircle?. In particular, we don't have to insert a whole heap of abstract supertypes into our hierarchy to deal with circles which may or may not be coloured, flavoured, patterned or smelly and all the combinations of these.

If we restrict ourselves to the natural inclusion morphisms then this also gives a common definition of subtype but in this case LSP adds no value.

Interestingly, Liskov completely breaks the type/subtype <-> class/subclass relation. For instance, we can make a subclass of Circle called SillyCircle? which overrides get_radius to always return r + 1. If we take the types defined by Circle and SillyCircle? (call them CircleT and SillyCircleT), then using LWS, SillyCircleT would not be a subtype of CircleT, nor vice versa. But they do have a common subtype which you might call ReadOnlyCircleT. This is quite a useful subtype; for instance, you could define the rendering half of your graphics package purely in terms of ReadOnlyCircleT, ReadOnlyPolygonT etc. Managing this level of type granularity by hand is a pain; hopefully some day compilers will do it for us.
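The SillyCircle? idea can be made concrete. A sketch (the method names here are invented; the point is only that the subclass breaks the base type's promise, so it is a subclass but not an LWS subtype):

```java
public class SillyCircleDemo {
    static class Circle {
        final double r;
        Circle(double r) { this.r = r; }
        double getRadius() { return r; }
    }

    // A subclass that is not an LWS subtype, because it breaks
    // the promise that getRadius returns r.
    static class SillyCircle extends Circle {
        SillyCircle(double r) { super(r); }
        @Override double getRadius() { return r + 1; }
    }

    // A client written against Circle's contract.
    static double area(Circle c) {
        return Math.PI * c.getRadius() * c.getRadius();
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)) == Math.PI);      // true
        System.out.println(area(new SillyCircle(1.0)) == Math.PI); // false: behavior changed
    }
}
```
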

I think you're right, clarification of this (and much more I'd guess) is needed. -- FergalDaly?


Alistair, I've read the 99 paper. As you point out, they do not use the word "class" however they distinguish between "types" and "virtual types" which "have no objects of their own" so I think for "class" read "non-virtual type".

I'm happy to call it "rampant" reflection; that's a non-judgemental term. I agree with you that allowing x.type.name breaks Liskov subtyping, however I cannot think of a valid use of x.type.name. If for instance you are going to do

if (x.type.name == "type1") {
	do something
} elsif (x.type.name == "type2") {
	do something else
} etc...

then all you are doing is manually implementing method dispatch and you would be much better off using your language's method dispatch which will be much easier and less error prone. Is there another use for it? If not then why make it available?

As for JUnit, the only reason reflection is useful here is because of Java's verbose syntax for creating a closure. Comparing the reflective style with the static style we have

Benefits:

Costs:

With a few tweaks to JUnit's interface, a static style test suite could look like:

public void Setup() {
  Add(
    "Constructor",
    new Test() {
      public void runTest() {
        // test the constructor
      }
    }
  );
  // use more Add()s for more tests in the suite
}

Compare this to the reflective style

public void testConstructor() {
  // test the constructor
}

So with reflection we can avoid typing "Add();newTest(){}run" for each test and for the whole suite you can leave out "publicvoidSetup{}". That's a saving of 20 chars per test and 17 per suite. Not much really, considering the costs. The only benefit is purely due to Java's verbosity.

In a language like ML, Setup could return a list of (name, closure) pairs. In such a language, the static style would actually be less verbose than the reflective style because you no longer have to include the word "test" in all the method names.

So JUnit does not imply that rampant reflection is good, it simply implies that certain parts of Java are too verbose.

JUnit also uses runtime class loading. Although this comes as part of the reflection library, I do not consider it reflection - it just happens to require much of the same infrastructure. Whether it's reflection or not, it has no implications for the LSP so it's not rampant.

I have a feeling that any language which implemented full Liskov subtyping and provided a rich enough set of features, including some "non-rampant" reflection, with a nice syntax would have no need of rampant reflection. So it seems to me that the problem is not with the LSP but with languages which force you to use rampant reflection rather than providing a good enough type system.

I could be wrong, maybe rampant reflection is fundamentally useful in general and not just as a work around but I still can't think of any examples.

-- FergalDaly?


Here's a snippet of an email Liskov sent me some time ago:

'LSP isn't a property of a programming language. Any programming language can be used to cause the property to be violated for some S declared to be a subtype of T. The reason is that LSP is a semantic principle having to do with the behavior of T. The most a language can do is enforce syntax (the subtype has to have all T methods with compatible interfaces).'

It seems to me that the above discussion mistakes what Liskov calls a "type" with the interface of a class. Liskov never makes any claim of that kind. In her world a type can easily be a subset of what programmers call an interface, even just one function of a class that has 100 other functions that break the semantics going from base to derived. Adding or selecting one function of a type to disprove LSP is futile. -- AndrewQueisser

I think she is mistaken. In a language where the subtype relations are declared, she is correct. In a (hypothetical?) language where the subtype relations are derived by examining the source code, then S is a subtype of T if and only if the LSP holds for S and T.

What you say is correct, but I think you may have been misled slightly by the examples. They have been given in terms of x.type.name, which implied that type is a method of x. So you are correct: looking at this single type method doesn't tell you anything about LSP. However, the LSP says you must look at substituting an object of type S for an object of type T in all programs, so if you can do something in your program like

if (x instanceof T) {
  return 1;
} else {
  return 0;
}

Then there is no possible object of type S that will give the same result as an object of type T and so T has no subtypes.

So basically, being able to inspect the type of an object does not violate the LSP; it just makes it a useless way of defining subtype. -- FergalDaly?

I was drawn into debunking the LiskovSubstitutionPrinciple within the Java language in a recent article I wrote about equality in Java objects at http://www.javaworld.com/javaworld/jw-06-2004/jw-0614-equals.html

Fortunately, this was relatively easy to rebunk, as detailed at http://alblue.blogspot.com, and in fact it leads to a stunningly simple example that LSP does not hold in Java, without any need to appeal to dynamic class or instance based tests.

  public boolean lspHolds(Object o) {
    return o.toString().contains("@");
  }

Calling this with 'lspHolds(new Object())' returns true. However, calling it with (say) a String, as in 'lspHolds(new String())', returns false. I've done a like-for-like substitution, yet the behaviour is now negated.

Why? Well, it's because the 'toString()' contract is _weak_. You can't guarantee the exact same behaviour over any condition with such a weak contract. If the contract were very strong (say, it must always return the class name, followed by an at sign, followed by a hexadecimal hash code, which is what Object's own implementation happens to produce), then you might be able to guarantee the behaviour, because the code would then rely on part of the contract. However, because it is so weak, it admits many possible implementations.

Where the LiskovSubstitutionPrinciple fails is that it assumes that just because the contracts are preserved, full identical behaviour must follow. That isn't the case. The behaviour still follows DesignByContract - and indeed, this is where the LiskovSubstitutionPrinciple and good design diverge - but it isn't possible to guarantee the same behaviour on such weak contracts.

Consider a method whose contract is "Returns a positive number". You can easily subclass that, and give a different implementation such that the contract still holds but that the behaviour is different. It's still a subtype, and it's still part of good design; but it shows that the LiskovSubstitutionPrinciple doesn't have to hold.
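The "returns a positive number" case can be sketched directly. The class and method names below are illustrative, not from the discussion: both classes satisfy the contract, yet their observable behaviour differs, which is the point being made.

```java
// Illustrative base/derived pair: both honour the contract
// "next() returns a positive number", yet behave differently.
class PositiveSource {
    int next() { return 1; }          // contract: result > 0
}

class OtherPositiveSource extends PositiveSource {
    @Override
    int next() { return 42; }         // contract still holds, value differs
}
```

Any caller relying only on the stated contract (positivity) is fine with either class; any caller relying on the specific value 1 was depending on behaviour the contract never promised.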

I hope we can get LiskovSubstitutionPrinciple out of the PrinciplesOfObjectOrientedDesign and replace it with a weaker (but more correct) notion of DesignByContract, where subclasses have to obey their superclass' contracts.

-- AlexBlewitt?
I am sorry, but it is no longer possible to escape the conclusion that almost no-one on this page has read and understood the original paper on the subject: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.1223 LSP is a formal concept, and even with my limited knowledge I can see that many of you are confusing syntactic interfaces with semantic ones. The above observation that 'it isn't possible to guarantee the same behavior on such weak contracts' is not a critique of the LSP (a principle of formal design) but rather a critique of the contracts. It is true, and has been mentioned on this page, that most OOP languages (other than Eiffel and, I think, the newer .NET platform) provide nothing close to full support for design by contract. I would think that this is why the LSP is so important: since our present tools are lacking, we, as programmers, must supply the needed thinking. Moreover, the attempt to disprove the LSP by showing that it is possible to write a program whose behavior is not fully determined by its syntactic interface is both trivially true and irrelevant to the question of the formal definition of a (type-safe) subtype.

I intend to read the real scientific literature on this point. While I find this site a very useful source of insight and industry knowledge, it has, on this occasion at least, disappointed.

-- Marc Grundfest

PS For others who are lost on this page, see LiskovWingSubtyping. Had I followed that link first I might not have had to make the above comment. I tend to think of LSP, not LWS; perhaps this is an error, perhaps not, but I will know more when I finish reading the literature. I thank the author of LiskovWingSubtyping for rescuing this topic. It is too important not to.

-- Marc Grundfest

I do differentiate the two. To me, LiskovWingSubtyping is a definition for subtype that is used in some type systems; LiskovSubstitutionPrinciple is the idea that one should use LiskovWingSubtyping. The former shouldn't be any more of an issue than, say, dependent typing or any other type system. The latter is more controversial. I tend to follow it, as it helps me (and those who come after me) reason about a function from just the local information. But I won't go so far as to claim that it should never be violated.

I am still researching the issue, but I am very curious about the kinds of cases where you think the LSP should be violated. I am not in a position to claim that they do not exist, but I have not yet seen a case. If you are still around, can you think of an example? I am thinking that in all cases where the LSP is violated it would be better to avoid subclassing. I would like to explore this idea...

-- MarcGrundfest

Usually, it happens when part of the code is beyond my control and for some reason has to be used to accomplish something the original authors didn't intend. It's rare and a hack, and I always feel a bit dirty afterwards; but sometimes you have to do it to get things done.

Agreed. We are all pragmatists when our paycheck is about to be 'unsigned' ;) -- MarcGrundfest

I tried to refactor but this thing is huge. -- JakobEriksson


The discussion there (spread over multiple related pages) shows why it is problematic to make either Circle or Ellipse a derived class of the other, no matter what you can prove about mathematical geometry. Any such subclassing will break a promise in the base class's interface.

Actually no. One can successfully make Circle a subclass of Ellipse. You will have to restrict Ellipse's mutators to those that are closed for Circles (as well as, more obviously, closed for Ellipses).
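The restriction described above can be sketched as follows. The API here is assumed for illustration: Ellipse exposes only mutators that are closed for circles, such as uniform scaling, so Circle can inherit them without breaking its invariant.

```java
// Sketch (assumed API): every Ellipse mutator preserves the property
// "is a circle" when applied to a circle, so Circle <= Ellipse is safe.
class Ellipse {
    protected double width, height;
    Ellipse(double w, double h) { width = w; height = h; }
    void scale(double factor) {         // closed for circles:
        width *= factor;                // width/height ratio is preserved,
        height *= factor;               // so a circle stays a circle
    }
}

class Circle extends Ellipse {
    Circle(double radius) { super(2 * radius, 2 * radius); }
    // invariant width == height holds after construction and after scale()
}
```

The price of this design is exactly what the next comment objects to: methods like setSize(x, y) must be left out of Ellipse entirely.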

That way lies madness, via reductio ad absurdum: you could also just insist on having no methods, then there are no promises to break.

The C++ FAQ that was linked goes into some detail (under the parent page: http://www.parashift.com/c++-faq-lite/proper-inheritance.html ), and is worth reading.

In particular they mention the setSize(x, y) method for Ellipse, which is an unreasonable method to disallow Ellipse from having, but is unreasonable to allow for Circle, since for Circle x must equal y -- so that leads to half of the demonstration, that Circle should not be derived from Ellipse.
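The problem case can be sketched in a few lines (the API is assumed, following the FAQ's setSize(x, y) example): the method is perfectly reasonable for Ellipse, but inheriting it lets a so-called Circle end up with unequal width and height.

```java
// Sketch of the problem: Circle inherits a mutator that can
// violate its defining invariant (width == height).
class MutableEllipse {
    protected double width, height;
    void setSize(double x, double y) { width = x; height = y; }
}

class BrokenCircle extends MutableEllipse {
    // inherits setSize(3, 4): after that call, width != height,
    // and this object is no longer a circle in any meaningful sense
}
```

Code written against MutableEllipse is entitled to call setSize with any x and y; substituting a BrokenCircle either breaks that entitlement or breaks the circle, which is the LSP violation.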

I'm not going to duplicate the rest of the argument here; the point was to help people understand what the LiskovSubstitutionPrinciple actually means, as opposed to the misunderstood StrawMan that appeared throughout the above ThreadMess.

If someone really, really understands LiskovSubstitutionPrinciple and chooses to break it, that's up to them. But it's important to be aware of what is actually meant.

What madness? And while you could insist on having no methods, that is overly restrictive. BTW, if you look at the "Is a Circle a kind-of an Ellipse" sub-topic, you will note that the FAQ says "Depends". I don't see anywhere it says that it's unreasonable to disallow "setSize()" for Ellipses. I would think (and given what else I've read I believe they would think this too) that what is unreasonable to disallow would depend on what you were trying to accomplish. I also agree with the rest of your argument. I just felt you had gone too far on this one particular point.

For the sake of other readers, the subpage you mention concludes with:

"(Note: setSize(x,y) isn't sacred. Depending on your goals, it may be okay to prevent users from changing the dimensions of an Ellipse, in which case it would be a valid design choice to not have a setSize(x,y) method in Ellipse. However this series of FAQs discusses what to do when you want to create a derived class of a pre-existing base class that has an "unacceptable" method in it. Of course the ideal situation is to discover this problem when the base class doesn't yet exist. But life isn't always ideal...)"

So yes, I overstated things, in my zeal to get the point across. ItAllDepends?.


CategoryModellingLawsAndPrinciples

EditText of this page (last edited June 19, 2014) or FindPage with title or text search