Math is not always science. Sometimes a lot of equations and axioms can be derived from an idea, but that alone does not validate the idea itself. It is a nice bonus if a useful idea has some math behind it, but math is not necessarily a prerequisite. If, in comparing two ideas, one has nice math behind it and the other doesn't, perhaps one should go with the mathified one if they otherwise seem equal.
In other words, the quantity of things that can be surmised or derived from the root axioms alone does not necessarily reflect the quality of the axioms themselves.
Misuse of math tends to happen when the root ideas are unsound or still up in the air. Somebody takes these shaky or unproven ideas as givens and creates extensive formulations. Such work is a good thing to explore, but doing "interesting" things with simple ideas alone does not by itself validate the root ideas. A classic example from AddingEpicycles is the assumption that planetary orbits are all based on perfect circles. Elaborate systems of nested circles were often used to match the idea that "EverythingIsa circle" to the observed motions of the planets. Eventually it was replaced with Newtonian physics, which had better predictive value than nested circles and was based on the observable properties of gravity. Relativity later superseded Newtonian physics. Newtonian physics turned out not to be perfect (perhaps nothing known is), but it at least serves as a UsefulLie because of its low complexity-to-reliability ratio.
I believe that is similar to what we need in the software design field: UsefulLies with low complexity-to-utility ratios. Every abstraction is probably going to turn out to have deviations and flaws, but we still need abstractions to help us manage overwhelming complexity. Thus, we need simple but powerful ideas. We can use math to help us find and test ideas, but it may not serve as the ultimate validation tool for such ideas. -- top
Excellent observations, Top! Does this mean that you have now rejected the OoLacksMathArgument? After all, that silly dispute strikes me as just as much of a MisuseOfMath as anything else.
I have moved away from LaynesLaw issues around defining "math" and toward OoLacksConsistencyDiscussion. Relational's consistency advantage over OO may or may not be due to its math-based beginnings. I think the math helped because math requires consistency, or at least specificity, to work, but the real "road test" is observed improved consistency. -- top
Procedural programming lacks consistency too, Top. Sometimes people use pointers, other times they don't. Sometimes they pass in a single struct as a parameter, and other times they split up the parameters and don't use a single struct. That is inconsistent. Sometimes we return error codes as an OUT parameter, whereas other times we return the error as the function result. Again inconsistent. There are many inconsistencies in many programming practices. Tables in databases often contain inconsistent data: some is phone numbers, some is blobs, some is only positive integers and not negative, some is strings (which is why you need a TypeSystem, to handle different kinds of Types of things).
I meant that the rules were clear and available for inspection. Using different rules to achieve the same goal is another matter.
Also, did you know that object-oriented programming is just structured programming with extensions? Since you dislike OO, do you also dislike structs? A struct in C with methods is an object. Structs are objects. The OO people invented new snake-oil terminology instead of sticking with existing terminology like "struct" or "record". They could have just called it an "extended struct" instead of calling it an "object". They introduced new terminology to sell something old: structured programming. So if you dislike OO, then you dislike extended procedural programming, because that's what OO is. Sometimes the extensions are not needed and programs can be written without them; other times these extensions come in handy. That's what you need to figure out: when they are handy and when they are not.
I agree that structured programming by itself is limiting. That's why I like to combine it with TableOrientedProgramming. TOP is more powerful than OOP. For example, it handles many-to-many relationships much more nicely. But this is not the place for arguments over such. -- top
PageAnchor: "Difference Between Math And Science"
Math is about producing an internally consistent model or notation. Science is about the applicability of that model to the external world. Math can be completely divorced from the physical world, while science cannot. Math can "lie" as long as the lies are consistent; science cannot because in the end it must be tested against the physical world.
I hate to ask where programming fits, then...
 I would probably place it with math. There are some long discussions on this somewhere around here. -- top
 Such as IsProgramming?<foo> pages - but I reject the notion that math can be completely divorced from the physical world. The symbols are still tied to this realm. We may think about math in abstract axiomatic terms, but that thinking, and the symbols we use to represent things, are still tied to this world. I think that was part of what Godel proved - that math is not self-supporting, requiring an external system to support it... at least that's the only explanation I can find for why Greek letters are still used. Now there may be an underlying "truth" in math that we represent with abstract symbols, but I don't consider modern mathematics to be that truth - it's just a bunch of symbols we invented to get close to it.
 But the symbols are arbitrary. One can assign unique integer IDs to the symbols, for example, and use those instead. And those integers can be represented in binary or all kinds of different notations. Whether math is bound to the physical world or not, I don't think it is the symbols that bind it if so. The symbols are for human convenience only. -- top
 One way to view Godel's work is that he showed that any sufficiently complicated symbol system is inconsistent with itself - in essence proving that all symbol systems need an external system to exist. [That's a misunderstanding of it.]
Graffiti seen on a temporary barrier during construction on Evans Hall (the home of the math department at
CalBerkeley):

 "Math is God"

 "No, Math is the set of all Gods"

 "God is Physics"

 "No, Physics is Math on earth"
Some Math misuses:
 "People are 67% more likely to believe someone who gives a statistic."
 "0 is a number, infinity is not. That is why you cannot divide infinity by zero."
 "There is a good reason that computer science students taking the required calculus are not taught how to find the antiderivative of f(x)=1<<x&3" - (this example is problematic, but is being left intact to provide context for the responses below)
Misuses? The first statement without context is probably useless. If it reflects an actual study, it could be interesting. The second statement is basically correct, except the inference is not really useful for understanding why infinity is not a number in the usual system. The third statement is true, i.e. there *is* a good reason. How do any of these statements reflect the claims of the page?
Response:
 The first statement is recursive and subtle. I have a friend who loves to say it, and he makes a point of changing the 67% to something different each time.
 This is neither recursive nor subtle. Your friend making up imaginary statistics makes no difference to my comment. If this were the result of a properly done study, it could be an interesting statement. Pulling it out of your butt isn't interesting, but that is a comment about the context. The point is that such a statement may or may not be a misuse of math, but without context it is impossible to tell.
 The second is mathematically correct, but it is misleading - there are much simpler reasons you cannot divide by zero.
 The question was about dividing infinity by zero, not about dividing by zero. Most people don't understand the role that infinities play in the usual real number system. The problem with dividing infinity by zero is, first and foremost, that it doesn't even make sense to talk about. This is (somewhat subtly) different from the problem of dividing 1 by zero, for example.
 I would love to hear the good reason that makes #3 correct.
 Ask yourself first 'what is analysis?' (and thereby calculus). Now ask yourself what sort of mathematical space the operation 1<<x&3 lives in, after sorting out your notation, etc.
 Answering a question with another question is the oldest and most disreputable method of debate.
 This is not debatable, so I'm sure that respondent just wanted you to think about it a bit. Simply, then, there are two problems. First, defining exactly what it is you mean by f(x)=1<<x&3. Second, and more importantly, if this is following programming conventions, then your f lives on a discrete space. In other words, this type of function does not live on the sorts of domains on which it even makes sense to talk about (the standard) calculus. In other words, you are asking a nonsensical question because you misunderstand the calculus, the types of operations described, or both.
 The statement is meant to say "if one knew the antiderivative of the function f(x) = 1 (bitwise shift left) by x (bitwise AND) 3, one could calculate the integral (the area under the curve). Why is that a nonsensical thing to do?" - I'm asking seriously.
 The answer to your question is contained in the response. You are defining a function f whose domain is a (in the usual case, limited) set of binary encodings of integers. Calculus, roughly speaking, studies functions of a continuous variable. In other words, x has to be able to take any real value in some set of intervals (there are more constraints). Do you understand now? [Aside: a little technical relative to the level of this page] You could construct a function g that takes on f's values at integers (allowing for infinite binary expansions) but something else everywhere else. The problem with this is that the contribution to an integral of any set of measure zero is zero, so the integral would be completely dependent on the behaviour everywhere else.... Remember that antiderivatives really have nothing fundamentally to do with areas under curves - rather they are (through the fundamental theorem of calculus) an inverse of an operation defined by a limit, where that limit exists. [Technical correction - an integral is fundamentally connected with "area under a curve" (or "measure"), and is usually defined as a limit (of a sum which clearly resembles an area calculation) rather than as an inverse of another function.] Correction to the 'correction': You are getting confused here.... (Staying with the intro-calculus integral, i.e. the Riemann integral, and avoiding complications due to doing this in the standard way, i.e. the Lebesgue integral, where the connection to area under (L2) functions is less immediate.) This integral is defined as the limit of a sequence of Riemann sums (well, really it is defined in terms of the Jordan measure these days, but staying with intro calculus...), but that comment was about the *antiderivative*. These are not the same thing, although a connection is made through the fundamental theorem of calculus. The area under a curve is not the same thing as a measure.
The idea of an antiderivative and the idea of an integral apply in places where 'area under a curve' does not make sense - these are the more fundamental concepts. Please be careful, and make certain you understand what you are responding to before offering "technical corrections". Your original assertion (My original statement is correct.) was "antiderivatives really have nothing fundamentally to do with areas under curves". When "area under a curve" makes sense, it can be evaluated as a Riemann integral (nit to pick: area under a curve makes sense in places where the Riemann integral does not work...), and that is connected by the fundamental theorem of calculus to an antiderivative of the function defining the curve (No, it isn't. It is connected, in the definite-integral case, to the *difference* of evaluating the antiderivative at two points.). Hence there is a connection, and it is made via a fundamental theorem, so it could reasonably be called a fundamental connection. When I mentioned "measure", I was giving "measure" as an alternative, not saying that "measure" was the same thing as "area under a curve". However, when "area under a curve" makes sense, the Riemann and Lebesgue integrals agree on its value, don't they? (Yes, but that doesn't make a measure the area under a curve. The *integrals* agree, but an integral is not a measure.) Also, saying that "antiderivative" is a more fundamental concept than "area under a curve" doesn't imply that the two are not fundamentally related in circumstances where "area under a curve" makes sense. They are related, yes. One is fundamental, one is not. And now you are wandering off into semantics. My original statement is correct. An antiderivative is not an area under a curve. They are fundamentally different things, but there is a (non-obvious) connection between them through the FTC. Your so-called "correction" has confused these issues.
Some of what you say is correct, and if you had added it as a comment, rather than a "correction", I would not have taken you up on it so much...
 That function is non-differentiable because it's only defined for integers. You can't take a derivative or an antiderivative of a function unless it's continuous. The first part is the right idea. The second part should note that being continuous is not a sufficient condition. That is, if a function is differentiable, it must be continuous, but the converse does not hold (there are continuous functions that are nowhere differentiable). Furthermore, note that you may integrate a function with a countable number of discontinuities (of the right type).
To clarify the above, let's restrict the discussion to some real function denoted by f(x) with an antiderivative denoted by F(x), both having as domain the closed interval [a,b], where a and b are distinct reals, with a < b. To avoid quibbles about the meaning of "under", let's also suppose that the function f(x) is positive (and finite) or zero. Let's also suppose (unless indicated otherwise) that "area under the curve" makes sense for the function f(x) (but we allow f(x) to have discontinuities). Let's further suppose that F(x) has no discontinuities, and that it has derivative f(x) everywhere in the open interval (a,b) except for a finite (possibly zero) number of values of x. If that "except" clause is too restrictive, let me know. Now let's consider (under the listed assumptions) various points raised above.
No. Your restriction is arbitrary, and has no bearing on the original comments (sorry, I missed this bit last response). I was being generous - the usual (i.e., standard) definition of an antiderivative of f(x) would require that its derivative is f(x) throughout their domain. Either way, the Dirichlet function doesn't have an antiderivative.
1. You made the point that "area under a curve" can make sense in places where the Riemann integral does not work. Can you give an example function f(x) for which the area under the curve for f(x) makes sense but is NOT obtainable as the Riemann integral of f(x)? (Note: the assumptions were designed to exclude the possibility of an area which "makes sense", but is infinite.)
Dirichlet function. What continuous function F(x) has the Dirichlet function as its derivative?
Irrelevant. The Dirichlet function is integrable, and that integral is interpretable as an area. It is not Riemann integrable. However, as noted above, it doesn't have an antiderivative. The point under discussion is the relationship between the concepts of antiderivative and area under a curve, so one has to restrict examples to cases where both exist, not just one of them.
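For readers following along, the Dirichlet function mentioned here is the indicator function of the rationals. It is a standard example of a function that is Lebesgue integrable (with integral zero, since the rationals have measure zero) but not Riemann integrable, because every upper Riemann sum is 1 while every lower sum is 0:

```latex
D(x) =
\begin{cases}
  1 & \text{if } x \in \mathbb{Q}, \\
  0 & \text{if } x \notin \mathbb{Q},
\end{cases}
\qquad
\int_{[0,1]} D \, d\mu = 0 \quad \text{(Lebesgue)}.
```

As noted above, D has no antiderivative, so it bears only on the integral side of the discussion, not the antiderivative side.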
2. You made the point that "the idea of an antiderivative, and the idea of an integral apply in places where 'area under a curve' does not make sense". Please give example functions f(x) and F(x), where your point applies, and f(x) is integrable (Riemann integral or Lebesgue integral - your choice).
Please. Areas are strictly non-negative values. Or simply consider a path integral. Or complex analysis. We're discussing non-negative real functions of a real variable, so none of your responses are significant. I asked for example functions.
See above. Look, this stuff is all readily available to you - it isn't my job to get you past intro calculus here. That doesn't apply. It's trivial that an antiderivative isn't related to anything if it doesn't even exist! Hence, the existence of a function without an antiderivative doesn't help.
3. For the purposes of relating the area under the given function f(x) (with antiderivative F(x)) to another function of one variable, I choose to consider a part of the curve having "start point" (A, f(A)) and "end point" (x, f(x)), where a <= A <= x <= b, A is a constant, and x is a variable. Suppose that f(x) has a definite integral between the limits A and x. I assert that the fundamental theorem of calculus applies (but there may be some cases where it doesn't - I don't have a proof of the theorem under the stated conditions), that the value of the definite integral is equal to the area in question, and also equal to F(x) - F(A). Now define G(x) = F(x) - F(A) (where x lies in the closed interval [a,b]). This makes G(x) another antiderivative of f(x), since it differs from F(x) by a constant. This antiderivative function gives the area in question directly - it is not necessary to subtract G(A) because G(A) is zero. Note that I referred to "an antiderivative" originally, so I'm within my rights to choose this one. This was my reason for making my original correction.
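The relation invoked in point 3 is the (second) fundamental theorem of calculus; under the assumptions stated above it reads:

```latex
G(x) \;=\; \int_{A}^{x} f(t)\,dt \;=\; F(x) - F(A), \qquad G(A) = 0 .
```

That is, the particular antiderivative G vanishes at A, so its value at x is directly the area under f between A and x.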
Will comment later.
4. I invite you to justify your objections to my original comments by providing example functions f(x), F(x) where the above reasoning breaks down  for example, where the fundamental theorem of calculus does not apply, or the area under the curve either does not make sense or cannot be found via the antiderivative.
The basic objection is very simple... you attempted to 'correct' something that was not incorrect, and have continued to demonstrate a failure to grasp what fundamental means, among other things. Again, you're not providing examples. You're free to clarify "fundamental" as well if doing so helps to clarify how your comment applies to the examples.
I provided a good example above. The fundamental point here is how these things are defined, and what those definitions mean. It seems to me you are bashing around a half-baked grasp of introductory calculus and some very vague idea that there is more to it than that. This is all standard intro analysis, though, so if you are genuinely interested, you can learn about it. Trying to force things to conform to a somewhat confused surface understanding of analysis will not get you very far..... I'm aware that the concepts being considered are not defined for some functions. However, that doesn't imply that the concepts "fundamentally have nothing to do with each other".
5. You stated "The *integrals* agree, but an integral is not a measure." I invite you to give an example of any function f(x) (no presuppositions (except that f and x are real)) for which the Riemann integral exists but the Lebesgue integral either doesn't exist or is different in value.
You are obviously confused about what a measure is. This has nothing to do with Lebesgue vs. Riemann. Integrals are defined in terms of measures, but measures *are not integrals*. Got it? Almost. What you said was "an integral is not a measure". If that's correct, at least give an example integral and prove that it can't be interpreted as a measure.
Just look at the definitions. An integral (including, technically, the Riemann integral) is defined in terms of a measure, which is a function with specific properties. There is no 'interpretation' here. Are you intentionally being obtuse? No. If you think it's trivial to give an example of a Riemann integral which doesn't have the properties of a measure, just do so.
 Thanks for answering the question. I had not considered that the function must be continuous. My mistake.
 The statements reflect the claims of the page in that just using mathematics does not make something true. The truth has to exist outside of math - math can only show a model to be mathematically consistent, i.e. it can still be completely wrong in the context of the real world. Remember the old proof showing how bumblebees cannot fly? - Until we adopted a different model of aerodynamics. The proof was mathematically correct, but that didn't mean bumblebees couldn't fly. I can say "If 4 immoral people give birth to 2 kids, that is 6 immoral people", and be wrong - because my model assumes that the children will be immoral.
 For the above reasons, I hope you see how you are incorrect here. I agree that using mathematics applied to something outside the mathematics does not make it a good model. Be careful though; there was never a 'proof' in any mathematical sense that bumblebees cannot fly (and in fact the claim is something of an urban legend). Your second example is a bit better. It would be poor model indeed, but this is quite different from the three statements above.
 That's a non sequitur - nothing has to exist "outside of math" on account of FermatsLastTheorem. (This doesn't make any sense. Why? It counters the notion that the truth must exist outside of mathematics. Mathematical truth exists within mathematics. Outside of mathematics, what does "must exist" mean?) Fermat's Last Theorem has nothing to say on the subject. If FLT has nothing to say, neither does any other mathematical theorem. So any assertion that "truth must exist outside of mathematics" follows from the nature of mathematics is a non sequitur. The use of the word "must" implies an assertion that it follows from something - if that something is not supposed to be mathematics, why mention mathematics? As for the rest of it, unless you want to argue that philosophy is mathematics, you'll have to include that as well. That doesn't make sense - include it in what? Things in which truths must exist, I expect.
Top, you should stop defending the relational model - because that isn't what you are defending at all. You are defending something else - your own table-oriented model. The inventors of the relational model and its current maintainers do not speak of the relational model as you do (in other words, what you are describing is not the relational model, nor does it have anything to do with hierarchies, as you have stated on other pages). When defending your model, defend it - but please don't present your table model as the relational model in your defenses. It is very confusing and misleading.
I smell a brewing LaynesLaw debate over "relational", similar to the Nygaard versus AlanKay OOP definition battles [insert link when found]. Who sanctioned the "current maintainers" anyhow? That is kind of a Eurocentric view of "truth", where appointed people instead of peer consensus determine definitions. Plus, the issues of "using relational" and "using relational properly" may be two separate things. But I'll save debating such issues for another day.... -- top
(Please do not put specific issues left over from other debates near the top of a topic if possible. The top should be reserved for introductions and generalities, not detail nits.)
See also: AddingEpicycles, TopOnTypes, ProgrammingIsMathDiscussion, SoftwareGivesUsGodLikePowers, SovietShoeFactoryPrinciple