# Empirical Rigor Can't Replace Theoretical Rigor

Contrast with TheoreticalRigorCantReplaceEmpiricalRigor

If the results of an experiment show that something works well, that doesn't mean that a theoretical alternative (on paper or as a model) is not superior. In fact, quite often the theoretical model is superior to what the current market's products utilize.

Theory is not as expensive as result-based real-world tests, statistics, and experiments. One cannot easily test empirical (real-world, result-based) systems rigorously without spending vast amounts of money. Empirical studies are often physically expensive, requiring large numbers of people, products, and goods to test on, whereas theories are not as physically demanding. In fact, this may be a primary reason that theories are able to be rigorous: theories are available through paper and thought, without requiring massive amounts of hard-to-obtain materials. Massive quantities of materials (or people to test on) demand far more time and money than theory does.

Example

Consider a gasoline car getting 10 miles per gallon, versus a car on paper that uses alcohol, water, gasoline, and a supercharger to get 50 miles per gallon. The fact that the 10mpg car is a real car available to buy right now, with empirical results to show, does not make it superior to the theory-based 50mpg car. In fact, it might even be worth completely ignoring the 10 mile per gallon real car - not purchasing it - and continuing to improve the theories behind the 50mpg car, since it is so superior. If we spend too much time on the 10mpg car, we risk committing all our equipment and machinery to the 10mpg car instead of to the superior one.
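The trade-off in the example can be made concrete with a back-of-the-envelope calculation. All figures besides the two mpg numbers from the example are assumptions chosen for illustration:

```python
# Illustrative comparison of the two hypothetical cars from the example.
# Gas price and annual mileage are assumed, not taken from any source.
GAS_PRICE = 4.00        # dollars per gallon (assumed)
MILES_PER_YEAR = 12000  # assumed annual driving distance

def annual_fuel_cost(mpg, miles=MILES_PER_YEAR, price=GAS_PRICE):
    """Yearly fuel cost for a car with the given fuel economy."""
    return miles / mpg * price

real_car = annual_fuel_cost(10)   # empirically proven, but inefficient
paper_car = annual_fuel_cost(50)  # exists only in theory

print(real_car)               # 4800.0
print(paper_car)              # 960.0
print(real_car - paper_car)   # 3840.0 per year favoring the theoretical design
```

Under these assumed figures, every year spent committed to the "real" car forgoes the larger savings the theoretical design promises - which is the resource-allocation point the example is making.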

Continuing to tweak the 10 mile per gallon real car because it is empirically excellent (it is shown to get us to the milk store fast) might even be futile and damaging (to the environment, to people). Embracing the 10 mile per gallon car because it is "real" and not "in theory" can waste valuable resources which we could have used to produce the theoretical model of the future. It is tempting, since the real car is available and not just on paper - but we have to consider that good theories do have strong merits.

Nobody I know of is against research of promising ideas. The above seems to conflate "worth testing/R&D" and "worth using (in practice)". A BigIdea needs to be road-tested before being put into general usage. In general, theory is good at generating ideas and empirical testing/metrics are good at proving them for practice. But each can do a bit of vice versa at times.

That is not the question. It should be obvious that both EmpiricalRigorCantReplaceTheoreticalRigor and TheoreticalRigorCantReplaceEmpiricalRigor. Both are needed and have their place.

In general I agree they both have their place, but for the most part they are not interchangeable in purpose. Note also that things like genetic algorithms and neural nets can provide a nice solution where the domain itself is not subject to theoretical rigor. The human mind was not designed with theoretical rigor (unless you are an ID-ist), and it is the most complicated and powerful machine known.
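The remark about genetic algorithms can be illustrated with a tiny sketch. The objective function, population size, and mutation rate below are arbitrary choices for demonstration; the point is that the algorithm finds a good answer purely by evaluating candidates - empirical trial - with no theory about the objective's shape:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(x):
    # Black-box objective (arbitrary, peak at x = 3.7). The GA never
    # "derives" anything about it; it only evaluates candidates.
    return -(x - 3.7) ** 2

def evolve(pop_size=30, generations=60):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection: keep top half
        children = [x + random.gauss(0, 0.5)       # mutation of random survivors
                    for x in random.choices(survivors, k=pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2))  # should land near 3.7, found by trial, not derivation
```

Note the cost asymmetry the page discusses: here each "experiment" is a cheap function call, which is precisely why GA-style empirical search is affordable in software and ruinous for, say, rockets.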

Evolutionists would point out that the human mind took millions of years to construct and involved trillions of metric tons of biological waste products on dead-end evolutionary paths. If you're going to point to it as an example of the benefits of 'empirical rigor', you should at least be slapped with the facts: unless we have massive resources available for just throwing away, evolution and such empirical-rigor-based design is not suitable for any human endeavor.

That does not contradict anything I've said. I merely was illustrating that technically "both are needed" is false.

In human science and engineering, which is obviously the context of this page, there is a clear need for theoretical rigor. Your example does nothing to illustrate otherwise. And apparently even nature empirically observed a need for theory - the human brain operates based on sensory inputs and associative memory utilizing the fundamental basis for all theory: that past observations are decent predictors of future observations. Empiricism alone must have failed when it comes to competition and survival.

Theory is not as expensive...

It is hard to measure the total cost of theory. Due to SovietShoeFactoryPrinciple, it may be that a theory is over-focusing on something and ignoring another theoretical factor because it is either not favored by the researcher or is not known to exist yet. The size of all possible theoretical metrics may be infinite. It's not likely that these are all cheap. And it's difficult to know if the proponents are selecting the right subset. Empirical metrics for tools tend to be shaped by customer wants, and are thus limited by customer wants. There is no comparable culling mechanism on the theory side, other than "it feels elegant to me". --top

The cost of not measuring the right things (SovietShoeFactoryPrinciple) is ALSO multiplied for empirical study, top. The size of all possible empirical metrics is at least as large as the set of all possible theoretical metrics - and will generally be MUCH larger because theoretical models simplify things and thus reduce the number of relevant dimensions one might measure. Further, and contrary to your statement, theoretical metrics, like empirical metrics, "tend to be shaped by customer wants". There is a reason people consider theoretical 'gas mileage' and 'safety'. Your entire line of argument is based in hypocrisy and double-standards: you seem to believe that the theory-guys need to cover every possible metric while the empirical guys only need to do a few. You claim to be from the MentalStateOfMissouri, so why don't you 'show me, show me, show me' an actual use case where applying existing theory will be more expensive than applying existing empirical procedure for the same set of measurements.

The cost of formally proving a program "correct" versus traditional UseCase testing, at least for "good enough" levels. UseCase testing is usually cheaper. (However, I suspect that SafetyGoldPlating proponents will argue that the prevention of potential errors is usually "worth it", and we'd be back to the usual HolyWar fights.)
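The cost difference can be seen in miniature below. A handful of use-case tests (cheap) are contrasted with exhaustive checking of an invariant over a bounded domain - itself only a poor man's stand-in for a true formal proof over all inputs, which costs more still. The `clamp` function and the bounds are made up for illustration:

```python
def clamp(x, lo, hi):
    """Constrain x to the interval [lo, hi] (assumes lo <= hi)."""
    return max(lo, min(hi, x))

# Empirical approach: a few use cases. Cheap, and usually "good enough".
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10

# Toward theoretical rigor: verify the invariant lo <= result <= hi
# for EVERY input in a bounded domain. Far more work per property,
# and a genuine proof over all integers or reals is more expensive yet.
for x in range(-100, 101):
    for lo in range(-10, 11):
        for hi in range(lo, 11):
            assert lo <= clamp(x, lo, hi) <= hi

print("all checks passed")
```

Three hand-picked cases versus tens of thousands of systematic checks for one small function: that ratio is roughly the cost gap the paragraph is pointing at.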

The theory-guys can cover 'UseCase' testing quite cheaply, too - much, MUCH cheaper for things like UseCase testing a rocket in theory vs. empirically. In fact, it is so cheap that they usually skip right past that to examining the general case. And you seem to be fighting for your double-standards again. To be fair, you MUST compare identical metrics. Tell me: what is the relative cost of formally proving a program "correct" (for some particular invariant) empirically?

Do you mean simulations? Simulations are simply a stand-in for empiricism, not "theory" per se. And they are an imperfect substitute for the real thing. As far as the cost of testing software with sample data, it depends on how thorough one wants to be. The more you test, the more potential problems you find. The cutoff point is a political/managerial kind of decision.

Simulations are thought-experiments based on a model - they are "theory"; your "simply a stand-in for empiricism" word-games won't fly: in the context of theoretical testing - making sure things work 'on paper' - theory is almost always "a stand-in for empiricism". Besides, JustIsaDangerousWord; the value of "a stand-in for empiricism" shouldn't be degraded by the word "simply". And the degree to which you test something in theory before you implement is also a political/managerial kind of decision, so that entire line of argument isn't capable of distinguishing costs between rigorous empirical and theoretical testing.

I think we have a definitional conflict over "theory" here. As a working definition in this context, I won't consider a simulation that is intended to mirror reality as closely as possible to be part of the "theory" side. Simulations are a "reality cheat-sheet". By making this a working assumption, hopefully we can avoid a LaynesLaw stoppage. Theoretical models may in some ways represent reality, but usually they attempt to abstract it down to one or a few BigIdeas. --top

What would you consider a simulation that abstracts out most of reality, focusing on abstract concepts like 'forces' and 'mass' and 'velocity' and 'energy' and a model of relationships between them? And despite the word-games you seem tempted to play, simulations and theory can't be separated. By their very essence, simulations are applied theory, and application of a theory to a model of initial conditions is a simulation. Attempting to divide the two is very much akin to attempting to divide the wetness from water.
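The claim that a simulation is just theory applied to a model of initial conditions can be shown in a few lines. Below, Newton's laws under constant gravity (the theory) are stepped forward from assumed initial conditions; every number is illustrative:

```python
# A minimal projectile "simulation": nothing in it but theory
# (constant gravitational acceleration) applied to modeled initial state.
g = 9.81               # m/s^2, the theoretical constant
dt = 0.001             # integration time step (s)
vx, vy = 30.0, 40.0    # assumed initial velocity components (m/s)
x, y = 0.0, 0.0        # assumed initial position (m)

while y >= 0.0:
    x += vx * dt
    vy -= g * dt       # applying the theory, one small step at a time
    y += vy * dt

# The closed-form theory predicts a range of 2*vx*vy/g, about 244.6 m;
# the stepped simulation should land very close to that.
print(round(x, 1))
```

There is no empirical input anywhere in that loop - remove the theory (the `g` update rule) and nothing is left. That is the sense in which simulation and theory "can't be separated": the simulation is the theory, executed.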

I really feel your whole line of argument is based on double-standards: you seem to have higher expectations for rigorous theoretical tests than you do for rigorous empirical tests. But this is NOT reasonable. If you're going to compare cost to cost, you need to compare situations where the goal is to obtain the same set of qualities or metrics by either means. Every single argument you make that applies to both empirical and theoretical testing is irrelevant. Every single argument you make that implies theory has higher standards than empirical tests is both hypocritical and unfair.

It is true that simulations, models, theory - these sorts of things can and will be imperfect. OTOH, plenty of empirical tests suffer from imperfections too - components not built to specification, accidents, lack of accuracy or precision in measurements, observer error, the problems associated with measuring things that happen at small time/space scales, etc. When a theory-based test fails, it means something is wrong with the model of initial conditions or something is wrong with the theory. When an empirical test fails, it means something is wrong with the object being tested or that something went wrong with the test. When a theory-based or empirical test succeeds, it is only a point of data indicating that something didn't go wrong. Both theoretical and empirical metrics are subject to SovietShoeFactory? problems. Both of them focus upon measuring that which people making decisions consider 'important'. There are a great number of similarities here that you, in your impressive ignorance, have been implying to be relevant differences.

There are some essential differences between theory and empirical. The most essential one is that theory and model are constructs of thought and founded in making predictions, while empirical is based in collecting sensory data. Theory is in-the-head, on-paper, perhaps in-a-simulation; empirical requires constructing something real. Theory and simulation and models can abstract away details deemed to be unimportant to a particular test. Empirical requires dealing with those details for each test. And these last points are what indicate theory will, for testing the same properties, be cheaper in the general case.

There are cases where empirical and theoretical tests are about the same price. These mostly occur when you have an already-built system and the tests to-be-performed are cheap or largely non-destructive. Not even the strongest theory-adherents would encourage you to find the 'theoretical' length of a rocket after it has been built.

There are also cases where theory can't provide answers: good theory is already based on past observations, and so will be largely accurate within the boundaries of the observational space that went into constructing the theories but cannot provide answers outside those boundaries, and will always be imperfect (even theories regarding formal languages, logics, and definitions will be incomplete, from GoedelsIncompletenessTheorem). Because of this, theory adherents won't claim that you can avoid performing empirical tests with the purpose of keeping the theory and model in line with reality.

Theory-adherents will, however, argue that a high degree of empirical 'rigor' - great numbers of repeated empirical tests - is largely redundant and unnecessary; after all, the theory itself is already built on a lack of known contradictions over a range of empirical observations. Thus, so long as the model to which the theory is applied remains within the boundaries set by the observations that formulated the theory, there is no valid reason to believe the conclusions made using the theory will be in error... NOT EVEN if the theory is later proved wrong by future observations outside those boundaries. (Example: Newtonian mechanics is still perfectly usable for most models despite being proven in error by observations at near-light speeds and in particle accelerators.)
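The Newtonian example can be quantified: the relativistic correction factor is vanishingly close to 1 at everyday speeds, which is why the 'wrong' theory remains safely usable inside its observational boundaries. The sample speeds below are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v):
    """Relativistic correction factor; 1.0 means Newton is exact."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

highway  = lorentz_gamma(30.0)       # ~1.000000000000005 - Newton is fine
rocket   = lorentz_gamma(11_000.0)   # roughly escape velocity: ~1.0000000007
collider = lorentz_gamma(0.99 * C)   # far outside the old boundaries: ~7.09

print(highway, rocket, collider)
```

At 30 m/s the "error" in Newtonian mechanics is parts in 10^15 - unmeasurable in any practical empirical test - while at 0.99c the correction is a factor of seven. Same theory, same equations; only the distance from the original observational boundaries changed.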

So, TheoreticalRigorCanReplaceEmpiricalRigor, at least until it comes time to create new theories that cover new properties or new observational extremes. And it's cheaper. Indeed, these two facts combined form the basis for scientific progress. And while EmpiricalRigorProbablyCanReplaceTheoreticalRigor, it will tend to be expensive beyond measure, and would almost certainly prevent any scientific progress - we'd be reinventing everything from scratch every single time. Even medical diagnosis, applied without theory, would involve re-discovering the concepts of 'cells' and 'germ theory' etc. on every incoming patient.

Barring an emergency, NASA would never fly a manned rocket without testing the rocket first in actual flights.
• And how can one rigorously test this rocket 1342 times without costing the entire nation a few kabilladillyzillion dollars? Theory can be rigorously analyzed 1342 times over and over again. Calling "testing a rocket" (once or a few times) "rigorous" is fallacious. In addition, usually these "tests" of the rocket are based on theory - will the test work out okay? Let's try and prove the test rocket first - then our test won't cost us as much money! Not only will our rocket mission cost us less - but now our tests of the dummy rocket will cost us less too.

That's not really true; most of the earlier NASA rockets were not reusable, so they had to be used without prior unmanned flight tests. They'll test what they can, of course, to ensure components and systems meet specifications according to the theoretical models they're utilizing - that's quite reasonable. But such tests aren't really 'rigorous' nor are they entirely empirical, not with regards to actual flight. In the end, it comes down to faith in theory.

You are splitting hairs. The designs were tested even if each individual rocket could not be tested in its entirety. If they could do such for manned missions without damaging them, they would.

They'll test what they can so long as it is cheap enough. Calling it 'rigorous' when they aren't even testing whether it can fly? That's unreasonable. They don't even 'rigorously' test whether one design decision will be better in the overall context than another; they rely on the theory. You're the person who keeps bellowing that "the rocket must fly!" as the test before it is acceptable, but the reality is that even NASA's tests are: according to our theory, the rocket should fly if these components are working in accordance with specifications in the designs we engineered based upon yet more theory... so, test the components that won't be destroyed by their testing, or that can be 'cheaply' duplicated relative to our budget. I never claimed theory avoids need for testing (in fact, I said the opposite) - but it does eliminate much need for 'rigorous' testing because one (ideally) embeds said 'rigor' in constructing and testing the theory itself.

I think we have a definitional conflict over "theory" here. As a working definition in this context, I won't consider a simulation that is intended to mirror reality as closely as possible to be part of the "theory" side. Simulations are a "reality cheat-sheet". By making this a working assumption, hopefully we can avoid a LaynesLaw stoppage. Theoretical models may in some ways represent reality, but usually they attempt to abstract it down to one or a few BigIdeas. --top

If you're ignorant as to what "theory" means, then get a damn education; basically, you're telling me that you haven't a damn clue what you've been talking about the whole damn time you've been flapping your mouth - really, given your history, I shouldn't be surprised. Anyhow, what would you consider a simulation that abstracts out most of reality, focusing on abstract concepts like 'forces' and 'mass' and 'velocity' and 'energy' and a model of relationships between them? Simulations and theory can't be separated. By their very essence, simulations are applied theory, and application of a theory to a model of initial conditions is a simulation: trying to separate them is like trying to separate the wetness from water.

I refuse to reply after such blatant rudeness. Please clean it up if you wish me to continue.

Really? Is that a promise? In that case, I'm leaving it here. WikiWiki - a world of words - doesn't need people who flap their hands and make noises with barely any comprehension of what the words they're speaking actually mean. WikiWiki really doesn't need TopMind.

Don't get too happy, the scope was meant for the above and only the above, you anti-empirical wingnut.

It seems that empiricism works best (and can only be replaced by theory at a very high cost) if the subject under consideration is:
• real (physical)
• available in large numbers/quantities
• subject to changes (can be experimented with)

On the other hand, the more of the above that are missing, the more theory plays to its advantages. Examples:
• astronomy (can't mess with the stars)
• rockets (no large numbers)
• security (not physical)

MayZeroEight

EditText of this page (last edited June 2, 2008) or FindPage with title or text search