SelfSealingBeliefs are those where you're erroneously convinced that some desirable result is caused by taking some particular kind of action. Subsequent failure of the desirable result to occur is not taken as disconfirming evidence that you're wrong to be convinced that way, but is instead taken as evidence that more of that action is needed.
Performance-related pay may be one of these. It seems obvious that more pay for better performance will result in better performance. Studies, however, have shown little or no effect other than an initial jump.
So the sequence might go like this:
- A company introduces PerformanceRelatedPay as a proportion of compensation
- Initially, performance increases
- Then performance declines back to something close to the norm, as the studies mentioned above would predict.
- "Clearly", the problem is not enough PerformanceRelatedPay. Make it a bigger proportion of compensation immediately...
Of course, it could be that more PerformanceRelatedPay would work, but you can't prove it from the above.
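The "initial jump, then decline to the norm" pattern can be sketched with a toy simulation (all numbers here are invented for illustration, not taken from any study): if an intervention produces only a short-lived novelty bump, measuring just the first few weeks makes it look like a clear win, while a full year of data shows essentially no lasting change.

```python
import random

random.seed(42)

# Toy model (invented numbers): weekly performance is a noisy baseline,
# plus a temporary "novelty bump" for a few weeks after any intervention.
def simulate_weeks(n_weeks, bump_weeks=0, bump_size=0.0):
    return [10 + random.gauss(0, 1) + (bump_size if w < bump_weeks else 0.0)
            for w in range(n_weeks)]

def mean(xs):
    return sum(xs) / len(xs)

before = simulate_weeks(52)                               # year before the change
after = simulate_weeks(52, bump_weeks=4, bump_size=3.0)   # year after the change

print(f"first 4 weeks after change: {mean(after[:4]):.1f}")  # looks like a big win
print(f"full year after change:     {mean(after):.1f}")      # back near baseline
print(f"full year before change:    {mean(before):.1f}")
```

The point of the sketch is only that the *same* data supports opposite readings depending on the measurement window, which is exactly the opening a SelfSealingBelief needs.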
I've been told that, far too often, this sequence of events occurred (is this an OverSimplifiedHistory?):
- A physician drains a little blood from his patient.
- The patient doesn't get better.
- "Clearly", not enough blood was drained. Repeat until the patient *does* get better.
The PerformanceRelatedPay example above seems like a clear instance of the HawthorneEffect.
If workers know you're measuring their ability to do "X", they'll do more "X" -- for a short while.
But if the improvement lasts for more than a few weeks, then it's not the HawthorneEffect.
Possibly not. If it were a pure Hawthorne effect, any change to pay would produce a temporary improvement in performance. Try cutting people's pay and see if that has a similar effect?
This event actually happened at one of my employers. And yes, it does have a similar effect, although there are some other different variables, so the comparison is probably invalid. The company claimed it had to reduce wage expenses by 15%: either a 15% pay cut for each employee, or 15% of employees would be laid off. People suddenly started trying to get themselves involved in as much work as possible, so that they would appear less expendable. A few weeks later, of course, everything returned to normal...
"The Hawthorne defect: Persistence of a flawed theory", an article by Berkeley Rice:
"... results conflict with, or at least fail to support, the notion of the Hawthorne effect ... subsequent research has failed to duplicate the supposed Hawthorne effect."
So, is belief in the Hawthorne effect itself a SelfSealingBelief?
The story above would imply not.
The story above says everyone in the company believed there was a high chance 15% of employees would be laid off, so everyone did things to *appear* less expendable. Was their performance measured? Did it actually increase? I'm guessing it did *not* increase -- people just spent more time doing highly visible work and less time doing work that was less visible. So there still is no evidence for the HawthorneEffect.
Thanks. Now for the hard parts:
- How can one determine whether one is seeing a real effect or just a SelfSealingBelief?
''Vary other things. Instead of PerformanceRelatedPay, try improving working conditions, or just talking to people, or team-based rewards. If you still get improved performance, you might be wrong (or, if you were right all along, you now have more tools in your toolbox); if you don't, that's suggestive that you were right. Look for evidence from other situations where your favored action didn't give the improved performance, and see if you can see why (this is what happened with PerformanceRelatedPay).
Look at your deductions to check you've not made unwarranted assumptions (perhaps the action hadn't actually changed even though the performance had; perhaps other things changed as well as the action).
Consider whether you've been objective in your search for causes. Have you gone looking for evidence to fit your theory, or have you dispassionately (as much as that is really possible) looked for all factors that could have led to the changed performance?
Bring in an outside point of view. Teams tend to have a shared map of the world (indeed they need one to work effectively as a team). All such maps simplify the world by ignoring facets of it, or by making simplifying assumptions (and again they have to - the world's too complicated). The cause of a SelfSealingBelief is often an erroneous assumption ("People are primarily motivated by their level of compensation", say), but it's difficult to step outside and see your own map. Outsiders bring different maps and different perspectives.''
- How can one best re-open a discussion with someone who has labelled one's hard, cold, obvious conclusions as SelfSealingBeliefs?
''In my opinion and experience, very few conclusions involving people are cold, hard and obvious. They're also often very context dependent.
If the "someone" has labelled the conclusions as possibly a SelfSealingBelief, they presumably had their reasons. So perhaps they aren't as hard, cold and obvious as you thought - at least as viewed by the other person.
Try looking at the evidence from that point of view - can you see why they've labelled it so? Do they have all the same evidence as you? If you have more, perhaps you can provide it. If they have more than you, ask them to supply it.
If you can see why they've labelled it so, but you still don't agree, then you're into the answer to the first question: either to demonstrate to them that what resembles a SelfSealingBelief isn't one, or maybe to realize they have a point, or perhaps to explore ways of getting more evidence either way.''
(Sarcasm intended, but question serious.) -- rj
Answers serious too. I managed to avoid sarcasm, at least intentional sarcasm, difficult as that is to someone British :-)
related to SelfFulfillingProphecy