- Justin Kruger and David Dunning
- Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.
The part of this paper that reports overestimation and underestimation does not seem to me to rest on a firm foundation. It reads as though people simply assess themselves poorly overall and compress their estimates into a 60-to-70-percent band regardless of skill (the authors call this the "regression effect" and say they don't "believe" it would explain all the skew).
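That "everyone rates themselves 60-70%" reading can be illustrated with a quick simulation. The sketch below is illustrative Python only; all numbers are assumptions, not data from the paper. It models participants whose true percentiles are spread uniformly but whose self-estimates all land in a 60-to-70 band, and shows that this alone reproduces the headline pattern: the bottom quartile grossly overestimates while the top quartile underestimates.

```python
import random

random.seed(42)

# Toy model (assumption, not the paper's data): true percentile is
# uniform over 0-100, but everyone estimates themselves in the
# 60th-70th percentile band regardless of actual skill.
n = 1000
participants = [(random.uniform(0, 100), random.uniform(60, 70))
                for _ in range(n)]
participants.sort()  # sort by true percentile

def mean(xs):
    return sum(xs) / len(xs)

bottom = participants[: n // 4]      # bottom quartile by true score
top = participants[3 * n // 4:]      # top quartile by true score

bottom_err = mean([est - true for true, est in bottom])
top_err = mean([est - true for true, est in top])

# Bottom quartile hugely overestimates; top quartile underestimates,
# purely because the estimation scale is compressed.
print(f"bottom-quartile error: {bottom_err:+.1f} percentile points")
print(f"top-quartile error:    {top_err:+.1f} percentile points")
```

Under this toy model the bottom quartile's average error comes out around +50 points and the top quartile's around -20, echoing the paper's "12th percentile thinks it's the 62nd" finding without invoking any skill-dependent metacognition.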
The interesting thing is that the people in the second-best quartile have the most accurate initial self-assessment (because their actual competence falls within that 60-to-70-percent band). Even more interestingly, they are the ones who trust their abilities least.
However, the hypothesis that the top quartile's self-underestimation is due to ignorance of the competence level of others seems to have good grounds. Also, the part of the study that experiments with the interconnectedness of learning subject matter and learning subject-oriented metacognition gives firm support to their hypothesis. Sadly, this last part of the study concerns such a small number of test subjects (N=19) in such a constrained domain (unlearning a certain logical misreasoning typical of humans) that the results cannot be generalized.
Moreover, the study does not provide a hint of what should, or could, be done about poor metacognitive skills - except for the nice notion that by teaching people things, you simultaneously teach them to assess their own ability.
It would be an interesting study to compare the achievement levels of the same people, before and after their metacognitive competence increased. Having recognized their limitations, would they try things less and fail less, or would they succeed more often by trying less challenging tasks?
Or would they just not try anything at all?
What part does confidence play in success and what part does self-deprecation play in failure?
Do those who are skilled and know it have any duty to help those who are not skilled and do not know it?
Or is it enough for them to be as they are and to leave those who are not where they are as they are?
Moved from UnconsciousIncompetence (the study being discussed is the same one reported in this paper):
This reminds me of an interview I heard on NPR with a researcher who had done a study which showed that incompetent people lacked the skills to realize they were incompetent, thus compounding their problem.
This is hardly a surprise - incompetent people who recognize it generally do not stay incompetent. It's like pointing out that poor people tend to live in poor neighborhoods.
No, that's not quite it. The point is that if someone's incompetence at some task goes beyond a certain level, they will most likely not even be able to tell how bad they are, since they lack even the most rudimentary notions of what a good performance is. The study expresses this by saying that such people "lack metacognitive skills" in the relevant domain. This is more interesting than simply saying "stupid people are stupid". Your formulation implies that it's down to whether people "recognize their incompetence", which implicitly makes it their responsibility to do so. The study makes the point that it's actually useless to expect people with extremely poor skills in an area to improve off their own bat - they need to be taught the "metacognitive" part so they are at least capable of understanding what competence means.
The study revolved around a test that the study group was asked to take. [There were actually tests in four different areas.]
Before taking the test, they were asked to estimate how well they would do. After the test, they received their scores, and were told they would take another similar test. Prior to the second test they were asked to give another estimate of how they would do. The results were as follows:
- Virtually all study members overestimated their performance on the first test.
- Actually, the top quartile underestimated their performance:
- Those who scored highest on the first test downgraded their estimates for the second test.
- No, the top quartile (for study 4 where this was tested) upgraded their estimates, which made them more accurate.
- See the rightmost column:
- Those who scored lowest on the first test did not significantly alter their estimate for the second test.
Quite interesting, no?
For more detail, see:
For those interested in StatisticalProcessControl, the choice not to alter estimates is considered the correct one. There is simply not enough available evidence, especially from a single data point, to justify altering a prediction. There is a body of theory stating that altering one's prediction about the future based on the immediate past leads to greater inaccuracy.
Sure, but does a "test" constitute "one data point"? If all the questions are very difficult, it seems reasonable to assume the next test will also be difficult. Even if the test consisted of only one question, the person has to know that "question difficulty" is random and that the difficulty of the "total population of potential questions" is such that the next question might be very simple.
If, instead of assuming competent and incompetent participants, one assumes all participants are of roughly equal ability, then the variation of results is explained merely by chance. On any test, 25% of the participants will be in the bottom quartile and 25% will be in the top quartile. There is no predictive evidence that scoring in the bottom quartile on the first test leads to any conclusion except that there is a 25% likelihood of scoring in the bottom quartile on the second test.
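The equal-ability hypothesis above amounts to regression to the mean, which is easy to sketch. The following is illustrative Python; the ability and noise parameters are assumptions, not values from the paper. If every participant has the same true ability plus random noise, the bottom quartile on test 1 looks far below average on that test, yet scores near average on test 2, and has only about a 25% chance of landing in the bottom quartile again.

```python
import random

random.seed(1)

# Null model (assumed parameters): identical true ability for everyone,
# plus independent Gaussian noise on each test.
n = 10_000
ability, noise = 50.0, 15.0
test1 = [ability + random.gauss(0, noise) for _ in range(n)]
test2 = [ability + random.gauss(0, noise) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

ranked1 = sorted(range(n), key=lambda i: test1[i])
bottom1 = ranked1[: n // 4]                          # bottom quartile, test 1
bottom2 = set(sorted(range(n), key=lambda i: test2[i])[: n // 4])

print(mean([test1[i] for i in bottom1]))             # well below 50
print(mean([test2[i] for i in bottom1]))             # regresses back toward 50
stay = sum(i in bottom2 for i in bottom1) / len(bottom1)
print(stay)                                          # roughly 0.25
```

Of course, whether this null model fits is an empirical question; a strong test-retest correlation in the actual data would argue against it.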
Nonsense. Just look at Table 2 (above); it clearly shows that there is a very strong correlation between performance on the 'before' and 'after' tests.
The experiment(s) described in the paper are flawed in that they never established the existence of the subpopulations. This seemed to dawn on the experimenters, as reflected in the conclusions to the first (humor) experiment, but it is never really addressed in the design of the follow-on experiments. In an experiment such as this, the burden of proof is on the experimenters to show that the subpopulations exist. Without evidence that the subpopulations exist, one cannot make any inferences about how the subpopulations act. One cannot determine a subpopulation from a single measurement, and certainly not by drawing lines on graph paper at 25% and 75%.
Michelangelo said regarding the Sistine Chapel ceiling - it originally was just blue with stars - "the place is wrong, and no painter am I".
I wonder what would be there now if the place had been right and he had been a painter.
On the ceiling? With a paint brush?
Damn! Do you have to complicate all my jokes? I need a chocolate. Then read UselessTruth.
Re: "the authors found that participants scoring in the bottom quartile on tests of humor..."
Humor exams? Now that I gotta see.
Go ahead. That part of the study is IMNSHO completely flawed.
I think this is part of a bigger issue: HumansAreLousyAtSelfEvaluation
This completely explains the horribly amusing performers on Pop Idol (American Idol), X-Factor and those other competitive reality shows.
Same goes for David Gonterman, who styles himself a Nobel Prize winner despite writing terribly crappy fanfiction, as well as Uwe Boll, that German director who thinks he is elevating videogames to a new form of cinematic art, when in reality he's just the pawn of a bunch of corrupt German producers exploiting a legal loophole that makes it more profitable to make lousy films than good ones. -- DaNuke?
A study suggests that over-confidence can give one an advantage in certain environments:
Humanity Risk Equivalency Paradox
(for lack of a better name)
For the sake of argument, suppose I was UnskilledAndUnawareOfIt (as my more vocal detractors claim). If I can fall under the spell of such a human failing, then why would you, the claimer, be immune? How do you know that YOU are not the self-deceived one? If I can run a "self-diagnostic" and find nothing materially wrong even though I was outright wrong in reality, then why couldn't this affliction also happen to you?
It can. For that reason, I constantly reflect on my own cognitive processes and compare, contrast, and sanity-check my views and conclusions with my experience and that of my colleagues and other sources. Do you do the same?
- It is quite telling that you didn't simply answer "yes". Pseudo-affirmatives like "yip" and "yep" frequently disguise uncertainty or dishonesty.
- Now you've appointed yourself an expert in psychology as well. Interesting, coming from somebody who rejects WetWare in software design.
- This requires no expertise in psychology. It's a recognised truth about evasive behaviour, familiar to anyone involved in management and/or negotiations.
- I'm also familiar with projection.
- Can you show how it applies here? Are you sure that in claiming "projection", you're not projecting?
- I too "constantly reflect on my own cognitive processes and compare, contrast, and sanity-check my views and conclusions with my experience and that of my colleagues and other sources". That's one reason I'm here. Show me material and common objective wrongness on my part and I'll flunk myself and go away. You have my word. -- top
- Almost everything you write about types is contrary to recognised definitions or demonstrates profound lack of understanding of basic ComputerScience. ObjectiveEvidenceAgainstTopDiscussion is quite definitive, too.
- I doubt there is a single "recognized definition". You pull phoney standards bodies out of your ass. It's happened with "module" and "algorithm" also. You make shit up out of the blue. See a shrink. ObjectiveEvidenceAgainstTopDiscussion has nothing objective on me beyond minor typos. It's just gripes from your fellow Vaguites after I attacked their sacred cows. It's a misnomer. Kill me with objective logic, not claims. Any bastard with a keyboard can claim shit about people. Authoritative claims are cheap. The true smarties use their knowledge to objectively trap their victim with something objective. You cannot do ItemizedClearLogic on your claims because your brain is not as powerful as you think it is. Your ego writes checks your objectivity abilities cannot cover, bouncing your ass all over FuzzVille. Your brain memorizes trivia well, but cannot transform it into empirical results. Admit you have a weakness.
- Your reaction speaks for itself.
- I used to get a lot of guff from OOP-everywhere fanatics. They called me everything in the book, even worse than you. I thought maybe somehow I was missing something, not seeing reality properly, that my head was broken. However, I published an email address, and what you might call "fan mail" soon started pouring in, thanking me for fighting a necessary fight that they didn't themselves feel comfortable fighting. An IT author even told me about a book critical of OOP he had started writing, but stopped after publishing an article on the topic and receiving rude hate-mail calling him a caveman, fogie, Luddite, etc. In the end, OOP has faded from the OneTrueWay into one tool among many. I will continue to attack sacred cows. Live with it - t
- The so-called "fan mail" you've received is unquestionably far, far, far less than all the people who happily have used -- and continue to happily use -- OOP and FP and HOFs and all the things you rail against. You're not attacking "sacred cows", because genuine intellectual attacks come from application of deep understanding and logic. If you were genuinely attacking an OOP/FP/HOF/thin-table "sacred cow", you would be doing so by demonstrating their inconsistencies, inadequacies and limitations with potent knowledge-based logic and irresistible alternatives. Instead, your "attacks" consist mostly of whingeing about how we don't give you enough code samples, simple enough definitions, or convincing enough (to you, and you alone) arguments in favour of using what we use (successfully!) instead of reverting to the hypothetical marginal-developer-friendly set of beginner-level procedural tools that you appear to prefer.
- I'm not against some usage of most of those. You are putting words in my mouth. Most of my complaints come when somebody insists "X is objectively/clearly better" without demonstrating it. It's not my burden to prove the opposite; they made the claim. And in some cases I do have objective information about these, such as OnceAndOnlyOnce violations of columns or interface signatures.
- Are there any for which you oppose any use of them?
- I believe that OOP and HOFs overlap so much that HOFs are a redundant feature for most apps, unless forced on a shop by certain APIs. HOFs tend to "confuse the staff". We've been over this already.
- Is your goal to stop the use of HOFs?
- Only where I see it being over-evangelized, like OOP used to be.
- So your intention is counter-evangelism? How is that different from what you perceived to be "over-evangelism" of OOP?
- It's not counter-evangelism. If somebody makes big claims with small evidence, I'll point out that their evidence is small. [Reworded unpleasant language after calming down. -t]
- Do you think that's funny? This conversation is over.
I can think of some career-based motivations for why some WikiZens view the nature of IT evidence the way they do. It thus doesn't have to be caused by "being crazy" in the nut-house sense, but perhaps by protecting one's "career ego" from the ugly realities of the marketplace. There does seem to be a "career pattern" among my detractors that leads me to certain speculation along those lines. I could go into more detail, but it would be seen as offensive, and I'm not in the mood (yet) to launch such specific missiles here. Ego is more powerful than insanity when it comes to UnskilledAndUnawareOfIt, in my observation. (I don't deny that my ego may also be lying to me, but I cannot consciously detect it so far.)
Perhaps we can identify potential causes of UAUO: -- top
- Broken brain (such as failure to recognize logical inconsistencies even when made clear)
- Hating to lose an argument, such that one refuses to back down.
- A given viewpoint increases the real or perceived value of one's career and/or works. (for example, you are good at X, so you want to make everything based around X to favor your existing skill/strength.)
- Screwing with people for the entertainment value of seeing them frustrated (but not aware of this pleasure bias).
- Obsession with a narrow or specific principle above all else, because one's brain fixates on a particular concept due to some psychological flaw.
See also: UnconsciousIncompetence