Unskilled And Unaware Of It

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

The part of this paper that reports overestimation and underestimation does not seem to me to have a firm foundation. It reads as though people assess themselves poorly across the board and simply place themselves on a scale of 60 to 70 percent regardless of actual performance (this they call the "regression effect" and say they don't "believe" it would explain all of the skew).
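
If I read that "regression effect" account right, it can be sketched with toy numbers (mine, not the paper's data): suppose self-estimates are anchored in the 60-to-70-percent band no matter what the actual score is. The bottom quartile then looks wildly overconfident and the top quartile looks modest, which is exactly the pattern reported in the abstract:

```python
# Toy illustration (hypothetical numbers, NOT the paper's data): if
# self-estimates are anchored near 65% regardless of actual score, low
# scorers look overconfident and high scorers look underconfident.
actual    = [10, 35, 60, 85]   # made-up percentile scores, one per quartile
estimated = [62, 64, 66, 68]   # anchored self-estimates, all in the 60-70 band

for a, e in zip(actual, estimated):
    print(f"scored {a:2d}th percentile, guessed {e}th -> miscalibration {e - a:+d}")
```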

The interesting thing is that the people in the second-best quartile have the most accurate self-assessment initially (because their actual competence falls within that 60-to-70-percent band) - but even more interestingly, they are the ones who trust their abilities least.

However, the hypothesis that the self-underestimation of the most skilled quartile is due to ignorance of the competence level of others seems well grounded. Also, the part of the study that experiments with the interconnectedness of learning a subject and learning subject-specific metacognition gives firm support to their hypothesis. Sadly, this last part of the study involves so few test subjects (N=19) in such a constrained domain (unlearning a certain logical misreasoning typical of humans) that its results cannot be generalized.

Moreover, the study does not provide a hint of what should, or could, be done about poor metacognitive skills - except for the nice notion that by teaching people things, you simultaneously teach them to assess their own ability.

It would be an interesting study to compare the achievement levels of the same people, before and after their metacognitive competence increased. Having recognized their limitations, would they try things less and fail less, or would they succeed more often by trying less challenging tasks?

Or would they just not try anything at all? What part does confidence play in success and what part does self-deprecation play in failure? Do those who are skilled and know it have any duty to help those who are not skilled and do not know it? Or is it enough for them to be as they are, and to leave those who are not skilled where they are?

Moved from UnconsciousIncompetence (the study being discussed is the same one reported in this paper):

This reminds me of an interview I heard on NPR with a researcher who had done a study which showed that incompetent people lacked the skills to realize they were incompetent, thus compounding their problem.

This is hardly a surprise - incompetent people who recognize it generally do not stay incompetent. It's like pointing out that poor people tend to live in poor neighborhoods.

No, that's not quite it. The point is that if someone's incompetence at some task goes beyond a certain level, they will most likely not even be able to tell how bad they are, since they lack even the most rudimentary notions of what a good performance is. The study expresses this by saying that such people "lack metacognitive skills" in the relevant domain. This is more interesting than simply saying "stupid people are stupid". Your formulation implies that it's down to whether people "recognize their incompetence", which implicitly makes it their responsibility to do so. The study makes the point that it's actually useless expecting people with extremely poor skills in an area to improve off their own bat - they need to be taught the "metacognitive" part so they are at least capable of understanding what competence means.

The study revolved around a test that the study group was asked to take. [There were actually tests in four different areas.] Before taking the test, they were asked to estimate how well they would do. After the test, they received their scores, and were told they would take another similar test. Prior to the second test they were asked to give another estimate of how they would do. The results were as follows:

  1. Virtually all study members overestimated their performance on the first test.
    • Actually, the top quartile underestimated their performance.
  2. Those who scored highest on the first test downgraded their estimates for the second test.
    • No, the top quartile (for study 4 where this was tested) upgraded their estimates, which made them more accurate.
    • See the rightmost column.
  3. Those who scored lowest on the first test did not significantly alter their estimate for the second test.
    • Right.

Quite interesting, no?

For those interested in StatisticalProcessControl, the choice not to alter estimates is considered the correct one. There is simply not enough available evidence, especially from a single data point, to alter a prediction. There is a body of theory that states that altering one's prediction about the future based on the immediate past leads to greater inaccuracy.
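
That SPC claim can be illustrated with a minimal simulation (the model - a fixed true mean plus independent noise - is my assumption, not the paper's): predicting "same as my last score" roughly doubles the mean squared error compared with sticking to the long-run mean.

```python
import random

random.seed(0)

# Assumed model (a sketch, not from the paper): each score is a fixed mean
# plus independent Gaussian noise. Chasing the last observation doubles the
# mean squared prediction error versus predicting the long-run mean.
mu, sigma, T = 70.0, 10.0, 100_000
scores = [random.gauss(mu, sigma) for _ in range(T)]

mse_mean = sum((s - mu) ** 2 for s in scores) / T
mse_last = sum((scores[t] - scores[t - 1]) ** 2 for t in range(1, T)) / (T - 1)
print(f"predict the mean:     MSE ~ {mse_mean:.0f}")   # about sigma^2 = 100
print(f"chase the last score: MSE ~ {mse_last:.0f}")   # about 2*sigma^2 = 200
```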

Sure, but does a "test" constitute "one data point"? If all the questions are very difficult, it seems reasonable to assume the next test will also be difficult. Even if the test consisted of only one question, the person would have to know that "question difficulty" is random and that the difficulty of the "total population of potential questions" is such that the next question might be very simple.

If, instead of assuming competent and incompetent participants, one assumes all participants are of roughly equal ability, then the variation of results is explained merely by chance. On any test, 25% of the participants will be in the bottom quartile and 25% will be in the top quartile. There is no predictive evidence that scoring in the bottom quartile on the first test leads to any conclusion except that there is a 25% likelihood of scoring in the bottom quartile on the second test.
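
That null model is easy to simulate (a sketch with made-up uniform scores, assuming zero ability differences - my assumption for the sake of the argument above): under pure chance, about 25% of the test-1 bottom quartile land in the bottom quartile again on test 2.

```python
import random

random.seed(1)

# Null-model sketch (assumed, not the paper's data): every participant has
# the SAME true ability, so each score is pure noise. Then quartile
# membership on test 1 predicts nothing: about 25% of the test-1 bottom
# quartile land in the bottom quartile again on test 2.
N = 100_000
test1 = [random.random() for _ in range(N)]
test2 = [random.random() for _ in range(N)]

cut1 = sorted(test1)[N // 4]   # bottom-quartile cutoff for test 1
cut2 = sorted(test2)[N // 4]   # bottom-quartile cutoff for test 2

bottom1 = [i for i in range(N) if test1[i] < cut1]
stay = sum(1 for i in bottom1 if test2[i] < cut2) / len(bottom1)
print(f"P(bottom quartile again | bottom quartile on test 1) = {stay:.3f}")
```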

Nonsense. Just look at Table 2 (above); it clearly shows that there is a very strong correlation between performance on the 'before' and 'after' tests.

The experiment(s) described in the paper are flawed in that they never established the existence of the subpopulations. This seemed to dawn on the experimenters, as reflected in the conclusions to the first (humor) experiment, but is never really addressed in the design of the follow-on experiments. In an experiment such as this, the burden of proof is on the experimenters to show that the subpopulations exist. Without evidence that the subpopulations exist, one cannot make any inferences about how they act. One cannot determine a subpopulation from a single measurement, and certainly not by drawing lines on graph paper at 25% and 75%.

Michelangelo said regarding the Sistine Chapel ceiling - it originally was just blue with stars - "the place is wrong, and no painter am I"

I wonder what would be there now if the place had been right and he had been a painter.

Probably sculptures.

On the ceiling? With a paint brush?

Damn! Do you have to complicate all my jokes? I need a chocolate. Then read UselessTruth.

Re: "the authors found that participants scoring in the bottom quartile on tests of humor..."

Humor exams? Now that I gotta see.

Go ahead. That part of the study is IMNSHO completely flawed.

I think this is part of a bigger issue: HumansAreLousyAtSelfEvaluation.
This completely explains the horribly amusing performers on Pop Idol (American Idol), X-Factor and those other competitive reality shows.

Same goes for David Gonterman, who styles himself a Nobel Prize winner despite writing terribly crappy fanfiction, as well as Uwe Boll, the German director who thinks he is elevating videogames to a new form of cinematic art, when in reality he's just the pawn of a bunch of corrupt German producers exploiting a legal loophole that makes it more profitable to make lousy films than good ones. -- DaNuke?
A study suggests that over-confidence can give one an advantage in certain environments.

Humanity Risk Equivalency Paradox (for lack of a better name)

For the sake of argument, suppose I was UnskilledAndUnawareOfIt (as my more verbal detractors claim). If I can fall under the spell of such human failure, then why would you, the claimer, be immune? How do you know that YOU are not the self-deceived one? If I can run a "self-diagnostic" and not find anything materially wrong even though I was outright wrong in reality, then why couldn't this affliction also happen to you?

It can. For that reason, I constantly reflect on my own cognitive processes and compare, contrast, and sanity-check my views and conclusions with my experience and that of my colleagues and other sources. Do you do the same?

I can think of some career-based motivations for why some WikiZens view the nature of IT evidence the way they do. It thus doesn't have to be caused by "being crazy" in the nut-house sense, but perhaps by protecting one's "career-ego" from the ugly realities of the marketplace. There does seem to be a "career pattern" among my detractors that leads me to certain speculation along these lines. I could go into more detail, but it would be seen as offensive, and I'm not in the mood (yet) to launch such specific missiles here. Ego is more powerful than insanity when it comes to UnskilledAndUnawareOfIt, in my observation. (I don't deny that my ego may also be lying to me, but I cannot consciously detect it so far.)

Perhaps we can identify potential causes of UAUO: -- top

See also: UnconsciousIncompetence, HumansAreLousyAtSelfEvaluation, BecomingCompetent
