From our big sister WikiPedia: "commonsense describes beliefs or propositions that seem, to most people, to be prudent and of sound judgment, without dependence upon esoteric knowledge". The trouble is there are extremely few beliefs or propositions that "most people" can agree upon. How many wars have been fought over which end of the egg to open first?
But it is commonsense that we have common sense.
I disagree. See?
Would you prefer we rename it RareSense?
I might even say renaming it would be prudent and of sound judgment, but I bet not too many others would ...
MrManners? reminds the Gentle WikiPedia Zealot that dictionary authors expect circular definitions...
On the other hand, "common sense" in the AI world means (at least at times) truly obvious and universal conclusions, such as: if there is one ball in the box and I remove it, then there are no balls in the box. AI systems do not know this sort of thing without being explicitly programmed to make the inference, and the full set of axioms and inference rules needed to do so is still unknown. That is what is meant in saying that AIs lack the "common sense" that all humans possess.
The boundary between that sort of low-level common sense and the kinds of high-level common sense called illusory here is very badly defined, since neither kind is well defined to start with, let alone where one ends and the other begins.
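The ball-in-the-box point above can be sketched in a few lines. This is a toy illustration of my own (the fact names and the "add/delete" action style are assumptions, loosely in the spirit of STRIPS-style planners, not any particular AI system): the program only "knows" the box is empty because the effect of the action was spelled out, and every untouched fact persists only by blanket assumption.

```python
# A minimal sketch of why "one ball removed -> box empty" must be
# programmed explicitly. The world state is a set of facts; an action
# says exactly which facts it adds or deletes.

def remove_ball(state):
    """Effect axiom: removing the ball deletes 'ball_in_box'.
    Everything NOT mentioned here is assumed unchanged -- that blanket
    assumption is the frame axiom a human never has to state."""
    return state - {"ball_in_box"}

state = {"ball_in_box", "box_on_table"}
state = remove_ball(state)

print("ball_in_box" in state)   # False: no balls left in the box
print("box_on_table" in state)  # True: persists only because the
                                # frame assumption says untouched facts persist
```

A human draws both conclusions instantly and effortlessly; the program draws them only because someone enumerated the effects and adopted the persistence convention.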
Comments moved from FrameProblem, where they were off-topic:
Pinker said: "Only when artificial intelligence researchers tried to duplicate common sense in computers, the ultimate blank slate, did the conundrum, now called "the frame problem," come to light. Yet somehow we all solve the frame problem whenever we use our common sense."
Pinker was rattling along there real good until that last sentence, when he demonstrated a FrameProblem of his own. CommonSense told us the earth was flat and the sun rotated around it on a crystal sphere until just recently. It told us we couldn't fly to the moon, that heart transplants would endanger our immortal souls, and so many other kinds of tomfoolery that I'd get very tired typing just a slightly representative list. I guarantee that 80% of what we put down today to CommonSense is neither common to others nor ultimately sensible, given hindsight. History demonstrates that humans without FrameProblems don't exist. --Pete
Your terrific insight that human common sense is highly fallible is a great contribution to understanding AI better (the context of the Frame Problem). Now that you've figured that out, surely strong AI is just around the corner.
Actually I rather think it is.
Speaking of common sense, you could just as easily have said that Pinker left out the critical word "successfully", and if you corrected his final sentence by adding that single accidentally elided word, your critique would evaporate:
- "Yet somehow we all solve the frame problem whenever we [successfully] use our common sense."
I like the little of Pinker I've read, but I think he chooses his language fairly carefully, and does not accidentally elide words.
The problem is actually not with Pinker's language nor his concepts, but that he is explaining a point about what the FrameProblem means, whereas your point is to exercise some (possibly deserved) cynicism about humans.
To me, the fact that common sense is often wrong about things -not- of short-term significant consequence (such as the flatness or roundness of the world in most circumstances) is not necessarily a frame-problem failure. For instance, now that we know that Newtonian mechanics is wrong, but is a fine approximation of relativity at slow speeds, we still use Newtonian calculations for most purposes because that's good enough. Likewise, the fact that the Earth is (approximately) a very large sphere, not a plane, is not significant enough to matter when I plough a "rectangular" garden plot.
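The "good enough" claim about Newtonian mechanics can be made numeric. As a quick sketch (the 30 m/s "highway speed" figure is my own illustrative choice), the relativistic correction Newtonian mechanics omits is the Lorentz factor, and at everyday speeds it differs from 1 by parts in a quadrillion:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2): the relativistic
    correction that Newtonian mechanics omits."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At roughly highway speed (30 m/s), gamma - 1 is about 5e-15 --
# far below anything measurable while ploughing a garden plot.
print(f"{gamma(30.0) - 1.0:.2e}")
```

At that scale the "wrong" theory and the "right" one are experimentally indistinguishable, which is exactly why using the approximation is sensible rather than a frame-problem failure.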
CommonSenseIsAnIllusion. We don't use Newtonian calculations for most purposes; well, you might but I use Zhegalkin polynomials on GBT for most (metric) purposes. And my garden plots are hexagonal, not rectangular. And I don't plough a garden plot - I throw seeds/seedlings on top and mulch 'em with wet newspaper and dynamic lifter. RareSense!
Non-AI material moved here from FrameProblem:
And the problem soon broadened to an epistemological domain. For example, engineering students are asked to construct a way for a monkey to climb up to reach a banana. The banana is suspended high in the air by a string looped through an eye bolt on the ceiling and tied off at the wall. Various odd sized pieces of wood are provided. When the students are done a monkey is to be introduced to the room.
After some hours the students have assembled an elaborate and carefully balanced cantilever of wood. All stand back to watch as the monkey is introduced. The monkey looks at the cantilever, the string, and the banana, walks to the woodpile, selects an odd block of wood, takes it to the wall, stands on it, reaches up and unties the string.
We laugh because the students have been affected by a FrameProblem - they treated the string as part of the frame, and therefore not consequential.
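The monkey-and-banana story can be recast as a toy planning problem. This sketch is my own illustration (the action names and the fact encoding are assumptions, not from the anecdote): a breadth-first planner whose action set omits "untie the string" - because the string is assumed part of the fixed frame - is forced into the tower-building plan, while a planner that includes it finds the monkey's one-step solution.

```python
from collections import deque

# Each action maps a state (frozenset of facts) to a new state.
ACTIONS = {
    "stack wood":   lambda s: s | {"tower_built"},
    "climb tower":  lambda s: s | {"has_banana"} if "tower_built" in s else s,
    "untie string": lambda s: s | {"has_banana"},
}

def plan(start, goal, actions):
    """Breadth-first search for the shortest action sequence reaching the goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, act in actions.items():
            nxt = frozenset(act(state))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [name]))
    return None

start, goal = frozenset(), {"has_banana"}

# The students' planner leaves "untie string" out of its action set
# (the string is in the frame), so it must build the tower.
students = {k: v for k, v in ACTIONS.items() if k != "untie string"}
print(plan(start, goal, students))  # ['stack wood', 'climb tower']

# The monkey's planner considers the string, and finds the shortcut.
print(plan(start, goal, ACTIONS))   # ['untie string']
```

The point is not the search algorithm but the action set: what you leave out of it is precisely what you have declared to be frame.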
Blue Angels Analogy
As another example, several years ago the entire Blue Angels airshow team, seven airplanes, all smashed straight into the ground at the same time. The inquiry revealed no contact between them or mechanical fault. Apparently what happened was that, because it's very hard to watch the horizon as well as the other planes when performing stunts, the group worked by having just the lead pilot watch the horizon - all the rest simply maintained their orientation with respect to him. So when he lost the horizon they all augered in after him.
The worst Blue Angels crash ever recorded had two jets involved.
(Perhaps it was another flight team, not BA. Note that I didn't supply the original anecdote. --top)
In other words, they were treating the ground as part of the frame. A fatal FrameProblem.
I don't buy it. Humans don't fix any part of their understanding in a frame. The BlueAngelsCrash? is adequately explained by thunking limits - by the human inability to deal with more than 7 simultaneous events. This isn't a FrameProblem but a simple cognitive bottleneck.
Yes, but why did they all pick the same "event" as a reference? If the choice were random, then at least a few would pick the ground (unless you are arguing that this is one of the few times that, coincidentally, they didn't). Plus, it is only an analogy for a specific problem, not necessarily painting all of humans or human nature that way. In other words, it is one of many possible and different mistakes. Some kind of GroupThink can take hold, which is probably why they all chose the same frame of reference.
The choice was not random; they all chose the lead airplane as the reference because, as a team, that's what they had decided to do beforehand. The real question is WHY they chose a common reference point in the first place, rather than each one relying on his own ability (to determine the frame, if you will). The answer is what has already been referred to as the thunking problem: they knew they would have trouble concentrating on each other and the ground at the same time. I don't think you can argue that they didn't think the ground was consequential; a pilot more than anyone understands the consequentiality of the ground while flying. I don't believe this is a frame problem; it's not that the pilots considered the ground to be in the frame, it's that they were trusting someone else to deal with its consequences since they weren't able to. If you focus on the ground only, you risk smacking your teammates. If you focus on the teammates, you risk inheriting their errors. If you try to focus on both, you flood your senses and do neither well.
Further, computers can transcend the physical world. We can create virtual rules or worlds that defy, or make irrelevant, the physical laws that govern the world we grew up in. If we can leave the physical world, then there is less that may be common in the results. See the "position" discussion in LaynesLaw. When FrameProblems become tough enough they get solved by a ParadigmShift.
See also: HumansAreLousyAtSelfEvaluation