Consider the following puzzle, borrowed from Nobel laureate Daniel Kahneman's Thinking, Fast and Slow:
A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?
The puzzle naturally evokes an intuitive answer: 10 cents (the correct answer is 5 cents). It is a very simple math puzzle, easily solved using careful reasoning. But when we are intellectually lazy, we tend to follow our gut instincts or intuitions, even when the task is not the kind of task that should be handled this way. Mathematical and logical exercises typically cannot be solved using gut instinct.
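To see why 5 cents is right, the constraint can be checked mechanically. The following sketch (a hypothetical brute-force check of my own, working in whole cents to avoid rounding issues) simply tries every possible price:

```python
# Brute-force check of the bat-and-ball puzzle, in whole cents.
# Constraints: ball + bat == 110, and the bat costs one dollar more.
solutions = []
for ball in range(0, 111):
    bat = ball + 100              # the bat costs one dollar more than the ball
    if ball + bat == 110:         # together they cost $1.10
        solutions.append((ball, bat))

print(solutions)  # [(5, 105)]: the ball costs 5 cents, the bat $1.05
```

The intuitive answer fails the check: a 10-cent ball would make the pair cost $1.20, not $1.10.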
Daniel Kahneman and his colleague Amos Tversky, however, have taken this insight one step further. They have argued that we do not reason rationally in everyday circumstances and are regularly subject to cognitive illusions, produced by heuristics, or rules of thumb, that we rely on when we reason fast. The mistake we make in these cases is to rely on intuition-based decision-making processes rather than on slow, conscious, and careful reasoning. Only the latter type of cognitive processing is reliable as a method for making decisions and predictions. Or so the argument goes.
This, however, is not quite right. Most problems we face in everyday situations are not mathematical or logical in nature. Here are three examples. People who catch balls, such as outfielders in baseball, behave as if they solve “a set of differential equations in predicting the trajectory of the ball” (Dawkins). However, research has shown that outfielders do not engage in mathematical calculations of the trajectory of the ball (Brogaard & Marlow). Catching the ball is not possible on the basis of slow, conscious reasoning. So, how do outfielders catch the ball? The answer is that they rely on something called a “gaze heuristic” (Gigerenzer). The brain does not bother calculating any real facts about the speed or trajectory of the ball. Instead, it uses an algorithm that adjusts the outfielder’s running speed so that the ball appears continually to move in a straight line in the outfielder’s field of vision. In other words, through practice, the outfielder’s brain has developed its own algorithm for catching the ball.
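The gaze heuristic can be illustrated with a toy simulation (an idealized sketch of my own, not a model from the sources cited): the fielder never computes where the ball will land; once the ball is in flight, the fielder fixes a gaze angle and simply keeps moving so that the angle stays constant, which carries them to the landing spot.

```python
def gaze_heuristic_demo(vx=15.0, vy=20.0, g=9.81, dt=0.01):
    """Toy 2D fly ball: the fielder holds the gaze angle constant."""
    t = 0.0
    fx = 45.0                          # fielder's starting position (metres)
    tan_gaze = None                    # tangent of the fixed gaze angle
    while True:
        t += dt
        bx = vx * t                    # ball's horizontal position
        by = vy * t - 0.5 * g * t * t  # ball's height
        if by <= 0.0:                  # ball has landed
            return bx, fx              # landing point vs fielder's position
        if t >= 0.5:                   # fielder locks gaze shortly after launch
            if tan_gaze is None:
                tan_gaze = by / (fx - bx)
            # move to the spot from which the gaze angle equals the fixed one
            fx = bx + by / tan_gaze

landing, fielder = gaze_heuristic_demo()
# The fielder ends up within a stride of the landing point without ever
# solving the ball's equations of motion.
```

The point of the sketch is that a single perceptual quantity (the gaze angle) substitutes for the full physics of the trajectory.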
A second example of the successful use of gut instinct or intuition comes from cases in which name recognition can benefit decision making. When foreigners who know very little about American cities are asked to determine whether there are more people in Milwaukee or Detroit, most are inclined to answer ‘Detroit’ rather than ‘Milwaukee’, because most recognize the former name but not the latter (Gigerenzer). In this type of case, the foreigners answer correctly, whereas most of us in the U.S. would not have a clue as to which of the two cities has the larger population. Familiarity and name recognition can thus produce accurate predictions in circumstances where knowledge about the options is limited but you recognize one option and not the other. This has also been shown to apply to stock portfolios. Gerd Gigerenzer showed that a stock portfolio he put together on the basis of name/brand recognition fared better than portfolios put together by expert traders.
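The decision rule at work here, often called the recognition heuristic, is simple enough to state as code. This is an illustrative sketch (the set of recognized names is a hypothetical example, not data from the sources): if you recognize one option but not the other, infer that the recognized option scores higher.

```python
def recognition_heuristic(option_a, option_b, recognized):
    """Return the option the heuristic picks, or None if it doesn't apply."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # recognize both or neither: the heuristic is silent

# A foreigner who has heard of Detroit but not Milwaukee:
foreigner_knows = {"Detroit", "New York", "Los Angeles"}
print(recognition_heuristic("Milwaukee", "Detroit", foreigner_knows))
# → Detroit (which is in fact the larger city)
```

Notice that the heuristic only applies under partial ignorance: someone who recognizes both city names gets no guidance from it, which is why the well-informed American can be worse off here than the foreigner.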
A third example of the successful use of intuition comes from expert chess players. Experts have gone through a process of perceptual learning that allows them to automatically recognize chess configurations as units rather than having to analyze every configuration presented to them during a chess game. Whereas novices are only able to encode the position of the individual chess pieces in long-term memory, expert chess players encode complicated patterns. The basic unit encoded in long-term memory is the ‘chunk’, which consists of configurations of pieces that are frequently encountered together and that are related by type, color, role, and position (Chase and Simon). The number of configurations that the expert player has stored in long-term memory can be as high as 300,000 (Gobet & Simon). Experts clearly do not proceed via slow, conscious reasoning but rely on a learned gut instinct – which in this case is grounded in enhanced perception of chess configurations.
Even activities that can be stated as logical or mathematical exercises are rarely meant to be approached in this way in ordinary life. Consider the following case (Tversky and Kahneman).
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
Most people choose option 2, regardless of whether they are novice, intermediate, or expert statisticians. However, the probability of two events occurring in conjunction is always less than or equal to the probability of either one occurring alone. So, the correct answer – in a purely probabilistic context – is option 1.
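The conjunction rule can be made concrete with a tiny enumeration (the counts below are invented for illustration): every "bank teller and feminist" is also a "bank teller", so the conjunction can never be the more probable option.

```python
# Hypothetical population counts, purely for illustration.
population = 1000
bank_tellers = 50            # assumed number of bank tellers
feminist_tellers = 20        # assumed number who are ALSO feminists (a subset)

p_teller = bank_tellers / population
p_teller_and_feminist = feminist_tellers / population

# The conjunction rule: P(A and B) <= P(A), whatever the counts are,
# because the conjunction picks out a subset of the single event.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.05 0.02
```

Whatever numbers you plug in, the subset relation guarantees the inequality; that is all the "correct" answer to the Linda question relies on.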
In most everyday contexts, however, people do not attempt to communicate what the sentences they use express semantically but instead attempt to convey non-literal information. A well-known case is that of “John and Mary got married and had a baby” versus “John and Mary had a baby and got married.” The two sentences are logically equivalent and have the same semantic meaning but in ordinary discourse the sentences normally also convey a temporal order, as in “John and Mary got married, and then they had a baby” versus “John and Mary had a baby, and then they got married.”
In the Linda the bank teller case, there are two ways in which a reading may be assigned to the options. Consider the difference between (1) and (2):
(1) (a) Linda is a bank teller.
    (b) Linda is a bank teller and a feminist.
(2) (a) Linda is only a bank teller (i.e. a bank teller but not a feminist).
    (b) Linda is a bank teller and a feminist.
1(a) does not exclude any feminist bank tellers. So, any person who falls into the 1(b) category also falls into the 1(a) category. Since it is logically impossible for a person to fall into the 1(b) category without falling into the 1(a) category, it cannot be more likely for a person to be in the 1(b) category than it is for her to be in the 1(a) category. In fact, all else being equal, it is more likely for a person to be a bank teller but not a feminist than it is for a person to be both a bank teller and a feminist.
This reasoning does not apply in the case of (2). If someone falls into category 2(b), then as a matter of necessity they do not fall into category 2(a). Yet if we randomly choose an individual from the general population, it is evidently more likely that they are a bank teller and not a feminist than that they are a bank teller and a feminist. Linda is not a randomly chosen individual, however. The reader is given background information about Linda, which tells us that when Linda was in college, she was a devoted feminist. If the reader assumes that the majority of people who are devoted feminists in college continue to be feminists later on, then the only rational response to the question of Linda’s post-college occupation is that there is a greater chance that Linda is a bank teller and a feminist than a bank teller and not a feminist.
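A back-of-the-envelope version of this reasoning (all numbers invented for illustration, not drawn from the sources) shows how the two readings come apart once the background information is taken into account:

```python
# Assumed: the probability that a devoted college feminist is still a
# feminist later in life. The value 0.8 is a hypothetical illustration.
p_still_feminist = 0.8

# Reading (2): the options are exclusive. Conditional on Linda being a
# bank teller, the background information makes 2(b) the better bet.
p_2a = 1 - p_still_feminist      # bank teller and NOT a feminist
p_2b = p_still_feminist          # bank teller and a feminist
assert p_2b > p_2a               # option (b) wins under reading (2)

# Reading (1): option (a) covers both cases, so the conjunction rule
# still makes (a) at least as probable, whatever the numbers.
p_1a = p_2a + p_2b               # "bank teller" includes feminist tellers
p_1b = p_2b
assert p_1a >= p_1b              # option (a) wins under reading (1)
```

On the exclusive reading the intuitive answer is the rational one; only on the literal reading does the conjunction rule make it an error.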
Granted, the task as originally stated was that of determining the probability with respect to a case in which the semantic meaning given is that of (1). So, if the task is followed literally, then the answer is that it is more likely that Linda is a bank teller. But this skill of providing answers on the basis of the meaning that is literally given to us is not typically a useful skill. If the host at a conference asks you to find out whether the keynote speaker has already had breakfast, and you discover that she had breakfast on the previous day but not that same morning, you would commit no semantic error if you reported back to the host that the keynote speaker had already had breakfast. This, however, would not be a satisfactory job. Even though the host did not mention it, she clearly wanted to know whether the keynote speaker had breakfast that same morning, not whether she had breakfast the day before or a week prior.
The upshot is that people’s intuitive answer in Linda the Bank Teller case is grounded in a useful intellectual skill, namely that of being able to determine in real-world environments what the speaker is attempting to convey rather than what the sentences she utters semantically express. This latter skill is exercised using intuition, and in most ordinary circumstances, using logical reason to interfere with the exercise of this skill would produce an unintended outcome.
The cognitive error we allegedly commit in the Linda case also turns on the formulation of the problem. Suppose the task really is to determine which of the two options is more probable under reading (1). People may be more likely to provide the correct answer if the literal meaning is made explicit. For example, the two answer options could have been formulated as follows:
(3) (a) Linda is a bank teller (and we are not saying that she is only a bank teller and not also a feminist; we are leaving that option open).
    (b) Linda is a bank teller and a feminist.
Given this way of articulating the problem, we would expect research participants to assign equal probability to 3(a) and 3(b) if the background information about Linda’s college days is given a lot of weight. If the instructions also included a remark to the effect that it is not the case that most college feminists continue to be feminists, people might assign a higher probability to 3(a), which is the desired outcome in this particular case.
So, as it turns out, only in a narrow set of circumstances, such as when the task is strictly logical or mathematical, would it be wise to interfere with our fast, intuition-based decision-making skills. One reason for this is that we often do not have the information required to use slow, conscious, and careful reasoning effectively. Another reason is that many circumstances call for pragmatic approaches. For example, we often rely on information that is merely implicit between conversational partners. Suppose you and I share an apartment and take turns doing the dishes. One late night the sink is full of dirty dishes, and it is your turn to do them. I might convey this to you by uttering the sentence “the dishes are dirty.” The sentence semantically expresses that the dishes are dirty, but you will normally be able to infer automatically that I meant to convey that you should do your job. Pragmatic processes are based on gut instinct, not on conscious reasoning that is strictly semantic or logical in nature. Most everyday situations call for the use of our gut instincts rather than strict, literal reasoning.
Brogaard, B., & Marlow, K. (2015). The Superhuman Mind. New York: Penguin Group.
Chase, W. G., & Simon, H. A. (1973a). “The Mind’s Eye in Chess.” In W. G. Chase (Ed.), Visual Information Processing (pp. 215-281). New York: Academic Press.
Chase, W. G., & Simon, H. A. (1973b). “Perception in Chess,” Cognitive Psychology, 4, 55-81.
Dawkins, R. (1976). The Selfish Gene. Oxford: Oxford University Press.
Gigerenzer, G. (2007). Gut Feelings: The Intelligence of the Unconscious. New York: Penguin Group.
Gobet, F., & Simon, H. A. (2000). “Five Seconds or Sixty? Presentation Time in Expert Memory,” Cognitive Science, 24, 651-682.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1972). “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology, 3, 430-454.
Kahneman, D., & Tversky, A. (1996). “On the Reality of Cognitive Illusions,” Psychological Review, 103, 582-591.
Reingold, E. M., Charness, N., Pomplun, M., & Stampe, D. M. (2001). “Visual Span in Expert Chess Players: Evidence from Eye Movements,” Psychological Science, 12, 49-56.