Bias, Blindness and How We Truly Think (Part 2): Daniel Kahneman

Illustration by Bob Gill

In 1738, the Swiss scientist Daniel Bernoulli argued that a gift of 10 ducats has the same utility to someone who already has 100 ducats as a gift of 20 ducats to someone whose current wealth is 200 ducats.

It was one of the earliest known efforts to look at the relationship between mind and matter -- between the magnitude of a stimulus and the intensity or quality of subjective experience. And it tells us something about how people make choices between gambles and sure things.

Bernoulli was right, of course: We normally speak of changes of income in terms of percentages, as when we say, “She got a 30 percent raise.” The idea is that a 30 percent raise may evoke a fairly similar psychological response for the rich and for the poor, which an increase of $100 will not do.

The psychological response to a change of wealth is inversely proportional to the initial amount of wealth, which suggests that utility is a logarithmic function of wealth. If this function is accurate, the same psychological distance separates $100,000 from $1 million, and $10 million from $100 million.
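A quick numerical check of that claim (a sketch in Python; the base-10 logarithm is an assumed choice, but any logarithmic base gives the same equal spacing):

```python
import math

# Assumed utility function: base-10 log of wealth (any log base gives the same spacing).
def utility(wealth):
    return math.log10(wealth)

# Under log utility, the two gaps below are the same "psychological distance."
print(utility(1_000_000) - utility(100_000))        # 1.0
print(utility(100_000_000) - utility(10_000_000))   # 1.0
```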

Bernoulli drew on his psychological insight into the utility of wealth to propose a radically new approach to the evaluation of gambles, an important topic for the mathematicians of his day. Earlier thinkers had assumed that gambles are assessed by their expected value: a weighted average of the possible outcomes, where each outcome is weighted by its probability. For example, the expected value of an 80 percent chance to win $100 and a 20 percent chance to win $10 is $82 (0.8 x 100 + 0.2 x 10).

Taking the Gamble

Now ask yourself this question: Which would you prefer to receive as a gift, this gamble or $80 for sure? Almost everyone prefers the sure thing. If people valued uncertain prospects by their expected value, they would prefer the gamble, because $82 is more than $80. Bernoulli pointed out that people do not in fact evaluate gambles in this way.

He observed that most people dislike risk, and if they are offered a choice between a gamble and an amount equal to its expected value they will pick the sure thing. In fact, a risk-averse decision maker will choose a sure thing that is less than the expected value, in effect paying a premium to avoid the uncertainty.

Bernoulli invented psychophysics to explain this aversion to risk. His idea was straightforward: People’s choices are based not on dollar values but on the psychological values of outcomes, their utilities. The psychological value of a gamble is therefore not the weighted average of its possible dollar outcomes; it is the average of the utilities of these outcomes, each weighted by its probability.
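Here is a sketch of that idea applied to the $80 question above. The square-root utility function, and the framing of outcomes as amounts won rather than total wealth, are illustrative assumptions, not Bernoulli's own numbers:

```python
# Illustrative concave utility of an amount won (assumption: u(x) = sqrt(x)).
def utility(amount):
    return amount ** 0.5

# The gamble: 80% chance of $100, 20% chance of $10. The alternative: $80 for sure.
expected_value   = 0.8 * 100 + 0.2 * 10                    # 82.0 dollars
expected_utility = 0.8 * utility(100) + 0.2 * utility(10)  # ~8.63 utility units
sure_utility     = utility(80)                             # ~8.94 utility units

# Expected value favors the gamble, but expected utility favors the sure $80,
# matching the choice most people actually make.
print(expected_value, expected_utility, sure_utility)
```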

Bernoulli proposed that the diminishing marginal value of wealth (in the modern jargon) is what explains risk aversion -- the common preference that people generally show for a sure thing over a favorable gamble of equal or slightly higher expected value.

Consider the choice between having equal chances to have 1 million ducats or 7 million ducats and having 4 million ducats with certainty. If you calculate the expected value of the gamble, it comes out to 4 million ducats -- the same as the sure thing. The psychological utilities of the two options are different, however, because of the diminishing utility of wealth: The increase in utility from 1 million ducats to 4 million is greater than the increase from 4 million to 7 million. Bernoulli’s insight was that a decision maker with diminishing marginal utility for wealth will be risk-averse.
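To make the ducats example concrete, here is a sketch using an assumed logarithmic utility of wealth, in the spirit of Bernoulli's proposal rather than his actual table of utilities:

```python
import math

# Assumed logarithmic utility of wealth, measured in millions of ducats.
def utility(millions_of_ducats):
    return math.log(millions_of_ducats)

# Diminishing utility: the step from 1 to 4 million is larger than from 4 to 7 million.
print(utility(4) - utility(1))   # ~1.39
print(utility(7) - utility(4))   # ~0.56

# Equal chances of 1 or 7 million ducats, versus 4 million for sure.
gamble_eu  = 0.5 * utility(1) + 0.5 * utility(7)   # ~0.97
sure_thing = utility(4)                            # ~1.39
print(gamble_eu, sure_thing)                       # the sure thing wins
```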

Bernoulli’s Moral Expectation

Bernoulli’s essay is a marvel of concise brilliance. He applied his new concept of expected utility (which he called “moral expectation”) to compute how much a merchant in St. Petersburg would be willing to pay to insure a shipment of spice from Amsterdam if “he is well aware of the fact that at this time of year of one hundred ships which sail from Amsterdam to Petersburg, five are usually lost.” His utility function explained why poor people buy insurance and why richer people sell it to them.
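A hedged sketch of the kind of calculation Bernoulli describes: the wealth and shipment figures below are hypothetical placeholders, not the numbers from his essay; only the 5-in-100 loss rate comes from the text, and the logarithmic utility is an assumption.

```python
import math

loss_probability = 5 / 100     # from the essay: 5 of 100 ships are lost
shipment_value   = 80_000      # hypothetical value of the cargo
total_wealth     = 100_000     # hypothetical wealth, cargo included

# Expected utility of sailing uninsured, under an assumed log utility of final wealth.
eu_uninsured = ((1 - loss_probability) * math.log(total_wealth)
                + loss_probability * math.log(total_wealth - shipment_value))

# The highest premium p the merchant would pay makes the insured sure thing
# exactly as attractive: log(total_wealth - p) == eu_uninsured.
max_premium = total_wealth - math.exp(eu_uninsured)

print(loss_probability * shipment_value)   # expected loss: 4,000
print(round(max_premium))                  # ~7,700 -- well above the expected loss,
                                           # which is why buying insurance makes sense
```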

That Bernoulli’s theory prevailed for so long is even more remarkable when you see that, in fact, it is seriously flawed. The errors are found not in what it asserts explicitly, but in what it tacitly assumes.

Consider, for example, the following scenarios: Today, Jack and Jill each have wealth of $5 million. Yesterday, Jack had $1 million, and Jill had $9 million. Are they equally happy? (Do they have the same utility?)

Bernoulli’s theory assumes that the utility of their wealth is what makes people more or less happy. Jack and Jill have the same wealth, and the theory therefore asserts that they should be equally happy. But you do not need a degree in psychology to know that today Jack is elated and Jill despondent. Indeed, we know that Jack would be a great deal happier than Jill even if he had only $2 million today while she has $5 million. So Bernoulli’s theory must be wrong.

The happiness that Jack and Jill experience is determined by the recent change in their wealth.

For another example of what Bernoulli’s theory misses, consider Anthony and Betty: Anthony, whose current wealth is $1 million, and Betty, whose current wealth is $4 million, are both offered a choice between a gamble and a sure thing: equal chances to end up with $1 million or $4 million or end up with $2 million for sure.

In Bernoulli’s account, Anthony and Betty face the same choice: Their expected wealth is $2.5 million if they take the gamble and $2 million if they opt for the sure thing. Bernoulli would therefore expect Anthony and Betty to make the same choice, but this prediction is incorrect.
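To see why Bernoulli's framework treats them identically, here is a sketch; the square-root utility of final wealth is an assumed concave function, chosen only for illustration:

```python
# Assumed concave utility of final wealth, in millions: u(w) = sqrt(w).
def utility(final_wealth_in_millions):
    return final_wealth_in_millions ** 0.5

def evaluate(option):
    # Bernoulli's rule weighs the utilities of final states by their probabilities;
    # current wealth (Anthony's 1 million, Betty's 4 million) never enters.
    return sum(p * utility(w) for p, w in option)

gamble     = [(0.5, 1), (0.5, 4)]   # equal chances of ending with 1 or 4 million
sure_thing = [(1.0, 2)]             # 2 million for certain

# Anthony and Betty get exactly the same numbers, so the theory predicts the
# same choice for both -- which is not what people actually do.
print(evaluate(gamble), evaluate(sure_thing))   # 1.5 vs ~1.41
```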

Theory-Induced Blindness

Here again, the theory fails because it does not account for Anthony and Betty’s different reference points. Anthony may think, “If I choose the sure thing, my wealth will double. This is very attractive. Or, I can take a gamble with equal chances to quadruple my wealth or to gain nothing.”

Betty would think differently: “If I choose the sure thing, I lose half of my wealth with certainty, which is awful. Alternatively, I can take a gamble with equal chances to lose three-quarters of my wealth or lose nothing.”

You can sense that Anthony and Betty are likely to make different choices because the sure-thing option of owning $2 million makes Anthony happy and makes Betty miserable. Note also how the sure outcome differs from the worst outcome of the gamble: For Anthony, it is the difference between doubling his wealth and gaining nothing; for Betty, it is the difference between losing half her wealth and losing three-quarters of it.

Betty is much more likely to take her chances, as others do when faced with very bad options. As I have told their story, neither Anthony nor Betty thinks in terms of states of wealth: Anthony thinks of gains, and Betty thinks of losses. The psychological outcomes they assess are entirely different, although the possible states of wealth they face are the same.
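A minimal sketch of that difference in framing, taking each person's current wealth as the reference point, as in the story; the code only restates the same final states as gains and losses:

```python
def as_changes(reference_millions, outcomes_millions):
    # Re-express each possible final wealth as a change from the reference point.
    return [final - reference_millions for final in outcomes_millions]

sure_outcome    = [2]      # millions
gamble_outcomes = [1, 4]   # millions

# Anthony's reference point is 1 million: every outcome is a gain.
print(as_changes(1, sure_outcome), as_changes(1, gamble_outcomes))   # [1] [0, 3]
# Betty's reference point is 4 million: every outcome is a loss (or nothing).
print(as_changes(4, sure_outcome), as_changes(4, gamble_outcomes))   # [-2] [-3, 0]
```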

Because Bernoulli’s model lacks the idea of a reference point, expected utility theory does not account for the obvious fact that the outcome that is good for Anthony is bad for Betty. His model could explain Anthony’s risk aversion, but it can’t explain Betty’s preference for the gamble, a risk-seeking behavior that is often observed in entrepreneurs and in generals when all their options are bad.

All this is rather obvious, isn’t it? One could easily imagine Bernoulli himself constructing similar examples and developing a more complex theory to accommodate them; for some reason, he did not. One could also imagine colleagues of his time disagreeing with him, or later scholars objecting as they read his essay; for some reason, they didn’t either.

The mystery is how a conception that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: Once you have accepted a theory, it is extraordinarily difficult to notice its flaws. As the psychologist Daniel Gilbert has observed, disbelieving is hard work.

(Daniel Kahneman, a professor of psychology emeritus at Princeton University and professor of psychology and public affairs emeritus at Princeton’s Woodrow Wilson School of Public and International Affairs, received the Nobel Memorial Prize in Economic Sciences for his work with Amos Tversky on decision making. This is the second in a four-part series of condensed excerpts from his new book, “Thinking, Fast and Slow,” just published by Farrar, Straus and Giroux. The opinions expressed are his own. See Part 1, Part 3 and Part 4.)

To contact the writer of this article: Daniel Kahneman at Kahneman@princeton.edu

To contact the editor responsible for this article: Mary Duenwald at mduenwald@bloomberg.net
