Daniel Kahneman. Psychological experiments

Our actions and deeds are determined by our thoughts. But do we always control our thinking? Nobel laureate Daniel Kahneman explains why we sometimes act irrationally and how we make bad decisions. We have two systems of thinking. “Slow” thinking is activated when we solve a problem or choose a product in a store. Usually it seems to us that we are confidently in control of these processes, but let’s not forget that behind our consciousness, in the background, “fast” thinking is constantly working - automatic, instantaneous and unconscious...

This is the second work by Daniel Kahneman that I have been lucky enough to read. Yes, yes, exactly... lucky... The first was another of his books. Kahneman writes very easily and engagingly about difficult, sometimes paradoxical things. Many of the experiments described in his books are fascinating. I can't resist the charm of his style and material, so these notes are voluminous. In short, I recommend it...

Daniel Kahneman. Thinking, Fast and Slow (Russian edition: Think Slow... Decide Fast). – Moscow: AST, 2013. – 656 p.


Introduction

A deeper understanding of judgment and choice requires a larger vocabulary than everyday language provides. Most of the mistakes people make follow certain patterns. Such systematic errors, called biases, recur predictably under the same circumstances. For example, an audience tends to evaluate an attractive and confident speaker more favorably. This reaction is called the "halo effect", which makes it predictable, recognizable and understandable. We usually think we know what we are thinking: the process seems clear, with one conscious thought naturally leading to the next. But that is not the only way the mind works, nor even its typical way. The mental work that produces impressions, intuitions and many decisions usually goes on unnoticed.

Origins. This book represents my current understanding of judgment and decision making, shaped by discoveries in psychology over recent decades. It all started with Amos Tversky. We first developed a theory about the role that similarity plays in prediction, then tested and refined it with many experiments like the following. Imagine that Steve was randomly selected from a representative sample. A neighbor describes him: "Steve is very shy and withdrawn, always ready to help, but has little interest in people or in the real world. He is quiet and neat, loves order and structure, and is very attentive to detail." Is Steve more likely to be a farmer or a librarian? Everyone immediately notes Steve's resemblance to a typical librarian, but almost everyone ignores equally important statistical considerations. Did you remember that for every male librarian in the United States there are more than 20 male farmers?

Subjects used similarity as a simplifying heuristic (roughly speaking, a rule of thumb) to arrive more easily at a difficult judgment. Reliance on the heuristic, in turn, produced predictable biases (systematic errors) in their forecasts.

In the fifth year of our collaboration, we published the main findings of our research in the journal Science, which is read by scientists from many fields. The article, entitled "Judgment under Uncertainty: Heuristics and Biases," appears in its entirety in the final part of this book.

Historians of science often note that at any given time, scientists within a discipline largely rely on the same basic assumptions in their field of study. The social sciences are no exception; they rest on some general picture of human nature that provides the basis for all discussion of specific behavior but is rarely questioned. In the 1970s two positions were generally accepted. First, people are generally rational and usually think clearly. Second, most deviations from rationality are explained by emotions such as fear, attachment or hatred. Our article challenged both assumptions without discussing them directly. We documented persistent errors in the thinking of normal people and traced them to the design of the machinery of thought itself rather than to the disruption of thinking by emotion.

Five years after the article appeared in Science, we published "Prospect Theory: An Analysis of Decision under Risk," in which we outlined a theory of choice that became one of the foundations of behavioral economics.

Exaggerated emotional coherence (halo effect). The tendency to perceive everything about a person as good (or bad), including what you have not seen, is called the halo effect. This is one of the ways System 1 generates a picture of the world that is simpler and more coherent than the real thing. We often observe a person's traits in an essentially random sequence, yet the order matters, because the halo effect increases the weight of first impressions.

My method for reducing the halo effect comes down to a general principle: decorrelate the errors! To see how this principle works, imagine that many subjects are shown a glass jar full of coins and asked to estimate how much money it contains. As James Surowiecki explained in his excellent book The Wisdom of Crowds, individual participants tend to do poorly on this kind of task, but when all the individual judgments are combined the result is surprisingly good. Some people overestimate the actual amount, others underestimate it, but when the estimates are averaged the result comes out quite accurate. The mechanism is simple: everyone looks at the same jar, all estimates share the same basis, each person's error is independent of everyone else's, and (in the absence of systematic bias) the errors average out toward zero.
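
To make the error-cancellation argument concrete, here is a minimal Python sketch (the jar value, the error size and the number of subjects are made-up assumptions, not figures from the book): each simulated subject guesses with an independent, unbiased error, and averaging the guesses shrinks the error dramatically.

```python
# Sketch: independent, unbiased errors largely cancel when estimates are averaged.
import random

random.seed(42)
TRUE_AMOUNT = 1250          # hypothetical value of the coins in the jar
N_SUBJECTS = 100

# Each subject's estimate = truth + independent zero-mean error.
estimates = [TRUE_AMOUNT + random.gauss(0, 400) for _ in range(N_SUBJECTS)]

mean_estimate = sum(estimates) / len(estimates)
mean_individual_error = sum(abs(e - TRUE_AMOUNT) for e in estimates) / len(estimates)

print(f"average individual error: {mean_individual_error:.0f}")
print(f"error of the averaged estimate: {abs(mean_estimate - TRUE_AMOUNT):.0f}")
# Typical output: individual errors around 300, while the pooled estimate is off by only a few dozen.
```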

The principle of independent judgments and decorrelated errors can be applied by managers at meetings and discussions of all kinds. A simple rule: have all participants write down a brief summary of their position before the discussion begins, so that the diversity of knowledge and opinion in the group is actually used. In a standard open discussion, too much weight goes to those who speak early and confidently, pushing others to fall in line.

What you see is all there is (WYSIATI). The most important design feature of the associative machinery is that it represents only activated ideas. Information that is not retrieved from memory, even unconsciously, might as well not exist. System 1 excels at constructing the best possible story from the ideas that are currently activated, but it does not (and cannot) allow for information it does not have. The measure of success for System 1 is the coherence of the story it creates; the quantity and quality of the data on which the story is based matter little. When information is scarce, as it often is, System 1 operates as a machine for jumping to conclusions.

The tendency to jump to conclusions from limited evidence is so important to understanding intuitive thinking, and is mentioned so often in this book, that I will use a rather cumbersome acronym for it: WYSIATI (What You See Is All There Is). System 1 is radically insensitive to the quantity and quality of the information that gives rise to impressions and intuitions. For a coherent story the information must be consistent, but it need not be complete. Indeed, you will often find that knowing less makes it easier to fit everything you know into a coherent scheme.

This is why we think quickly and make sense of incomplete information in a complex world. In general, our coherent stories are close enough to reality to serve as a basis for rational action. However, WYSIATI leads to a variety of selection and judgment biases, including the following:

  • Overconfidence: As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence people have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that the evidence needed for a judgment is missing - what we see is all there is. Moreover, the associative system tends to settle into a coherent pattern of activation and suppresses doubt and ambiguity.
  • Framing effect: Different ways of presenting the same information often evoke different emotions. The statement “The one-month postoperative survival rate is 90%” is more reassuring than the equivalent statement “The one-month postoperative mortality rate is 10%.” Likewise, products labeled “90% fat free” are more attractive than those labeled “10% fat.” The equivalence of the formulations is obvious, but a person usually sees only one of them, and for him only what he sees exists.
  • Neglecting prior probability: Remember the shy and tidy Steve, who is so easily taken for a librarian. His personality description is vivid and colorful, and although you surely know that there are far more male farmers than male librarians, that statistical fact almost certainly did not come to mind when you first considered the question.

Chapter 8. How Judgments are Made

System 2 receives questions or generates them; in either case it directs attention and searches memory for an answer. System 1 works differently: it continuously monitors what is happening inside and outside the mind and generates assessments of various aspects of the situation without specific intention and with little or no effort. These basic assessments play an important role in intuitive judgment, because they are easily substituted for answers to harder questions - this is the central idea of the heuristics-and-biases approach.

Basic assessments. Over the course of evolution, System 1 has developed the ability to provide a constant assessment of the basic tasks that an organism must solve in order to survive: How are things going? Is there a threat? Is there a good opportunity? Is everything okay? We have inherited neural mechanisms that continuously assess the level of threat and cannot be turned off. For a person, a good mood and cognitive ease are equivalent to assessing the environment as safe and familiar. A specific example of a basic assessment is the ability to distinguish friend from foe at a glance.

Alex Todorov, my colleague at Princeton, discovered that people judge competence by combining two dimensions: strength and trustworthiness. Faces that exude competence combine a strong chin with a slight, confident smile. This is an example of a judgment heuristic: voters are trying to form an impression of how good a candidate will be in office, and they fall back on a simpler assessment that is quick, automatic and available when System 2 must make its decision.

Sets and prototypes. System 1 represents categories by a prototype or a set of typical exemplars, and so it deals well with averages but poorly with sums.

Intensity matching. Another ability of System 1 is an underlying scale of intensity that allows it to find matches across very different domains. For example, Julie learned to read at age 4. Match her reading ability to a height: How tall would a man have to be to be as tall as Julie was precocious? Is 1.80 m enough? Clearly not. What about 2.15 m? Probably too much. You need a height that is as remarkable as reading at age four: quite remarkable, but not astonishing.

We will see later why this method of prediction based on comparison is statistically incorrect, although completely natural for System 1, and why for most people - with the exception of statisticians - its results are acceptable for System 2.

The mental shotgun. Our control over intended computations is far from precise: we often compute much more than we need or want. I call this excess computation the mental shotgun. Just as it is impossible to hit a single point with a shotgun, because the pellets scatter, System 1 finds it hard to do exactly as much as System 2 asks of it and no more.

Chapter 9. The answer to an easier question

Question substitution. I offer a simple explanation of how we generate intuitive opinions about complex matters. If a satisfactory answer to a hard question is not found quickly, System 1 finds an easier related question and answers it instead (Figure 5). I call the operation of answering one question in place of another substitution. I also use the following terms: the target question is the assessment you intend to make; the heuristic question is the simpler question you answer instead. A formal definition of a heuristic runs something like this: a simple procedure that helps find adequate, though often imperfect, answers to difficult questions. The word heuristic comes from the same root as eureka. You may not even realize that the target question was difficult, because an intuitive answer came to mind so easily.

The affect heuristic. The dominance of conclusions over arguments is most pronounced where emotions are involved. The psychologist Paul Slovic proposed the affect heuristic, in which our likes and dislikes shape our beliefs about the world. Where emotional attitudes are concerned, System 2 is not a critic of the emotions of System 1 but their defender: it endorses rather than vetoes. It searches mainly for information and arguments consistent with existing beliefs, not for material that would allow those beliefs to be examined. The active, coherence-seeking System 1 offers solutions to an undemanding System 2.

Below is a list of System 1 traits and behaviors. I hope it will help you develop an intuitive "sense of personality" for the fictitious System 1. As with many characters you know, you will have intuitions about what System 1 would do in various circumstances, and most of those hunches will be correct. So, the characteristics of System 1:

PART II. HEURISTICS AND BIASES
Chapter 10. Law of small numbers

System 1 is perfectly adapted to one form of thinking - it automatically and effortlessly recognizes causal connections between events, sometimes even where none exist. However, it is not good at handling "purely statistical" facts, which change the probability of outcomes but do not cause them. A random event is, by definition, unexplainable, but series of random events behave in a highly regular fashion.

Large samples give more precise results than small ones; small samples yield extreme outcomes more often than large ones. The first study Amos and I did together showed that even experienced researchers have poor intuitions and a vague understanding of the importance of sample size.

Law of small numbers. For the research psychologist, sampling variability is not just an oddity; it is an inconvenience and a costly nuisance that turns every study into a gamble. Suppose you want to confirm the hypothesis that six-year-old girls have, on average, a larger vocabulary than boys of the same age. In the population as a whole the hypothesis is true: girls at six do have a larger vocabulary on average. However, girls and boys vary a great deal, and by sheer luck you could select a sample in which there is no noticeable difference, or even one in which boys score higher. For the researcher such an outcome is costly: time and effort have been spent, and the hypothesis goes unconfirmed. The risk is reduced only by using a sufficiently large sample; those who work with small samples leave themselves at the mercy of chance. The first article I co-wrote with Amos was called "Belief in the Law of Small Numbers."
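
A small simulation illustrates the point about sample size (the means and standard deviation below are illustrative assumptions, not data from any study): with a true but modest advantage for girls, small samples reverse the ordering far more often than large ones.

```python
# Sketch: small samples produce "extreme" (here, reversed) results more often than large ones.
import random

random.seed(1)

def sample_mean(mu, n):
    """Mean of n simulated vocabulary scores with true mean mu and sd 15."""
    return sum(random.gauss(mu, 15) for _ in range(n)) / n

def share_reversed(n, trials=2_000):
    """How often a random sample shows boys ahead, despite the true difference (102 vs 100)."""
    reversed_count = sum(sample_mean(100, n) >= sample_mean(102, n) for _ in range(trials))
    return reversed_count / trials

for n in (10, 50, 500):
    print(f"sample size {n:>3}: boys appear ahead in about {share_reversed(n):.0%} of samples")
# Small samples frequently contradict the true ordering; large samples almost never do.
```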

Preferring certainty to doubt. The Law of Small Numbers is a manifestation of the general tendency towards certainty instead of doubt. A strong predisposition to believe that small samples accurately represent the entire population means something more: we tend to exaggerate the consistency and coherence of what we see. Researchers' over-reliance on the results of a few observations is akin to the halo effect, the feeling that we often have that we know and understand a person about whom we, in fact, know little. System 1 anticipates facts, building up a complete picture from fragmentary information. The jump to conclusion mechanism behaves as if it believes in the law of small numbers. Overall, it creates an overly meaningful picture of reality.

Cause and chance. The associative machinery seeks causes. Statistical regularities are hard to grasp because they call for a fundamentally different approach: looking at an event statistically, we are interested in its relation to what could have happened instead, not in how exactly it did happen. Nothing in particular caused it to be what it was; chance selected it from among the alternatives.

The illusion of pattern affects our lives. How many successful picks does your financial adviser need to make before you conclude he is unusually skilled? How many successful acquisitions must a board see before believing the CEO has a talent for such deals? The simple answer is that if you follow your intuition, you will more often than not treat a random event as systematic. We are far too willing to reject the belief that much of what happens in life is random.

Chapter 11. The Anchoring Effect

The anchoring effect occurs when people are confronted with an arbitrary number before estimating an unknown quantity. It produces some of the most reliable and robust results in experimental psychology: the estimates stay close to the number considered - hence the image of an anchor.

The anchoring effect is produced by two different mechanisms, one for each system. One form of anchoring occurs in a deliberate process of adjustment, an operation of System 2. Anchoring through priming is an automatic reaction of System 1.

Anchoring as adjustment. Amos liked the idea of the anchor-and-adjust heuristic as a strategy for estimating unknown quantities: start from the anchor number, assess whether it is too high or too low, and gradually adjust your estimate as you mentally "move away" from the anchor. The adjustment typically ends prematurely, because people stop when they are no longer certain they should move farther.

Anchoring as a priming effect. The anchoring effect is a special case of suggestion. A process resembling suggestion is at work in many situations: System 1 does its best to construct a world in which the anchor is the true number. This is one of the manifestations of associative coherence. Anchoring effects are everywhere. Because of the psychological mechanisms that produce them, we are more suggestible than we would like, and of course there are many who are willing to exploit our gullibility. In a house-purchase negotiation, the seller makes the first move by setting the asking price, and that price acts as the anchor; the same strategy works in other negotiations.

Negotiators can be advised to focus attention and search memory for arguments against the anchor. The frightening thing about the anchoring effect is that you are aware of the anchor and even pay attention to it, yet you do not know how it guides and constrains your thinking, because you cannot imagine how you would have thought without it.

Chapter 12. The Science of Availability

The availability heuristic substitutes the ease with which examples come to mind for an estimate of frequency. The availability heuristic involves both systems. One of the best-known studies of availability asked spouses, "What percentage of the effort to keep the house tidy is yours?" They answered similar questions about taking out the trash, initiating social plans, and so on. Would the self-assessed contributions add up to 100%, more, or less? As expected, the total exceeded 100%. The explanation is a simple availability bias: each spouse remembers his or her own contributions and efforts far more clearly than the other's, and the difference in availability leads to a difference in estimated frequency. The same bias shows up in teams at work: each member feels he has done more than his share and that his colleagues are insufficiently grateful for his contribution to the common effort.

The psychology of availability. How are impressions of the frequency of a category affected by a requirement to list a specified number of examples? The feeling of ease in retrieving examples turns out to matter more than the number retrieved. The ease with which examples come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged.

Chapter 13. Availability, emotions, risk

Students of risk were quick to see the relevance of the idea of availability. Availability helps explain the pattern of buying insurance and taking protective measures after disasters. Changes in memory explain the recurring cycles of disaster, concern, and growing complacency familiar to everyone who studies catastrophes.

Estimates of causes of death are warped by media coverage. Our estimates of the frequency of events are distorted by the prevalence and emotional intensity of the information that surrounds us. Slovic developed the notion of the affect heuristic: people make judgments and decisions by consulting their emotions - Do I like it? Do I dislike it? How strongly do I feel about it? The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as the answer to a much harder question (What do I think about it?).

A team of researchers led by Slovic demonstrated the affect heuristic convincingly by surveying opinions about various technologies, products and devices, including water fluoridation, chemical plants, food preservatives and cars. Respondents were asked to rate the benefits and the risks of each. The negative correlation between these two ratings was astonishingly high. Technologies the respondents liked were rated as offering large benefits and posing little risk; technologies they disliked were rated negatively, with many drawbacks listed and advantages rarely mentioned.

An enduring affect is a basic component of associative coherence. As the psychologist Jonathan Haidt put it, "the emotional tail wags the rational dog." The affect heuristic simplifies our lives by creating a world that is much tidier than reality.

Slovic challenged the core claim of the experts: the idea that risk is objective. "Risk does not simply exist, outside our minds and culture, waiting to be measured. The concept of risk was invented to help people understand and cope with the dangers and uncertainties of life." Slovic concludes that "defining risk is thus an exercise in power."

For example, daminozide (Alar) is a chemical that was sprayed on apples to regulate their growth and improve their appearance. The panic began with media reports that the chemical, consumed in enormous doses, caused cancerous tumors in rats and mice. The daminozide scare illustrates a basic limitation of our minds in dealing with small risks: we either ignore them altogether or give them far too much weight, with nothing in between. The feeling is familiar to every parent waiting up for a teenage daughter who is late from a party.

Chapter 14. Tom W's Specialty

As far as you know, Tom W. was picked at random, like a marble from an urn. To decide whether a marble drawn will be red or green, you need to know how many marbles of each color the urn contains. Instead, you focus on the similarity of Tom W.'s description to the stereotype. We call this similarity representativeness, and the heuristic, prediction by representativeness.

Logicians and statisticians have developed competing and very precise definitions of probability. For ordinary people, probability (a synonym of "likelihood") is a vague notion connected with uncertainty, propensity, plausibility and surprise. A question about probability sets off the mental shotgun, prompting answers to easier questions; one of them is an automatic assessment of representativeness. The representativeness heuristic is also at work when someone says, "She will win the election - you can see it in her" or "He won't make it as a scientist - too many tattoos." Predictions by representativeness are common, but they are statistically suboptimal. Michael Lewis's bestseller Moneyball is a story about the inefficiency of this mode of prediction. Professional scouts traditionally forecast the success of prospects partly from their build and looks. The hero of Lewis's book, Billy Beane, manager of the Oakland Athletics baseball team, made the unpopular decision to select players on the basis of their game statistics.

Shortcomings of representativeness. Judging probability by representativeness has important virtues: the intuitive impressions it produces are usually more accurate than chance guesses would be. In other situations, however, stereotypes are false and the representativeness heuristic misleads, especially when it leads people to neglect prior-probability information. The second shortcoming of representativeness is insensitivity to the quality of evidence. Remember the System 1 rule: what you see is all there is.

How to discipline intuition. Bayes' rule specifies how prior beliefs (prior probabilities) should be combined with the diagnostic value of the evidence, that is, with how strongly the evidence favors one hypothesis over the alternatives. For example, if you believe that 3% of graduate students are enrolled in computer science (the prior probability), and you also believe that the description of Tom W. is four times more likely to fit a computer science student than a student of other fields, then Bayes' rule implies that the probability that Tom W. is a computer scientist is 11%.
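
A minimal sketch of that calculation, using the odds form of Bayes' rule with the numbers given above (3% prior, likelihood ratio of 4):

```python
# Sketch: combining a prior probability with a likelihood ratio via Bayes' rule (odds form).
def bayes_posterior(prior, likelihood_ratio):
    """Return the posterior probability given a prior and a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = bayes_posterior(prior=0.03, likelihood_ratio=4)
print(f"P(Tom W. studies computer science | description) = {p:.0%}")  # ~11%
```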

There are two important things to remember about the flow of Bayesian reasoning and how we tend to break it. First, prior probabilities are important even when information about the case in question is available. This is often not intuitively obvious. Second, intuitive impressions of the diagnostic value of information are often exaggerated.

Chapter 15. Linda: less is more

Probability is a sum-like variable, as the following example shows: probability (Linda is a cashier) = probability (Linda is a feminist cashier) + probability (Linda is a cashier and not a feminist). System 1 averages instead of adding, so when non-feminist cashiers are removed from the list, the subjective probability goes up. However, the question "How many of the 100 participants..." is easier to answer than the question about percentages. A likely explanation is that the mention of 100 people evokes a spatial image in the mind. This representation, called a frequency representation, makes it easier to see that one group is contained in another. The puzzle seems to resolve as follows: the question "how many?" makes you think of individuals, while the same question phrased as "what percentage?" does not.
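
A tiny sketch of the "sum-like" point, with made-up probabilities chosen purely for illustration: whatever numbers you pick, the single event can never be less probable than a conjunction contained within it.

```python
# Sketch: probability is sum-like, so P(cashier) >= P(cashier and feminist) always holds.
p_cashier_and_feminist = 0.05      # illustrative numbers, not from the book
p_cashier_not_feminist = 0.02

p_cashier = p_cashier_and_feminist + p_cashier_not_feminist
print(p_cashier)                               # 0.07
assert p_cashier >= p_cashier_and_feminist     # the conjunction rule that System 1 violates
```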

Chapter 16. Reasons trump statistics

Consider the following description and give an intuitive answer to the question below it. A taxi was involved in a hit-and-run accident at night. Two taxi companies operate in the city, Green and Blue. You are given the following data:

  • 85% of city taxis are from the “Green” company, and 15% are from the “Blue” company.
  • The witness identified the taxi as Blue. The court tested the witness's reliability under the viewing conditions of that night and found that the witness identified each of the two colors correctly 80% of the time and incorrectly 20% of the time.

What is the probability that the hit-and-run taxi was Blue and not Green?

This is a standard problem of Bayesian inference. It contains two items of information: a prior probability and imperfectly reliable eyewitness testimony. In the absence of a witness, the probability that the guilty taxi is Blue is 15% - the prior probability of that outcome. If the two companies were equally large, the prior probability would be uninformative, and considering only the witness's reliability you would conclude that the probability is 80%. The two sources of information can be combined by Bayes' rule; the correct answer is 41%. You can probably guess, however, that subjects solving this problem ignore the prior probability and go with the witness: the most common answer is 80%.
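
A short sketch of the Bayesian computation behind the 41% answer, using only the numbers stated in the problem:

```python
# Sketch: Bayes' rule applied to the taxi problem.
p_blue = 0.15                 # prior: share of Blue cabs in the city
p_green = 0.85
p_say_blue_if_blue = 0.80     # witness reliability
p_say_blue_if_green = 0.20

# P(cab was Blue | witness says "Blue")
p_witness_says_blue = p_say_blue_if_blue * p_blue + p_say_blue_if_green * p_green
posterior = p_say_blue_if_blue * p_blue / p_witness_says_blue
print(f"P(cab was Blue | witness said Blue) = {posterior:.0%}")  # 41%
```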

Causal stereotypes. Now look at the same story with a different representation of prior probability. You have the following data:

  • Both companies have the same number of cars, but Green taxis are involved in 85% of incidents.
  • The information about the witness is the same as in the previous version of the task.

These two versions are mathematically identical but psychologically quite different. People who read the first version do not know how to use the prior probability and often ignore it. Those who see the second version, by contrast, give considerable weight to the prior probability, and their average estimates are not far from the Bayesian solution. Why? In the first version, the prior probability for Blue taxis is a statistical fact about the number of cabs in the city. A mind hungry for causal stories finds nothing to chew on: the number of Blue and Green taxis in the city does not cause drivers to flee the scene. In the second version, by contrast, Green drivers cause more than five times as many accidents as Blue drivers. The conclusion is immediate: Green drivers must be reckless madmen! You have formed a stereotype of Green recklessness, which you then apply to unknown individual drivers of the company. The stereotype fits easily into a causal story.

The taxi example illustrates two types of prior probabilities. Statistical prior probabilities are facts about the population to which a case belongs, but they are not relevant to the individual case. Causal prior probabilities change your view of how the individual case came about. The two kinds of prior-probability information are treated differently: statistical priors are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available; causal priors are treated as information about the individual case and are easily combined with other case-specific information.

Stereotyping is a loaded word in our culture, but I use it neutrally here. One of the basic characteristics of System 1 is that it represents categories as norms and prototypes; for social categories, these representations are called stereotypes, some of them true and some false. In hiring, for instance, there are strict social and legal norms against stereotyping, and rightly so: in sensitive social contexts it is undesirable to draw possibly erroneous conclusions about an individual from the statistics of a group. Morally, it is considered desirable to treat prior probabilities as general statistical facts rather than as presumptions about specific individuals - in other words, to reject causal prior probabilities in such cases.

Resisting stereotyping is morally admirable, but the simplistic view that doing so has no costs is mistaken. The costs are worth paying for a better society, but denying that they exist, while comforting and politically correct, is not scientifically defensible. In political debate the affect heuristic is pervasive: positions we favor are assumed to have no costs, and those we oppose to have no benefits. We should be capable of better.

Chapter 17: Regression to the Mean

I taught the psychology of effective training to Israeli Air Force instructors. I explained an important principle of skill training: rewarding improvement works better than punishing mistakes. After listening to my explanation, one of the most experienced instructors in the group objected: "I have repeatedly praised cadets for a cleanly executed aerobatic maneuver. On the next attempt at the same maneuver they usually do worse. And when I scold them for poor execution, they usually do better the next time." His conclusion about the effectiveness of reward and punishment was completely wrong. The instructor was observing regression to the mean, which arises from random fluctuations in the quality of performance. Naturally, he praised only cadets whose execution was much better than average; but the cadet was probably just lucky on that attempt, so the next one would likely have been worse whether he was praised or not. And vice versa: the instructor scolded a cadet who had performed unusually badly, so the cadet would likely have done better on the next attempt regardless of what the instructor did. The inevitable fluctuations of a random process had been given a causal interpretation.

Talent and luck. A few years ago, John Brockman asked scientists to share their favorite equations. I suggested these:

  • success = talent + luck
  • big success = a little more talent + a lot of luck

The fact that regression also occurs when trying to predict an earlier event from a later one should convince you that it has no causal explanation.

The phenomenon of regression is alien to the human mind. It was first described by Sir Francis Galton, a half-cousin of Charles Darwin and a man of truly encyclopedic knowledge. In a paper entitled "Regression towards Mediocrity in Hereditary Stature," published in 1886, he reported measurements of successive generations of seeds and comparisons of children's heights with their parents'. It took Galton several years to realize that correlation and regression are not two different concepts but two perspectives on the same one. The general rule is simple but has surprising consequences: wherever the correlation between two measures is imperfect, there will be regression to the mean.

To illustrate Galton's discovery, take a proposition most people find rather intriguing: highly intelligent women tend to marry men who are less intelligent than they are. Because the correlation between the intelligence of spouses is not perfect, it is a mathematical inevitability that intelligent women will, on average, marry less intelligent men (and vice versa). The regression to the mean observed here is no more interesting, and no more in need of explanation, than the imperfect correlation itself.
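
A simulation makes the same point without any algebra (the "talent plus luck" model and its parameters below are illustrative assumptions): when two scores share only part of their variance, the people who are extreme on the first score are, on average, much less extreme on the second.

```python
# Sketch: regression to the mean follows from imperfect correlation alone.
# Two scores share a common "talent" component plus independent luck (correlation ~0.5).
import random

random.seed(7)

pairs = []
for _ in range(100_000):
    talent = random.gauss(0, 1)
    day1 = talent + random.gauss(0, 1)    # performance on day 1 = talent + luck
    day2 = talent + random.gauss(0, 1)    # performance on day 2 = same talent, new luck
    pairs.append((day1, day2))

stars = [(d1, d2) for d1, d2 in pairs if d1 > 2.0]   # those who did exceptionally well on day 1
mean_day1 = sum(d1 for d1, _ in stars) / len(stars)
mean_day2 = sum(d2 for _, d2 in stars) / len(stars)
print(f"day-1 stars: day 1 mean = {mean_day1:.2f}, day 2 mean = {mean_day2:.2f}")
# Roughly 2.6 vs 1.3: the stars fall back toward the mean without any causal story.
```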

We pay good money to those who come up with interesting explanations for regression effects for us. A commentator on a business news channel who correctly remarks that “this year was better for business because last year was bad” will likely not last long on the air.

Chapter 18: How to Deal with Intuitive Predictions

Subjects asked to make a prediction substitute an evaluation of the evidence instead, without noticing that the question they answer is not the one they were asked. This process guarantees systematically biased predictions that ignore regression to the mean entirely.

To make an unbiased prediction, do the following:

  1. Start with an estimate of the average GPA.
  2. Determine the GPA that matches your impression of the evidence.
  3. Estimate the correlation between your evidence and GPA.
  4. If the correlation is 0.3, move 30% of the distance from the average GPA toward the GPA that matches your impression.

These corrective procedures are valuable because they force you to think about how much you actually know.
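
A minimal sketch of the four-step correction applied to a hypothetical case (baseline GPA 3.0, evidence suggesting 3.8, estimated correlation 0.3 - all made-up numbers):

```python
# Sketch: regress an intuitive prediction toward the baseline by the correlation's share.
def corrected_prediction(baseline, intuitive, correlation):
    """Move from the baseline toward the intuitive estimate in proportion to the correlation."""
    return baseline + correlation * (intuitive - baseline)

print(corrected_prediction(baseline=3.0, intuitive=3.8, correlation=0.3))  # 3.24
```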

A two-systems view of regression. Extreme predictions and the willingness to forecast rare events from weak evidence are both manifestations of System 1. It is natural for the associative machinery to match the extremeness of a prediction to the extremeness of the evidence on which it is based - that is how substitution works. And it is natural for System 1 to produce overconfident judgments, because confidence is determined by the coherence of the best story you can tell from the evidence at hand. Be warned: your intuitions will deliver predictions that are too extreme, and you will be inclined to believe them. Regression is also a problem for System 2. The very idea of regression to the mean is difficult to explain and understand; Galton grasped it only with great difficulty. Many statistics teachers dread the class devoted to it, and students often come away with only a vague understanding of this crucial concept. Evaluating regression demands special training of System 2. Matching predictions to the evidence is not only something we do intuitively; it also seems reasonable. Regression cannot be learned from personal experience, and even when it is identified, as in the case of the flight instructors, it is given a causal interpretation that is almost always wrong.

PART III. OVERCONFIDENCE
Chapter 19. The illusion of understanding

Nassim Taleb - trader, mathematician, philosopher - can rightfully be considered a psychologist as well. In The Black Swan he introduces the notion of the narrative fallacy to explain how flawed stories of the past shape our views of the world and our expectations for the future. Narrative fallacies arise from our continuous attempts to make sense of the world. The explanatory stories that people find compelling are simple and concrete rather than abstract; they assign a larger role to talent, stupidity and intention than to luck; and they focus on a few striking events that happened rather than on the countless events that failed to happen. Any salient event is a candidate for a causal narrative. Taleb argues that we constantly fool ourselves in this way, piling flimsy conclusions onto the foundation of the past and treating them as unshakable.

The fact that many important choices were made before their outcomes were known leads us to overestimate the role of the participants' skill and underestimate the part that luck played. The core of the illusion is this: we believe we understand the past, which implies that the future should also be knowable; in fact, we understand the past less well than we think.

"Hindsight" and its cost to society. A basic limitation of the human mind is its inability to return to a past state of knowledge - to take up the same position without knowing what came after. Once you adopt a new picture of the world (or of any part of it), the old one is largely erased: you can no longer recall how or what you believed before.

The inability to reconstruct past beliefs inevitably leads you to underestimate how surprised you were by past events. Baruch Fischhoff, while still a student, was the first to demonstrate this hindsight bias, the "I knew it all along" effect. The tendency to revise the history of one's own beliefs in light of what actually happened produces a robust cognitive illusion.

Imagine that a routine surgical intervention goes unexpectedly wrong and the patient dies. At trial, jurors will be prone to believe that the operation was actually riskier than it had seemed and that the doctor who ordered it should have foreseen this. This error (outcome bias) makes it almost impossible to evaluate a decision properly - in terms of the beliefs that were reasonable at the time the decision was made.

Outcome bias. When the outcome is bad, the client blames the agent for not seeing the writing on the wall, forgetting that it was written in invisible ink that became legible only afterward. Actions that seemed prudent in foresight can look irresponsibly careless in hindsight.

The worse the consequences, the stronger the hindsight bias. In the case of a catastrophe like September 11, we are especially ready to believe that the officials who could have prevented it were negligent or blind.

Recipes for success. The meaning-making mechanism of System 1 helps us see the world around us as simpler, more coherent, and more predictable than reality. The illusion that the past can be understood gives rise to the illusion that the future is predictable and controllable. Delusions calm us down, reducing the anxiety that would inevitably arise with the awareness of the uncertainty of our existence.

Do a CEO's leadership style and personality affect a firm's profits? Of course. This has been confirmed by systematic studies that objectively assessed the characteristics of CEOs, their decisions, and the subsequent changes in earnings. A CEO's policies do influence the company's performance, but the effect is far smaller than the business press claims. Researchers measure the strength of a relationship by the correlation coefficient, which in this context varies between 0 and 1. In the chapter on regression to the mean, this coefficient was described as a measure of the relative weight of the factors shared by the two values being compared. A generous estimate of the correlation between the success of a firm and the quality of its CEO is about 0.30, indicating 30% overlap. A correlation of 0.30 implies that, comparing pairs of firms, the stronger CEO leads the more successful firm in about 60% of the pairs - only 10 percentage points better than chance. Knowing this, it is easy to see that the merits of leaders are not as great as they are made out to be.
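
The "60% of pairs" figure can be checked with a small simulation (assuming, purely for illustration, that CEO quality and firm success are jointly normal with a correlation of 0.30):

```python
# Sketch: with correlation 0.30, in what share of firm pairs does the stronger CEO
# lead the more successful firm?
import random

random.seed(3)
RHO = 0.30

def firm():
    """Return (CEO quality, firm success) with the assumed correlation."""
    ceo = random.gauss(0, 1)
    success = RHO * ceo + (1 - RHO**2) ** 0.5 * random.gauss(0, 1)
    return ceo, success

TRIALS = 200_000
agree = 0
for _ in range(TRIALS):
    (ceo_a, succ_a), (ceo_b, succ_b) = firm(), firm()
    if (ceo_a - ceo_b) * (succ_a - succ_b) > 0:   # orderings agree
        agree += 1

print(f"stronger CEO leads the more successful firm in {agree / TRIALS:.0%} of pairs")  # ~60%
```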

From the point of view of most business commentators, however, a CEO on whom so little depends can hardly expect recognition even when the firm succeeds. It is hard to imagine people lining up for a book describing the methods of a leader whose efforts produce results only slightly better than chance. Consumers crave clear messages about the ingredients of success and failure in business; they need stories that give a sense of understanding, however illusory. Philip Rosenzweig, a professor at a Swiss business school, shows in The Halo Effect how this demand for illusory certainty is met by two popular genres of business writing: histories of the rise (and, more rarely, the fall) of particular companies and entrepreneurs, and analytical comparisons of more and less successful firms.

Rosenzweig concluded that stories of success and failure consistently exaggerate the impact of leadership style and management practices on firm performance, and that such stories therefore teach little. To see what is going on, imagine that business experts, for example other CEOs, are asked to comment on the reputation of the head of a certain company. They are keenly aware of whether the company has recently been thriving or declining. That knowledge creates a halo: the CEO of a prospering firm is likely to be called methodical, flexible and decisive. Now imagine that a year has passed and things have gone sour: the same CEO will be described as rigid, confused and authoritarian.

In fact, the halo effect is so powerful that you probably resist the idea that the same actions can be right or wrong depending on the situation, and the same person flexible or rigid. Because of the halo effect we get the causal relationship backward: we believe the firm is failing because its management is rigid, when in fact the management appears rigid to us because the firm is failing. That is how illusions of understanding are born. The halo effect, combined with outcome bias, explains the appetite for books that draw conclusions from systematic study of successful firms and offer practical advice. The premise of such books is that "good management practices" can be identified, studied, and applied to achieve success. Both assumptions are too bold. Comparing more and less successful firms is, to a significant extent, comparing firms that have been more and less lucky. Keeping the role of luck in mind, one should be skeptical of the "robust regularities" derived from observing successful and unsuccessful enterprises; with enough randomness in the mix, the regularities turn out to be mirages. Given the large role of chance, the quality of leadership and of management practices cannot be inferred from observed success.

Chapter 20. The illusion of validity

Confidence is a feeling that reflects the coherence of the information and the cognitive ease of processing it. It is therefore wise to take admissions of uncertainty seriously, whereas declarations of high confidence mainly tell you that the person has constructed a coherent story in his mind, not necessarily that the story is true.

What makes one person sell and another buy? Why do sellers believe they know more than buyers, and what information do they have that buyers lack? My questions about the workings of the stock market merged into one big puzzle: a large industry appears to be built largely on an illusion of skill. Most buyers and sellers know that they have the same information; shares change hands mainly because of differences in opinion. Sellers think the price is too high and about to fall; buyers think it is too low and about to rise. The puzzle is why both sides are confident that the price must move. Why do they firmly believe they know better than the market what the price should be? For most traders, that confidence is an illusion.

There is a general consensus among researchers that almost all financial analysts, wittingly or unwittingly, rely on chance. The trader's subjective feeling is that he is making a reasonable, informed choice in a very uncertain situation; however, in highly efficient markets, informed choices are no more accurate than blind choices.

What sustains the illusions of skill and validity? The ability to evaluate a firm's business prospects is not enough for successful investing, where the key question is whether the information about those prospects is already incorporated in the share price. Illusions of validity and skill are also sustained by a powerful professional culture. We know that people can maintain an unshakable faith in any proposition, however absurd, when that faith is shared by a community of like-minded believers.

Illusions of experts. Nassim Taleb noted in The Black Swan that our tendency to construct and believe coherent narratives (that is, consistent stories about the past) makes it hard for us to accept that our ability to foresee the future is limited. The illusion of understanding the past breeds overconfidence in our ability to predict the future. (Stories built on rewriting the past are a popular genre in their own right - think of the Back to the Future trilogy or Stephen King's novel 11/22/63.)

"We reach the point of diminishing returns for knowledge-based prediction disconcertingly quickly," writes Philip Tetlock, a psychologist at the University of Pennsylvania. "In this age of academic hyperspecialization, there is no reason to believe that contributors to leading journals - distinguished political scientists, area specialists, economists and so on - are any better than ordinary journalists or simply attentive readers of the New York Times at reading emerging situations." Tetlock also found that the more famous the forecaster, the more extravagant the forecasts. "Experts in demand," he writes, "are more overconfident than colleagues who live far from the limelight."

Two things should be recognized. First, forecasting errors are inevitable, because the world is unpredictable. Second, high subjective confidence should not be taken as an indicator of accuracy (low confidence may actually be more informative). It is futile to expect that a cadet's behavior on an obstacle course will tell you what he will be like in officer school or in combat - too many situation-specific factors are at work in each setting. Remove the most assertive member from a group of eight candidates, and the others will show a new side of themselves. I do not deny the value of tests: if a test predicts an important outcome with a validity of 0.2 or 0.3, it should be used. But do not expect more.

Chapter 21. Intuition and formulas - who wins?

Paul Meehl, one of the most versatile psychologists of the twentieth century, tried to work out why experts lose to formulas. One reason, he suggested, is that experts try to be clever, to think outside the box, and to weigh complex combinations of factors. Occasionally complexity helps, but more often it reduces the validity of predictions; simple combinations of features do better. Another reason experts are inferior to formulas is the inexcusable inconsistency of human judgment with complex information: given the same set of data twice, experts often give different answers. Such inconsistency is probably widespread because System 1 is so dependent on context.

Research leads to a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments. In admissions to medical schools, for example, the final decision is often left to the faculty members who interview the candidates. The evidence, though limited, suggests that interviews are likely to reduce the accuracy of the selection procedure, because interviewers are overconfident in their intuitions and give too much weight to their own observations at the expense of other sources of information.

In the social sciences, the prevailing statistical practice is to assign weights to the various predictors by following an algorithm called multiple regression. Robin Dawes showed that the complexity of the statistical algorithm adds little or nothing to its accuracy. Indeed, formulas that assign equal weights to all predictors often do better, because they are not affected by accidents of sampling.
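
Here is a rough sketch of Dawes's point on made-up data (the predictor weights, noise level and sample sizes are arbitrary assumptions, and the predictors are already on a common scale in this toy example): with a small training sample, unit weights predict new cases about as well as weights fitted by multiple regression.

```python
# Sketch: equal-weight formulas vs. fitted multiple-regression weights on small samples.
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([0.5, 0.4, 0.3, 0.2])     # assumed "true" predictor weights

def make_data(n):
    x = rng.normal(size=(n, 4))             # standardized predictors
    y = x @ TRUE_W + rng.normal(scale=1.5, size=n)   # noisy outcome
    return x, y

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

ols_scores, equal_scores = [], []
for _ in range(500):
    x_train, y_train = make_data(30)        # small training sample
    x_test, y_test = make_data(1000)        # fresh cases to predict
    w_ols, *_ = np.linalg.lstsq(x_train, y_train, rcond=None)
    ols_scores.append(corr(x_test @ w_ols, y_test))
    equal_scores.append(corr(x_test.sum(axis=1), y_test))   # unit weights

print(f"out-of-sample validity, fitted weights: {np.mean(ols_scores):.2f}")
print(f"out-of-sample validity, equal weights:  {np.mean(equal_scores):.2f}")
# The two validities come out very close; the fitted weights gain little from the small sample.
```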

Short-term prediction in the context of a therapy session is a skill that therapists hone over years of practice, and such predictions tend to be good. Long-term prognosis for an individual patient, by contrast, is a nearly impossible task for specialists. Moreover, clinicians have no real opportunity to acquire the skill of long-term forecasting: the feedback that would confirm or refute their hypotheses takes too many years to arrive.

The stubborn resistance to the demystification of expertise was vividly illustrated by the reaction of European winemakers to Ashenfelter's formula. Prejudice against algorithms grows when the decisions have important consequences. Fortunately, hostility toward algorithms will probably soften as their role in everyday life expands: when we look for books or music, software recommendations already serve us well.

Let's say you need to hire a sales representative for your company. If you are serious about finding the best candidate for the position, start by selecting a few traits required for success in the job (for example, technical proficiency, an engaging personality, reliability, and so on). Don't overdo it - six traits are enough. Make sure the traits do not overlap and that each can be assessed with a few simple, factual questions.

Make a list of such questions for each trait and choose a rating scale, say a five-point one, defining what counts as "very strong" and "very weak." To avoid the halo effect, collect the information and score each trait fully before moving on to the next. To obtain an overall rating, add up the six scores. Hire the candidate whose final score is highest, even if you liked another one better. If you follow this procedure, you will do better than most people do when they trust their intuition ("I liked him right away").
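
A minimal sketch of the scoring procedure just described, with hypothetical traits, candidates and ratings (all names and numbers are made up for illustration):

```python
# Sketch: score each trait separately on a 1-5 scale, then hire on the plain sum.
TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "diligence", "communication", "product knowledge"]

def total_score(ratings):
    """Sum of the six trait ratings; each trait must be rated 1-5."""
    assert set(ratings) == set(TRAITS) and all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),   # made-up ratings
    "B": dict(zip(TRAITS, [5, 5, 2, 3, 4, 3])),
}
best = max(candidates, key=lambda name: total_score(candidates[name]))
print({name: total_score(r) for name, r in candidates.items()}, "-> hire", best)
```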

Chapter 22. Expert intuition: when should you trust it?

How can we assess the likely validity of an intuitive judgment? When do such judgments reflect genuine expertise, and when are they merely an example of the illusion of validity? The answer follows from the two basic conditions for acquiring a skill:

  • an environment that is sufficiently regular to be predictable;
  • an opportunity to learn these regularities through prolonged practice.

When both of these conditions are met, intuitions are acquired skills. Chess is an extreme example of a regular environment, but bridge and poker also have robust statistical regularities that allow skill to be honed. Physicians, nurses, athletes and firefighters likewise face complex but fundamentally orderly situations. The accurate intuitions of the firefighters described by Gary Klein arise from highly valid cues that the expert's System 1 has learned to use, even if System 2 has not learned to name them. With financial analysts and political forecasters the opposite is true: they operate in a zero-validity environment, and their failures reflect the fundamental unpredictability of the events they try to forecast.

It is wrong to blame anyone for failing to forecast accurately in an unpredictable world. It seems fair, however, to blame professionals for believing they can succeed at an impossible task. Claiming correct intuitions about an unpredictable situation is self-delusion at best; in the absence of valid environmental cues, such "insights" are due either to luck or to falsehood. If this conclusion surprises you, you still hold a residual belief in the magic of intuition. Remember the rule: intuition cannot be trusted in an environment that lacks stable regularities.

Feedback and practice. Whether professionals have a chance to develop intuitive expertise depends mainly on the quality and speed of feedback, as well as on sufficient opportunity to practice. Anesthesiologists enjoy good feedback, because the effects of their actions become evident quickly; radiologists, by contrast, get little information about the accuracy of their diagnoses or the pathologies they failed to detect. Anesthesiologists are therefore in a better position to develop useful intuitions. If an anesthesiologist says, "Something is wrong," everyone in the operating room should prepare for an emergency.

When should you trust an expert's intuition? If the environment is sufficiently regular and the expert has had a chance to learn its regularities, the associative machinery will recognize situations and quickly generate accurate predictions and decisions. You can trust the expert's intuition when these conditions are met. In a less stable, low-validity environment, the heuristics of judgment take over instead. System 1 can deliver fast answers to difficult questions by substitution, creating coherence where there is none. The question that gets answered is not the one that was asked, but the answer is produced quickly and is plausible enough to slip past the lax and lenient review of System 2. Suppose you want to forecast the commercial prospects of a company and believe that this is what you are judging, while in fact your judgment is dominated by your impression of the energy and competence of its current management.

Chapter 23. View from the outside

Early in my career I once persuaded officials in the Israeli Ministry of Education that the school curriculum needed a course on judgment and decision making. To design the course and write a textbook for it, I assembled a team of experienced teachers. At some point we estimated that completing the project would take us about two years. I then put a question to Seymour, a member of the team who knew the statistics of similar projects undertaken by other groups. Seymour reported that comparable teams took about seven to ten years to finish, and that some 40% of such projects were never completed at all. In the end, it took us another eight (!) years to finish the textbook.

This unpleasant episode was perhaps the most instructive of my professional career. Over time I drew three lessons from it. The first was clear to me right away: I had stumbled on the distinction between two radically different approaches to forecasting, which Amos and I later labeled the "inside view" and the "outside view." The second lesson was that our initial estimate, about two years to complete the project, was an instance of the planning fallacy: it reflected a best-case scenario rather than a realistic assessment. The third lesson, the lesson of "irrational perseverance," I learned only much later: it is by irrational perseverance that I explain our reluctance to abandon an obviously failing project. Faced with a choice, we gave up rationality rather than give up the enterprise.

A view from the inside. We evaluated the project's future from the inside: we focused on our specific circumstances and searched for evidence in our own experience. On that day we could not foresee the particular accidents that would let the project drag on for so long. The question I asked Seymour shifted his attention from our particular situation to the category of similar situations. The statistics Seymour knew for that category implied a 40% rate of failure and 7-10 years to completion. That baseline forecast should have been the anchor, the starting point for further adjustments. If the reference category is chosen correctly, the outside view tells you roughly where the answer lies or, as in our case, makes it clear that the inside forecasts are not even close to it.

Seymour's own "inside" forecast was not an adjustment to that baseline - the baseline did not even occur to him - but was based entirely on the particulars of our case, that is, on our own efforts. Like the participants in the Tom W. experiment, Seymour knew the relevant statistics but did not think to apply them. This is the common pattern: people who have information about an individual case rarely feel the need for statistics about the category to which the case belongs. When the outside view was finally presented to us, we collectively ignored it.

The planning fallacy. Once we take the outside forecast and the eventual outcome into account, our initial estimates of the project's completion time look almost delusional. This is hardly surprising: overly optimistic forecasts are found everywhere. Amos and I coined the term planning fallacy to describe plans and forecasts that are unrealistically close to best-case scenarios and that could be improved by consulting the statistics of similar cases.

Mitigating the planning fallacy. Bent Flyvbjerg, an expert on planning and professor at the University of Oxford: "Perhaps the main source of forecasting errors is the prevailing tendency to underestimate or ignore distributional information. Planners should therefore take the trouble to frame the forecasting problem so as to facilitate the collection and use of distributional information."

Using distributional information from other ventures similar to the one being forecast is called taking the "outside view." This remedy now has a technical name: reference-class forecasting. A well-run organization will reward planners for executing their plans precisely and penalize them for failing to anticipate difficulties and for failing to allow for the difficulties they could not have anticipated, the "unknown unknowns."
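As an aside, the mechanics of the outside view are simple enough to sketch in a few lines of code. The numbers below are invented purely for illustration; they stand in for whatever statistics a real reference class of similar projects would provide.

```python
# A minimal sketch of reference-class forecasting with hypothetical data:
# durations (in years) of comparable past projects form the "outside view".
from statistics import median, quantiles

similar_project_years = [4, 5, 6, 7, 7, 8, 9, 10, 12]  # hypothetical reference class
inside_estimate = 2.0                                   # the team's own "inside" forecast

baseline = median(similar_project_years)                # the anchor for the forecast
p25, p50, p75 = quantiles(similar_project_years, n=4)   # spread of outcomes in the class

print(f"outside-view baseline: {baseline} years (quartiles {p25}-{p75})")
print(f"inside-view estimate:  {inside_estimate} years")
# The inside estimate should then be treated as an adjustment from the baseline,
# not as a forecast built from scratch.
```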

Decisions and errors. When forecasting risky ventures, executives easily fall victim to the planning fallacy. Under its influence they make decisions based not on a rational weighing of possible losses, gains and prospects, but on deluded optimism. They overestimate benefits and underestimate costs. They spin scenarios of success and overlook the places where they could miscalculate or be outmaneuvered. As a result, they take on projects that end in budget overruns, delays, unrecovered costs, or are never completed at all.

Chapter 24. The engine of capitalism

The planning fallacy is just one manifestation of the ubiquitous optimism bias. Most of us see the world as friendlier, our own traits as more pleasant, and our goals as more achievable than they actually are. We also tend to exaggerate our own ability to predict the future, which makes us overconfident. Studies of small business founders have shown that entrepreneurs have a more positive outlook on life than mid-level managers. The optimistic bias influences events whenever a person or organization takes a serious risk. Risk takers tend to underestimate the role of random factors. When you need to act, optimism (even in the form of delusion) can be useful.

Entrepreneurs are mistaken: the financial rewards of running your own business are modest - people with the same qualifications earn more by selling their skills to employers. The evidence suggests that optimism is widespread, ineradicable, and costly. CEOs of large companies sometimes stake a great deal on expensive mergers and acquisitions, acting on the mistaken belief that they can manage the other company's assets better than its current owners. The "hubris hypothesis" was put forward to explain such senseless takeovers: the management of the acquiring company is simply less competent than it believes itself to be.

Disregard for competition. Cognitive biases play an important role in entrepreneurial optimism, especially the WYSIATI principle ("what you see is all there is") inherent in System 1:

  • We focus on one goal, fixate on our plan, and neglect the relevant base rates, committing the planning fallacy in the process.
  • By focusing on what we want and can do, we ignore the skills and plans of others.
  • In both explaining the past and predicting the future, we focus on the causal role of skill and neglect the role of luck, falling under the illusion of control.
  • By focusing on what we know, we reject the unknown and become overconfident in our judgment.

An expert who fully acknowledges the limits of his competence can expect to be fired and replaced by a more self-confident colleague who finds it easier to win clients' trust. An unbiased recognition of one's own uncertainty is a cornerstone of sanity, but that is not what most people and organizations are looking for. Together, the emotional, cognitive and social factors that support exaggerated optimism form a potent mixture that sometimes leads people to take risks they would avoid if they knew the odds.

The main benefit of an optimistic attitude is that it increases your resilience to failure. Essentially, an optimistic style means that a person takes success for granted and does not beat himself up too much about mistakes.

The "premortem" is a partial remedy. Organizations may be better able than individuals to curb optimism and its carriers. Gary Klein calls his technique the premortem. The procedure is simple: when the organization is on the verge of an important decision but has not yet formally committed to it, those familiar with the plan are gathered and told: "Imagine that we are in the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5-10 minutes to write a brief history of that disaster - how it came about." The first advantage of the premortem is that it legitimizes doubts.

PART IV. CHOICE
Chapter 25. Bernoulli's errors

I once read a statement by the Swiss economist Bruno Frey: "The agent of economic theory is rational, selfish, and his tastes do not change." To a psychologist it is obvious that people are neither fully rational nor completely selfish, and that their tastes are anything but stable. It seemed as though our two disciplines were studying members of two different species; the behavioral economist Richard Thaler later called them "Econs" and "Humans." Unlike Econs, the Humans studied by psychologists have a System 1. Their view of the world is limited to the information available at the moment (the WYSIATI principle), so they cannot be as consistent and logical as Econs.

Every significant choice we make in life involves some degree of uncertainty, which is why decision researchers hope that the insights gained from studying simulated situations can be applied to more interesting everyday situations.

The mathematician John von Neumann, one of the greatest thinkers of the 20th century, and the economist Oskar Morgenstern derived their theory of rational choice between gambles from a few axioms. Economists see expected utility theory in two roles: as a logic that prescribes how choices should be made, and as a description of how Econs make choices. Amos and I, being psychologists, set out to study how Humans make risky choices without assuming anything about their rationality. Five years after we began studying gambles, we completed an essay entitled "Prospect Theory: An Analysis of Decision under Risk." Our theory was closely modeled on utility theory but departed from it in fundamental ways. Most importantly, our model was purely descriptive; its goal was to document and explain systematic violations of the axioms of rationality in choices between gambles.

In our first five years of studying decision making, we established a dozen facts about the choice between risky options. Some of the findings contradicted expected utility theory. To explain the collected observations, we created a theory modifying expected utility theory and called it prospect theory.

Bernoulli's idea was simple: decisions are based not on the monetary value of outcomes but on their psychological value, their utility. The psychological value of a gamble is therefore not the weighted average of its possible monetary outcomes; it is the average of the utilities of those outcomes, each weighted by its probability (Fig. 6).

Bernoulli proposed that the diminishing marginal value of wealth explains risk aversion. Consider the following choice: an equal chance of receiving 1 million or 7 million (utility: (10 + 84)/2 = 47), or 4 million for certain (utility: 60). The expected monetary value of the gamble and the "sure thing" are the same (4 million), but their psychological utilities differ because of the diminishing utility of wealth. Bernoulli's insight was that a decision maker with diminishing marginal utility for wealth will be risk averse.

Bernoulli used his new concept of expected utility to calculate how much a merchant in St. Petersburg would be willing to pay to insure a shipment of spices from Amsterdam if "he knows that at this time of year, of a hundred ships sailing from Amsterdam to St. Petersburg, five are lost." His utility function explained why poor people buy insurance and why richer people sell it to them. As the table shows, a loss of one million costs 4 utility points (from 100 to 96) for someone who has 10 million, and a much larger 18 points (from 48 to 30) for someone who has only 3 million.
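These calculations are easy to reproduce. The sketch below uses only the utility points quoted above from Bernoulli's table; nothing else is assumed.

```python
# Bernoulli's argument with the utility points cited in the text
# (wealth in millions -> utility points).
utility = {1: 10, 2: 30, 3: 48, 4: 60, 7: 84, 9: 96, 10: 100}

def expected_utility(prospects):
    """prospects: list of (probability, wealth in millions) pairs."""
    return sum(p * utility[w] for p, w in prospects)

# Equal chances of 1 million or 7 million vs. 4 million for certain:
gamble = [(0.5, 1), (0.5, 7)]
print(expected_utility(gamble))  # (10 + 84) / 2 = 47.0
print(utility[4])                # 60 -> the sure 4 million wins: risk aversion

# Why the poor buy insurance and the rich sell it: losing 1 million hurts
# far more the less you have.
print(utility[10] - utility[9])  # 4 utility points for someone with 10 million
print(utility[3] - utility[2])   # 18 utility points for someone with 3 million
```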

Chapter 26. Prospect Theory

By exploring contrasting views of risk under favorable and unfavorable prospects, we have taken a significant step forward: we have found a way to demonstrate the central flaw of Bernoulli's choice model. Take a look:

  • Problem 1: In addition to whatever you own, you have been given $1,000. Now choose between a 50% chance to win another $1,000 OR a sure $500.
  • Problem 2: In addition to whatever you own, you have been given $2,000. Now choose between a 50% chance to lose $1,000 OR a sure loss of $500.

It is easy to see that from the point of view of the final amount of wealth (according to Bernoulli’s theory, this is the only important indicator) the options are identical. You can be guaranteed to be $1,500 richer, or you can take a gamble in which you have an equal chance of becoming either $1,000 or $2,000 richer. Thus, according to Bernoulli's theory, both tasks should give the same preferences. Check your intuition - try to guess what others have chosen:

  • In the first case, the majority of respondents preferred guaranteed money.
  • In the second case, the vast majority of subjects preferred the game.

Bernoulli's theory is too simple; it lacks a moving part - a reference point, the earlier state relative to which gains and losses are evaluated. In Bernoulli's theory you only need to know the amount of wealth to determine its utility; in prospect theory you also need to know the reference state. Prospect theory is therefore more complex than utility theory. In science, added complexity is a cost that must be justified by a sufficiently rich set of new and, ideally, interesting predictions of facts that the existing theory cannot explain.

At the center of prospect theory are three cognitive properties. These can be considered the operational characteristics of System 1.

  • Evaluation is relative to a neutral reference point, sometimes called the "adaptation level." The principle is easy to demonstrate. Place three bowls of water in front of you: ice water in the left one, warm water in the right one, and water at room temperature in the middle one. Hold your left and right hands in the cold and warm bowls for about a minute, then put both hands into the middle bowl. You will feel the same water as warm with one hand and cold with the other. For financial outcomes the reference point is usually the status quo, but it can also be an expected outcome or one that feels deserved, such as the raise or bonus your colleagues received. Outcomes above the reference point are gains; outcomes below it are losses.
  • The principle of diminishing sensitivity applies both to sensations and to evaluations of changes in wealth. Turning on a weak light has a large effect in a dark room; the same change in illumination goes unnoticed in a brightly lit room. Likewise, the subjective difference between $900 and $1,000 is much smaller than the difference between $100 and $200.
  • The third principle is loss aversion. When compared directly, losses loom larger than gains. This asymmetry between the strength of positive and negative expectations or experiences has an evolutionary history: an organism that responds more strongly to threats than to opportunities has a better chance of surviving and reproducing.

The graph (Fig. 7) shows the psychological value of gains and losses, which are the “carriers” of value in prospect theory (unlike the Bernoulli model, where the carriers of value are the amount of wealth). The graph is clearly divided into two parts - to the right and to the left of the reference point. The S-shape is striking, demonstrating decreased sensitivity for both gains and losses. Finally, the two halves of the S are not symmetrical. The function curve changes sharply at the reference point: the reaction to losses is stronger than the reaction to the corresponding gains. This is loss aversion.

Loss aversion can be measured by asking yourself: What is the minimum gain that would balance out an equal chance for me to lose $100? For most, the answer is about $200, or twice the loss. The "loss aversion coefficient" has been evaluated experimentally many times and typically ranges from 1.5 to 2.5.
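A minimal sketch of such a value function, using the functional form and median parameter estimates (α = 0.88, λ = 2.25) from Tversky and Kahneman's 1992 paper purely as plausible assumptions:

```python
# Prospect-theory value function (assumed form and parameters from
# Tversky & Kahneman, 1992): diminishing sensitivity plus loss aversion.
ALPHA, LOSS_AVERSION = 0.88, 2.25

def value(x):
    return x**ALPHA if x >= 0 else -LOSS_AVERSION * (-x)**ALPHA

# Loss aversion: a $100 loss weighs roughly twice as much as a $100 gain,
# in line with the 1.5-2.5 range of the coefficient quoted above.
print(abs(value(-100)) / value(100))  # ~2.25

# Diminishing sensitivity: the step from $900 to $1,000 feels smaller
# than the step from $100 to $200.
print(value(1000) - value(900))       # ~38.6
print(value(200) - value(100))        # ~48.4
```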

Of course, there can be no talk of any gambling if possible losses turn into a disaster or your way of life is threatened. In such cases, the loss aversion coefficient is huge and often tends to infinity.

Prospect theory has blind spots of its own. Take, for example, its assumption that the reference point, usually the status quo, has a value of zero. Consider the following prospects:

  1. One in a million chance of winning $1 million.
  2. There is a 90% chance of winning $12 and a 10% chance of winning nothing.
  3. There is a 90% chance of winning $1 million and a 10% chance of winning nothing.

The "win nothing" outcome is present in all three gambles, and prospect theory assigns it the same value in each case. Winning nothing is the reference point, and its value is zero. Do these statements match your feelings? Of course not. In the first two cases, not winning is a non-event, and valuing it at zero makes sense. In the third case, by contrast, not winning means bitter disappointment. Like a raise promised informally, a high probability of winning a large sum sets up a new reference point. Relative to your expectations, winning nothing is experienced as a large loss. Prospect theory cannot accommodate this fact. Yet both disappointment and the anticipation of disappointment are real, and the failure to account for them is as obvious a flaw as the counterexamples I used to criticize Bernoulli's theory.

Chapter 27. The Endowment Effect

All points on an indifference curve are equally attractive; that is what "indifference" means. The convexity of the curve reflects diminishing marginal utility. There are two things the standard indifference-curve model fails to predict. First, tastes are not fixed; they change with the reference point. Second, the disadvantages of a change loom larger than its advantages, which produces a bias in favor of the status quo.

Traditional indifference curves and Bernoulli's representation of outcomes as states of wealth share a mistaken assumption: that your utility in a given situation depends only on that situation and is unaffected by your history. Correcting this error was one of the achievements of behavioral economics.

In the early 1970s, Richard Thaler collected many examples of what he called the "endowment effect." For example, you hold a ticket to a concert by a popular band, which you bought at its face value of $200. You are an avid fan and would readily have paid up to $500 for the ticket. Now you read online that richer or more desperate fans are offering $3,000. Will you sell? If you are like most people holding tickets to a sold-out show, you will not. Your minimum selling price is above $3,000, yet your maximum buying price is $500. This is an example of the endowment effect, which baffles students of standard economics. Thaler realized that the loss-averse value function of prospect theory could explain the endowment effect.

The first application of prospect theory to an economic puzzle appears to be a milestone in the development of behavioral economics.

The endowment effect is not universal. If you are asked to change a five-dollar bill for five singles, you hand over the bill without any sense of loss. Nor is there loss aversion on either side of routine commercial transactions. What distinguishes these market exchanges from Professor R's reluctance to sell his wine, or from Super Bowl ticket holders' reluctance to sell even at a high price? The difference is that the shoes a merchant sells you, and the money from your budget that you spend on them, are held "for exchange": they exist to be traded for other goods. Other goods, such as wine or Super Bowl tickets, are held "for use": for personal consumption or enjoyment.

The high price quoted by sellers reflects a reluctance to part with something they already own, the kind of reluctance we see in a child who desperately clings to a toy and becomes furious if it is taken away. Loss aversion is built into the automatic appraisal structure of System 1.

Think like a trader. The existence of a reference point and the fact that losses appear to be greater than corresponding gains are considered fundamental ideas of prospect theory. One should not expect an endowment effect if the owner views his goods as having value for future exchange—a common view within commercial and financial markets.

Chapter 28. Failures

The brains of humans and animals have a mechanism that allows them to prioritize bad news. Accelerating the transmission of impulse even by a few hundredths of a second increases the animal’s chances of survival in a collision with a predator and subsequent reproduction. The automatic actions of System 1 reflect our evolutionary history. Oddly enough, no relatively fast-acting mechanisms for recognizing “good news” have been discovered.

Bad news in most cases outweighs good, and loss aversion is only one manifestation of this dominance of negativity. Loss aversion arises from the competition between two opposing motives: the desire to avoid losses (the stronger one) and the desire to obtain gains. The reference point is sometimes the status quo and sometimes a goal in the future: to reach it is a gain, to fall short is a loss. The dominance of negativity implies that the two motives differ in strength: the aversion to falling short of a goal is stronger than the desire to exceed the plan.

Maintaining the status quo. An attentive observer will find almost everywhere an imbalance between the intensity of the motive to avoid losses and the motive to achieve gains. It is invariably present in negotiations, especially in renegotiations of existing contracts. Animals fight more desperately to keep what they have than to gain more. As one biologist observed: "When the holder of a territory is challenged by an intruder, the intruder almost always retreats, usually within a few seconds."

Chapter 29. Four-part scheme

Whenever you make an overall assessment of a complex object—a new car, a future son-in-law, an uncertain situation—you attribute importance to each of its characteristics. This is a fancy way of saying that some of them influence your judgment more than others. The differentiation of properties by degree of importance occurs regardless of whether you are aware of it or not - this is one of the functions of System 1.

Adjusting the odds. Before Bernoulli, gambles were evaluated by their expected value. Bernoulli kept this method of weighting outcomes by their probabilities (it has since been called the expectation principle) but applied it to the psychological value of the outcomes: in his theory, the utility of a gamble is the average of the utilities of its outcomes, each weighted by its probability. The expectation principle, however, does not describe how you actually think about probabilities in risky prospects. In the four examples below, your chance of winning $1 million improves by 5 percentage points each time. Would you be equally pleased in each case if the probability increased:

A. from 0 to 5%?
B. from 5 to 10%?
C. from 60 to 65%?
D. from 95 to 100%?

The expectation principle says that your utility rises by exactly 5% in each case. But is that how it feels? Of course not. Anyone will agree that the pairs 0-5% and 95-100% are far more impressive than 5-10% or 60-65%. Raising the chances from zero to five percent transforms the situation: it creates a possibility that did not exist before and gives hope of winning the prize. This is a qualitative change, whereas the step from 5% to 10% is merely quantitative. And although in the 5-10% pair the probability of winning doubles, the psychological value of the prospect does not double - most people would agree. The impression produced by raising the probability from zero to 5% is an example of the possibility effect, which makes unlikely outcomes loom larger, receiving more weight than they "deserve."

The increase from 95% to 100% is another qualitative change with a powerful impact: the certainty effect. Because of the possibility effect, we tend to overweight small risks and pay more than necessary to eliminate them entirely. The overweighting of small probabilities increases the attractiveness of both gambles and insurance policies.

The expectation principle, according to which values are weighted by their probabilities, is psychologically untenable. Matters are complicated further by a powerful argument that anyone who wants to be rational in making decisions must obey the expectation principle. That is the central claim of the famous utility theory presented in 1944 by von Neumann and Morgenstern, who proved that any weighting of uncertain outcomes that is not strictly proportional to probability leads to inconsistencies and other troubles.

When Amos and I began working on prospect theory, we quickly reached two conclusions: people attach value to gains and losses rather than to states of wealth, and the decision weights they assign to outcomes differ from the probabilities of those outcomes. Neither idea was entirely new, but in combination they explained a characteristic pattern of preferences that we called the four-part scheme (Figure 9). The first line of each cell shows an illustrative prospect. The second describes the dominant emotion that the prospect evokes. The third describes how most people behave when offered a choice between a gamble and a sure gain (or loss) corresponding to its expected value (for example, between a 95% chance to win $10,000 and a guaranteed $9,500). A choice is called risk averse when the sure outcome is preferred, and risk seeking when the gamble is preferred. The fourth describes the expected positions of defendant and plaintiff when negotiating the settlement of a civil suit.

Fig. 9. Four-part scheme of prospect theory

The four-part scheme of preferences is considered one of the core achievements of prospect theory. Three of the four cells were familiar to us; the fourth (top right) came as a surprise. The top left cell describes Bernoulli's case: people are risk averse when they consider prospects with a substantial chance of a large gain. They are willing to accept less than the expected value just to lock in a sure gain. The possibility effect in the bottom left cell explains the popularity of lotteries. When the jackpot is huge, ticket buyers forget that the chance of winning is minuscule: without a ticket you cannot win, but with one you have a chance, however tiny. Of course, the ticket buys something more than a chance to win - the right to dream pleasantly of wealth. The bottom right cell describes the purchase of insurance. People are willing to pay more than the expected value for certainty; thanks to this willingness, insurance companies exist and prosper. Here, too, something more is bought than protection against an unlikely misfortune - the elimination of worry and peace of mind. In the top right cell, we seek risk in the domain of losses, in the illusory hope of preserving the status quo; we are willing to gamble a larger amount just to avoid locking in a loss.
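To make the four cells concrete, here is a small sketch that evaluates the illustrative prospects from the text ($10,000 at a 95% or 5% chance, against the corresponding sure amounts) with a prospect-theory value function and weighting function. The functional forms and parameters (α = 0.88, λ = 2.25, γ = 0.61) are the estimates reported in Tversky and Kahneman's 1992 paper, used here only as plausible assumptions; for simplicity the same γ is applied to gains and losses.

```python
# A sketch of the four-part scheme under assumed prospect-theory parameters
# (alpha = 0.88, lambda = 2.25, gamma = 0.61, per Tversky & Kahneman, 1992;
# the same gamma is used for gains and losses here for simplicity).
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def v(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

def w(p):
    """Probability weighting: overweights small p, underweights large p."""
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA) ** (1 / GAMMA)

def prospect(p, x):
    """Value of a simple prospect: x with probability p, otherwise nothing."""
    return w(p) * v(x)

cases = [
    ("95% chance to win 10,000 vs. a sure 9,500", prospect(0.95, 10_000), v(9_500)),
    ("5% chance to win 10,000 vs. a sure 500", prospect(0.05, 10_000), v(500)),
    ("95% chance to lose 10,000 vs. a sure loss of 9,500", prospect(0.95, -10_000), v(-9_500)),
    ("5% chance to lose 10,000 vs. a sure loss of 500", prospect(0.05, -10_000), v(-500)),
]
for label, gamble, sure in cases:
    print(label, "->", "gamble" if gamble > sure else "sure thing")
# Output: sure thing, gamble, gamble, sure thing - the four cells of the scheme.
```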

Chapter 30. Rare events

People overestimate the probability of unlikely events, and they give such events disproportionately large weight in their decisions.

Craig Fox invited basketball fans to place bets on the winner of the championship. Each fan assigned a cash equivalent to each bet (the sure amount that was just as attractive as playing the gamble). A winning bet paid $160, yet the cash equivalents for the eight individual teams added up to $287 - far more than $160, showing that the chance of each team winning had been overweighted. The results of this experiment cast the planning fallacy and other manifestations of optimism in a new light. The image of a successfully executed plan is specific and easy to imagine when you try to forecast the outcome of a project, whereas the alternative of failure is diffuse, because failure can happen in countless ways. Entrepreneurs and investors assessing their prospects tend both to overestimate their chances and to overweight their own estimates.

In utility theory, decision weights equal probabilities: the weight of a certain outcome is 100, and a 90% probability has a weight of 90, nine times the weight of a 10% probability. In prospect theory, changes in probability have a smaller effect on decision weights. The experiment mentioned above yielded a decision weight of 71.2 for a 90% chance and 18.6 for a 10% chance. The ratio of the probabilities is 9.0, but the ratio of the decision weights is only 3.8, which shows insufficient sensitivity to probability in this range.
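The specific weights quoted here are consistent with the smooth weighting function estimated in Tversky and Kahneman's 1992 paper. A minimal sketch, assuming their functional form and the parameter γ = 0.61:

```python
# Probability-weighting function of cumulative prospect theory
# (Tversky & Kahneman, 1992), assumed here to check the quoted weights.
GAMMA = 0.61

def decision_weight(p):
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA) ** (1 / GAMMA)

print(round(100 * decision_weight(0.90), 1))  # ~71.2
print(round(100 * decision_weight(0.10), 1))  # ~18.6
print(round(decision_weight(0.90) / decision_weight(0.10), 1))  # ~3.8, against a 9.0 probability ratio
```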

Vivid probabilities. Participants in one well-known experiment were asked to choose one of two vessels and draw a ball from it; red balls won a prize. Vessel A contained 10 balls, 1 of them red; vessel B contained 100 balls, 8 of them red. Which would you choose? Your chance of winning is 10% with vessel A and 8% with vessel B, so the right answer seems obvious. In practice it turned out otherwise: 30-40% of the subjects chose the vessel with the larger number of winning balls, preferring the smaller chance of winning.

Several names have been coined for this error; following Paul Slovic, I will call it denominator neglect. Denominator neglect helps explain why different ways of communicating risk differ so much in their impact. If you read that "a vaccine that protects children against a fatal disease causes disability in 0.001% of cases," the risk seems small. Now consider another description of the same risk: "One of every 100,000 children vaccinated with this vaccine will be permanently disabled." The second sentence affects you differently: it calls up the image of a child crippled by a vaccine, while the 99,999 safely vaccinated children fade into the background. As denominator neglect predicts, low-probability events carry much more weight when they are described in terms of relative frequency (how many) than in abstract terms such as "chances," "risk," or "probability" (how likely). As earlier chapters showed, System 1 is better at dealing with individual cases than with categories.

Now, years after prospect theory was formulated, we understand better the conditions under which rare events are neglected or overweighted. A rare event gains extra weight when it attracts special attention. Such attention is guaranteed by an explicit description of the prospect ("a 99% chance to win $1,000 and a 1% chance to win nothing"). Obsessive worry (the bus in Jerusalem), a vivid image (the roses), a concrete representation (one case in a thousand), an explicit reminder (as in choice from description) - all of these inflate the weight of the event. Where there is no overweighting, there will be neglect. Our minds are not well equipped for rare events, and for the inhabitants of a planet that may face cataclysms no one has yet experienced, that is sad news.

Chapter 31. Risk Policy

Logical consistency is beyond the reach of our limited minds. Because we are prone to WYSIATI and reluctant to expend mental effort, we tend to make decisions as problems arise, even when we are explicitly taught to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency in our preferences; we cannot magically construct the coherent set of preferences assumed by the rational-agent model.

Paul Samuelson once asked his friend Sam whether he would accept a coin toss on which he could lose $100 or win $200. The friend replied: "I won't bet, because the pain of losing $100 outweighs the joy of winning $200. But if you let me make a hundred such bets, I agree." I would say to Sam: I understand your reluctance to lose, but it is costing you a lot. Consider these questions: Are you on your deathbed? Is this the last offer of a small favorable gamble you will ever consider? Of course, you are unlikely to be offered exactly this gamble again, but you will have many opportunities to take attractive gambles with stakes that are small relative to your wealth. You will do yourself a financial favor if you can treat each such gamble as part of a bundle of small gambles.

This advice is not impossible to follow. Experienced traders in financial markets live by it every day, shielding themselves from the pain of losses with a reliable shield: broad framing. The combination of loss aversion and narrow framing is a costly curse.
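Sam's "hundred bets" intuition is easy to check numerically. A minimal simulation, assuming a fair coin and the stated stakes of +$200 / -$100:

```python
# Back-of-the-envelope check of the Samuelson bet taken 100 times
# (fair coin, win $200 / lose $100, as stated in the text).
import random

def play_bundle(n_plays=100, win=200, loss=-100):
    return sum(win if random.random() < 0.5 else loss for _ in range(n_plays))

trials = 100_000
outcomes = [play_bundle() for _ in range(trials)]

print(sum(o < 0 for o in outcomes) / trials)  # chance of ending in the red: well under 1%
print(sum(outcomes) / trials)                 # average result: about +$5,000
```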

Risk policy. Decision makers who are prone to narrow framing construct a preference every time they face a risky choice. Their task would be easier if they had a risk policy to apply in such cases. Familiar examples of risk policies are "Always take the highest possible deductible when buying insurance" and "Never buy extended warranties." A risk policy presupposes a broad frame. A risk policy that aggregates decisions is analogous to the outside view in planning described earlier: the outside view shifts attention from the specifics of the current situation to the statistics of outcomes in similar situations. The outside view and the risk policy are remedies against two distinct biases that affect many decisions: the exaggerated optimism of the planning fallacy and the exaggerated caution induced by loss aversion.

Richard Thaler recalls a discussion of decision making with the heads of the 25 divisions of a large corporation. He asked them to consider a risky option in which the firm could, with equal probability, lose a large amount of capital or gain twice that amount. None of the executives was willing to take such a gamble. Thaler then asked the opinion of the company's CEO, who was also present. The CEO answered without hesitation: "I would like all of them to take the risk." In the context of that conversation, it was natural for the CEO to adopt a broad frame that bundled all 25 bets. Like Sam, who faced a hundred coin tosses, he could count on statistical aggregation to mitigate the overall risk.

Chapter 32. Accounting

The disposition effect is an example of narrow framing. The investor opens a mental account for each stock purchased and wants to close every account with a gain. A rational agent looks at the portfolio as a whole and sells the stock least likely to do well in the future, without asking whether it is currently a winner or a loser.

The sunk-cost fallacy often leads to bad decisions. The right move is to shut down a hopeless project and invest in something worthwhile. This situation belongs in the top right cell of the four-part scheme, where the choice is between a sure loss and an unfavorable gamble; unfortunately, the gamble is often (and unwisely) chosen. Throwing good money after a failing venture is a mistake from the company's point of view, but not necessarily from the point of view of the executive who runs the failing project: canceling it would leave an indelible stain on his record.

Company boards are familiar with such conflicts: they often have to replace a leader who clings to the original decisions and is slow to write off losses. Board members do not necessarily believe the new leader is more competent, but they know that the newcomer does not carry the same mental accounts and finds it easier to ignore the sunk costs of past investments when evaluating current opportunities.

Regret is one of the counterfactual emotions; it arises when alternatives to reality are readily available. People tend to feel stronger emotions (including regret) about an outcome produced by action than about the same outcome produced by inaction. This asymmetry in the risk of regret favors conventional and risk-averse choices. The bias shows up in many contexts: buyers who are reminded that they may feel regret if they choose wrongly stick with the familiar option, preferring brand names to little-known products.

Responsibility. A stubborn unwillingness to accept any increase in risk in exchange for some other advantage can be found in many safety laws and regulations. As the legal scholar Cass Sunstein has noted, the precautionary principle is costly and, if applied strictly, can be completely paralyzing. He cites an impressive list of innovations that would never have passed the test, including "automobiles, antibiotics, air conditioning, open-heart surgery...". An overly strict version of the precautionary principle is obviously untenable. The dilemma between the intensely loss-averse morality of the precautionary principle and efficient risk management has no simple or convincing solution.

Chapter 33. Inversions

You have been tasked with setting compensation for victims of violent crimes. You are reviewing the case of a man who lost the use of his arm as a result of a gunshot wound received during a robbery at a store near his home. There are two stores in his neighborhood, one of which he visited more often than the other. Consider two scenarios. 1. The robbery happened in the store he usually visited. 2. His usual store was closed for a funeral, so he went to the other store, where he was shot. Should the store in which he was shot affect his compensation?

Almost everyone who sees the two scenarios together (a "within-subject" comparison) says that poignancy should not be a consideration. Unfortunately, this principle works only when the scenarios are considered jointly, which is not how life presents them. We normally operate in "between-subjects" mode, with no contrasting alternative to influence the decision, and of course WYSIATI ("what you see is all there is") applies. As a result, the principles you endorse when you reflect on morality do not necessarily govern your emotional reactions, and the moral judgments prompted by different situations are not internally consistent. The gap between single and joint evaluation of the robbery scenarios belongs to a broad family of reversals of judgment and choice.

Categories. Judgments and preferences are coherent within categories but may be incoherent when the objects being compared belong to different categories. For example, try answering three questions. Which do you like better, apples or peaches? Which do you like better, steak or stew? Which do you like better, apples or steak? The first two questions compare items from the same category, so you can answer at once. Moreover, single ratings ("How much do you like apples?" and "How much do you like peaches?") would produce the same ranking, because apples and peaches both evoke fruit. There is no preference reversal, because different fruits are compared with the same norm and, implicitly, with each other, in single and in joint evaluation alike. For the apples-versus-steak question, unlike the within-category questions, there is no stable answer. It is reasonable to suppose that joint evaluation, which engages System 2, is more stable than single evaluation, which often reflects the intensity of System 1's emotional responses.

Chapter 34. Frames and reality

Amos and I called the unjustified influence of formulation on beliefs and preferences the framing effect. Here is one of the examples we used. Would you accept a gamble that offers a 10% chance to win $95 and a 90% chance to lose $5? Would you pay $5 to take part in a lottery that offers a 10% chance to win $100 and a 90% chance to win nothing? The second version attracts far more acceptances, although the two prospects are identical. A bad outcome is much easier to accept when it is framed as the cost of a losing lottery ticket than when it is framed as a loss in a gamble: losses evoke stronger negative feelings than costs.

Physicians participating in a study by Amos received statistics on the outcomes of two treatments for lung cancer: surgery and radiation therapy. One group was given survival statistics; another group received the same information framed in terms of mortality. The two descriptions of the short-term outcome of surgery were: "The one-month survival rate is 90%" and "There is 10% mortality in the first month." You already know the result: surgery looked far more attractive in the first frame (84% of physicians chose it) than in the second (where 50% preferred radiation).

Amos and I began our discussion of framing with an example called the Asian Disease Problem. Imagine that the country is preparing for an epidemic of a strange Asian disease that is predicted to kill 600 people. Two alternative programs to combat the disease have been proposed. Suppose the exact scientific estimates of the consequences for each program are as follows: If program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no one will be saved. The vast majority of respondents chose program A: they preferred a guaranteed outcome to a game. In the second version, the results of the program are formulated in a different framework. If program A' is adopted, 400 people will die. If program B' is adopted, there is a 1/3 probability that no one will die and a 2/3 probability that 600 people will die. Take a closer look and compare the two versions: the consequences of programs A and A’ are identical, as are the consequences of programs B and B’. However, within the limits set by the second formulation, the majority of participants chose “game.”

Preferences between objectively equal outcomes change due to differences in formulation.

Here again we see the workings of System 1, which delivers an immediate answer to any question about the rich and the poor: when in doubt, favor the poor. The surprising feature of Schelling's problem is that this apparently simple moral rule does not work reliably: it gives contradictory answers to the same question, depending on the frame established by the formulation of the problem. Our moral judgments are about descriptions, not about substance. Broader frames and inclusive accounts generally lead to more rational decisions.

PART V. TWO “I”
Chapter 35. Two “I”

It was not published in Russian.

Introduction

In 2002, the Nobel Prize in Economics was awarded to two American scientists: the psychologist Daniel Kahneman and the economist Vernon Smith. What united the work of these two very different researchers was that both showed that people in the economic sphere act less rationally and less selfishly than classical economic theories assume. The Nobel Committee awarded Kahneman the prize "for enriching economic science with the results of psychological research, especially with regard to human judgment and decision-making under uncertainty." Translated into everyday language, the formulation would read: "Kahneman received the Nobel Prize for showing how the psychological mistakes people tend to make affect human actions in economics and business." Vernon Smith was awarded the prize for developing a new tool of economic analysis: laboratory experiments. It was in conducting such experiments, whose conclusions were brilliantly confirmed in practice, that Vernon Smith, an adherent of classical economic theory, came to the conclusion that human behavior is irrational. That people, when making decisions, tend to act not rationally but irrationally, and that their errors are not random but well-defined and systematic, had been established by psychologists many years earlier. The study of the nature of these errors led to the creation of a new interdisciplinary field: economic psychology, or behavioral economics.

Psychological experiments by D. Kahneman in economics

On October 9, 2002, the Nobel Committee announced the award of its Memorial Prize in Economics to two outstanding scientists: Daniel Kahneman of Princeton University (USA) and the Hebrew University of Jerusalem (Israel), "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty," and Vernon Smith of George Mason University (USA), "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms."

The award to Kahneman and Smith, although anticipated for several years, formally recognized that such independent fields as experimental economics, economic psychology, and behavioral economics have emerged and taken shape within the economic discipline. The 2002 award, however, means something more. First, a Nobel Prize in economics given to a representative of psychological science clearly confirms the world scientific community's commitment to integrating the research programs of the various sciences of man. Second - and this is perhaps even more important - the recognition by professional economists of the importance of the psychological characteristics of individual behavior marks a significant shift in the approaches and problems of economics as a whole. In essence, it acknowledges not merely the desirability but the necessity of going beyond formal axiomatic models that bear little relation to the real behavior they are meant to describe. Economics is entering an era of gradual revision of established methods and doctrines, beginning with the foundation of foundations: the model of Homo economicus, the rational economic man. The basic empirical material for this revision comes from psychological research, in which Daniel Kahneman plays a leading role; and the main tool for accumulating such material has been the experiment, a special method of advancing scientific knowledge that entered the arsenal of economics thanks to the pioneering work of Vernon Smith.

In the neoclassical tradition it was somehow tacitly assumed that empirical research (and especially experiments with real people) is a less "serious" activity than "high" theory. Neoclassical economists preferred to deal with more "serious" matters, increasingly equating scientific progress with ever more sophisticated formal constructions within their own tradition, built on the model of Homo economicus. From the standpoint of the standard theory, this rational economic agent must subordinate all feelings and emotions to precise calculation, possess perfect memory and unlimited computational abilities, always know his own interests (preferences), and act in accordance with them. The formal description of rational behavior rests on a number of assumptions (such as convexity, continuity, monotonicity, and transitivity of individual preferences) that make it possible to represent an individual's preferences as a real-valued utility function and to apply the powerful analytical tools of mathematical and functional analysis. Moreover, in full agreement with positivist methodology, the theory held that even if an agent does not consciously solve a maximization problem, he still acts as if he were solving it, if only because systematic deviations from such behavior would inevitably lead to monetary losses and, if repeated systematically, to the bankruptcy of the "irrational" agent.

Reality, however, stubbornly refused to fit into the “Procrustean bed” of canonical schemes, no matter how convenient they were analytically. Back in the 1950s, American economist and psychologist Herbert Simon convincingly showed that real people making decisions behave completely differently than described in economics textbooks. Limited cognitive abilities do not allow real people in practice to find solutions that are optimal from a theoretical point of view. If so, then the concept of substantive rationality, adopted in standard models, should give way to the concept of bounded rationality as more correct from a descriptive point of view.

Simon's work, awarded the Nobel Prize in 1978 "for his pioneering research into the decision-making process within economic organizations," nevertheless did not enter the working toolkit of economics at the time and was probably perceived by most economists as a peripheral, minor branch of the science. Over the past twenty years, however, the subject and method of economics have changed, if not radically, then quite substantially. Such fundamental empirical phenomena as the Allais paradox and the framing effect have become a firm part of university courses on individual behavior under risk.

Over the same years, experimental economics finally took shape as an independent field of economic research, with its own methods, principles and traditions. Flesh and blood of economic theory, the economic experiment serves above all to test specific models in theoretically unambiguous settings and to accumulate new information about the properties of known economic institutions. It turned out that many of the premises of economic models - perfect or imperfect competition, incomplete information, pre-play communication ("cheap talk") - can be reproduced in a classroom or on a local computer network. Preferences, too, can be induced quite precisely by paying participants a reward that depends on their results. Principles have been developed for recruiting participants (most often university students), writing instructions, and running experiments. Finally, it also matters that economic experiments are usually conducted with real money: although this does not always affect the results, it makes them more persuasive to economists, who are accustomed to greeting experimental refutations of their theoretical achievements with a certain skepticism.

As a result, the economic experiment has become a generally accepted, if not the only, way of testing the widest class of behavioral economic theories: from individual behavior to public choice, from game theory to the theory of financial markets. The scientific community eventually remembered that economics is meant to be nothing other than the science of human behavior in real life, of man in interaction with his fellows. But if that is so, then the empirical study of such behavior should be regarded not as the eccentricity of a marginal group of scientists, and not merely as one way of testing existing theories, but as the most important method of gathering concrete material about the very behavior that economics is supposed to describe and explain. Today the best economic theorists increasingly propose new concepts and extensions of classical models based not on the convenience of mathematical constructions but on empirical evidence about human behavior revealed by experiment.

As a consequence of all these shifts, the number of publications on experimental and behavioral economics has grown exponentially in recent years; hardly an issue of the leading economic journals appears without them, including Econometrica, American Economic Review, Journal of Economic Perspectives, Journal of Political Economy, Quarterly Journal of Economics, and Economic Journal. A number of specialized academic journals have appeared as well, for example Journal of Behavioral Decision Making, Journal of Economic Behavior and Organization, Journal of Risk and Uncertainty, Journal of Economic Psychology, Experimental Economics, and Journal of Psychology and Markets, and the best universities in the world consider it good form to have one or two experimental economists on their staff (1).

Interim results of the development of experimental economics were summed up in a fundamental reference publication, the Handbook of Experimental Economics, published in 1995, edited by John Kagel and Alvin Roth. This book, which instantly became the bible of experimental economics, gave new impetus to research in this field throughout the world.

In light of these changes, awarding the Nobel Prize to two eminent representatives of behavioral and experimental economics was merely formal recognition of the fundamental and ever-growing role that empirical work plays in expanding our knowledge of human nature and cognitive abilities as they bear on economic (and not only economic) behavior. With this decision, the Nobel Committee on Economics once again confirmed not only its principled policy of honoring genuine pioneers, but also its ability to register the accumulation of scientific knowledge and even the revision of ideas once considered immutable postulates of economic science.

Biographical information. Daniel Kahneman was born in Tel Aviv in 1934. He is an Israeli-American psychologist, one of the founders of psychological (behavioral) economic theory. Kahneman's life vividly illustrates the cosmopolitanism of modern scientists. Having begun his education at the Hebrew University of Jerusalem (1954, bachelor's degree in psychology and mathematics), he completed it at the University of California, Berkeley (1961, doctorate in psychology). For the next 17 years he taught at the Hebrew University of Jerusalem, combining this with work at a number of universities in the USA and Europe (Cambridge, Harvard, Berkeley). From the late 1970s Kahneman temporarily withdrew from his work in Israel, pursuing joint research projects with American and Canadian scientists at research centers in those countries. Since 1993 he has been a professor at Princeton University in the USA, and since 2000 he has also resumed teaching at the Hebrew University of Jerusalem (13).

The first economist to truly appreciate and demonstrate the enormous potential of experimental methods for testing hypotheses in the social sciences was the famous French scientist Maurice Allais, Nobel laureate of 1988 "for his pioneering contributions to the theory of markets and efficient utilization of resources." As early as the beginning of the 1950s, he offered his colleagues a series of simple examples that refuted the then-new theory of choice under risk formulated by John von Neumann and Oskar Morgenstern. That theory, expected utility theory, states that a rational individual, choosing the most desirable of several risky alternatives (lotteries, that is, probability distributions over a set of monetary prizes), seeks to maximize the expected value of his utility function.

For a finite set of outcomes, the maximized functional is written as U(p) = Σ_i u(x_i)·p_i, where the x_i are the prizes (monetary amounts) and the p_i are the probabilities of receiving them. This simple functional form makes it possible to represent the utility of any uncertain prospect as the mathematical expectation of a well-defined function, that is, to describe behavior under risk with the standard tools of mathematical analysis and probability theory. Moreover, the existence of the utility function u(x) itself is derived from a number of simple axioms, which in effect carry normative status and serve as the criterion of "rational" behavior. The key requirement of this kind is the independence axiom, which can be written as follows: for all lotteries p, q, r and any α in (0, 1],

p ≽ q   if and only if   α·p + (1 - α)·r ≽ α·q + (1 - α)·r.

This axiom means that any linear combination of lottery p and some lottery r must be preferable to the same combination of lottery q and lottery r if and only if p itself is preferable to q.
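A short sketch of the functional and of what the independence axiom demands. The utility function and the lotteries below are chosen only for illustration; they are not the ones in Table 1.

```python
# Expected utility U(p) = sum_i p_i * u(x_i) and a check of the independence axiom.
import math

def expected_utility(lottery, u=lambda x: math.log(1 + x)):
    """lottery: list of (probability, prize); u is an arbitrary illustrative utility."""
    return sum(p * u(x) for p, x in lottery)

def mix(alpha, a, b):
    """The linear combination alpha*a + (1 - alpha)*b of two lotteries."""
    return [(alpha * p, x) for p, x in a] + [((1 - alpha) * p, x) for p, x in b]

p = [(1.0, 3000)]               # illustrative lotteries
q = [(0.8, 4000), (0.2, 0)]
r = [(1.0, 0)]                  # the degenerate lottery [0, 1]

alpha = 0.25
print(expected_utility(p) > expected_utility(q))                                 # True here
print(expected_utility(mix(alpha, p, r)) > expected_utility(mix(alpha, q, r)))   # necessarily the same
# Because U is linear in the probabilities, mixing both lotteries with r preserves
# the ordering for ANY utility function u, which is exactly what the axiom requires.
```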

An example similar to that formulated by Allais was experimentally studied by Daniel Kahneman and Amos Tversky. Respondents were asked to choose the most preferred one in each of two pairs of lotteries described in Table 1:

Table 1. Pairs of lotteries offered to respondents

It is easy to see that the lotteries in the second pair (C and D) are linear combinations, with weight α = 0.25, of the lotteries in the first pair (A and B) and the (degenerate) lottery [0, 1]. Hence, by the independence axiom, an individual who chooses lottery A (respectively B) in the first pair must choose lottery C (respectively D) in the second. Kahneman and Tversky's experiment showed that 88% of respondents chose A in the first pair and 83% chose D in the second, thereby violating the independence axiom and making a universal representation of utility in the von Neumann-Morgenstern form impossible.

Kahneman and Tversky also offered one of the first explanations of the Allais paradox and other empirically documented phenomena. In contrast to a number of other generalizations of expected utility theory (of which there are by now dozens), they derived their prospect theory directly from empirically identified and documented features of the behavior of real respondents under risk. Instead of the von Neumann-Morgenstern functional, linear in the probabilities pᵢ, they proposed using a nonlinear function of probability weights π(pᵢ), representing the utility of a lottery in the form V(p) = Σᵢ π(pᵢ) v(xᵢ), and at the same time changing the interpretation of the utility of outcomes, now represented by the value function v(xᵢ). The latter was defined not over absolute monetary amounts but over deviations from the individual's initial wealth. In addition, it was assumed to be concave (convex upward) for gains and convex (convex downward) for losses, which corresponds to risk aversion for gains and risk seeking for losses. The reader can reconcile these facts with his own intuition: if the lottery [10, 0.5; 0, 0.5] looks less attractive than a certain win of 5, equal to its mathematical expectation, then the individual is not inclined to take risks over gains. Faced with the mirror example for losses [-10, 0.5; 0, 0.5], however, individuals as a rule prefer to play the lottery rather than part with an amount of 5 for certain, that is, they show a propensity for risk. In addition, Kahneman and Tversky's research shows that the value function has a steeper slope for losses than for gains. A typical value function satisfying these conditions is shown in Fig. 1.
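For illustration, here is a small sketch of the kind of value and weighting functions involved. The power-function forms and the parameter values follow Tversky and Kahneman's later (1992) parameterization and are given here only as an assumed example, not as the exact 1979 specification.

```python
# Illustrative prospect-theory value function and probability weighting
# (functional forms and parameters are assumptions borrowed from the later
#  1992 parameterization, used here only to illustrate the qualitative shape).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss aversion, lam > 1) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(lottery):
    """lottery: list of (outcome, probability) pairs, outcomes as gains/losses."""
    return sum(weight(p) * value(x) for x, p in lottery)

# Risk aversion for gains, risk seeking for losses (the mirror lotteries from the text):
print(prospect_value([(10, 0.5), (0, 0.5)]) < value(5))     # True: the sure +5 is preferred
print(prospect_value([(-10, 0.5), (0, 0.5)]) > value(-5))   # True: the gamble beats a sure -5
```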

The truly innovative role of Kahneman and Tversky lay in a way of constructing theory that is unusual for economists: not from a convenient formal construction and axioms of rationality, but from observed features of behavior to their formal description and only then to axioms. Apparently for this reason, the 1979 paper became not only a canonical example of experimental research on individual behavior but also, by economists' own account, the most cited work ever published in one of the world's most prestigious economic journals, Econometrica. This is all the more striking since both authors of the article were professional psychologists, and the article itself is distinguished by an unusually low degree of formalization for this generally very "mathematical" journal. The entire analytical toolkit of prospect theory does not go beyond the four operations of arithmetic, and its axiomatics were only sketched in the 1979 article (they were fully formulated more than ten years later). Yet it was the 1979 article that was, and remains, perhaps Daniel Kahneman's most significant work.

Speaking about this, as well as many other works of Kahneman, one cannot fail to mention separately the outstanding role played in the development of cognitive and experimental psychology and their economic applications by his senior colleague and long-time co-author, professor of psychology at the Hebrew University of Jerusalem and Stanford University, Amos Tversky. Without exaggeration, one can say that in creative potential, versatility of talent and breadth of scientific outlook, Tversky was one of the most outstanding psychologists of the last century, alongside such giants as Jean Piaget, Lev Vygotsky or Kurt Lewin. In addition to many original and pioneering experimental works, he alone proposed a number of theoretical explanations of the discovered phenomena of individual behavior, including the intransitivity of preferences, the theory of elimination by aspects, the psychological theory of similarity, the theory of choice among alternatives of differing importance (the prominence effect), and many others. Of particular note is the work on the foundations of measurement theory in the natural and human sciences, written by Amos Tversky in collaboration with David Krantz, R. Duncan Luce and Patrick Suppes, a work that is probably destined to remain one of the cornerstones of fundamental scientific knowledge for many years to come (8).

Yet Tversky's main collaborator was Kahneman. Together they published about 30 scientific papers, which played a decisive role in disseminating the results of psychological research in related disciplines, primarily economics. After the death of Amos Tversky in 1996, a number of academic publications and journals dedicated special sections and issues to his memory. Amos Tversky certainly had every reason to share the 2002 Nobel Prize with Daniel Kahneman and Vernon Smith, but, unfortunately, Nobel Prizes are not awarded posthumously.

Kahneman and Tversky's most notable contribution to economic theory is, of course, prospect theory. However, this theory was only a small part of the impressive research program for the experimental study of human behavior that these authors carried out over almost thirty years of joint work. The core of their joint research program was a fundamental, long-term project to study the heuristics and biases of individual judgment and observed behavior relative to the normative standard accepted in economic theory. Homo oeconomicus from traditional economics textbooks is not just a rational being but a hyper-reflective one: not only is he endowed with ordered preferences, phenomenal memory and the other advantages of a consumption machine, he is also organically incapable of acting "on instinct," of making mistakes when assessing the most desirable of the available options, or of making logically contradictory judgments. However, these virtues are not typical of the majority of living people, who systematically make decisions guided not by rational but by intuitive considerations, which Kahneman and Tversky called behavioral heuristics.

As a simple example, consider the following problem, which the reader can try to solve himself, as quickly as possible: "A pen and a notebook cost $1.10, and the pen costs $1 more than the notebook. How much does the notebook cost?" If the respondent really answers without thinking, he will most likely give the intuitively obvious answer of $0.10. Naturally, this answer is incorrect (the correct one is $0.05), but the very ease with which 1 is subtracted from 1.10 pushes the respondent to solve the problem by turning to an intuitively attractive heuristic rather than to a logically correct calculation.
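A trivial check of the arithmetic (working in cents to avoid rounding noise) shows why the intuitive answer fails the stated conditions:

```python
# The intuitive answer (10 cents) violates the conditions of the problem;
# solving 2*x + 100 = 110 gives the correct answer of 5 cents.
intuitive = 10
print(intuitive + (intuitive + 100))   # 120 cents, not the required 110

notebook = (110 - 100) // 2            # 5 cents
pen = notebook + 100                   # 105 cents
print(notebook, pen, notebook + pen)   # 5 105 110
```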

Since the early 1970s, Kahneman and Tversky discovered and described a wide range of phenomena of this kind. Let us cite just one of them, the famous "Linda problem" posed to American students: "Linda is 31 years old, unmarried, sociable and a very bright young woman. She graduated from the philosophy department and has always taken issues of discrimination and social justice seriously. During her student years she took an active part in anti-nuclear demonstrations."

Respondents who received this information were asked to rank the following statements about Linda in order of likelihood:

1. She works as a teacher in a kindergarten.

2. She works in a bookstore and does yoga.

3. She is a feminist activist.

4. She is a social worker.

5. She is a member of the League of Women Voters.

6. She is a bank employee.

7. She is an employee of an insurance company.

8. She is a bank employee and a feminist activist.

More than 80% of respondents (including graduate students at Stanford University specializing in decision theory) considered option 8 more likely than options 3 and 6. This relationship contradicts the principles of probability theory: event 8 is the intersection of events 3 and 6, and, therefore, the probability of event 8 cannot exceed any of the probabilities of events 3 and 6 taken separately.
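The conjunction rule behind this judgment can be written out in two lines; the probabilities below are purely illustrative assumptions, not data from the experiment.

```python
# P(A and B) can never exceed P(A) or P(B), whatever the numbers are.
p_bank_employee = 0.05            # hypothetical P(Linda is a bank employee)
p_feminist_given_bank = 0.60      # hypothetical P(feminist activist | bank employee)

p_bank_and_feminist = p_bank_employee * p_feminist_given_bank
print(round(p_bank_and_feminist, 3))            # 0.03
print(p_bank_and_feminist <= p_bank_employee)   # True for any valid probabilities
```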

The psychological explanation for this phenomenon, given by Kahneman and Tversky, is called the representativeness heuristic; Linda's description is more typical (representative) of a bank employee and a feminist than of just a bank employee (not a feminist) and just a feminist (not a bank employee). Such typicality comes to the fore when answering, as if convincing the respondent of the uselessness of logical reasoning on this matter (1).

Another phenomenon they discovered is called the availability heuristic: people tend to consider an event that is constantly before their eyes or on their lips (regardless of its causes) more likely than something about which they think or know relatively little. A typical example is the subjective assessment of the comparative danger of various kinds of lethal threats. Thus, after the Chernobyl disaster, European respondents were most afraid of accidents at nuclear power plants, although according to statistics the probability of dying from such an accident was hundreds of times lower than the probability of dying in a car accident.

Kahneman and Tversky identified many other examples of misconceptions associated with biased perception of the likelihood of certain events. It turned out, for example, that in the view of ordinary (and even educated) people, the probability that the average height of n randomly selected men will noticeably exceed the national average is perceived as the same for n = 10, 100 and 1000. This tendency of people to transfer the characteristics of a population onto the properties of small samples Kahneman and Tversky called the law of small numbers. They also found that people systematically underestimate the importance of prior information when estimating conditional probabilities.
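The statistical point behind the height example can be made concrete with a short sketch; the Normal(175 cm, 7 cm) height distribution and the 2 cm margin are assumed for illustration and are not data from the original studies.

```python
# The chance that a sample average exceeds the population average by 2 cm or more
# shrinks rapidly as the sample size n grows.
from math import erf, sqrt

def prob_sample_mean_exceeds(n, mu=175.0, sigma=7.0, threshold=177.0):
    """P(sample mean > threshold) for n i.i.d. Normal(mu, sigma) observations."""
    z = (threshold - mu) / (sigma / sqrt(n))
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

for n in (10, 100, 1000):
    print(n, round(prob_sample_mean_exceeds(n), 4))   # roughly 0.18, 0.002, ~0
```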

Thus, any reasonable respondent can easily say what the probability is that a randomly selected person is an engineer or a lawyer, if it is known that the sample consists of 30% engineers and 70% lawyers. However, the result changes if the same reasonable respondent is read a neutral (uninformative) description of this randomly selected person, for example: "Dick is 30 years old; he is married but has no children. He undoubtedly has good abilities, is highly motivated and has brilliant career prospects in his field. He is appreciated and liked by his colleagues." Having heard this description, the typical respondent states that there is a 50% chance that Dick is a lawyer, even though 70% of the sample are lawyers! This and other similar examples show that perfectly reasonable people, when making decisions in such cases, are usually guided by readily available heuristics rather than by the laws of conditional probability.

Kahneman and Tversky conclude that "the fundamental concepts of statistics are obviously not among the intuitive tools of human judgment." This conclusion, in particular, calls into question the use of Bayes' rule in the dynamic modeling of individual behavior, which until very recently was perceived as normative, almost the only condition for the rationality of an economic agent.
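For comparison, here is what Bayes' rule prescribes in the engineer/lawyer example above; the sketch (not from the book) shows that an uninformative description should leave the base rates untouched rather than pull the answer toward 50/50.

```python
# Bayes' rule for the engineer/lawyer base rates (30% / 70%).
def posterior_engineer(prior_engineer, likelihood_ratio):
    """likelihood_ratio = P(description | engineer) / P(description | lawyer)."""
    prior_lawyer = 1.0 - prior_engineer
    return (prior_engineer * likelihood_ratio) / (prior_engineer * likelihood_ratio + prior_lawyer)

print(round(posterior_engineer(0.30, likelihood_ratio=1.0), 2))  # 0.3: an uninformative description changes nothing
print(round(posterior_engineer(0.30, likelihood_ratio=3.0), 2))  # 0.56: only a genuinely diagnostic description moves it
```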

The last example shows that the judgments, preferences and, consequently, decisions of real people depend significantly on the context, that is, on the specific way a problem is formulated. One example of such dependence is the phenomenon of preference reversal in choice problems under risk. This phenomenon, which has not yet been satisfactorily explained in the literature, is that the revealed preference relation between two risky prospects (lotteries) generally depends on the way this preference is elicited. For example, if an individual is asked to choose one of two lotteries, the choice will fall on one of them; but if he is asked to name their certainty equivalents (the minimum amount of money for which the same individual would agree to sell the right to play each lottery), the other one will be valued higher. Another example of such dependence is the well-known framing effect (9).

In one experiment, respondents (clinicians) were asked to choose one of two possible treatment strategies for patients suffering from cancer:

"Formulation of Survival"

Surgical intervention: out of every 100 patients undergoing surgery, 90 will survive the operation, of which 68 will be alive one year after the operation, and 34 five years after the operation.

Radiation therapy: out of every 100 patients treated with radiation, all will remain alive during treatment, 77 will be alive one year after treatment and 22 five years after treatment.

In this formulation, only 18% of subjects were in favor of radiation therapy. In parallel, the same respondents were offered the following description of alternatives:

"Formulation of Mortality"

Surgical intervention: out of every 100 operated patients, 10 will die during surgery and in the postoperative period; a total of 32 patients will die within a year, and 66 patients will die within five years.

Radiation therapy: out of every 100 patients who have undergone a course of radiation, no one will die during treatment; a total of 23 patients will die within a year of treatment, and 78 within five years.

With this formulation, the number of supporters of radiation therapy more than doubled - to 44%. At the same time, as is easy to see, from a formal point of view, both formulations are absolutely identical.
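The formal equivalence is easy to verify: each mortality figure is simply 100 minus the corresponding survival figure, using the numbers from the two formulations above.

```python
# The two framings describe exactly the same outcomes (numbers taken from the text above).
survival  = {"surgery": (90, 68, 34), "radiation": (100, 77, 22)}   # alive after treatment, 1 yr, 5 yrs
mortality = {"surgery": (10, 32, 66), "radiation": (0, 23, 78)}     # dead by the same time points

for option in survival:
    assert all(s + m == 100 for s, m in zip(survival[option], mortality[option]))
print("The two formulations are formally identical")
```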

The significance of both phenomena goes beyond the scope of cognitive psychology, that is, the study of the actual processes of making individual judgments and decisions. Both the reversal of preferences and the framing effect pose serious problems for economic theory - much greater than the same Allais paradox. The latter can be explained using generalized expected utility theories, for example by allowing for nonlinear probability weights. At the same time, the described phenomena mean that in principle it is impossible to construct a single and well-ordered preference index (utility function) for all occasions - any such index will depend on the method of obtaining it. It follows from this that the applications of utility theory to specific economic problems cannot be considered a priori unconditional.

Take, for example, such a hot topic as environmental public goods. If the average person is asked which birthday gift he would prefer, a new VCR or the closure of a plant that pollutes the entire neighborhood, it is very likely that the choice will fall on the second good. However, if the same respondent is asked how much he is willing to pay for each of them, the VCR will win, as they say, by a clear margin. The Russian reader has probably encountered other manifestations of framing effects, such as a public opinion poll or a national referendum whose results can be not only predicted but also planned with the help of "correctly" posed questions. Even before the Nobel Committee, domestic political strategists managed to appreciate the practical significance of Kahneman's work.

Finally, one cannot fail to mention that the work of Kahneman and other economic psychologists makes it possible to look at a whole range of purely economic phenomena from a new angle. One of the most striking examples of such research is the experimental test of the famous Coase theorem on the optimal allocation of resources. To test this theorem, Kahneman and his colleagues performed the following experiment. A group of subjects (students at Cornell University) was divided into two subgroups, one of which was given mugs with the university's emblem, sold in a nearby store for $6. The owners of the mugs were given the opportunity to sell them to those who had not received one. To this end, the owners of the mugs reported to the organizers the minimum price at which they were willing to sell, and potential buyers the maximum price at which they were willing to buy; the market price was formed at the intersection of the resulting supply and demand lines. Given that the initial distribution of the mugs was random, the supply and demand curves should have been symmetrical, which means, according to the Coase theorem, that approximately half of those who received mugs should have exchanged them for money. In the control group, where cash prizes were given out instead of mugs, this is exactly what happened; however, in the case of the mugs, actual trading volumes turned out to be three to five times lower than predicted by economic theory, and the average asking and offered prices differed by more than a factor of two. This empirical refutation of the Coase theorem is called the endowment effect: the very fact of owning a thing increases its value in the eyes of the owner, blocking the possibility of exchange even where there are no problems with either property rights or transaction costs.
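The prediction being tested can be sketched in a few lines; the group size and the valuations below are assumed for illustration and are not the original experimental data.

```python
# With mugs handed out at random and no endowment effect, a frictionless market
# should move each mug to someone in the top half of valuations, so roughly half
# of the mugs are predicted to change hands.
import random
random.seed(1)

n = 44                                                # hypothetical number of subjects
values = [random.uniform(1, 9) for _ in range(n)]     # each subject's private value of a mug
owners = set(random.sample(range(n), n // 2))         # the half who receive mugs at random

cutoff = sorted(values)[n // 2]                       # lowest value among the top half
predicted_trades = sum(1 for i in owners if values[i] < cutoff)
print(f"{predicted_trades} of {n // 2} mugs predicted to change hands")
```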

It would be an exaggeration to claim that these and other works by Kahneman, Tversky, and their fellow psychologists revolutionized the economic view of the nature of human judgment and behavior. However, it was they who established among economists the understanding that a good theory should not only not be refuted by facts, as required by the positivist approach, but also be based on the fundamental observable properties of the object that it is intended to describe. Today, no economist writing about individual behavior can do without considering the psychological characteristics of the decision-making process. Economic psychology itself and its applications have already developed into a special branch of economic knowledge today - the so-called behavioral economics, which confidently masters the widest range of economic problems - from the theory of individual behavior itself to problems of public choice and financial economics.

The above, of course, does not mean that all the problems of this young science have already been solved. Behavioral economics is only entering its maturity, formulating a research program at the intersection of economics, psychology, mathematics and even philosophy. Nobel laureate Daniel Kahneman continues to make significant contributions to the development of this discipline: in recent years his focus has been on the problem of utility, going back to Jeremy Bentham and John Stuart Mill. In its classical understanding, this term meant the actual pleasure or pain experienced by an individual in the process of "consuming" a good. Extensive empirical evidence accumulated in recent years convincingly shows that this experienced utility cannot be identified with the utility that an individual predicts (predicted utility), has in mind at the moment of decision making (decision utility), or later recalls about the experience at the time the good was consumed (remembered utility).

Despite the naturalness of such a distinction, until recently it was not made in the economic literature, and yet it confronts economic researchers with a difficult choice. Which of these utility concepts should be used in, say, evaluating alternative social programs or urban improvement measures? Is it possible to believe that the marginal utility of public goods for rich people is really noticeably lower than for poor people, or is this difference due only to the former getting used to a higher level of consumption, despite the fact that the basic utilities experienced are actually equal? It is clear that these and similar questions, which were included “at the instigation” of Kahneman into the research program of psychological and behavioral economics, are not only not divorced from real life, but are of the most concrete, even applied nature.

On the other hand, studies of experienced utility have shown that, unlike the other concepts, it is not only measurable but can also be described mathematically (26). This clearly shows that psychological economics is not limited to stating experimental facts but strives to give them rigorous explanations. Having adopted the results of research by cognitive psychologists, economic science does not abandon its fundamental focus on describing the regularities of individual behavior. On the contrary, it fills modern theories with new content based directly on empirical data and enriches its methodological arsenal with a deeper understanding of the nature of human rationality, up to and including the actual abandonment of the homo oeconomicus model in favor of more realistic and empirically correct specifications. Nobel laureate Daniel Kahneman and his long-time collaborator Amos Tversky played a key role in this process: they not only built bridges between psychological experiments and economic theory, but also laid the foundation for a future unified theory of human behavior that rejects any claim by a single social-scientific doctrine to an unconditional monopoly on truth (8).

The main content of the research program of economic psychology (and behavioral economics in general) is well summarized by the authors themselves: "The idealized premise of rationality adopted in economic theory is usually justified in two ways. First, it is argued that only rationally acting individuals can survive in a competitive environment. Second, it seems likely that behavior not based on this premise will inevitably turn out to be chaotic and not amenable to any scientific explanation. Both arguments are dubious. On the one hand, ample empirical evidence clearly confirms that people can live their whole lives in a competitive environment without ever learning to weight probabilities linearly or to avoid framing effects. But even more important is the fact that human choice often turns out to be orderly, although not necessarily rational in the traditional sense of the word." Clarifying the concept of rationality and describing it scientifically should, in all likelihood, become a central component of the research program of economic psychology and behavioral economics in the foreseeable future.

In October 2002 the Nobel Committee announced the award of its Memorial Prize in Economics to two outstanding scientists: Daniel Kahneman of Princeton University (USA) and the Hebrew University of Jerusalem (Israel), "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty," and Vernon Smith of George Mason University (USA), "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms."

The nomination of Kahneman and Smith, although expected for several years, served as formal recognition of the fact that within the economic discipline such independent fields as experimental economics, economic psychology and behavioral economics have emerged and taken shape. However, the 2002 award means something more. First, a Nobel Prize in Economics given to a representative of psychological science clearly confirms the world scientific community's principled course toward the integration of the research programs of the various human sciences. Second, and this is perhaps even more important, the very recognition by professional economists of the importance of the psychological characteristics of individual behavior marked and recorded a significant shift in the approaches and problems of economic science as a whole. In essence, this fact means recognition not only of the expediency but of the necessity of going beyond formal axiomatic models that are only weakly related to the real behavior they are intended to describe. The economic sciences are entering an era of gradual revision of established methods and doctrines, starting with the very foundation, the model of Homo oeconomicus, the rational economic man. The fundamental empirical material for this revision comes from psychological research, in which Daniel Kahneman plays a major role; and the main tool for accumulating such material has been the experiment as a special method of augmenting scientific knowledge, which entered the arsenal of the economic sciences thanks to the pioneering work of Vernon Smith.

In the tradition of neoclassical economics it was somehow tacitly accepted that empirical research (and especially experiments with real people) is a less "serious" activity than "high" theory. Neoclassical economists preferred to deal with more "serious" matters, increasingly understanding the progress of science as ever more sophisticated formal constructions within their scientific tradition, based on the model of Homo oeconomicus. From the point of view of the standard theory, this rational economic agent had to subordinate all feelings and emotions to precise calculation, possess absolute memory and computational abilities, always be well aware of his own interests (preferences) and act in accordance with them. The formal description of rational behavior in the theory was preceded by a number of assumptions (such as convexity, continuity, monotonicity and transitivity of individual preferences), which made it possible to represent the preferences of individuals as a real-valued utility function and to use the powerful analytical toolkit of mathematical and functional analysis. Moreover, in full agreement with positivist methodology, the theory argued that even if the agent does not consciously solve the maximization problem, he still acts as if he were solving it. This happens if only because systematic deviations from such behavior would inevitably lead to losses expressed in money and, if systematically repeated, to the bankruptcy of the "irrational" agent.

Reality, however, stubbornly refused to fit into the “Procrustean bed” of canonical schemes, no matter how convenient they were analytically. Back in the 1950s, American economist and psychologist Herbert Simon convincingly showed that real people making decisions behave completely differently than described in economics textbooks. Limited cognitive abilities do not allow real people in practice to find solutions that are optimal from a theoretical point of view. If so, then the concept of substantive rationality, adopted in standard models, should give way to the concept of bounded rationality as more correct from a descriptive point of view.

Simon's work, awarded the Nobel Prize in 1978 "for his pioneering research into the decision-making process within economic organizations," did not at the time enter the mainstream arsenal of economics and was probably perceived by most economists as a peripheral and insignificant branch of the science. Over the past twenty years, however, the subject and method of economics have changed, if not radically, then quite significantly. University curricula now firmly include such fundamental empirical phenomena as the Allais paradox and the framing effect in the theory of individual behavior under risk.


Psychologist Daniel Kahneman is one of the founders of psychological economic theory and perhaps the most famous researcher of how people make decisions and what errors, rooted in cognitive biases, they commit along the way. For his study of human behavior under conditions of uncertainty, Daniel Kahneman received the Nobel Prize in Economics in 2002 (the only time a psychologist has received the Nobel Prize in Economics). What did the psychologist manage to discover? Over many years of research that Kahneman conducted with his colleague Amos Tversky, the scientists established and experimentally demonstrated that human actions are governed not so much by people's reason as by their folly and irrationality.

And, you must agree, it is hard to argue with this. Today we bring to your attention three lectures by Daniel Kahneman, in which he once again walks through irrational human nature, talks about the cognitive biases that prevent us from making adequate decisions, and explains why we should not always trust expert judgments.

Daniel Kahneman: “The mystery of the experience-memory dichotomy”

Using examples ranging from our attitudes toward vacations to our experiences with colonoscopies, Nobel laureate and pioneer of behavioral economics Daniel Kahneman demonstrates how differently our experiencing selves and our remembering selves perceive happiness. But why does this happen and what are the consequences of such a splitting of our “I”? Find the answers in this lecture.

Now everyone is talking about happiness. I once asked a man to count all the books with the word “happiness” in the title published in the last 5 years, and he gave up after the 40th, but of course there were even more. The rise in interest in happiness is enormous among researchers. There are many trainings on this topic. Everyone wants to make people happier. But despite such an abundance of literature, there are certain cognitive distortions that practically do not allow us to think correctly about happiness. And my talk today will mainly focus on these cognitive pitfalls. This applies both to ordinary people thinking about their happiness and to the same extent to scientists thinking about happiness, since it turns out that we are all equally confused. The first of these pitfalls is a reluctance to acknowledge how complex this concept is. It turns out that the word “happiness” is no longer such a useful word because we apply it to too many different things. I think there is one specific meaning that we should limit ourselves to, but in general it is something that we will have to forget about and develop a more comprehensive view of what well-being is. The second trap is the confusion between experience and memory: that is, between the state of happiness in life and the feeling of happiness about your life or the feeling that life suits you. These are two completely different concepts, but both of them are usually combined into one concept of happiness. And the third is the illusion of focus, and it is a sad fact that we cannot think about any circumstance that affects our well-being without distorting its significance. This is a real cognitive trap. And there is simply no way to get it all right.

© TED conferences
Translation: Audio Solutions company

Read material on the topic:

Daniel Kahneman: "The Study of Intuition" ( Explorations of the Mind Intuition)

Why does intuition sometimes work and sometimes not? For what reason do most expert forecasts not come true and can we even trust the intuition of experts? What cognitive illusions prevent you from making an adequate expert assessment? How does this relate to the specifics of our thinking? What is the difference between “intuitive” and “thinking” types of thinking? Why may intuition not work in all areas of human activity? Daniel Kahneman talked about this and much more in his video lecture Explorations of the Mind Intuition.

*Translation starts at 4:25 minutes.

© Berkeley Graduate Lectures
Translation: p2ib.ru

Daniel Kahneman: "Reflections on the Science of Well-Being"

An expanded version of Daniel Kahneman's TED talk. This public lecture, given by the psychologist at the Third International Conference on Cognitive Science, is also devoted to the problem of the two selves, the "remembering" self and the "experiencing" self. Here, however, the psychologist considers the problem in the context of the psychology of well-being. Daniel Kahneman talks about modern research on well-being and the results that he and his colleagues have obtained in recent years. In particular, he explains which factors subjective well-being depends on, how our "experiencing self" affects us, what the concept of utility that influences decision-making is, how strongly the evaluation of one's life affects experienced happiness, how attention and pleasure are interconnected, and how much we exaggerate the significance of whatever we happen to be thinking about. And, of course, the question of what significance studies of experienced happiness have for society does not go unnoticed.


People are stupid.
For this discovery, the Israeli scientist Daniel Kahneman, who works in, you guessed it, the United States, received the 2002 Nobel Prize in Economics.

(However, this does not apply to you, dear reader. This is about others. :)
Through a series of precise scientific experiments, Kahneman was able to prove that most people do not use common sense in their daily lives. Even mathematics professors rarely resort to elementary arithmetic operations in everyday life.

Kahneman was the first to introduce the concept of the human factor into economics and to combine psychology and economics into a single science. Before him, economists wondered why their carefully calculated models suddenly failed, why people did not behave as theory said they should. Why does the stock exchange suddenly crash, and why do people suddenly rush to the bank to withdraw deposits and exchange one currency for another?

The most interesting thing is that the Nobel Prize winner in economics never studied economics, but spent his entire life studying psychology. In this case, the psychology of choosing everyday economic decisions.

All economists before Kahneman, starting with Adam Smith, made the same mistake: they assumed that a person is guided by elementary logic and his own benefit - he buys where it is cheaper, works where they pay more, and, of two goods of the same quality, chooses the one that is cheaper.

Kahneman's research showed that things are not that simple. People, it turns out, don't want to think. They are guided not by logic but by emotions, random impulses, what they heard yesterday on TV or from a neighbor, ingrained prejudices, advertising, and so on.

Here's an example. It turns out that if you lower the price, the product will not necessarily start selling out faster. Some will think that this is simply a markdown of the goods due to poor quality. The same thing - if the price is increased, people will think that they are being offered a better product than before.

Economists before Kahneman naively believed that if a piecework worker's wages were increased, he would do a better job. It turns out that this is not always the case. Some - yes, really better. Others do the same: why give all their best if, with the same productivity, they will still get more than before? Still others will begin to work more slowly in order to earn the same amount as before, with less labor input.

The reasoning of economists before Kahneman resembled the reasoning of scientists before Galileo. After all, for thousands of years, all great minds assumed that a heavy object dropped from a height would reach the ground faster than a light one. Children today think so too. For thousands of years this was taken for granted, and no one before Galileo thought to check it. Imagine Galileo’s surprise when he established that a wooden and an iron ball dropped from the Leaning Tower of Pisa reach the ground in the same time.

So, according to Kahneman, in their everyday lives people are not guided by elementary logic and elementary arithmetic.

I decided to check it out. To follow, so to speak, the path of Galileo.

Agripas Street in Jerusalem. On one side is the Mahane Yehuda bazaar. On the other side there is a row of shops.

The store sells eggs. A package of 10 eggs costs 12 shekels.

Across the street, at the market, they also sell eggs. A package of 30 eggs costs 18 shekels.

It is a problem for a first-grader: an egg costs 1.20 shekels in the store and 0.60 at the market, exactly half the price. Buying one tray of eggs in the store, a person loses 6 shekels; buying two trays, 12.
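For the skeptical reader, a trivial check of the arithmetic (prices in shekels, as given above):

```python
# Unit-price comparison for the egg example.
store_per_egg  = 12 / 10    # 1.20 per egg in the store
market_per_egg = 18 / 30    # 0.60 per egg at the market
print(store_per_egg / market_per_egg)                 # 2.0: exactly twice the price
print(round((store_per_egg - market_per_egg) * 10))   # 6 shekels lost per 10-egg tray from the store
```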

I stood outside the store and asked those who were buying eggs the same question - why did you do this? Don't you see that across the street it's half the price?

The answers were distributed as follows:

1.Fuck you... - 75%

2.What's your business? Where I want, I buy there - 75%

3. The eggs in the store are better. (Eggs are the same, I checked) - 8%

4. What's the difference? Am I going to be petty about it? — 6%

5.I always buy everything in this store. It’s more convenient for me - 9%

The fact that the answers add up to more than 100% means that one person could have given more than one answer.

To those who responded to item 4, I offered:

— Having bought two trays of eggs in a store, you lost 12 shekels. If this amount does not matter to you, give me the same amount. The answer to this proposal - see paragraph 1

Here are more examples that confirm, from my point of view, Kahneman’s theory.

A man goes to a restaurant and pays 100 shekels for a steak. Whereas a kilo of exactly the same steaks in the store costs 25 shekels. Five pieces. The difference is 20 times! The purchased steak just needs to be put in the oven. For many, apparently, this is too much work. People are standing in line at a restaurant. And from this restaurant they take food out in trays to the soup kitchen across the street. Where they give the same thing for free...

Kahneman is right.

It turns out that the joke is: “You bought this tie for $100? Idiot, there are similar ones around the corner for 200!” - has a completely accurate economic justification. People believe that if a product is more expensive, it means it is better.

A person strives to get rid of money. So he goes to a restaurant, where a stranger will bring him food prepared in an unknown way, from unknown products, by another stranger who knows nothing about the client's tastes and wishes. For this he will pay 10 times more than the food costs, and on top of that he may well be treated rudely.

The restaurant is a place of economic and culinary hooliganism. The restaurant’s task is to “promote” the client. Therefore, restaurant kitchens use the most uneconomical and harmful methods of cooking. The main thing is that the dish looks beautiful when served. Although in a second all this beauty will disappear.

At the exit from the supermarket they sell hot sausages for 5 shekels each, which inside the same market cost 10 shekels for 20 pieces. The difference is 10 times!

Isn't Kahneman right?

The best psychologists in the world are racking their brains over how to sell a person something he doesn't need. 98% of Pepsi-Cola's turnover goes to advertising. A person does not buy sweet syrup, he buys a lifestyle that has been drilled into his head.

A watch that costs 50 shekels shows the time exactly the same as a watch that costs 10 thousand. A person does not buy a watch, a suit, furniture - he buys self-respect.

Kahneman is right. For some reason people don’t want to admit simple and obvious things:

All kinds of parties, “Forums” and “Assistance Associations” help only those who create them and work in them. This is why they are created.

Here are simple rules for those who do not want to be deceived (I understand that thousands of agents of various companies who earn their living through hard work may be offended by me)...

Anyone who calls you or stops you on the street hoping to sell you something is a scammer.
Anyone who tries to enter your home in hopes of selling you something is a scammer.

Anyone who tells you that you won a lottery you didn't play is a scammer.

Anyone who offers goods and services “for free” is a scammer.

Anyone who takes money from a client for employment is a scammer.

Anyone who promises 100% recovery from all diseases is a fraud.

Anyone who sends out emails with get-rich-quick recipes is a scammer.

And now - about the main thing.

If you really want to learn English and Hebrew in a day, be cured of all diseases, get a well-paid job without any specialty, find out your future, lose 40 kg in a month without diets and pills - only contact the author of this article. Pay in advance.
