Thinking, Fast and Slow

🦄 '''30 – Rare Events.''' When rare hazards dominate the news, as with suicide bombings in Israel in the early 2000s, many people shun buses or public places despite tiny absolute risks—a social amplification Kuran and Sunstein describe as an “availability cascade.” Laboratory studies show the same psychological signature: when asked separately about many unlikely outcomes, people overestimate each and give the set a total probability far above 100%; when asked to choose, they also overweight those slim odds in decisions. Vivid descriptions, striking images, and repeated coverage make the unlikely feel more plausible, while the “non‑occurrence” of the event has no equally gripping story to tell. Prospect theory separates two steps that move in the same direction: the judged probability of a rare event is inflated, and the decision weight assigned to it is amplified even more. People are also insensitive to gradations among tiny risks—differences between 0.001% and 0.00001% barely register—so campaigns that highlight any small chance can trigger big protective responses. This mix explains why jackpots sell tickets and why very low deductibles and extended warranties remain popular even when they are poor value. Ignoring rare dangers is also common when they are hard to imagine or not made salient, producing a flip from exaggeration to neglect. In the book’s terms, the fast system locks onto concrete, imaginable bad outcomes and treats their mere possibility as decisive; the slow system must force side‑by‑side comparisons, specify the alternatives, and check whether a vivid story is standing in for arithmetic.
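The overweighting the chapter describes can be made concrete with the probability-weighting function Tversky and Kahneman later fitted in cumulative prospect theory (1992). The sketch below is an illustration, not part of this summary; the parameter γ ≈ 0.61 is their commonly cited estimate for gains.

```python
# Probability-weighting function from Tversky & Kahneman's (1992)
# cumulative prospect theory: w(p) = p^g / (p^g + (1-p)^g)^(1/g).
# gamma = 0.61 is their fitted value for gains (an assumption here).

def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Return the decision weight attached to a stated probability p."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

if __name__ == "__main__":
    for p in (0.0001, 0.01, 0.5, 0.99):
        print(f"p = {p:>7}  ->  weight = {decision_weight(p):.4f}")
```

With these parameters a 1% chance receives a weight of roughly 0.055, while a 99% chance is underweighted to about 0.91 — the asymmetry behind lottery tickets and insurance that the chapter describes.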
 
🛡️ '''31 – Risk Policies.''' In one experiment, University of Chicago students were offered a series of bets—winning $10 or losing $5, repeated 100 times. Most refused a single bet but accepted the series, demonstrating that aggregation over time transforms a risky prospect into a near certainty of profit. The pattern mirrors real life: people are myopically loss averse, overweighting each small setback instead of viewing the total return. The same bias shows up in investment behavior, where daily monitoring of portfolios amplifies anxiety and discourages optimal risk taking. Institutions such as insurance companies and pension funds handle risk better by treating it in portfolios rather than as isolated gambles. Kahneman and Lovallo describe “narrow framing” as the habit of evaluating choices one by one instead of under a consistent rule. Setting a risk policy—rules for repetition, thresholds, and acceptable losses—allows decisions to be made once, in calm reflection, instead of anew under stress. The mechanism is that distance and aggregation move the problem from the fast, emotional system to the slower, calculating one. In the book’s larger logic, wisdom lies in designing environments where System 1’s fear of loss cannot sabotage rational, long-term outcomes. ''“You win a few, you lose a few. Keep the big picture in mind.”''
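The arithmetic behind accepting the series can be checked directly. Assuming each bet is an even-odds coin flip (the summary does not state the odds), the exact binomial distribution shows that 100 repetitions of win-$10/lose-$5 make an overall loss vanishingly unlikely:

```python
# Exact binomial check: 100 independent bets, assumed 50/50 odds,
# each winning $10 or losing $5. Net result after w wins: 10*w - 5*(100 - w).
from math import comb

N, WIN, LOSS, P = 100, 10.0, 5.0, 0.5

expected_total = N * (P * WIN - (1 - P) * LOSS)  # $2.50 per bet, $250 overall

# A net loss requires 15*w - 500 < 0, i.e. 33 or fewer wins out of 100.
prob_net_loss = sum(comb(N, w) for w in range(34)) * P ** N

print(f"expected profit over {N} bets: ${expected_total:.2f}")
print(f"probability of a net loss:    {prob_net_loss:.5f}")
```

The single bet feels risky because it is evaluated in isolation; the portfolio view makes the near-certainty of profit (a loss chance well under 0.1%) visible, which is exactly what a risk policy institutionalizes.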
 
🏅 '''32 – Keeping Score.''' The “Asian disease problem,” first published in 1981, revealed that identical statistics can lead to opposite preferences depending on framing: when told a medical program will “save 200 of 600 lives,” most participants choose it, but when told it will “let 400 die,” they favor the alternative gamble. The numbers are the same, yet the gain frame induces risk aversion while the loss frame invites risk seeking. The same distortion governs daily choices about investments, budgets, and performance, where outcomes are mentally coded in separate “accounts.” Richard Thaler’s work on mental accounting shows that people open and close these accounts selectively—treating tax refunds, windfalls, or project budgets as different pots of money even when fungible. Investors “narrow frame” by focusing on short-term fluctuations instead of overall wealth; households overspend windfalls and guard “principal” with irrational care. The mind keeps score in gains and losses, not total assets, and System 1’s reference dependence makes those ledgers stubbornly local. Understanding which account an outcome belongs to can flip feelings of satisfaction or regret without changing reality. Within the book’s frame, real rationality means redefining the scoreboard—measuring progress by lifetime outcomes rather than by moment-to-moment wins and losses.
 
🔃 '''33 – Reversals.''' A recurring pattern in decision research is “preference reversal,” first documented by Sarah Lichtenstein and Paul Slovic in 1971, where people price risky gambles differently depending on how they are asked—valuing high-probability bets when choosing but favoring high-payoff bets when pricing. The contradiction exposes that choice and valuation draw on separate mental systems: intuitive judgment of attractiveness versus deliberate computation of worth. Similar reversals appear in public policy surveys, where support swings when questions move from percentage of lives saved to probability of death, or from willingness to pay to willingness to accept. Monetary incentives and consistent logic fail to eliminate the shift because the underlying feelings about loss and risk are reference-based and context-sensitive. The effect underscores how System 1 constructs preferences on the spot, shaped by salience and framing, rather than retrieving a stable scale of value. The broader lesson is that coherence is not natural; it must be imposed by rules, markets, or feedback mechanisms that anchor evaluation. ''“Our preferences are not about what we want, but about how we frame what we want.”''
 
🖼️ '''34 – Frames and Reality.''' In the early 1980s, Amos Tversky and physician collaborators tested medical framing: when lung-cancer surgery was described as offering a 90% survival rate, most respondents chose it; when the identical operation was described as carrying a 10% mortality rate, far fewer did. The two statements describe the same reality, yet the emotional tone of words—survival versus death—swings judgment. Frames act as windows that select some features of a situation and ignore others, guiding attention and emotion before reason begins. Governments and marketers exploit this by naming taxes as “fees,” job losses as “restructuring,” or subsidies as “relief.” Framing also affects moral and political choice: labeling a program “helping the poor” evokes different support than “redistribution.” Awareness of framing does not neutralize it; System 1’s immediate associations come first, and System 2 often rationalizes them after the fact. The key to better judgment is to recognize alternative frames and force side-by-side comparison, so that logic and values—not words—determine the outcome. The chapter closes the section by showing that perception, emotion, and decision share the same architecture: what we see depends on the frame we look through. ''“Reality is defined by the way we frame it.”''
 
=== V – Two Selves ===
 
🫂 '''35 – Two Selves.''' In experiments published in 1993, Daniel Kahneman, Barbara Fredrickson, Charles Schreiber, and Donald Redelmeier had University of California volunteers endure two versions of a cold‑pressor task: one hand submerged in 14 °C water for 60 seconds, and the other for 60 seconds followed by 30 seconds as the water was warmed slightly to 15 °C; most chose to repeat the longer trial because it ended less painfully. In 1996, Donald Redelmeier and Kahneman tracked real‑time pain in 154 colonoscopy and 133 lithotripsy patients and found that remembered pain depended mainly on the peak and the final moments, not on total duration. A later randomized trial with more than 600 colonoscopy patients showed that adding a few minutes of milder discomfort at the end led people to rate the entire procedure as less unpleasant and to be more willing to return. These results expose “duration neglect” and the “peak‑end rule”: the mind stores a sketch built from the most intense moment and the ending. The same split appears in ordinary life—two weeks of vacation can feel twice as good while lived, yet the story kept in memory is dominated by highlights and how it finished. Because choices are made on remembered utility, people often act to improve the story rather than the stream of moments. The two protagonists are a fleeting experiencing self that lives each second and a remembering self that keeps score and decides. That division explains why endings loom large and why we can mismanage pain, pleasure, and regret. In the book’s terms, fast, associative memory compresses experience into a tidy narrative that the slow system must learn to question and, when it matters, to redesign.
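A toy calculation (not the authors' formal model, and with invented pain ratings) shows why the longer cold‑pressor trial can be remembered as the better one:

```python
# Toy illustration of duration neglect and the peak-end rule.
# Pain is rated once per 10 seconds on a 0-10 scale; the numbers are invented.
short_trial = [7] * 6             # 60 s at 14 degrees C
long_trial = [7] * 6 + [5] * 3    # same 60 s, plus 30 s of milder pain at 15 C

def total_pain(ratings):
    """Experiencing self: sum of moment-by-moment pain."""
    return sum(ratings)

def remembered_pain(ratings):
    """Remembering self (peak-end rule): average of worst and final moment."""
    return (max(ratings) + ratings[-1]) / 2

print(total_pain(short_trial), total_pain(long_trial))            # 42 57
print(remembered_pain(short_trial), remembered_pain(long_trial))  # 7.0 6.0
```

The long trial contains strictly more pain for the experiencing self, yet its peak‑end summary is lower — matching the majority choice to repeat it.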
 
📖 '''36 – Life as a Story.''' Ed Diener, Derrick Wirtz, and Shigehiro Oishi (University of Illinois) asked respondents in 2001 to judge “wonderful lives” that ended abruptly versus those with extra years of mild happiness; many preferred the shorter life—a “James Dean effect” showing the dominance of endings in global evaluations. The same logic explains why a symphony spoiled by a scratch at the end is remembered as “ruined” despite a long stretch of enjoyment. Laboratory work on the peak‑end rule aligns with this narrative bias: when people summarize experiences, they weight a few snapshots—peaks and the final scene—over duration. In life reviews, distinctive moments—awards, failures, breakups, recoveries—become chapter headings that overshadow long, ordinary stretches. The remembering self smooths plot lines, resolves contradictions, and privileges closure, which is why people will accept more total discomfort for a better ending. That storytelling habit brings meaning and coherence but also distorts the arithmetic of lived time. The practical consequence is that we plan, choose, and judge with an eye to how the story will read later, not how it will feel most of the time. In the broader framework, a fast system stitches stories that feel complete; a reflective system can make better choices by noticing the storyteller’s shortcuts and, where possible, engineering good endings without ignoring the hours in between.
 
🙂 '''37 – Experienced Well‑Being.''' To measure how days actually feel, the 2004 Science article introducing the Day Reconstruction Method (Kahneman, Krueger, Schkade, Schwarz, Stone) had 909 employed women reconstruct the prior day in episodes and rate their affect, a diary‑like approach that reduces memory distortions. This work led to the U‑index (Kahneman & Krueger, 2006), the share of time spent in unpleasant states, a practical yardstick for comparing policies and jobs. Using large U.S. surveys, Daniel Kahneman and Angus Deaton (2010) found a divergence between two kinds of well‑being: “life evaluation” (how your life is going overall) rises with income across the range, while day‑to‑day emotional well‑being improves with income up to a comfortable level and then levels off. The split highlights two questions with different answers: “How satisfied are you with your life?” versus “How did you feel yesterday?” Commuting, time pressure, and social contact show up cleanly in the episode data, revealing where misery concentrates during a typical day. Because felt experience depends on context and time of day, reforming schedules, workflows, and social supports can reduce the U‑index without changing income at all. The core idea is that what we live and what we remember are distinct, so measurement must match the target—episodes for feelings, global judgments for life appraisal. In the book’s theme, slowing down to measure experience directly counters the fast mind’s tendency to let vivid life stories masquerade as evidence about how days actually go.
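The U‑index itself is simple to compute from episode data: the share of time spent in episodes whose strongest feeling is negative. A minimal sketch, with invented episodes and rating values (the 0–6 scales follow the Day Reconstruction Method's general design, but these numbers are illustrative only):

```python
# Minimal U-index sketch: share of time spent in episodes whose strongest
# negative affect rating exceeds the strongest positive one.
# Episode data below is invented for illustration.
episodes = [
    # (name, minutes, max positive rating, max negative rating), 0-6 scales
    ("commute", 45, 2, 4),
    ("work", 240, 3, 2),
    ("lunch", 60, 5, 1),
    ("meeting", 90, 2, 3),
    ("evening", 180, 4, 1),
]

def u_index(eps):
    """Fraction of total time in episodes classified as unpleasant."""
    unpleasant = sum(m for _, m, pos, neg in eps if neg > pos)
    total = sum(m for _, m, _, _ in eps)
    return unpleasant / total

print(f"U-index: {u_index(episodes):.3f}")  # fraction of the day unpleasant
```

Here only the commute and the meeting count as unpleasant, so the U‑index is 135 of 615 minutes; shortening the commute lowers it directly, with no change in income — the policy point the chapter draws.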
 
🤔 '''38 – Thinking About Life.''' David Schkade and Daniel Kahneman’s 1998 Psychological Science paper asked Midwesterners and Californians about life satisfaction; actual ratings were similar, yet both groups predicted Californians would be happier, a focusing illusion driven by salient weather. The same mechanism exaggerates the importance of income, health scares, or a move: when one factor is top‑of‑mind, people misread its weight in a life lived across thousands of hours. Life evaluation is also hostage to current mood and recent events unless surveyors neutralize those cues; by contrast, well‑designed episode measures resist such drift. Because attention anchors the story of a life to a few highlighted features, gains in those features can disappoint when the rest of daily experience is unchanged. The antidote is side‑by‑side framing: list the many determinants of well‑being and consider how often each actually matters during a week. Remembering that adaptation dulls the impact of many changes further protects against overpaying for upgrades with little daily effect. The main point is that what we think about most is not necessarily what matters most to the experiencing self. Within the book’s framework, the fast mind seizes salient cues to answer a hard question; a slower audit restores balance by broadening attention to the full ecology of a life.
 
== Background & reception ==