LOGBOOK LOG-186
EXPLORING · PSYCHOLOGY · BEHAVIORAL-ECONOMICS · KAHNEMAN · TVERSKY · PROSPECT-THEORY · COGNITIVE-BIAS · DECISION-MAKING · HEURISTICS

Kahneman, Tversky, and the Heuristics and Biases Program

Prospect theory replaced expected utility. System 1 and System 2 replaced the rational agent. The Kahneman-Tversky research program didn't just find anomalies in economic behavior — it rebuilt the psychological foundations of decision theory.

The Problem with the Rational Agent

Classical economics is built on the rational agent: a decision-maker who has well-defined preferences, processes information accurately, updates beliefs via Bayes’ theorem, and maximizes expected utility. The rational agent is not assumed to be a genius — just consistent, coherent, and self-interested in a stable way.

The rational agent is useful. It generates tractable models with clean predictions. It works reasonably well for large, competitive markets where irrational agents face selection pressure and their errors get exploited. And it is often a good enough approximation when stakes are high, the decision is repeated, and feedback is clear.

It is also systematically wrong in predictable ways. Beginning in the early 1970s, Daniel Kahneman and Amos Tversky documented how people’s decisions deviate from rational choice theory — not random errors, but structured, reproducible departures that suggested the underlying cognitive architecture was doing something different from expected utility maximization.

Their research program — the heuristics and biases approach — produced a body of findings that eventually earned Kahneman the 2002 Nobel Prize in Economics (Tversky died in 1996 and Nobels are not awarded posthumously). It remains one of the most productive and contested programs in psychology and economics.

Prospect Theory

The centerpiece is prospect theory, published in Econometrica in 1979 and now one of the most cited papers in economics. It was offered as a direct replacement for expected utility theory — a descriptive account of how people actually make decisions under risk.

Expected utility theory says people maximize the expected value of a utility function over outcomes. Two features of prospect theory deviate from this:

Reference dependence. People evaluate outcomes relative to a reference point (usually the status quo or an expectation), not as absolute levels of wealth. A gain of ₹10,000 feels like a gain because it’s above the reference point; the same amount can feel like a disappointment, even a loss, if you were expecting ₹20,000. The utility function is not over final wealth states — it is over changes from a reference point.

Loss aversion. The value function is steeper for losses than for gains. Losing ₹1,000 hurts roughly twice as much as gaining ₹1,000 feels good, on average. The ratio varies across individuals and contexts, but the asymmetry is robust. This is loss aversion — among the most replicated findings in behavioral economics and arguably the most consequential for understanding financial behavior.

Two additional features: the value function is concave in gains (diminishing sensitivity — the difference between gaining ₹1,000 and ₹2,000 feels larger than between ₹10,000 and ₹11,000) and convex in losses (people become risk-seeking when facing certain losses, preferring a gamble to a sure loss of the same expected value). And probability weighting: people overweight small probabilities and underweight medium-to-large probabilities, which explains why people simultaneously buy lottery tickets and insurance.
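Taken together, these pieces define the prospect theory value and weighting functions. A minimal sketch, using the functional forms and median parameter estimates from Tversky and Kahneman’s 1992 cumulative prospect theory paper (α = β = 0.88, λ = 2.25, and the gain-side weighting exponent γ = 0.61); the 1979 paper motivates these shapes but does not fix the numbers:

```python
# A sketch of the prospect theory value and probability-weighting functions.
# Functional forms and parameters follow Tversky & Kahneman (1992):
# alpha = beta = 0.88, lambda = 2.25, gamma = 0.61 (gain-side weighting).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value of a change x relative to the reference point (x = 0)."""
    if x >= 0:
        return x ** alpha               # concave in gains
    return -lam * ((-x) ** beta)        # convex in losses, and steeper: loss aversion

def weight(p, gamma=0.61):
    """Decision weight: overweights small p, underweights medium-to-large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value(1000), value(-1000))        # ~436.5 vs ~-982.2: the loss looms larger
print(weight(0.001))                    # ~0.014: a 0.1% chance is weighted like 1.4%
```

The two printed lines are the theory in miniature: at equal magnitudes the loss side dominates the gain side, and a one-in-a-thousand lottery probability gets treated as if it were roughly fourteen in a thousand.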

The Heuristics

Alongside prospect theory, Kahneman and Tversky documented cognitive heuristics — mental shortcuts that produce systematic errors.

Representativeness. People judge probability by how well something matches their mental prototype of a category, ignoring base rates. The “Linda problem”: Linda is described as outspoken, bright, a philosophy major deeply concerned with discrimination and social justice. Which is more probable — “Linda is a bank teller” or “Linda is a bank teller and active in the feminist movement”? Most people say the conjunction. This is logically impossible — P(A and B) ≤ P(A). But “Linda as feminist bank teller” is more representative of the description than “Linda as bank teller,” and representativeness overrides probability.
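The conjunction rule is pure arithmetic, which is what makes the error striking. A toy check with made-up numbers (the probabilities below are hypothetical, chosen only for illustration):

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A), since P(B | A) <= 1.
# The numbers are hypothetical; the inequality holds for any valid assignment.

p_teller = 0.05                        # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.90         # hypothetical P(feminist | bank teller)

p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller              # 0.045 <= 0.05, necessarily
print(f"P(teller)={p_teller}, P(teller and feminist)={p_both}")
```

However strongly the description suggests feminism, the conditional probability caps out at 1, so the conjunction can never overtake the single event.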

Availability. People judge the probability of events by how easily examples come to mind. Deaths by airplane crash are overestimated relative to deaths by car crash because airplane crashes get extensive coverage. Deaths by lung disease are underestimated relative to dramatic accidents. The availability heuristic produces systematic miscalibration of probability estimates, correlated with media salience rather than actual frequency.

Anchoring. Judgments are pulled toward an initial number, even when that number is obviously arbitrary. In the classic demonstration, participants spun a wheel of fortune rigged to land on 10 or 65, then estimated the percentage of African countries in the UN. Estimates were pulled systematically toward whichever number the wheel showed. Anchoring operates even when the anchor has no plausible relevance to the judgment.

System 1 and System 2

Kahneman’s later framework, developed in Thinking, Fast and Slow (2011), synthesizes the heuristics and biases research into a dual-process model of cognition. System 1 is fast, automatic, intuitive, and effortless — it operates continuously and generates the immediate impressions and intuitions that System 2 takes as inputs. System 2 is slow, deliberate, effortful, and rule-following — it’s the part of thinking that “feels like thinking.”

The naming is intentional but imprecise. System 1 and System 2 are not separate brain regions or modules — they’re descriptions of two modes of processing that shade into each other. The framework’s value is descriptive: it organizes the heuristics and biases research into a coherent picture of why rational deliberation doesn’t always override intuition. System 2 is lazy — it tends to endorse System 1’s outputs rather than scrutinizing them, especially when System 1’s answer feels fluent and coherent.

The practical implication: most cognitive errors come from System 2 failing to catch System 1’s mistakes, not from System 1 generating the wrong answer per se. Slowing down and deliberately checking intuitions — the core of most advice on improving decision quality — means activating System 2 to override System 1 in domains where intuition is unreliable.

Where the Research Has Been Challenged

The replication crisis in psychology hit behavioral economics hard in some areas. Many classic priming effects that appeared in early social priming research have not replicated, and some of the ego depletion findings (the idea that willpower is a depletable resource) have weakened substantially.

The core heuristics and biases findings — loss aversion, anchoring, representativeness, availability, framing effects — have generally replicated, though sometimes with smaller effect sizes than originally reported. Prospect theory’s basic structure is well-supported across many cultures and contexts, though the precise parameters vary.

The most substantive criticism is about ecological validity: the heuristics produce errors in laboratory tasks constructed to expose them, but in real environments where people have feedback and experience, the errors may be less prevalent. Gerd Gigerenzer has argued that heuristics are often adaptive — they work well in the environments in which they evolved, even if they fail in decontextualized lab tasks. The debate between Kahneman and Gigerenzer over whether heuristics are mostly adaptive or mostly error-producing is productive but unresolved.

Why Finance Is the Right Domain for This

Financial markets are where behavioral biases have the clearest, most measurable consequences. Loss aversion explains the disposition effect — the tendency of investors to sell winners too early (locking in gains, consistent with the concave gain region of prospect theory) and hold losers too long (hoping for recovery, consistent with risk-seeking in the loss domain). Mental accounting explains why people treat different pools of money differently based on how they got it — spending a windfall more freely than wages of the same amount. Overconfidence explains excessive trading and underperformance of active fund managers relative to passive benchmarks.
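The disposition effect has a standard empirical measure, due to Odean (1998): on days when an investor sells something, compare the proportion of gains realized (PGR) to the proportion of losses realized (PLR). A sketch with hypothetical counts (the function and variable names are illustrative, not from any particular dataset):

```python
# Odean's (1998) disposition-effect measure: on sale days, count positions
# sold vs. held at a gain and at a loss, then compare realization rates.
# All counts below are hypothetical, for illustration only.

def disposition_ratio(realized_gains, paper_gains, realized_losses, paper_losses):
    """PGR / PLR: a ratio above 1 means winners are sold disproportionately often."""
    pgr = realized_gains / (realized_gains + paper_gains)     # proportion of gains realized
    plr = realized_losses / (realized_losses + paper_losses)  # proportion of losses realized
    return pgr / plr

# Hypothetical counts: PGR = 0.15, PLR = 0.10 -> ratio 1.5.
print(disposition_ratio(realized_gains=150, paper_gains=850,
                        realized_losses=100, paper_losses=900))
```

Odean’s retail-brokerage sample produced roughly this picture: winners were realized about one and a half times as readily as losers, the opposite of what tax considerations alone would predict.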

Prospect theory in markets also generates asset pricing implications. If investors are loss averse, assets with high downside risk require higher expected returns as compensation — the loss looms larger than the gain. If investors engage in mental accounting and evaluate their portfolio position-by-position rather than as a whole, they’ll hold suboptimal portfolios. The behavioral asset pricing literature has spent thirty years trying to formally incorporate these effects.
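One way to see the pricing intuition is to ask what expected return a loss-averse investor demands before accepting a symmetric gamble. A minimal sketch using a piecewise-linear simplification of the value function (an illustration of the mechanism, not a model from the asset pricing literature; λ = 2.25 is Tversky and Kahneman’s 1992 estimate):

```python
# How loss aversion generates a premium for downside risk. Piecewise-linear
# value function; lambda = 2.25 per Tversky & Kahneman (1992). Illustrative only.

LAM = 2.25

def pt_value(x, lam=LAM):
    """Prospect value with linear segments: losses weighted lam times gains."""
    return x if x >= 0 else lam * x

def gamble_value(mu, sigma=0.10, lam=LAM):
    """Prospect value of a 50/50 gamble paying mu + sigma or mu - sigma."""
    return 0.5 * pt_value(mu + sigma, lam) + 0.5 * pt_value(mu - sigma, lam)

# Smallest expected return mu at which the gamble beats a sure zero.
mu = 0.0
while gamble_value(mu) < 0:
    mu += 0.0001
print(f"required expected return: {mu:.2%}")  # ~3.85% for sigma = 10%
```

A symmetric ±10% bet that a risk-neutral investor would take at any positive expected return requires nearly 4% here, which is the flavor of argument behind loss-aversion-based accounts of the equity premium (Benartzi and Thaler’s “myopic loss aversion”).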

The Kahneman-Tversky program’s lasting contribution is not a list of biases to memorize. It is a revised understanding of what human decision-making is: not optimization under constraints, but pattern-matching, reference-dependent evaluation, and heuristic judgment — processes that are generally adaptive but systematically fail in predictable ways when the domain is unfamiliar, the stakes are abstract, or the feedback is delayed.