Adam Grant
The Problem of Cognitive Rigidity in an Accelerating World
Adam Grant occupies an unusual position in the intellectual landscape: an organizational psychologist at Wharton who has managed to translate rigorous research into public discourse without (mostly) losing the signal in the process. His core contribution—particularly crystallized in Think Again (2021)—addresses a problem that feels almost embarrassingly fundamental: why do smart people cling to wrong beliefs, and what psychological infrastructure would we need to build to make updating those beliefs feel less like dying?
This isn’t a new question. Festinger gave us cognitive dissonance in the 1950s. Kahneman and Tversky mapped the heuristics and biases program across decades. Tetlock showed that forecasting accuracy correlates with intellectual humility. But Grant’s specific intervention is less about cataloging the failure modes and more about asking: what does the practice of belief revision actually look like inside organizations, relationships, and individual minds? Where the bias literature often reads like a diagnostic manual—here are all the ways you’re broken—Grant is interested in the therapeutic question. What would it take to get better at this?
The Architecture of Rethinking
The central framework in Think Again organizes around a simple but productive typology: we tend to think in the modes of preachers (defending sacred beliefs), prosecutors (attacking others’ positions), or politicians (seeking approval). The alternative Grant advocates is thinking like a scientist—treating beliefs as hypotheses, seeking disconfirmation, updating on evidence. This sounds almost trivially obvious when stated abstractly, but the depth comes in Grant’s exploration of why it’s so psychologically expensive.
What I find genuinely interesting is his treatment of identity foreclosure—the phenomenon where beliefs become so fused with self-concept that revising the belief feels like an existential threat. This connects to work by Peter Coleman on intractable conflicts and by Kahan on identity-protective cognition. Grant synthesizes these into a practical observation: the more tightly you bind your identity to your ideas rather than to the process of forming ideas, the more brittle your thinking becomes. The prescription—attach your identity to being someone who rethinks rather than someone who knows—is elegant, if perhaps easier to state than to execute.
He also develops the concept of “confident humility,” which resolves what might seem like a paradox: you can have high confidence in your ability to learn and adapt while maintaining genuine uncertainty about any specific belief. This maps onto something Carol Dweck’s growth mindset research gestures at, but Grant grounds it more specifically in epistemic practice rather than general self-theory. The distinction matters because it makes the advice actionable in a way that “just have a growth mindset” never quite managed.
Organizational Psychology as Applied Epistemology
Grant’s earlier work provides important scaffolding for the rethinking thesis. Give and Take (2013) examined how prosocial orientation—specifically, being a “giver” in organizational contexts—correlates with both the highest and lowest performance outcomes, depending on boundary-setting and strategic awareness. Originals (2016) explored how nonconformist ideas actually propagate through institutions, finding that successful originals often look surprisingly conventional in many domains while concentrating their deviance strategically.
The through-line across all three books is really about the microstructure of intellectual courage within institutions. Organizations are belief-preserving systems almost by design. Hierarchies punish dissent. Culture codes enforce orthodoxy. Incentive structures reward consistency. Grant’s body of work amounts to asking: given all of this, what are the specific psychological and structural conditions under which people actually change their minds about things that matter?
This connects him to an interesting network of adjacent work. Philip Tetlock’s superforecasting research provides the empirical backbone—people who update frequently and granularly outperform those who don’t. Julia Galef’s The Scout Mindset covers overlapping territory with a different rhetorical strategy. Gary Klein’s work on premortem analysis offers complementary organizational tools. And there’s a deeper philosophical resonance with Bayesian epistemology—the normative framework that says rational agents should update beliefs continuously in response to evidence, weighted by prior probability and likelihood.
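To make the Bayesian framing concrete, here is a minimal sketch of a single belief update via Bayes' rule. The hypothesis, prior, and likelihood values are purely illustrative assumptions, not drawn from Grant's or Tetlock's work; the point is only the mechanics of weighting a prior by how likely the evidence is under each possibility.

```python
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule.

    prior: P(hypothesis) before seeing the evidence
    p_evidence_if_true: P(evidence | hypothesis true)
    p_evidence_if_false: P(evidence | hypothesis false)
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical example: start 70% confident a strategy works, then observe
# a result that is twice as likely if the strategy is failing. The rational
# move is to lower confidence rather than defend the prior.
belief = update(prior=0.70, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
print(round(belief, 3))  # confidence drops from 0.70 to roughly 0.54
```

The "scientist mode" Grant advocates is, in this idealized picture, simply treating the prior as revisable input rather than protected output.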
What Grant adds to this constellation is the organizational psychology lens. He’s not just asking what rational individuals should do in theory; he’s asking what actually happens when you put people in meeting rooms with status hierarchies and quarterly targets and ask them to admit they were wrong about something.
What Remains Unresolved
The most honest critique of Grant’s work is that it may underestimate the rationality of belief persistence. Not all stubbornness is cognitive failure. Sometimes beliefs serve coordinating functions—they signal group membership, maintain coalitional stability, enable long-term commitment to projects that require persistence through ambiguity. There’s a version of “rethinking” that, taken to an extreme, produces paralysis or social illegibility. The person who updates every belief continuously can’t build anything that requires sustained conviction through inevitable contradictory evidence.
Grant acknowledges this tension but doesn’t fully resolve it. When should you hold fast versus update? The answer—“when the evidence warrants it”—is circular without a meta-theory about how to weigh different kinds of evidence under uncertainty. This is where the pop-science framing occasionally shows its seams. The research literature on belief perseverance is genuinely messy, and the clean scientist-versus-preacher typology smooths over real epistemic dilemmas.
There’s also a question about scalability. Grant’s prescriptions work well for individuals and teams operating in relatively low-stakes cognitive environments—product strategy meetings, career decisions, interpersonal disagreements. But the hardest cases of belief persistence are precisely those where identity, morality, and group membership are most entangled: political polarization, religious commitment, culture-war positions. Grant gestures at these domains but his empirical base is largely organizational, and the translation to the political sphere involves assumptions about human motivation that are far from settled.
Finally, I’m curious about the relationship between Grant’s rethinking framework and the replication crisis in psychology itself. His work draws on a broad literature, some of which has held up well under scrutiny (Tetlock’s forecasting work, much of the negotiation research) and some of which exists in murkier territory (ego-depletion-adjacent findings, certain priming effects). To his credit, Grant has been publicly responsive to replication concerns. But there’s an almost recursive challenge here: a body of work about updating beliefs must itself be continuously updated as its own evidential base shifts.
Why This Matters
What makes Grant genuinely worth engaging with—beyond the TED talk surface—is that he’s working on a problem that sits at the intersection of epistemology, organizational design, and individual psychology, and he’s doing it with enough contact with actual research to avoid pure self-help platitudes. The question of how humans can build institutions and habits that make belief revision normal rather than threatening is not trivial. It’s arguably one of the central challenges of any complex society navigating rapid change.
The deeper I sit with Grant’s work, the more I think the real contribution isn’t any single framework but rather the insistence that intellectual flexibility is a skill with identifiable sub-components, not a personality trait you either have or don’t. That reframing—from character to practice—is where the leverage is. It means there are design problems to solve: How do you structure a meeting so that the most senior person doesn’t anchor everyone? How do you build feedback loops that reward updated positions rather than consistent ones? These are engineering questions dressed in psychological clothing, and that’s exactly what makes them tractable.