Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World
The Central Argument
Cade Metz is not, at his core, writing a book about artificial intelligence. He is writing a book about people — their vanities, their rivalries, their hunger for recognition — and using AI as the theater in which all of that plays out. The central argument of Genius Makers is implicit rather than stated: the shape of modern AI was not determined by logic or by the inevitable march of compute and data, but by a handful of idiosyncratic individuals whose personal obsessions, institutional loyalties, and competitive instincts bent the technology in directions it might never have gone otherwise. The “genius” in the title is doing double duty — it refers to the makers themselves, but also to the particular genius loci of Silicon Valley, the spirit that haunts a place and determines what is possible there.
This is a book that forces you to reckon with the contingency of history. We talk about deep learning as though it were destined to arrive in 2012, as though AlexNet were an inevitability waiting to be discovered. Metz corrodes that comfortable narrative. What emerges instead is something messier: a story of Geoffrey Hinton working for decades in the intellectual wilderness, sustained by a kind of monastic conviction, while the mainstream of AI research dismissed neural networks as a dead end. The triumph was not inevitable. It was stubborn.
The Context That Makes It Necessary
We are living inside the consequences of the decisions this book describes. Every recommendation algorithm, every large language model, every autonomous vehicle navigating a parking lot carries the fingerprints of the people Metz profiles. And yet most of us — even those who work in adjacent technical fields — have only a cartoon version of how these systems came to exist and who made the choices that shaped them. The hagiographies are everywhere: TED talks, Forbes profiles, breathless magazine features. What Metz offers instead is something closer to investigative biography, a genre that takes seriously both the genius and the dysfunction, both the breakthrough and the deal-making that followed.
There is also a political urgency here that Metz wisely does not oversell. The question of who controls AI — whether it sits inside Google’s advertising machine, or Facebook’s engagement optimization engine, or OpenAI’s peculiar nonprofit-turned-capped-profit structure — is not a technical question. It is a question about power. By grounding that question in specific people making specific decisions under specific pressures, Metz makes it legible in a way that abstract policy discussions rarely manage.
The Key Insights
The portrait of Hinton is the book’s spine. What strikes me most is not his intellectual capability, which is well-documented elsewhere, but his relationship to institutional belonging. He cycled through UCSD, Carnegie Mellon, and then Toronto, always slightly outside the mainstream, funded in part by Canadian government money that American academics regarded with mild condescension. His eventual acquisition by Google — through the auction of DNNresearch, the small company he formed with two of his students after winning ImageNet — reads less like a triumphant homecoming and more like a colonization. The technology was absorbed, but so was the man, and Metz is quietly attentive to the costs of that.
The rivalry between Yann LeCun and Hinton threads through the entire narrative, and it is more philosophically substantive than mere personality conflict. They represent genuinely different intuitions about how intelligence works and therefore how it should be engineered. LeCun’s skepticism about large language models — his insistence that next-token prediction is an insufficient architecture for real understanding — is not just contrarianism. It reflects a deeper disagreement about the nature of cognition, one that remains unresolved. Metz does not adjudicate between them, which is the correct journalistic instinct, but the debate itself is where the intellectual meat is.
The Google Brain and DeepMind tension is equally revealing. Two organizations under the same corporate roof, competing for talent, credit, and the ear of the same executives, developing genuinely different research cultures. DeepMind’s AlphaGo chapter is covered with appropriate drama, but what interests me more is what that achievement revealed about the organizational dynamics: DeepMind was willing to pursue moonshots while Google Brain was closer to the product pipeline, and that difference in mandate produced different science.
Connections to Adjacent Fields
The dynamics Metz describes are not unique to AI. The sociology of science literature — Kuhn on paradigm shifts, Latour on laboratory life — maps almost perfectly onto this story. Hinton’s long period in the wilderness is a textbook case of what Kuhn would call a crisis period: the anomalies are accumulating, the dominant paradigm is creaking, but the gatekeepers of funding and publication are still committed to the old way. The 2012 ImageNet result is a paradigm shift in the technical sense — not just a better result, but a result that made the old research program look retroactively confused.
There is also a rich connection to the economics of talent and location. The concentration of deep learning expertise in Toronto, Montreal, and a handful of American universities before the gold rush began looks, in retrospect, like a classic pre-diffusion state: knowledge clustered in a few nodes, not yet commodified, still held in the minds of specific researchers rather than encoded in industrial pipelines.
Why It Matters
The lesson I keep returning to is about the relationship between individual belief and collective outcome. Hinton believed in neural networks when the field had largely abandoned them. That belief was not irrational — he had reasons — but it was not supported by the consensus either. The history of AI, as Metz tells it, is a history of bets made under genuine uncertainty by people with strong prior convictions. Some of those bets paid off spectacularly. The systems we now debate, regulate, fear, and depend on grew from that combination of stubbornness and luck. Understanding that origin is not just historically interesting. It is a warning about how much depends, at any given moment, on who happens to be in the room and what they happen to believe.