Tags: algorithms · automation · labor · decision-making · technology · expertise · economics

Automate This: How Algorithms Came to Rule Our World

The Invisible Hand Has Been Coded

There is a particular vertigo that comes from reading Christopher Steiner’s account of algorithmic takeover — not because the facts are shocking in isolation, but because the accumulation of them produces something like a reckoning. We have spent decades congratulating ourselves on human ingenuity, on the irreducible spark of creativity and judgment that separates us from machines, and Steiner’s project is to dismantle that comfort systematically, domain by domain, with the patience of a good engineer. His central argument is deceptively simple: algorithms did not merely enter human professional life, they replaced the human judgment we thought was most resistant to replacement, and they did so quietly, faster than anyone announced.

The book is less a polemic than a historical excavation. Steiner traces the lineage of automated decision-making from the early quants on Wall Street back through Claude Shannon’s information theory and further still, building a case that the algorithmic conquest of professional domains follows a recognizable pattern. A domain is first considered too complex, too intuitive, too deeply human to be modeled. Then someone, usually an outsider to that domain, decides to model it anyway. The model fails, gets refined, fails differently, gets refined again — and eventually it does not merely compete with human experts, it embarrasses them.

The Pattern Nobody Wanted to Name

What makes this argument necessary is precisely the denial it punctures. Professionals in every field have a vested psychological interest in believing their expertise is non-fungible. Steiner is particularly sharp when he examines how this belief operates as a kind of collective self-deception. The stock trader, the music executive, the radiologist, the journalist — each constructs an account of their work that emphasizes the ineffable, the relational, the contextually sensitive dimensions that no algorithm could capture. Steiner does not call them liars. He calls them understandably human. But the historical record he assembles suggests that whatever feels most irreducibly expert about a job is usually just the part nobody has tried hard enough to formalize yet.

The Wall Street material is where this argument hits hardest. The story of algorithmic trading is not simply one of speed or efficiency; it is a story about epistemic displacement. When high-frequency trading algorithms began operating on timescales measured in microseconds, they did not just outpace human traders — they made the entire conceptual vocabulary of floor trading (intuition, feel for the market, experience reading a room) not merely slower but categorically irrelevant. The human expertise did not become less skilled; it became less real, in the sense that the environment it was adapted to had been abolished underneath it.

Creativity as the Last Redoubt

Where the book becomes philosophically most interesting is in its treatment of creative domains. Music recommendation, literary composition, news generation — these have long been held up as the final frontier, the places where human interiority is so irreducibly present that mimicry cannot substitute. Steiner is appropriately careful here. He does not claim that algorithms are creative in any philosophically robust sense. He claims something more troubling: that in many contexts, we cannot tell the difference, and more importantly, that we stop caring once the output reliably serves our needs.

This is the knife-edge insight the book keeps returning to. The question is not whether an algorithm can be a music journalist or a portfolio manager in some deep ontological sense. The question is whether it can produce outputs that satisfy the functions we previously assigned to those roles. And as those functions become more clearly specified, the answer keeps coming back yes. What looked like rich professional judgment turns out, on inspection, to be a pattern-matching operation over historical data — exactly the thing algorithms do well.

Adjacent Pressures: Economics, Labor, and Epistemology

Steiner’s argument opens into several neighboring territories he only partially explores, which is itself an invitation to think further. The labor economics implications are obvious and have since become a major field of study. Less discussed is the epistemological problem: if algorithmic systems routinely outperform domain experts, what happens to the social authority of expertise itself? We built institutions — law, medicine, journalism, finance — around the assumption that trained human judgment is the appropriate locus of consequential decisions. When that assumption erodes, the institutions built on it face a legitimacy crisis that is distinct from any individual job loss. The doctor who is outperformed by a diagnostic algorithm does not just lose income; she loses the normative basis for her social role.

There is also a feedback question Steiner gestures toward without fully developing: algorithms trained on human decisions inherit human biases, and then those algorithmized biases get laundered through the appearance of mathematical objectivity. The automation of human judgment does not sanitize it. It ossifies it and strips it of accountability.
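The laundering mechanism Steiner gestures toward can be made concrete with a toy simulation. The sketch below is mine, not the book's, and every number in it is invented: a "model" fit to biased historical loan decisions faithfully reproduces the bias, now dressed up as a statistic.

```python
# Toy sketch of bias laundering: a model trained on biased human
# decisions reproduces the bias as "objective" statistics.
# All data, groups, and rates here are synthetic assumptions.
import random

random.seed(0)

# Synthetic history: identical creditworthiness across groups, but
# human reviewers approved group "A" far more often than group "B".
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    score = random.gauss(600, 50)                 # same score distribution
    approve_rate = 0.8 if group == "A" else 0.4   # biased human judgment
    approved = random.random() < approve_rate
    history.append((group, score, approved))

def fit(data):
    """'Train' the simplest possible model: per-group approval frequency."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [appr for grp, _, appr in data if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

model = fit(history)

# The learned model approves wherever the historical frequency
# exceeds 0.5 -- ossifying the human bias rather than removing it.
print({g: round(r, 2) for g, r in model.items()})
print("approves group A:", model["A"] > 0.5)
print("approves group B:", model["B"] > 0.5)
```

The point is not that real credit models are this crude; it is that nothing in the training step distinguishes a pattern worth learning from a prejudice worth discarding.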

Why It Still Matters

Reading this now, several years after its publication, what strikes me is how prophetic the underlying argument was even as the specific examples have aged. The book matters not as a catalogue of which professions have been disrupted — that list is already outdated — but as a model for how to think about disruption before it arrives. The key move is to resist the comfortable assumption that complexity equals safety. Every domain that has fallen to algorithmic replacement was once described as too complex, too human, too rich in tacit knowledge to be modeled. The right response to that description, Steiner implies, is not reassurance. It is to start asking what the model would look like.