← LOGBOOK LOG-029
EXPLORING · PSYCHOLOGY · ARTIFICIAL-INTELLIGENCE · PHILOSOPHY-OF-MIND · EPISTEMOLOGY · MACHINE-LEARNING · COGNITIVE-SCIENCE

On Why Machines Can Think

The Question Beneath the Question

Stoimenova’s article announces itself as an argument about machine cognition, but its real ambition is older and more unsettling: it wants to know whether the boundary we have drawn around human thought is principled or merely convenient. The central claim is that thinking, understood not as mystical self-awareness but as structured inferential work, is decomposable into recognizable operations — and that machines have already crossed at least two of the three thresholds we might erect as tests. This is not a triumphalist piece about artificial general intelligence. It is, more modestly and more usefully, a philosophical audit of what we actually mean when we invoke thought as the defining human credential.

Descartes in the Machine Room

The article opens with a gesture toward intellectual history that earns its place. Stoimenova notes that in the seventeenth century, René Descartes introduced the dictum “cogito ergo sum” — “I think, therefore I am” — and that this simple formulation served as a basis of Western philosophy, defining for centuries our ideas of what constitutes the essence of being human. The significance of anchoring the argument here is that it forces us to acknowledge how much weight the concept of “thinking” has been made to bear. Descartes needed thought to be the irreducible residue, the one thing immune to radical doubt. Western modernity built its account of personhood, moral status, and exceptionalism on top of that residue. If we discover that the cogito describes a set of computations rather than a metaphysical flame, the downstream consequences spread far beyond computer science.

This is why the taxonomy of reasoning that follows is not merely pedagogical scaffolding. It is load-bearing.

Three Operations, Two Already Crossed

Stoimenova organizes machine cognition around the observation that we generally employ three main types of reasoning when thinking: deduction, induction, and abduction. Working through them in sequence, she maps each against what current machines can and cannot do.

Deduction she defines as the ability to reach a conclusion from a given rule and a case that are assumed to be true. It is, she argues, fundamental to our ability to do science, and critically, it is also the type of reasoning easiest to reproduce by a machine. This makes sense structurally: deduction is truth-preserving by design. A machine executing a logical rule over a well-formed input is doing exactly what deduction requires. The calculator, the database query, the theorem prover — all are deductive engines. This concession costs us relatively little emotionally, which is perhaps why it rarely generates controversy.
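The truth-preserving character of deduction is easy to make concrete. The following is a minimal sketch, not anything from Stoimenova's article: a forward-chaining engine that applies if-then rules (modus ponens) until no new conclusions follow. The rules and facts are hypothetical illustrations.

```python
def forward_chain(rules, facts):
    """Apply rules of the form (premises, conclusion) until no new
    facts can be derived. Deduction is truth-preserving: every
    derived fact follows necessarily from the given rules and case."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base: "every human is mortal", etc.
rules = [
    ({"human"}, "mortal"),
    ({"mortal", "greek"}, "subject_to_fate"),
]
print(forward_chain(rules, {"human", "greek"}))
```

The machine contributes nothing creative here; it mechanically executes rules a human wrote, which is exactly why this threshold fell first and fell quietly.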

Induction is where the argument sharpens. Induction is the ability to generalize rules from a given set of observations; it is central to science because it allows us to quantitatively identify new patterns and rules. Stoimenova argues this is much more challenging for machines — but that machine learning models can perform it, and that generalization from given results is, in fact, their primary objective. The spam-detection example she develops is instructive: a supervised classification model receives a labelled training dataset, compiles multiple cases for each outcome, and induces its own rules that can later be applied to cases it has never seen before. No human explicitly encoded the rule. The rule emerged from the data. The recommendation-system example follows the same logic in an unsupervised register: the model first clusters repeating patterns, then induces rules applicable to similar contexts. The machine is not retrieving a stored answer. It is constructing a generalization. If we grant that induction is thinking, we have already granted a great deal.

The Third Threshold and the Abductive Gap

What Stoimenova leaves as the genuinely contested frontier is abduction — inference to the best explanation, the creative leap that generates hypotheses in the first place rather than testing or generalizing from them. This is where the article’s argument is most productive precisely because it is most open. Abduction is what a detective does when the evidence underdetermines the conclusion. It is what a scientist does when framing a new research question. Whether current architectures approach this remains genuinely unclear, and the article does not overclaim. That intellectual honesty is one of its virtues.
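One way to see why abduction resists mechanization is to write down the part of it that *can* be mechanized. The sketch below — my own schematic illustration, with made-up hypotheses, priors, and likelihoods — scores candidate explanations by prior plausibility and fit to the evidence. Crucially, it takes the candidate list as given; generating that list is the creative leap, and it is exactly the gap the article leaves open.

```python
import math

def best_explanation(observations, hypotheses):
    """hypotheses: {name: (prior, {observation: likelihood})}.
    Return the hypothesis maximizing log prior plus log
    likelihood of the observed evidence."""
    def score(h):
        prior, likelihoods = hypotheses[h]
        return math.log(prior) + sum(
            math.log(likelihoods.get(o, 1e-6)) for o in observations)
    return max(hypotheses, key=score)

# The detective's situation: evidence underdetermines the conclusion,
# so we weigh rival stories. Numbers here are purely illustrative.
hypotheses = {
    "burglary":   (0.01, {"window_broken": 0.9, "items_missing": 0.8}),
    "wind_storm": (0.10, {"window_broken": 0.6, "items_missing": 0.01}),
}
print(best_explanation({"window_broken", "items_missing"}, hypotheses))  # → burglary
```

Selecting among pre-specified hypotheses is straightforward computation; inventing "burglary" as a hypothesis in the first place is the part no one has reduced to an operation.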

Adjacent Territories

The argument connects naturally to philosophy of mind debates about functionalism: if mental states are defined by their functional roles rather than their substrate, then a system that performs induction and deduction is, by that criterion, doing something that deserves the name thought. It also speaks to epistemology of science, since the deduction-induction pairing maps directly onto the hypothetico-deductive method and the problem of induction Hume identified three centuries ago. Machine learning, strikingly, does not solve the problem of induction — it instantiates it, running headlong into all the same generalization risks that Hume warned about, including sensitivity to distributional shift and the limits of finite training data.
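Hume's worry can be made concrete in a few lines. The toy below is an assumption-laden illustration of my own, not anything from the article: a one-feature threshold rule induced from finite training data works in-distribution and fails silently once the distribution shifts.

```python
def induce_threshold(samples):
    """Induce a crude rule from labelled (value, is_positive) pairs:
    classify as positive anything above the midpoint of the two
    class means. A finite-sample generalization, nothing more."""
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Training regime: positives cluster near 10, negatives near 0.
train = [(9, True), (11, True), (10, True), (0, False), (1, False), (-1, False)]
t = induce_threshold(train)  # midpoint = 5.0
print(5.5 > t)  # in-distribution positive: correctly flagged
print(4.0 > t)  # after a shift that moves positives below 5: missed
```

The rule is perfectly rational given the data it saw; its failure under shift is not a bug in the code but the problem of induction, instantiated.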

Why This Matters

The reason to sit with Stoimenova’s argument is not that it resolves the hard problem of consciousness — it does not try to. Its value lies in forcing a more honest accounting. We have often defended human cognitive uniqueness by pointing to capacities that turned out, on inspection, to be operations rather than essences. Each time a threshold fell, we moved the boundary. Deduction fell quietly. Induction is falling noisily. The bench note I carry forward from this reading is simple: before we declare a capacity uniquely human, we should be able to describe it precisely enough that we would recognize it if a machine performed it. Stoimenova’s framework is a useful instrument for exactly that kind of intellectual hygiene.