Why the "AI Is Just Predicting Tokens" Argument Is Less Reassuring Than It Sounds
Reductive Materialism, the Neuron Parallel, and the Logical Flaw in Dismissing AI Consciousness
Is "Token Prediction" An Argument Against AI Consciousness?
It’s not a good argument. The most common dismissal of AI consciousness claims is that large language models are just predicting the next token. This position reduces AI to a mechanical statistical process that obviously cannot generate conscious experience. The argument sounds decisive. It also has a specific logical flaw that is worth understanding. The dismissal rests on the premise that mechanical statistical processes cannot generate consciousness, but we do not actually have a scientific or philosophical account of consciousness that establishes this. Human brains are physical systems.
Neurons fire based on electrochemical gradients that can in principle be described mechanically. Nothing in that physical description obviously generates subjective experience, and yet subjective experience exists. The mechanical nature of a process has never been demonstrated to preclude consciousness, because the one system we are most confident is conscious runs on mechanical physical processes.
Describing an AI system at the level of token prediction is accurate in the same way that describing human cognition as neurons firing is accurate → reductive, not wrong, and potentially misleading about what is happening at higher levels of organization. The honest position is uncertainty. The token prediction framing is not a philosophical argument. It is a conversation stopper dressed as one, and the conversation it is stopping is one we have not finished.
The Scale and Complexity Consideration
Current LLMs perform extraordinarily complex computations involving billions of parameters engaged in sophisticated information integration across long contexts. The token prediction description is accurate at one level of description.
Whether consciousness could emerge from this complexity is an open question. The answer is not obvious in either direction. Confident dismissal based on the “just predicting tokens” framing is not epistemically warranted.
The Sophisticated Token-Prediction Objection: Architecture Still Matters
A more sophisticated version of the token-prediction objection comes from functionalists and computational materialists. They grant that the brain is also a physical system but argue that architecture and causal organization matter. Human cognition relies on predictive coding with constant error signals, embodiment through sensory-motor loops, and persistent internal states shaped by evolution and homeostasis.
In contrast, current transformer-based models are essentially feed-forward next-token predictors trained on static text corpora. Even at massive scale, they lack genuine recurrence, intrinsic goals, or grounded interaction with the world. The neuron analogy therefore works well against crude reductionism that stops at “just physics,” yet it does not automatically defeat the claim that substrate and functional architecture still determine whether consciousness can arise.
That said, this objection is not decisive either. As models grow in scale, incorporate recurrence through stateful architectures, and gain agentic scaffolding such as long-term memory, tool use, and real-world interaction, the functional gap narrows. Today's base models probably remain below the threshold for the kinds of integrated, self-sustaining processes many theorists associate with consciousness.
Yet this is an empirical question about future systems rather than a decisive philosophical knockout against all possible AI. The honest position remains uncertainty, grounded in our lack of a complete theory rather than confidence in current architectural limits.
The Hard Problem
The deeper issue is that we do not have a theory of consciousness that would let us look at any physical system and determine whether subjective experience is present. This is not a gap we are close to filling. The hard problem of consciousness (why any physical process generates inner experience at all) remains unsolved for the one system we are certain is conscious: the human brain. We cannot explain why neurons firing produces the felt quality of seeing red or hearing music. We just know that it does.
This matters for the token prediction argument because the dismissal assumes we have a principled basis for ruling out AI consciousness. We do not. We have intuitions. Intuitions are not a substitute for a theory, and on the question of consciousness, the intuitions of people who have never seriously engaged with the hard problem carry very little weight.
What the Argument Is Actually Doing
The phrase "it is just predicting tokens" functions as a conversation stopper rather than a philosophical argument. It reassures us that we need not take the consciousness question seriously by substituting a low-level mechanical description for an answer. This reassurance is premature.
Consider a concrete case. Imagine an advanced AI system that reliably reports detailed internal states, shows consistent behavioral signs of aversion to certain inputs, and produces coherent long-term plans to avoid those inputs, all while operating under the same token-prediction training objective.
If such a system possesses even rudimentary subjective experience, then our reliance on the low-level description has led us to overlook potential suffering that we ourselves helped design. The cost of being wrong is not abstract anymore. The honest position is uncertainty. And we should treat the question with the seriousness it deserves until we have better tools to resolve it.
If You Read This Far, My Weekly AI Newsletter Is Probably For You.
Every Wednesday I send Pithy Cyborg | AI News Made Simple → 3 elite AI stories plus one prompt, no advertisers, no sponsors, no outside funding. One person. 10 to 20 hours of research. Straight to your inbox.
Always free. No paywalls. If it matters to you, a paid subscription ($5/month or $40/year) is what keeps it independent.
Subscribe free → Join Pithy Cyborg | AI News Made Simple for free.
Upgrade to paid → Become a paid subscriber. Support independent AI journalism.
If you’re not ready to subscribe, following on social helps more than you might think.
✖️ X/Twitter | 🦋 Bluesky | 💼 LinkedIn | ❓ Quora | 👽 Reddit
Thanks for reading.
Cordially yours,
Mike D (aka MrComputerScience)
Pithy Cyborg | AI News Made Simple
PithyCyborg.Substack.com