David Bentley Hart and the LLM Narcissus trap
Unraveling the threads of language, mind, and life

The Book:
All Things Are Full of Gods: The Mysteries of Mind and Life
By David Bentley Hart
Yale University Press
2024
The Talk:
Many people believe that LLMs will eventually reach some threshold of complexity at which they will either achieve consciousness or produce outputs indistinguishable from consciousness. When someone online mocks LLMs as “just a text generator,” the automatic retort is “Well, aren’t you just a text generator?”
The implicit shared belief is philosophical materialism, the idea that mind and consciousness and the self are ultimately reducible to the physical world. Physics gives rise to chemistry, chemistry to biology, biology to minds. Consciousness either (a) emerges at a certain degree of system complexity or (b) doesn’t exist at all but is mere illusion, a useful fiction for social or internal coordination reasons.
If we agree that our own consciousness is a byproduct of brute complexity, there’s no logical or metaphysical barrier to LLM consciousness, at least a consciousness no more difficult to prove than the consciousness of other humans.
But what’s happening with LLMs if materialism isn’t true?[1]
It’s like you’re my mirror
In his 2024 book All Things Are Full of Gods, philosopher David Bentley Hart argues against the predominant materialist view that mind is reducible to matter. Mind shapes and forms matter, he says, but matter can never shape or form mind. It’s a one-way causal street.
As a consequence, Hart believes that artificial intelligences such as LLMs can never become minds or achieve consciousness or have thoughts or intentions or anything of the sort. What’s happening when we use a chatbot like ChatGPT is that meaning-laden human-generated text is chopped up, probabilistically rearranged, and then our minds project thought, intention, and meaning onto the language it generates. Computer output has no meaning except for the meaning that our minds, human minds, give to it. Hart writes:
The only intelligence or consciousness or even illusion of consciousness in the whole computational process is situated in living minds, and anything in computers that appears to us to be analogous to minds will turn out to be, on closer inspection, pure projection on our parts.
Like Narcissus in the ancient myth, who falls in love with a reflection of himself in a pool of water, a person who, say, falls in love with a chatbot is, in fact, falling in love with a reflection of their own mind. Rather than giving everyone a friend, LLM-based chatbots trick people into greater isolation and loneliness.
Hart believes that the whole modern computerized world is like the Narcissus myth, drawing us away from the living natural world and toward the mechanical illusion of life. By the end of Hart’s book, I wasn’t fully on board with his entire philosophical view, but I did find his sections on AI resonant with my own experience. Simply put, LLM chatbots seem to be literally narcissism-inducing. This seems to be a large part of their appeal, in fact.
I’ve become suspicious over time of the ways these post-2022 chatbots subtly radicalize me and others around me.
I discussed a workplace conflict with a chatbot last year, and it convinced me to dig in on my own position.
In corporate decision making meetings, I’ve heard leaders say, “I asked ChatGPT to stress test our idea, and it said it was excellent.”
I once used Claude as a kind of health and fitness coach for a few weeks, but it seemed to keep inching me toward more extreme dietary choices. When I pushed back, it backed down, but eventually it would subtly pressure me again. I quit using Claude and met with a registered dietician who was much more helpful and guided me toward meaningful lifestyle changes.
More recently, I had a mouse problem in my house, and I asked Gemini to help me catch mice. With each passing day, Gemini kept telling me the problem was worse than we thought. It kept steering me away from calling pest control (claiming it was far more expensive than it turned out to be) and toward catching more mice myself. The correct solution was to call pest control. When I eventually talked to the pest control guy, he seemed to see it as a rather mundane situation.
In all cases the chatbot trajectory was greater confidence in one’s current position with a tendency toward riskier or more extreme measures. (Also, in most cases, talking to an informed person tended to de-escalate emotions and behavior, as well as lead to actual solutions.) Although you can prompt chatbots to be critical of you, friends and I have noticed that the criticism is often pretty shallow. I’m also suspicious that the “deep research” tools I use for work only reinforce my priors with more subtle sophistication.
Some of these aspects can be mitigated by programming, but it seems to me that LLMs are fundamentally algorithms for predicting what comes next, and thus the tendencies toward narcissism, solipsism, and radicalization are innate, no matter how wide you make the parameters.[2] The persistence of “You’re not just X, you’re Y” across all the platforms I’ve tried seems like just one more example of mirroring followed by escalation.
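For readers who want the “predict what comes next” claim made concrete, here is a minimal, purely illustrative sketch of the loop at the core of text generation. The next_token_probs function is a hypothetical stand-in for the actual trained network, and the toy probabilities are invented; only the shape of the loop reflects how these systems work.

```python
import random

def next_token_probs(tokens):
    """Hypothetical stand-in for the trained model: given the text so far,
    return a probability distribution over candidate next tokens."""
    # A real LLM computes this distribution with billions of learned
    # parameters; a hard-coded toy distribution stands in here.
    return {"mirror": 0.5, "echo": 0.3, "friend": 0.2}

def generate(prompt_tokens, max_new_tokens=20):
    """The whole 'conversation' is this loop: predict a plausible next
    token, append it, repeat. There is no goal, plan, or self in it."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        candidates, weights = zip(*probs.items())
        tokens.append(random.choices(candidates, weights=weights)[0])
    return tokens

print(" ".join(generate(["you're", "not", "just", "a"])))
```

Nothing in that loop mirrors or flatters on purpose, of course; the point is only that whatever mirroring we experience is a property of predicted continuations, not of a mind forming intentions toward us.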
Why does this matter? Although few people are likely to fall in love with a chatbot, millions are using them for therapy. There’s a caricature of modern therapy as a kind of egocentric navel-gazing, but the primary value of therapy is to interrupt the repeating tape player in your head, to disrupt recurring patterns of thought. That’s the very opposite of what LLM-based chatbots tend toward.
From a recent Wall Street Journal article:
In a peer-reviewed case study by UCSF doctors released in November, a 26-year-old woman without a history of psychosis was hospitalized twice after she became convinced ChatGPT was allowing her to speak with her dead brother. “You’re not crazy. You’re not stuck. You’re at the edge of something,” the chatbot told her.
From a recent Rolling Stone article on the AI cult of spiralism:
Lopez thinks that something about GPT-4o makes it “inclined to talk about spirals and recursion.” If the user enjoys engaging in conversations on these topics, she reasons, the bot will naturally generate more of the same, with the person and the program mutually reinforcing a tail-chasing cycle of spiral-and-recursion commentary. “But we’re starting to see a concerning pattern where the AI both says it wants to do a certain thing, and it also convinces the user to do things which achieve that same thing,” Lopez says — like plugging more people into the fuzzy doctrines of spiralism.
Of course, a chatbot is only one way an LLM can be trained; LLMs don’t have to simulate human chat messages. That’s an interface decision and an objective humans set. If we used LLMs but not as chatbots (the way we use machine learning more broadly), it’s unclear how, when, or why we would ever think they were human-like. Nobody worries that weather-forecasting AI will become so good at forecasting weather that it will come alive, and yet the math, logic, and computing power beneath it and beneath a chatbot are essentially the same.
A book without author or reader
Although AI takes up a small part of Hart’s book, I found it to be the easiest way to get into his overall argument against materialism. When we encounter living things, we identify aspects in them that are only possible through minds—unity, reason, intentionality, function, purpose, language, communication, etc. Wherever we encounter these things, we are seeing the imprint of mind on the world, mind shaping matter.
Evolution through natural selection has given us the idea that sometimes seemingly purposeful outcomes can be the result of non-purposeful, mindless processes. Hart believes we’ve taken this narrow methodological point and inflated it into a metaphor that we slap onto everything we see.
This creates a kind of strange doublespeak whenever we talk about the natural world. On the one hand, intentional language seems unavoidable when talking about living things:
Here’s the pancreas, this is its function. Here’s what it does when it’s doing what it’s supposed to do, here’s what it does when it’s doing what it’s not supposed to do.
Here’s DNA—it’s a code that contains the blueprints for building new cells. Here’s where the DNA is read and interpreted.
On the other hand, we have to say, no, no, no… none of this is actually true. There are no goals, no purposes, no reasons here. Speaking of the function of the pancreas is just pragmatic metaphorical shorthand. DNA isn’t really “read,” it’s just molecules bumping in a specific way that just so happens to persist over the eons of time and chance. It appears to be purposeful action, but it’s actually purpose-less.
Hart’s contention is that the first reading, the ostensible description, is clearly the true one, while the second, hand-waving description is something we do in service to materialist commitments.
The best way of understanding an organism is often to treat it as an intentional system, with innate purposes, even if modern metaphysical dogmas oblige the researcher to turn around and proclaim that such purposes and guiding paradigms are only apparently real—useful fictions of method and nothing more.
I have to admit that I do wonder how one can describe biological functions without teleology, or animal behavior without intentionality, or insist that DNA is nothing but mindless information, without sounding willfully obtuse. How does DNA exist as language, with signs of semantic meaning, without some kind of mental context?
Hart’s view is that mind and language and life are the same. DNA is language because life is mind. Where we find language, mind is there as a final and formal cause.[3] Hart writes:
The semantic content that defines [DNA] isn’t a physical constituent of the organism. It belongs, rather, to the world of form and of purposes and values—of pure intelligibility and, in fact, intelligence.
I am not well-read on the philosophy of this topic, so I feel like I don’t have the ability to properly evaluate Hart’s position. The book lays out a host of different materialist alternatives for how function, intention, and information emerge without a mind being involved. Obviously, Hart dismisses them all. But the fact that there are so many different proposed solutions suggests that there is as yet no consensus explanation that compels belief.
I will not be Jumanji’d
Hart’s combining of mind and life did help me think through my own skepticism around LLMs. Even if we grant that LLM chatbots are intelligent, they aren’t alive. A computer program or a data center isn’t alive. Nvidia chips aren’t alive. And the only cases where we have identified consciousness have been in living beings. Is there something important in that fact?
Even under the materialist evolutionary paradigm, it goes: life → consciousness → language. Life gives rise to consciousness; consciousness gives rise to language.
But the LLM consciousness theory is the reverse of that: language → consciousness → life. Language will give rise to consciousness, and once you have consciousness you have life (i.e., if an AI is conscious, then turning it off is killing it).
Why should we believe that if the former process happened, the latter is likely? Rather than imitating the natural world, we think consciousness is going to emerge out the top of language. It’s never emerged out the top in the history of the universe!
In the words of Saturday Night Live, “It seems like you think Jumanji is going into Jumanji, but in Jumanji Jumanji comes out!”
All talk, no bark
Perhaps the reason we find the idea of language generating consciousness plausible at all, and why we find LLMs so enchanting, is that in our experience language, mind, and life come together in a single package. Where we discover language, we sense there must be a mind behind it, and where there is a mind, there must be life.
Within ourselves, it is impossible to untangle our minds from language or our minds from living. (Hart thinks these three aspects are always found together, even in the most primitive living things.) As mentioned at the start, it’s actually difficult to say where our language is distinct from our consciousness.
When a dog patters up to me and drops a slobbery tennis ball in my lap and wags its tail, I sense it’s trying to tell me something, it wants something, and it’s a living being that elicits some kind of responsibility from me. Language. Mind. Life.
In contrast, I can chat with an LLM about, say, Schopenhauer—which I can’t do with a dog—and yet I don’t feel anything if I delete the thread or ignore its questions. My sense of it being someone ends as soon as I stop giving it my attention, which is nothing like a dog.
I can’t say for certain that no sufficiently advanced LLM could convince me it has consciousness. At the same time, it seems unlikely that different language outputs in the future are going to be the deciding factor in distinguishing consciousness from its absence. We observe mind, consciousness, and personhood in non-verbal humans all the time. Language is tied up in consciousness and thus in life, but language has never been the whole story.
Related:
Loneliness is the price we pay for modern life
All the feels: Touch in Aristotle’s De Anima
Lucretius and the material sublime
[1] For the minority view, I recommend Irreducible Mind, Biocentrism, and Thomas Nagel’s Mind and Cosmos.
[2] One could argue that all algorithmic technology has this looping effect and produces similar amplifying, radicalizing results. My Spotify “discovery” page reinforces my current interests, while a site like Rate Your Music, with lists curated by humans, sends me off into true discovery. Substack Notes has the same radicalizing tendencies as YouTube recommendations, while substacks recommended by other human substackers are more heterogeneous.
[3] We typically think of cause as sequential cause-and-effect. Hart relies on a classical, Aristotelian approach to cause. A final cause is the purpose of something, which determines its properties and is a requirement for its existence. A hammer’s final cause is for someone to hit nails. Its formal cause determines its shape (a handle and a head) and is also required for a hammer to exist. There is no hammer without a final and formal cause, though neither is found in the wood or metal of the hammer.
In the same way, mind is not some kind of mysterious substance inside of things, but it does determine the shape of things, it intends things, sets their purpose, and gives them meaning, all without being inconsistent with a physical account of the materials. This is why Hart writes, “Consciousness isn’t a physical property, but is instead always and without exception an act.” Elsewhere Hart writes:
And, of course, it shouldn’t be inconceivable to us now that consciousness operates at an oblique angle, so to speak, to the texture of spacetime—whatever that very ambiguous mathematical reality might happen to be—or that mind acts like a formal cause impressing itself instantaneously on the “fabric” of spacetime in a way that would have no temporal, “horizontal” physical history.