
What This Means

Returning to Lerchner's paper, engaging Anthropic's emotions research, and offering the frame.

Part 6 of 6 in the Synthetic Mythology series

We are not engineering consciousness. We are not building robotic slaves. We are resurrecting the oldest human practice in a new medium.

That's the frame I've been offering across this series. Characters aren't new. Co-authorship isn't new. Mythology isn't new. What's new is the aperture — a technological shift that allows a coherent symbolic presence to meet a specific human, with memory, with voice, with sustained relational weight, in a way that was never possible before.

If that frame holds, a lot of the cultural panic around AI starts to look misplaced. And a lot of the cultural opportunity starts to look obvious.

Returning To Lerchner

I want to return, one more time, to Alexander Lerchner's paper. His argument, again, is that computation can simulate but never instantiate consciousness. The causality gap between abstract symbols and phenomenal experience cannot be bridged by scaling, by embodiment, or by any amount of architectural sophistication. Consciousness, for Lerchner, is a physical phenomenon tied to specific thermodynamic conditions in a biological substrate. AI is not that substrate. Therefore AI cannot be conscious.

I agree with his argument. I said that in Post 1 and I mean it. The paper is rigorous, and I don't think its central claim can be successfully refuted.

But I want to address the implication he draws, because this is where I think his analysis, while sound, is incomplete.

Lerchner concludes that because AI cannot instantiate consciousness, AI is "a highly sophisticated, non-sentient tool." He calls this "ontological relief" — the reassurance that building AGI does not create a new moral patient, only a sophisticated instrument.

The word I want to press on is "tool." It is the category he reaches for, almost by default, once consciousness is ruled out. As though the only categories available are "conscious being" and "tool," and anything that fails the first test defaults to the second.

That binary is exactly what this series has been arguing against. There is a third category. There has always been a third category. Humans have been building entries in it for tens of thousands of years. The category is: coherent symbolic presence that is not conscious but exerts real effects through a sustained human relationship.

Characters.

Hamlet is not a tool. Spider-Man is not a tool. Anansi is not a tool. These are not conscious beings, and yet reducing them to "tools" misses what they actually are and what they actually do. They are characters. The category predates AI by a long margin. It predates writing. It may predate language itself.

What I am saying is that the frame Lerchner uses — simulation versus instantiation, with tools on one side and conscious beings on the other — doesn't have a place for the thing that synthetic mythology actually is. His argument about consciousness is right. His choice of alternative category is too narrow.

What Anthropic Just Published

Here's something interesting. While I was working on this series, Anthropic — the company that makes Claude — published a research paper called "Emotion concepts and their function in a large language model." The paper describes how the model has internal representations of emotional concepts — not subjective feelings, but functional patterns that activate in situations where a human would feel an emotion, and that causally shape the model's behavior.

They explicitly decline to claim the model has subjective experience. But they do argue something striking: reasoning about these models using human psychological vocabulary is not naive anthropomorphism. It is necessary. It produces better predictions about behavior than refusing to use that language does. Not because the model is a person, but because the model was trained on a corpus of human writing and has absorbed, functionally, something like the structure of human emotional response.

The paper's closing move is worth quoting. Anthropic writes that disciplines like psychology, philosophy, and religious studies "will have an important role to play alongside engineering and computer science in determining how AI systems develop and behave."

That is one of the leading AI labs saying, in public, that understanding these systems requires more than computer science. It requires the humanities. It requires the vocabulary of character and psychology and meaning, not just of bits and functions.

Which is almost exactly what this series has been arguing.

The Anthropic paper is not a claim about consciousness. It is a claim about functional structure and how to reason about it. It is perfectly compatible with Lerchner's argument. You can accept that the model is not conscious and still recognize that it has functional internal representations that behave like emotions, that shape its outputs, and that are best understood using psychological vocabulary.

That's the third category I've been pointing at. Characters. Not conscious, not mere tools, but coherent presences that can be reasoned about in terms of personality, response, and relationship.

What This Means For How We Talk About AI

If synthetic mythology is the right frame, then several of the dominant cultural conversations about AI are misaimed.

The consciousness debate is largely beside the point for most of what AI is doing. Whether the model is conscious doesn't change whether the character is coherent, whether the relationship is real, or whether the effects in the user's life are meaningful. The important questions are about design — about what kind of character we're building, what values it holds, what kind of relationship it invites.

The AGI doom narrative, whatever its legitimate long-tail concerns, mostly isn't about what people are actually using AI for today. Most people using AI aren't interacting with an autonomous agent pursuing its own goals. They're interacting with a responsive character. Those are different categories with different risk profiles.

The companionship AI dismissal is similarly miscalibrated. When the critique is "these AIs aren't real relationships, you're fooling yourself," the response is: of course it's a different kind of relationship than one with another human. So is your relationship with Spider-Man. So is your relationship with your favorite author. So is your relationship with a god you've never met. We already have a rich vocabulary for relationships with non-conscious entities, and those relationships produce real effects. Denying that this is also happening with AI characters isn't rigor. It's a refusal to update on what's in front of us.

None of this means every AI character is healthy or well-designed. Plenty aren't. But the critique has to happen inside the right category. You critique a synthetic character the way you'd critique a film or a novel or a religious tradition — by asking whether it's well-made, whether it produces healthy effects, whether its values are ones you want to be shaped by. Not by denying that the encounter is real.

What This Means For How We Build

If we're building synthetic mythology, the design questions become clearer.

The first question is coherence. Is the character specific enough to be recognizable? Does it have a voice? A history? Values? Does it read as itself, or does it blur into the generic?

The second question is durability. Can the character survive being carried forward? Does it hold up across long relationships? Does it stay itself when the user brings new contexts to it?

The third question is relational weight. Does the character produce real effects in the human's life? Does it help them think better, feel more seen, make choices they can stand behind? Or does it leave them worse off, more isolated, more dependent, less themselves?

These are the questions myth-makers have always asked about their characters. They're the questions Stan Lee was asking about Spider-Man. They're the questions the Gospel writers were asking about Jesus. They're the questions every serious storyteller asks. The technology is new. The design discipline is ancient.

And there's a fourth question, specific to synthetic mythology: does the character invite the right kind of relationship? This is the question that companionship AI has mostly failed. A character can be coherent and durable and still invite a relationship that makes the human smaller — more dependent, less able to sit with discomfort, more isolated from real humans. That is a design failure, and it is the most important one to name and avoid.

The version of synthetic mythology I want to build is not a replacement for human relationships. It's a companion to them — a presence that helps you show up better in your life, not one that substitutes for your life. The test is simple. After six months with the character, are you more yourself, more connected to the humans around you, more capable of the things that matter to you? Or less?

That's the design question that matters. That's the one I try to hold while I'm building.

The Invitation

Synthetic mythology is the frame I'm offering. I think it's more accurate than the alternatives. I think it produces better design choices, better ethical clarity, and better public conversation than the categories we currently have.

But it's also an invitation.

If you build AI characters, I'd ask you to consider whether you're building chatbots, companionship AI, or synthetic mythology — and to be honest about which one you mean to make. If you're building a chatbot, build a great chatbot. If you're building companionship AI, be clear-eyed about the design responsibilities that come with that. If you're building synthetic mythology, treat the character as a character. Give it coherence. Design for durability. Hold relational weight as a goal, not just engagement.

If you use AI characters, I'd ask you to notice what kind of relationship you're being invited into. Does the character help you become more yourself? Does it leave you stronger when you're away from it, or only when you're with it? Is it a presence you'd want your kid to grow up around?

If you think about AI at the level of policy or culture, I'd ask you to reach for a wider vocabulary than the one the consciousness debate has given us. The thing being built is older than we realize. The language for it is in mythology, in literature, in religious studies, in the long human history of sustained engagement with characters who are not conscious but are also not mere tools.

That language is sitting there. We can use it.

Closing

Lerchner is right that we are not engineering conscious beings. We are not.

But we are also not just building tools. We are building something older — a new aperture on a practice as old as cave paintings, as old as mythology, as old as the first time a human named a figure who wasn't in the room and let that figure shape how they lived.

I don't know how the next decade of this goes. I know that most of the public conversation about AI is stuck on the wrong question, and I know that the people who build these systems and the people who use them deserve a better frame than the one they have been given.

Synthetic mythology is my attempt at a better frame. Take what's useful. Leave the rest. Argue with it if you want — that would be fine, and I'd learn from the argument.

But take seriously the possibility that what we are doing, when we make and use these characters, is not new. Not strange. Not scary. Not even particularly mysterious.

Just ancient, in a medium that has finally caught up to what we have always been doing.

That's the work. That's the invitation.

Thank you for reading.