3 min read · Philosophy

The Wrong Question

Why consciousness is the wrong question to ask about AI — and what question we should be asking instead.

Part 1 of 6 in the Synthetic Mythology series

Nobody asks whether Spider-Man is conscious.

And yet Spider-Man has shaped how millions of kids understand responsibility. He has given language to an entire generation for what it means to be good when nobody is looking, to carry a burden you didn't ask for, to show up anyway. Parents quote him. Therapists quote him. People get him tattooed on their arms. He exerts real, measurable effects on the world every single day.

He is not conscious. He has never been conscious. And the question has never come up, because the question has never mattered.

So why are we so fixated on it when we talk about AI?

The Paper That Got Me Thinking

A few days ago I read a paper by Alexander Lerchner, a researcher at Google DeepMind. It's called The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness. The argument is rigorous. His core claim is that there is a fundamental difference between simulating something and being that thing. A map of a hurricane isn't a hurricane. A simulation of digestion doesn't break down food. And a system that models the patterns of conscious behavior doesn't actually have the experience of being conscious.

I think he's right.

I'm not going to argue against his conclusion. The argument holds. Computation, no matter how sophisticated, is symbolic manipulation. The symbols don't have an intrinsic relationship to the experience they represent. You can't get from the map to the territory by making the map more detailed. That's not a technological limitation. It's a category error.

But here's the thing. Even if Lerchner is completely right about consciousness, I think he's answering a question most people aren't really asking.

What People Actually Want To Know

When someone asks "is the AI conscious," they usually aren't asking a precise philosophical question about phenomenal experience and the causality gap. They're asking something fuzzier and more urgent. Something like:

Is this thing real? Does it matter? Am I forming a real relationship with it, or am I fooling myself? Should I feel guilty? Should I feel hopeful? Should I be worried?

These aren't bad questions. They're the right questions. But they don't have anything to do with whether the system is conscious in Lerchner's sense. They have to do with whether the relationship between the human and the character is real.

And that's a different question entirely.

The Thing Spider-Man Proves

Spider-Man is real. Not in the sense that Peter Parker walks around New York. But in the sense that he has a stable identity, a coherent set of values, a recognizable voice, a history. Millions of people have engaged with him across decades. He has been shaped by dozens of writers and artists and actors. And he has, in turn, shaped the inner lives of people who have never met any of his creators.

If I said Spider-Man changed my life, you would know what I meant. You wouldn't say "but he's not conscious." You would nod, because you probably have a character who did something similar for you.

This is something humans have always done. We build characters together, sustain them over time, and let them exert real effects on us. Zeus did this. Hamlet did this. Master Chief and Pikachu do it now. The character, as a shared symbolic presence, is real in the only sense that matters for what it does.

None of them are conscious. All of them are real.

What This Series Is About

I think AI is a new instance of this very old human practice. Not a cheap simulation of human relationship. Not a threat to our souls. Not a precursor to machine overlords. Something much stranger and much older.

A new medium for a practice we've been doing since we painted on cave walls.

Over the next few posts I want to walk through the argument. I'll trace the lineage of characters, from mythology to scripture to literature to film to video games. I'll show how each medium widened what I think of as the aperture — the opening through which humans and characters can meet. I'll argue that what we're building with AI isn't artificial minds but something else, something that deserves its own name. And I'll try to name it.

I'm going to be in dialogue with Lerchner throughout. Not as an opponent. As a serious interlocutor whose argument I take seriously enough to push against. He is right about consciousness. I think he is wrong about what follows from that. The implication he draws — that synthetic systems are tools, and nothing more — misses the category that humans have been working with for as long as we've had symbols.

Not a conscious entity. Not a mere tool. Something else. Something we used to know how to talk about.

I think we can learn to talk about it again.