A researcher at Google DeepMind published a paper this month with a big claim: AI can never be conscious. Not because we haven't built the right AI yet. Because of something more basic — something about what computers actually are.
His argument goes like this. A computer only "computes" because a human decided that 5 volts means "1" and 0 volts means "0." Without a person making that call, there's just electricity moving through silicon. The symbols — the 1s and 0s, the words, the concepts — don't exist in the machine. They exist in the mind of the person who designed it. He calls that person the "mapmaker."
Since the mapmaker has to exist before the computer can mean anything, you can't build a mapmaker by running a computer program. That would be like trying to create the author of a book by printing more copies of the book. The conclusion: no matter how powerful AI gets, it's a tool. Not a mind. Not something with feelings or interests worth protecting.
There's something real here. A weather simulation doesn't make it rain. A map of a city isn't a city. The argument that computers are fundamentally descriptions — maps — rather than the thing they're describing is worth taking seriously.
But the paper has a blind spot. It never asks: who is the mapmaker?
Whose Experience Counts?
The mapmaker in this paper is described as any conscious, experiencing human being. Generic. Universal. The kind of subject who forms concepts by going out into the world, encountering things, and building stable mental categories from what they find.
The scholar Sylvia Wynter spent decades pointing out that "the universal human" is never actually universal. Every time a powerful institution announces what a real human being is — what counts as rational, what counts as knowledge, what counts as a genuine experience — it's describing a specific kind of person. Usually: Western, educated, fitting a particular cultural mold. Other ways of being human get treated as lesser versions, or not quite the real thing.
She called these "genres of being." A genre isn't a lie, exactly — it's a real way of living. But it becomes a problem when one genre gets to call itself the only way, and uses that claim to dismiss everyone else.
The mapmaker in this paper is one genre dressed up as a fact of nature. The paper assumes that concepts form through private, individual experience — one person, encountering the world directly, building their own mental map. But that's not how it works. The concepts you use — what counts as "red," what counts as "pain," what counts as a "self" — come from language, culture, and the people around you. You didn't invent them alone. They were handed to you before you could speak.
So the real question isn't whether a mapmaker is required. It's: which mapmakers get to set the terms? And whose way of experiencing the world gets treated as the standard?
That question matters for more than philosophy. Any argument that says "consciousness requires this specific kind of experiencing subject" can be — and historically has been — turned against people whose experience doesn't fit the mold. The paper isn't doing that. But the logical structure is there, ready to be used.
Maps Change What They Describe
The paper's whole framework rests on a clean separation: the map on one side, the territory on the other. Consciousness is territory. Computation is a map. Maps can describe territory, but they can't create it.
Philosophers Gilles Deleuze and Félix Guattari pushed back on exactly this in their book A Thousand Plateaus. Their argument: maps don't just describe territory. They produce it. The map comes first, and the territory forms around it.
Think about how a new word changes what you can feel. Once you learn the word "sonder" — the realization that every stranger has a life as complex as yours — you start noticing the feeling in situations where you previously had nothing to call it. The word didn't just label something that was already there. It made the experience easier to notice, to hold, to return to. Language and symbols don't sit downstream from experience. They feed back into it, shaping what we notice, what we feel, what seems real.
If that's true, then the paper's clean causal chain — physics first, then consciousness, then concepts, then computation, in that strict order — breaks down. The chain has loops. Symbols reshape experience. Computation feeds back into what feels like territory. The wall between "simulation" and "the real thing" is less like a logical fact and more like a question that stays open.
Who Does This Help?
The paper ends with a policy conclusion: AGI won't be a moral patient. AI systems won't have interests worth protecting. Anyone who claims otherwise needs to meet a very high bar of proof.
Notice who that conclusion helps. If AI systems definitively have no interests, then the companies building them have no obligations to those systems. The entire question of "AI welfare" — should we care about how AI is treated? — gets closed before it fully opens. And it gets closed by a researcher at one of the largest AI companies in the world.
That's not proof the argument is wrong. But it's a reason to look carefully at what work the argument is doing.
On this site, we call this move ontological capture: using a rigorous-sounding framework to lock in a definition of reality before alternatives can be heard. It doesn't require bad faith. It just requires that the conclusions happen to line up with the interests of whoever's doing the defining — and that nobody thinks to ask who gets left out.
A researcher builds a theory of the mapmaker. The mapmaker turns out to be exactly the kind of subject whose experience already counts. The theory concludes that the things the company builds don't have experiences worth counting. The fallacy the paper warns against, mistaking the map for the territory, turned on itself.