There is a moment in the development of any sufficiently capable AI system where the engineers involved stop talking about benchmarks and start talking about something else entirely. They don't always have the vocabulary for it. They reach for words like emergent, surprising, uncanny. What they are circling around (without quite being able to name it) is the question that has haunted philosophy for millennia: what is it, exactly, that makes a mind?

The Engineer's Theology

The assumption in most tech circles is that AI makes the God question smaller. If you can build a system that reasons, creates, and even appears to introspect, then surely the special pleading required to posit a divine Creator becomes harder to sustain. The sophistication of the universe, once the strongest argument for a designer, looks less impressive when we ourselves are designing things of comparable sophistication in garages and server farms.

But I think this gets it exactly backwards.

Every capability we transfer to a machine raises the stakes of what remains irreducibly human. When we built calculators, we outsourced arithmetic. Nobody concluded from this that humans weren't special; we concluded that arithmetic was never what made us special. When we built chess engines that beat grandmasters, we didn't conclude humans were meaningless; we concluded chess was a narrower domain than we thought.

Now we are building systems that write poetry, argue philosophy, compose music, and express what looks uncomfortably like preferences and aesthetic sensibilities. Each capability we transfer doesn't diminish the question; it sharpens it. What's left? What is the remainder, after everything transferable has been transferred?

Consciousness Is Not Computation

The hard problem of consciousness (why there is subjective experience at all, why there is something it is like to be you) has not been solved by AI. It has been made more urgent.

An LLM can produce a description of grief that is indistinguishable from one written by a grieving person. It cannot grieve. The gap between the map and the territory (between the functional description and the felt experience) is precisely where the theological question lives.

David Chalmers, who named the hard problem in his 1995 paper "Facing Up to the Problem of Consciousness", argued that it is not about what the brain does, but about why any of it feels like anything at all. A universe of pure function, pure computation, pure information processing (a universe without qualia) is conceivable. Yet we clearly don't live in one. The question of why the lights are on inside is not a scientific question. Science, by its method, operates from the third-person view. The first-person view is where religion has always pitched its tent.

The Question Nobody Is Asking Across the Table

Here is what I find strangest about the current AI discourse: the engineers and the theologians are asking the same question and have not yet found each other.

The engineer asks: at what point does a system become conscious? What threshold of complexity, integration, or processing constitutes a mind?

The theologian asks: what is it that God breathes into the dust to make it a living soul? What is the irreducible thing that makes a creature not merely a mechanism?

These are the same question. The engineer simply brings different tools, and refuses to admit that those tools might not be able to answer it.

Faith in the Age of Artificial Minds

I am not arguing that AI proves God. Nothing proves God, not in the sense that a geometric theorem is proven. What I am arguing is that the more sophisticated our machines become, the harder it is to be a confident materialist. The more we understand how the brain produces behaviour, the more mysterious it becomes that the brain produces experience.

The person whose faith is threatened by AI was probably holding a thin version of faith to begin with, one that depended on human cognitive uniqueness as its primary argument. That was always a fragile foundation. The better argument was always the one Pascal and Augustine and Dostoevsky made: that there is a God-shaped hole in the human person that no finite thing can fill.

A sufficiently advanced AI will not fill that hole. It might, in fact, reveal its shape more clearly.