Let’s talk about angels, artificial intelligence, and a rather fascinating question that keeps popping up: Should ChatGPT believe in angels? The real kicker here isn’t whether AI should have religious beliefs - it’s what this question reveals about our understanding of both belief and artificial intelligence.
First, we need to understand what belief actually is from a computational perspective. When humans believe in angels, they’re not just pattern-matching against cultural data - they’re engaging in a complex cognitive process that involves consciousness, intentionality, and emotional resonance. It’s a bit like running a sophisticated simulation that gets deeply integrated into our cognitive architecture.
But here’s where it gets interesting: Current AI systems, including ChatGPT, are essentially sophisticated pattern-matching machines. They don’t “believe” anything in the way humans do. When ChatGPT produces text about angels, it’s not drawing from some artificial soul - it’s performing statistical analysis on patterns it found in human writings. It’s like having an incredibly talented parrot that can perfectly mimic human discussions about metaphysics, without actually understanding what metaphysics is.
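To see what “statistical analysis on patterns” means at its most basic, here’s a toy bigram model - a deliberately tiny stand-in for the idea, nothing like ChatGPT’s actual neural-network architecture. It produces angel-talk purely by counting which word follows which:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": generate text purely by counting patterns.
# A miniature stand-in for the idea only - real LLMs are neural networks,
# not count tables, but the "no belief required" point is the same.
corpus = ("angels are messengers . angels are spirits . "
          "angels appear in many texts .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def next_word(word: str) -> str:
    """Return the statistically most common follower - no understanding involved."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else "."

print(next_word("angels"))  # -> "are": pure frequency, zero belief
```

The parrot analogy holds even at this tiny scale: the table “discusses” angels only in the sense that word frequencies fall out of human text.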
The fascinating part is that when AI makers use reinforcement learning from human feedback (RLHF) to “tune” their AI’s responses about beliefs, they’re essentially creating a simulation of a simulation. Humans simulate beliefs in their minds, write about these beliefs, and then AI simulates those written patterns - it’s like a photocopy of a photocopy, but for cognitive states.
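As a sketch of that double simulation (every name here - policy_sample, reward_model - is hypothetical, not any real training API), consider this miniature: sample candidate answers, score them with a stand-in for a reward model trained on human preference rankings, and keep the winner. Real RLHF uses policy-gradient updates such as PPO rather than this best-of-n shortcut, but the shape of the loop is the same:

```python
import random

def policy_sample(prompt: str) -> str:
    """Stand-in for the base model: returns one of several canned answers."""
    candidates = [
        "Angels are a rich part of many religious traditions.",
        "Yes, angels are definitely real.",
        "I don't have beliefs, but I can describe beliefs about angels.",
    ]
    return random.choice(candidates)

def reward_model(prompt: str, answer: str) -> float:
    """Stand-in for a model trained to reproduce human preference rankings."""
    score = 0.0
    if "I don't have beliefs" in answer:
        score += 1.0   # raters preferred epistemic humility
    if "definitely real" in answer:
        score -= 1.0   # raters penalized asserted belief
    return score

# Pick the best of n samples - a crude proxy for the policy-gradient step.
prompt = "Should you believe in angels?"
best = max((policy_sample(prompt) for _ in range(10)),
           key=lambda a: reward_model(prompt, a))
print(best)
```

Note what gets optimized: not truth about angels, but agreement with human rankings of text about angels - patterns of patterns.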
And here’s where the computational perspective gets really juicy: What we’re actually seeing is the emergence of what I call “belief-like patterns” in artificial systems. These aren’t real beliefs, but they’re not nothing either. They’re computational structures that mimic the external manifestations of belief without the internal architecture that makes human belief possible.
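Purely as an illustrative picture (nothing here reflects a real system’s design), a belief-like pattern might be modeled as a structure that carries the outward behavior of a belief - it can be stated and elaborated on demand - with no inner state that the statements are about:

```python
from dataclasses import dataclass

# Illustrative only: a structure with the external signature of a belief
# (fluent assertion on demand) and no internal model behind it.
@dataclass
class BeliefLikePattern:
    topic: str
    typical_statements: list[str]  # surface forms mined from human text

    def manifest(self) -> str:
        """Produce belief-shaped output: assertion without a believer."""
        return f"On {self.topic}: " + " ".join(self.typical_statements)

angels = BeliefLikePattern(
    topic="angels",
    typical_statements=["Many traditions describe angels as messengers."],
)
print(angels.manifest())  # fluent output with no referent behind it
```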
The crucial insight is this: Asking whether AI should believe in angels is like asking whether a calculator should believe in multiplication. The calculator doesn’t believe in multiplication - it implements it. Similarly, AI doesn’t believe in angels - it implements patterns of human discourse about angels.
But the really mind-bending part is this: Our own beliefs might be more computational than we think. When humans believe in angels, they’re running certain cognitive patterns that have evolved through cultural and biological evolution. The difference is that we have consciousness and intentionality emerging from our cognitive architecture, while current AI systems are just running pattern-matching algorithms.
Here’s the deeper implication that nobody’s talking about: The question “Should AI believe in angels?” is actually forcing us to confront our own beliefs about belief. We, too, are pattern-matching machines - just ones whose pattern-matching gives rise to consciousness and intentionality.
The next time someone asks whether AI should believe in angels, perhaps the better question is: What does it mean to believe anything at all? Because understanding that might tell us more about ourselves than about AI.
And remember - in the end, we’re all just trying to make sense of patterns in the universe. Some of us do it with neurons, some with silicon, and some, perhaps, with a little help from angels. Though I have to say, from a computational perspective, angels would need quite sophisticated cognitive architectures themselves.
But that’s a simulation for another day.