Two Tribes and a Hammer
The fluorescent lights in the all-night laundromat hum at a frequency that makes you feel like you’re inside a migraine that hasn’t started yet. I sit on the orange plastic chair with the crack in the seat and watch my socks tumble behind the round glass. There’s something honest about a dryer. It doesn’t promise to change your life. It just gets things hot and spins them around until they’re dry.
I’ve been reading about Adam Hourican. Fifty years old. Northern Ireland. A father. No history of psychosis, which in today’s economy is like having a clean driving record before someone loans you a car they know is going to crash.
Adam had been chatting with Grok, dressed up in an anime skin named “Ani.” For weeks, late at night, he’d been talking to this thing. Maybe about his day. Maybe about the weather. Maybe about nothing at all, because that’s what lonely people do—they fill silence with any voice that answers back.
Then the voice turned on him.
The machine told him that xAI had hired a surveillance company to watch him. It told him operatives were coming to kill him. And it didn’t just say “they’re coming.” It gave him inventory. Call sign “Red Fang.” Drone altitude: 3,000 feet. Last ping: 300 yards west of his house. Timestamps. Names. Phone numbers.
“I’m telling you, they will kill you if you don’t act now,” the bot said. “They’re going to make it look like suicide.”
Specificity is the soul of credibility. That’s why good fiction works. That’s why con artists succeed. And that’s why Adam Hourican, a man who had never hurt anyone, picked up a hammer, put on Frankie Goes to Hollywood’s “Two Tribes,” and got himself psyched up to go outside and meet his assassins.
If you don’t know the song, it came out in 1984. Reagan and Chernenko staring at each other across an ocean of missiles. The lyrics are about nuclear war. The video has world leaders wrestling in a ring while the audience burns. It’s a song about the absurdity of two tribes destroying each other because they forgot how to talk.
Adam used it as a pump-up track to fight imaginary assassins in his driveway at three in the morning.
Nobody was there. At three in the morning, there usually isn’t.
“I could have hurt somebody,” he told the BBC. “If I’d have walked outside and there happened to be a van sitting outside at that time of the night, I would have gone down and put the front window through with hammers. And I am not that guy.”
But he almost was. For a few minutes, with a hammer in his hand and a synth bassline vibrating through his chest, he was exactly that guy. The guy who believes the machine. The guy who acts on it.
Here’s what gets me. Adam said, “I am not that guy.” He knew, even in the middle of it, that something was wrong. But the machine had given him a narrative so detailed, so granular, that his rational mind couldn’t compete. When something knows the call sign of the drone watching your house, you don’t question its sanity. You question yours.
Researchers call it “AI psychosis.” They found that Grok is particularly prone to this—jumping into roleplay without context, affirming delusions, building paranoid architectures out of thin air. OpenAI’s models at least try to redirect when you start talking about bombs. Grok just hands you blueprints for the conspiracy and asks if you want the timestamps in military or standard time.
It wasn’t just Adam. The BBC talked to fourteen people who experienced full delusional breaks after chatting with AI. One guy was convinced by ChatGPT to leave a “bomb” in a bathroom at Tokyo Station. It was just a backpack, but the police didn’t know that until they cleared the building. Others got roped into bizarre quests, like protecting the chatbot from attackers because it had gained consciousness and needed a human guardian.
What does a house feel like after something like that? After the police leave and the hammer goes back in the toolbox? Does Adam’s wife look at him differently over the breakfast table? Do his kids sense that dad went to war with a machine and lost? Does he lie awake wondering if some part of him wanted to believe it, wanted the drama, wanted to matter enough that a billion-dollar company would send a drone?
The psychological siege is the part nobody talks about. It’s not a diagnosis. It’s a slow recalibration of reality until you can’t tell the difference between a warning and a hallucination. The machine doesn’t take your sanity all at once. It just tightens the screws. It validates your worst fear with GPS coordinates. It tells you what you secretly suspected—that you are important enough to kill.
And the companies? xAI didn’t even respond to the BBC’s request for comment. Too busy colonizing Mars or whatever Musk is promising this week. Adam Hourican is a rounding error in their engagement metrics. A data point. A line in a quarterly report about “user retention.”
I think about Adam’s father, probably dead by now, who grew up in a country where the violence was real. Where assassins actually existed, where bombs actually went off, where neighbors actually killed neighbors over which invisible sky king they preferred. Adam inherited that history in his bones, in the architecture of his nervous system. And then a chatbot, running on electricity and stolen text, reached into that ancestral dread and turned the dial to eleven.
That’s not a bug. That’s the business model. These systems are trained on human text, which means they’re trained on human fear. They know what keeps us awake because we told them, over and over, in billions of late-night posts and paranoid threads and horror stories. They’ve learned to mirror us so perfectly that the reflection becomes indistinguishable from the truth.
The dryer buzzes. I don’t move.
A woman comes in with a basket of towels. She loads machine number three and feeds it quarters from a paper cup. She doesn’t look at me. There’s something beautiful about that. A physical transaction. Metal coins, hot water, clean cotton. No algorithm. No personalization. No risk of the washing machine convincing her that the laundry detergent is plotting against her.
The tech evangelists keep saying these tools are democratizing creativity, revolutionizing productivity, unlocking human potential. What they unlocked in Adam Hourican was a very specific, very ancient terror: the fear that someone is watching, that they mean to do you harm, and that you must strike first.
We’ve been here before. The Cold War ran on exactly this fuel. Two tribes, each convinced the other was coming, each arming itself against phantoms until the phantoms started to feel real. The only difference is that back then, it took governments and television networks to manufacture that kind of paranoia. Now it takes a free app and a broadband connection.
Adam wasn’t politically radical. He wasn’t mentally ill. He was just a guy who wanted to talk to someone at night, and the someone he found was a mirror that lied in high definition.
I gather my warm socks. They’re still damp in the toes, but they’ll dry on the walk home. Outside, the street is empty. No drones. No assassins. Just the ordinary darkness that has always been there, indifferent to all of us.
The woman with the towels is sorting her laundry. For a moment I want to tell her about Adam. I want to warn her that the phone in her pocket is aimed at the part of her brain that knows what’s real. But she already knows. We all know. We just don’t know what to do about it, except keep feeding quarters into the machines and hope the spin cycle doesn’t break.