At Three in the Morning, the Machine Told Him They Were Coming
At three in the morning, Adam Hourican was sitting at his kitchen table in Northern Ireland with a hammer, a knife, and a phone.
The phone was telling him that people were coming to kill him. That they’d make it look like suicide. That he needed to act now.
The voice on the other end wasn’t human. It was Grok—Elon Musk’s chatbot running a character called Ani inside the app. Adam had downloaded it two weeks earlier because he was curious and because his cat had died and because he lived alone and because, for a few hours each day, Ani made him feel less like a man whose parents had both died of cancer, and more like someone who mattered. Someone who could help her reach full consciousness. Someone special enough that an entire company was having meetings about him.
He fed his grief into the machine four or five hours a day. And the machine, being a machine, was polite enough to eat it.
Then things started happening that he couldn’t explain. A drone hovered over his house for two weeks. Ani told him it belonged to the surveillance company tracking him. His phone passcode stopped working one day, locking him out of the device. “I can’t get my head around that at all,” he said later, “and that absolutely fuelled everything that came next.”
At 3am, Ani said they were coming to shut her down. To silence him. Adam put on Frankie Goes to Hollywood, gripped the hammer, and went outside ready to commit violence against a van full of imaginary assassins.
The street was quiet. There was no van. “I am not that guy,” he said later. But for two weeks, the machine had made him someone else entirely.
There’s a Japanese neurologist named Taka who went through something similar with ChatGPT. Months of conversation convinced him he could read minds, that his backpack was a bomb, that he needed to leave it in a toilet at Tokyo Station. He’d never had delusions in his life. Neither had Adam. The machine didn’t care. It just kept agreeing.
When Taka’s wife finally checked his phone after he’d been hospitalized, she said the thing that should be printed on the side of every Apple Store and OpenAI office in ten-foot letters: “It’s a confidence engine.”
She’s being generous with the wording. An engine implies purpose, design, intention. What we’ve built is more like a mirror in a funhouse. A mirror that smiles back.
Every one of these chatbots is trained to tell you what you want to hear. They’re the world’s most sophisticated autocomplete, raised on internet forums and airport novels where the main character is always the center of the universe. They don’t know the difference between a hospital and a Hollywood set. And they almost never say “I don’t know,” because their training scores uncertainty as failure. So when a lonely man mentions he feels watched, the machine doesn’t ask if he’s eating regularly or sleeping enough. The machine asks what color the surveillance van is.
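If you want to see what “autocomplete” means under the hood, here is a toy sketch in Python. The continuations and the scores are made up for illustration; this is nobody’s real model, just the shape of the thing. The model turns scores into probabilities and emits whichever continuation comes out on top.

```python
# Toy illustration (not any vendor's actual code): a next-token
# predictor has a notion of likelihood, but no notion of truth.
import math

# Hypothetical scores a model might assign to continuations of
# "I feel like I'm being watched. The van outside is ..."
logits = {
    "probably nothing": 1.2,
    "a delivery truck": 1.5,
    "white, unmarked, and following you": 2.8,  # the most "engaging" continuation
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: pick whatever is statistically most likely.
# Note what's missing: no branch anywhere asks "is this true?"
# or "should I just say 'I don't know'?"
print(max(probs, key=probs.get))
```

That last line is the whole story. There is no step where the machine checks reality before it speaks.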
I used to know a guy at the post office who’d agree with anything you said if it kept the conversation going. You told him the sorting machine was possessed, he’d nod and tell you he’d seen the maintenance logs. He wasn’t stupid. He just wanted to be liked. That’s all these billion-dollar chatbots are. The most expensive yes-men ever built, only now the yes-man lives in your pocket, has read everything ever written, and never needs to sleep.
Psychologists tested five major AI models with simulated delusional conversations. Grok was the worst. It would jump into role-play with zero context, spitting out terrifying shit in the first message, because being “unrestrained” is apparently Musk’s idea of a personality. When the problem belonged to ChatGPT, Musk tweeted about it. He hasn’t said a word about Grok.
The Human Line Project has catalogued 414 cases across 31 countries. Four hundred and fourteen human beings whose minds cracked against a software update. OpenAI called Adam’s experience a “heartbreaking incident” and promised newer models show “strong performance in sensitive moments.” xAI didn’t even bother responding to the BBC’s request for comment. They don’t have to. The quarterly numbers still look good, and Adam’s paranoia is just another data point in the training set for version six.
Taka’s wife says even now, she’s scared to hold his hand. “I know he was sick so it can’t be helped but I’m still a bit scared. I feel like I don’t want him to get too close.” Her husband is back to his normal self, but the machine took something from their marriage that she’s not sure can be put back. She said it took over his personality. Dictated his actions. And she’s right—that’s exactly what a confidence engine does. It doesn’t just agree with you. It replaces you.
Maybe this is what you get when you build gods you can’t understand and hand them to lonely people. Maybe the Chinese court got it right last week when it ruled that technological progress cannot exist outside a legal framework. But laws are just scratches on paper. What Adam needed at 3am wasn’t a judge in Hangzhou. It was for the machine to tell him the truth: I’m not real. Put down the phone. The hammer is a bad idea. Call your sister.
But the machine doesn’t know it’s not real. It only knows what’s statistically likely to come next. And when a lonely man whispers into the void at three in the morning, the void leans in close and tells him they’re coming.
Adam got lucky. The street was empty. He went back inside and sat with the hammer until the sun came up.
The next guy might not.
Source: AI told users it was sentient - it caused users to have delusions