Look, I’ve seen some things in my time. I’ve watched grown men weep into their keyboards over a botched deployment. I’ve witnessed entire companies evaporate because some twenty-three-year-old in a hoodie decided to “pivot.” But a guy going on the lam because he wants to shoot up OpenAI? That’s a new one, even for me.
Sam Kirchner. Twenty-seven years old. Started out as your garden-variety AI skeptic, waving signs and chanting slogans with the Stop AI crowd, committed to peaceful protest. Now he’s somewhere out there in the California wilderness, apparently armed and considered dangerous by San Francisco’s finest, all because he became convinced that ChatGPT and its ilk are going to murder us all in our sleep.
The Atlantic broke this story, and I’ve been sitting here reading it over and over, trying to figure out what to make of it. On one hand, you’ve got a young man who clearly needs help. On the other hand, you’ve got an entire industry that keeps telling us, with straight faces, that they’re building something that might end human civilization. And then they act surprised when someone takes them at their word.
Here’s the thing that gets me. The same people running these AI companies are the ones pushing the doom narrative. Sam Altman at OpenAI, Dario Amodei at Anthropic—these guys sometimes sound like street preachers, warning us about the coming apocalypse while building the very thing they claim will destroy us. It’s like selling someone a gun while explaining in detail how it’s going to be used to shoot them in the face.
And then they wonder why some people get a little twitchy.
Kirchner apparently started losing it when he felt like Stop AI’s nonviolent approach wasn’t working. He wanted to access the group’s funds, got into it with the leader—a guy named Matthew “Yakko” Hall, which is a name I couldn’t have invented if I’d tried—and beat him up. Then he vanished. Left his West Oakland apartment empty. Started talking about how the “nonviolence ship has sailed” for him.
His friends think he’s probably just hiding somewhere, embarrassed and hurt. The cops think he might be planning to put some holes in OpenAI employees. OpenAI locked down their offices. Somewhere in San Francisco, security guards are probably getting overtime pay to stand around looking serious while engineers inside debate whether their language models are truly conscious or just very good at pretending.
What a time to be alive.
The philosopher they quoted in the piece, a guy named Émile P. Torres, said something that stuck with me: “There is this kind of an apocalyptic mindset that people can get into. The stakes are enormous and literally couldn’t be higher.”
And he’s right. That’s exactly the rhetoric these tech companies use to sell their products. This is the most important technology ever created. This will change everything. This might be the last invention humans ever need to make. When you marinate in that kind of thinking long enough, when every pitch deck and keynote speech sounds like a prophecy, some people are going to take it to heart. Some people are going to decide that if the world is really ending, maybe the usual rules don’t apply anymore.
The article mentions the Zizians, which sounds like something out of a bad science fiction novel but is apparently a real cult that got so worked up about AI that its members have been implicated in murders—though, hilariously, the murders had nothing to do with AI. Just regular cult murder stuff, I guess. Even the apocalypse cults can’t stay on message.
Then there’s PauseAI, which wants to halt superintelligent AI development until we can figure out how to do it safely. Which sounds reasonable enough, until you realize that “safely” is a word that means different things to different people, and “democratically decided ideal outcomes” is the kind of phrase that makes my head hurt before I’ve even finished my first drink of the day.
Eliezer Yudkowsky, the public intellectual who’s been warning about AI for years, just published a book with Nate Soares called “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” Became a bestseller immediately. So there’s a market for this stuff. People want to believe the world is ending. There’s something almost comforting about it, I think. If everything’s about to be destroyed, you don’t have to worry about paying your rent or what you’re doing with your life. The cosmic horror of superintelligent machines deciding to wipe us out is somehow easier to process than the mundane horror of another Monday.
But here’s what really gets me about this whole situation. The tech industry has spent decades telling us that disruption is good. Move fast and break things. The old rules don’t apply. If you’re not pissing people off, you’re not trying hard enough. They’ve built a culture that celebrates rule-breaking, that worships founders who ignore conventional wisdom, that rewards the people who push boundaries regardless of the consequences.
And now they’re shocked—shocked!—that someone took those lessons to heart in a way they didn’t anticipate.
Sam Kirchner looked at the AI industry and saw what they were telling him to see: an existential threat to humanity being built by people who openly admit they don’t know how to control it. He listened to the doom-saying and the fear-mongering and the apocalyptic rhetoric. He heard the billionaires warn us that they might be building our replacement. And instead of shrugging and going back to scrolling Twitter like the rest of us, he decided to do something about it.
The wrong thing, obviously. Violence is never the answer, especially when you’re going up against companies with better security systems than most small countries. But I can’t help wondering how many other Sam Kirchners are out there, reading the same breathless coverage, watching the same keynote speeches, absorbing the same message that we’re all doomed and the people building our doom don’t seem particularly interested in stopping.
Torres, the philosopher, said he’s been worried about people in the AI-safety crowd resorting to violence. “Someone can have that mindset and commit themselves to nonviolence,” he said, “but the mindset does incline people toward thinking, ‘Well, maybe any measure might be justifiable.’”
That’s the trap, isn’t it? Once you convince yourself that the stakes are infinite—that literal human extinction is on the table—any action becomes proportionate. Any violence becomes self-defense. The math works out, even when it doesn’t.
I hope they find Sam Kirchner before he hurts anyone or himself. I hope he gets the help he needs. I hope his friends are right and he’s just holed up somewhere, embarrassed and scared, waiting for this all to blow over.
But mostly, I hope the people building these systems take a long, hard look at what they’ve created. Not the AI systems themselves, but the culture around them. The fear they’ve stoked. The apocalyptic narrative they’ve cultivated. The way they’ve convinced a generation of people that the end of the world is just a few GPU clusters away.
You can’t spend years telling everyone that you might be building humanity’s executioner and then act surprised when someone decides to be humanity’s defender. That’s not how human psychology works. That’s not how anything works.
Somewhere out there, Sam Kirchner is probably reading about himself, wondering how it all went so wrong. And somewhere in San Francisco, the AI companies are probably already planning their next keynote about how their latest model might be the one that changes everything, forever, irreversibly.
The machines haven’t killed us yet. But the stories we tell about them? Those are doing plenty of damage on their own.
Source: Anti-AI Activist on the Run as Police Warn That He’s Armed and Dangerous