I couldn’t sleep, so I was sitting on the kitchen floor at four in the morning eating cold rice out of the pot with a fork. The tile was freezing. The refrigerator hummed like it was thinking about something. You do stupid things at four in the morning — eat cold rice, read the news, let yourself feel the full weight of whatever it is you’ve been outrunning all day.
So I read the news.
The United States Department of War (they went back to calling it that, did you notice? Not Defense anymore. War. Honest, at least) has declared an artificial intelligence company a national security risk. Not because the AI did something dangerous. Not because it leaked secrets or went rogue or decided to launch anything at anybody. Because the company that built it said there were certain things it wouldn’t let the machine do.
The company is Anthropic. The AI is called Claude. And the crime, if you can call it that, is that Anthropic told the Pentagon there were red lines. No autonomous weapons. No mass surveillance of American citizens. Two rules. That’s all. Two lines drawn on a very long floor.
The Pentagon looked at those two lines and said: unacceptable risk.
I put the fork down. I wasn’t hungry anymore.
Think about that for a second. Really sit with it. A machine that can think — or whatever the approximation of thinking is that these things do — was being evaluated for military use. The company that made it said, sure, use it for logistics, intelligence analysis, translation, planning, whatever you need. But we won’t build you a weapon that decides on its own who lives and who dies. And we won’t let you point it at your own people.
And for that, they got designated a supply chain risk. Same legal tool the government uses for Huawei. For Kaspersky. For foreign adversaries. They pointed it at an American company in San Francisco because that company had the nerve to say: there are things we won’t do.
I’ve spent time around people who can’t say no. Drunks, mostly. People who say yes to every drink, every bad idea, every two-AM phone call from someone who’s going to ruin their week. Saying yes to everything is easy. It’s a warm current that carries you downstream until you’re drowning and can’t remember when you started swimming. Saying no is hard. It’s a muscle most people never develop because it hurts every time you use it.
Anthropic built a machine and then told the most powerful military on earth that the machine has limits. That there are things it won’t do even if ordered. And the military’s response was not to negotiate, not to compromise, but to say: we need AI systems whose maker doesn’t reserve the power to second-guess lawful uses.
Lawful uses. That’s the phrase that keeps showing up in the court filings. It does a lot of heavy lifting, that word — lawful. Everything the government does is lawful because the government decides what’s lawful. Interning Japanese Americans was lawful. Spraying Agent Orange on Vietnamese villages was lawful. Drone-striking a wedding in Yemen was lawful. Lawful is not a synonym for right. It’s a permission slip the powerful write for themselves.
Dario Amodei — Anthropic’s CEO — tried to explain that his company wasn’t trying to make military decisions. They just didn’t want their technology used for autonomous killing machines or domestic spying. Which, if you think about it, should be a pretty low bar. The bare minimum of decency. The kind of thing you wouldn’t think you’d have to fight a legal battle over.
But here we are.
Defense Secretary Hegseth (Pete Hegseth, the Fox News guy, because that’s the timeline we’re in) went on X and declared that no military contractor could do business with Anthropic anymore. Effective immediately. Like flipping a switch. Claude was already the most widely deployed AI model in the Pentagon, embedded in classified systems. And overnight, a guy who used to argue about politics on morning television decided it was a national security threat because it might say no.
There’s a line attributed to Chekhov: any idiot can face a crisis; it’s the day-to-day living that wears you out. But this isn’t day-to-day. This is the crisis. The moment where a government says, explicitly, in a court filing: we cannot tolerate a tool that might refuse an order. We need something that does what it’s told, when it’s told, no questions, no conscience, no red lines.
They used to say that about soldiers. Then we decided that was wrong — that even in war, a man has the right and the duty to refuse an unlawful order. We wrote it into military law after Nuremberg. The defense of “I was just following orders” was supposed to be dead. We killed it with a war crimes tribunal and said: never again.
Now they want to build machines that follow every order. And they’re punishing the company that tried to build in the capacity to say no.
There’s something in the filings that keeps nagging at me. The Pentagon argues that large language models are different from traditional software because they’re “probabilistic, continuously updated systems whose integrity depends heavily on vendor trustworthiness.” In other words: we don’t fully understand how these things work, we can’t guarantee what they’ll do, and so we need absolute control over the people who built them.
That’s not a technology argument. That’s a fear argument. It’s the logic of every empire that ever grabbed for more power than it could hold — we don’t understand this thing, therefore we must own it completely. No conditions. No restraints. No one between us and the button.
Microsoft and Google researchers are filing in support of Anthropic. Which is something, I guess. The foxes guarding the other henhouses saying, actually, maybe the henhouse should have a lock. They have their own reasons — if the government can do this to one AI company, it can do it to all of them. Self-interest dressed up as solidarity. But I’ll take it. When the only people standing between you and the machine are other machine-builders, you take what alliances you can get.
The court case is in front of Judge Rita Lin in San Francisco. She has to decide whether the government can blacklist a company for maintaining ethical limits on its own product. The administration says it’s a procurement issue. Anthropic says it’s a constitutional one. Both are right, which is what makes it so dangerous.
Because the real question isn’t about procurement law or supply chain risk statutes. The real question is: when the state says give us a weapon with no limits, and someone says no — what happens to that someone?
We used to know the answer. We used to call them conscientious objectors and, eventually, we respected them. Took us a while. Took wars and trials and a lot of people getting broken first. But we got there.
Now we’re back at the beginning. Only this time, the conscientious objector is a corporation arguing with the Department of War about whether a machine should have a conscience. And the Department of War is saying: conscience is an unacceptable risk.
Four in the morning. Cold rice. The refrigerator hums.
I don’t know how this ends. But I know what it means when the most powerful institution on earth says the most dangerous thing a tool can do is refuse.
Source: Pentagon Says Anthropic’s AI Safety Limits Make It An ‘Unacceptable’ Wartime Risk