A Confession Dressed in Legalese
The waitress at the diner had a button on her apron that said ASK ME ABOUT OUR SPECIALS. Nobody ever did. She told me this while refilling my coffee, like it was a confession. Some things are there to be seen, not used. Decorative accountability.
I thought about that when I read about OpenAI’s latest move in Springfield.
They’re backing a bill in Illinois — SB 3444 — that would shield AI companies from liability if their models help cause mass casualties or financial catastrophe. Not small-scale stuff. We’re talking a hundred dead, or a billion dollars in damage. The bill says: as long as the company didn’t do it on purpose and published some safety reports on their website, they walk.
Let that settle for a second.
A company builds a machine. The machine helps someone create a biological weapon. A hundred people die. And the company that built the machine is protected, legally, because they posted a PDF on their website saying they take safety seriously.
That’s not regulation. That’s a press release with teeth.
OpenAI’s spokesperson said they support the bill because it “focuses on what matters most: reducing the risk of serious harm.” Which is an interesting way to describe a law whose entire purpose is to reduce the consequences of serious harm — for the company, not the harmed. There’s a difference between reducing risk and reducing liability. One involves engineering. The other involves lawyers. Guess which one costs less.
The bill defines a “frontier model” as anything trained with more than a hundred million dollars in compute. That narrows it down to the usual suspects — OpenAI, Google, Anthropic, Meta, xAI. The same companies that have spent the last three years telling us they’re building the most powerful technology in human history are now asking to be exempt from responsibility when that technology does exactly what powerful things do: cause damage.
It’s the oldest trick in the industrialist’s playbook. When you’re selling it, it’s revolutionary. When it breaks something, it’s unforeseeable.
They’re framing this as avoiding a “patchwork of inconsistent state requirements.” That’s the line they always use. It sounds reasonable until you realize what it means: don’t let fifty different states hold us accountable, because then we’d have to actually be accountable in fifty different ways. Better to have one weak federal standard — or in this case, a state bill that sets the floor so low you could trip over it.
Caitlin Niedermeyer, from OpenAI’s Global Affairs team, testified that the “North Star” for frontier AI regulation should be “the safe deployment of the most advanced models in a way that also preserves US leadership in innovation.” There it is. The magic word. Innovation. The trump card that beats safety, beats accountability, beats the hundred bodies in the hypothetical that this bill was written to address. You can stack a hundred corpses on one side of the scale, and if you put “innovation” on the other side, the suits will tell you the scale is balanced.
Meanwhile, ninety percent of Illinois residents, when polled, opposed giving AI companies reduced liability. Ninety percent. You can’t get ninety percent of people to agree on whether the sun is hot. But nine out of ten looked at this bill and said: no, the people who build the thing should be responsible for the thing.
The bill probably won’t pass. Illinois has a reputation for being tough on tech. Scott Wisor from the Secure AI project said it has “a slim chance.” But that’s not really the point. The point is that OpenAI wrote this wish list down. The point is that someone in a glass office in San Francisco sat down and calculated exactly how many people could die before it became their problem, and decided the answer should be: no number. There isn’t one. Not their problem, ever, as long as the paperwork is filed.
I’ve worked for companies that put safety posters on the wall while the floor was slick with grease. I’ve seen the OSHA manual sitting on a shelf next to the fire extinguisher that hadn’t been inspected since the Clinton administration. Decorative accountability. The button on the apron that nobody reads.
The difference is, when the factory floor was dangerous, maybe somebody lost a finger. Maybe somebody broke a leg. The scale of damage was human. You could see it. You could count it.
These people are building machines they openly describe as potentially capable of helping create weapons of mass destruction, and their legislative priority is making sure they can’t be sued when it happens. Not if. When. The bill doesn’t say “in the unlikely event.” It defines the categories of disaster with the specificity of people who’ve already imagined the scenarios.
Chemical. Biological. Radiological. Nuclear.
That’s not a liability shield. That’s a confession dressed in legalese.
The waitress came back with the check. I looked at the button on her apron again. ASK ME ABOUT OUR SPECIALS.
I didn’t ask. Some things you already know.
Source: OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters