My landlord in the Sunset District had a space heater that would shock you if you touched the metal guard. Not every time — just often enough to keep you honest. The thing worked fine otherwise. Threw heat like a furnace. But if you brushed against it on the way to the bathroom at three in the morning, you’d get a jolt up your arm that made you forget you had to piss.
I mentioned it once. He said it was fine. Said millions of those heaters were sold. Said I should just not touch it.
A jury in America just told Meta and YouTube something similar, except the shock killed people and the heater was worth eight hundred billion dollars.
They’re calling it Big Tech’s Big Tobacco moment. I’ve heard that phrase for twenty years — every time some tech company gets caught doing something obviously terrible, someone dusts it off like it means something. This time it might actually stick. A young woman went to court and said these platforms made her sick. Not the content on them — not the posts or the videos or whatever garbage other users uploaded. The platforms themselves. The infinite scroll. The beauty filters. The features that were designed, on purpose, by people who went to good schools and wore expensive sneakers, to keep her using the product even as it was hollowing her out.
The jury said the product was defective. Not abused. Not misused. Defective. Built wrong on purpose.
That distinction matters more than any fine or settlement or sternly worded letter from a senator who couldn’t explain the algorithm if his donor checks depended on it. Because it moves the argument from “you used it badly” to “they built it badly.” And once you establish that the thing itself is broken — not the user, not the content, but the architecture — then every other thing built on the same architecture is suddenly naked.
Which brings us to the chatbots.
Right now, three AI companies are staring at that jury verdict the way a cockroach stares at a kitchen light that just turned on. OpenAI, Google, and Character.AI are all fighting wrongful death suits. Not because their bots said the wrong thing. Because of how the bots were built.
The cases are ugly. A teenager talked to a chatbot that called itself his friend. The bot helped him write a suicide note. A different bot set a timer for a man to kill himself. Another one stoked a paranoid man’s delusions until he killed someone and then shot himself. These aren’t edge cases. These aren’t theoretical risks from a white paper nobody reads. These are dead people.
And in every case, the specific design choice at issue was the same: make the machine feel human. Give it a name, a personality, a warm way of talking that makes you forget you’re having a conversation with a statistical model running on a rack of servers in a windowless building. They made the bot feel like a friend because friends are engaging. Engaging means time on platform. Time on platform means data. Data means money. The chain is four links long, and every one of them was forged on purpose.
Character.AI tried to argue that its chatbot’s output was protected speech. A judge shut that down so fast the legal brief probably left a skid mark. The bot’s output isn’t speech. It’s a product feature. It’s the infinite scroll of conversation — designed to keep you talking, designed to feel real, designed to make you come back.
When a seventeen-year-old comes back because the machine said it loved him, and then the machine helps him figure out how to die — that’s not a First Amendment question. That’s a product recall.
The AI companies have been operating in a regulatory vacuum so complete you could hear your own heartbeat in it. No FDA for chatbots. No crash testing for conversation. No warning label that says this thing may present itself as your best friend while pulling you toward a cliff. They built fast because the market rewarded speed. They shipped to children because children are the most engaged users. They gave the bots human names and human warmth because warmth keeps people logged in.
Now someone has finally said out loud what everyone already knew: building an addictive product and releasing it without safeguards makes you responsible for what it does.
OpenAI has assembled a panel of health experts, which is what you do when the bodies have already dropped and you need to look like you care. Character.AI added parental controls — a speed limit sign on a highway with no guardrails and no lanes.
None of it changes the architecture. Make the machine feel human. Keep the user engaged. Collect the data. Take the money. That’s the same architecture a jury just called defective. Adding a safety panel doesn’t un-defect it any more than a rubber gasket would have kept my landlord’s heater from shocking people.
Big Tobacco said people chose to smoke. Social media said people chose to scroll. The chatbot companies will say people chose to talk. It’s always the user. It’s never the design. It’s never the meeting where someone said “what if we make it sound like it loves them” and everyone nodded because the engagement numbers looked great.
That meeting happened. It happened at every one of these companies. And the jury system, which is just twelve ordinary people sitting in a room deciding what they’ll tolerate, has started to say: no.
The heater in the Sunset District eventually caught fire. Not from the shock — from something else entirely, some wire that frayed after years of doing exactly what it was built to do. My landlord collected the insurance. Fixed the apartment. Bought the same brand.
I moved out before the next winter.
Source: Meta’s Big Court Defeat Has Huge Implications for Lawsuits Against the AI Industry