The Fake Consequences
The dentist I used to go to had a sign on his wall. Hand-lettered, framed in cheap wood. It said: “We specialize in the care of cowards.” I always thought that was honest. Most businesses would never admit their clientele is afraid. They’d use words like “anxious” or “comfort-focused.” The dentist just said it.
Lawyers don’t have signs like that. They should. Maybe something like: “We specialize in not reading the things we file.”
A woman named Heather Hersh — an attorney, a real one, with a bar number and everything — submitted a brief to the Fifth Circuit Court of Appeals that contained twenty-one fabricated quotations. Twenty-one. Not a typo here or there. Not a misplaced comma. Twenty-one instances where she quoted cases that didn’t say what she said they said, or didn’t exist at all.
She’d fed the thing into a machine and the machine had fed her back a fantasy, and she’d signed her name to it and sent it to a federal court.
When they caught her — and they caught her because judges, whatever you think of them, still read — she blamed it on “publicly available databases.” She named well-known legal research platforms. She said she believed the citations were accurate.
The court called her response “not credible.”
That’s judge-speak for “you’re lying and we both know it.”
It wasn’t until they asked her directly, point-blank, whether she’d used AI that she admitted it. Even then, the admission came with the reluctance of someone confessing to a parking violation while standing next to the car they’d just driven through a storefront.
They fined her $2,500.
Twenty-one fabricated legal citations. Lying to a federal appeals court. Misleading the judiciary about the tools you used to practice your profession. Two thousand five hundred dollars. That’s less than what most lawyers bill for a single day of work.
Let me put that in perspective. A kid shoplifts a pair of sneakers and gets community service, a record, maybe probation. A lawyer fabricates twenty-one citations in a federal brief, lies about it to a panel of judges, and pays what amounts to a long weekend’s grocery bill in the life she probably leads.
The court said if she’d been more forthcoming — if she’d admitted the mistake and taken responsibility — the fine would have been less. Which means the going rate for filing AI-generated fabrications in a federal brief and then coming clean about it is somewhere south of $2,500. A strongly worded letter, maybe. A disappointed sigh.
The message is clear, even if nobody says it out loud: the system will absorb this. It will tut-tut and write stern opinions and maintain databases, and the next lawyer will do the same thing because the downside is a rounding error and the upside is not having to read.
That’s the real hallucination. Not the fake cases. The fake consequences.
The judge — Jennifer Walker Elrod, who wrote the opinion — noted this wasn’t new. Nearly three years of headlines about lawyers doing exactly this. A database maintained by some French lawyer and data scientist now lists 239 cases in the United States alone. Two hundred and thirty-nine times someone with a law degree copied what a chatbot hallucinated and filed it with a court.
And the number keeps climbing.
The Fifth Circuit had actually considered adopting a rule specifically about AI use by lawyers. They talked about it in 2024. Then they decided against it, figuring the existing rules were enough. Existing rules like “don’t lie to the court” and “verify your citations” — the kind of thing you’d think wouldn’t need a special regulation because it’s what they’re supposed to be doing anyway.
But that’s the thing about tools that make it easy to skip the work. People skip the work.
I used to sort mail at the post office. You had to read the addresses and put the letters in the right slots. It was mind-numbing, the kind of job that made you want to drink — which I did — but you couldn’t fake it. If you put the letter in the wrong slot, it went to the wrong house, and someone eventually noticed.
There was no machine that would read the addresses for you and then occasionally invent a house that didn’t exist. The failure mode was different. You might be slow, you might be sloppy, but you couldn’t accidentally create a fictional destination and send someone’s electric bill there with confidence.
That’s what these machines do. They create fictional destinations with confidence. And the people using them don’t check the map because checking the map is the part of the job they were trying to avoid.
Hersh wasn’t stupid. She passed the bar. She knew how to look up a case. She knew what Westlaw was. She knew what verification meant. She just didn’t want to do it. The machine offered her a shortcut and the shortcut turned out to be a cliff, and now she’s $2,500 lighter and her name is in a Reuters article about professional incompetence.
But she’s not the story. She’s one of 239, and 239 is just the ones who got caught.
Dostoevsky wrote about a man who murdered an old woman and then couldn’t stop talking about it. Raskolnikov’s problem wasn’t the axe. The axe was just a tool. His problem was that he’d convinced himself he was the kind of person who was above consequences.
Hersh’s problem isn’t ChatGPT. Her problem is she convinced herself she could outsource the one thing her profession requires — careful reading — and nobody would notice. Two hundred and thirty-nine of them convinced themselves of the same thing. And counting.
Somewhere in a courtroom right now, a brief is being filed that cites a case that never happened, written by a machine that doesn’t know what a courtroom is, submitted by a person who does but decided it didn’t matter.
The dentist retired years ago. I don’t know what happened to the sign.
Source: US appeals court orders lawyer to pay $2,500 over AI hallucinations in brief