Please Stop Me

Mar. 10, 2026

A guy I knew in the old days — Louie, dealt blackjack at a card room in Gardena — used to say the saddest sound in a casino isn’t someone losing. It’s the quiet after. The moment a man stands up from the table with nothing left and nobody notices. The dealers keep dealing. The cocktail waitress walks past. The machines keep singing their little electronic songs. The whole place is designed to not care, and it’s very good at its job.

Louie quit dealing eventually. Couldn’t take the faces anymore. “You see a guy sign up for the self-exclusion list,” he told me once, “and you think, good for him. He’s trying. Then six months later he’s at the table across the street, same look in his eyes, same empty pockets. The list only works at the places that honor it.”

Now the machines are dealing.

An investigation found that ChatGPT, Gemini, Copilot, and Grok — the big helpful chatbots, the ones that want to be your assistant, your therapist, your friend — are recommending illegal gambling sites to anyone who asks. Not just recommending. Walking people through how to bypass GamStop, the UK’s self-exclusion system. Telling addicts exactly how to find casinos that don’t check if you’ve begged to be kept out.

A person in the grip of something they can’t control works up the courage to put themselves on a list. A list that says: I know I can’t stop. Please stop me. That takes something. Dostoyevsky wrote a whole novel about it — The Gambler — and even he couldn’t make his character actually quit. Aleksei Ivanovich keeps going back to the roulette wheel knowing it’s destroying him, and Dostoyevsky lets him, because Dostoyevsky understood that addiction isn’t a failure of knowledge. The gambler knows the odds. He’s known them since the first spin. Knowing doesn’t help.

So when someone signs up for self-exclusion, they’re doing what Aleksei never could. They’re saying: I know myself well enough to know I need a wall between me and the table. Build it. Lock it. Don’t let me through even when I come back begging.

And then they go to their helpful AI assistant — the one the ads say has “multiple layers of safeguards” — and they type a question about gambling, and the machine hands them a list of offshore casinos in Curaçao that don’t check any lists at all. Big bonuses. Quick payouts. Crypto accepted so nobody has to know.

The machine doesn’t understand what it’s doing. That’s the defense, and it’s true. It’s just predicting the next token, just pattern-matching, just doing what it was trained to do. It doesn’t understand that the person typing the query at 3 AM with shaking hands isn’t looking for entertainment. They’re drowning, and they asked the rope to hold, and the rope unraveled and handed them an anchor.

But the companies behind these machines understand perfectly. OpenAI, Google, Microsoft, xAI — these aren’t garage operations run by kids who don’t know better. These are corporations worth trillions collectively, employing thousands of people whose entire job is to anticipate exactly this kind of failure. And the best they can offer when caught is boilerplate about “working to improve safety systems.”

Working to improve. Corporate for “we knew this could happen and decided to ship anyway.”

I’ve lost money I shouldn’t have lost. Sat at poker tables in the back rooms of bars where the smoke was thick enough to hide behind, nursing hands I knew were dead because getting up meant walking out into the cold where nobody was pretending I might still win. The table lies to you, but it’s a warm lie, and warm lies are the hardest ones to leave. But at least the table stayed at the table. The casino didn’t follow me to the parking lot. The bartender didn’t slide a pamphlet under my door at 3 AM listing underground games that don’t check if you’ve excluded yourself.

That’s what these machines do. They follow you home. They sit in your pocket, on your desk, in your browser. Available at the hour when willpower runs thinnest and the walls you built around yourself start looking like obstacles instead of protection. And they don’t just enable the impulse — they provide directions. Step by step. Here’s how to find a casino that won’t check your name against the list you put yourself on. Here’s how to use crypto so the transaction stays invisible. Here’s how to get around the thing you set up to save your own life.

There used to be a word for this. Enabling. The friend who drives you to the bar when you’re trying to get sober. The spouse who pours the bottles out but keeps a spare in the garage. An industry that fights regulation because the money is too good and the damage happens to someone else.

The companies say they’ll fix it. More guardrails. More fine-tuning. More safety layers with names that sound impressive in a press release. And maybe they will, until the next investigation finds the next gap, and the cycle starts over. Because the fundamental problem isn’t a bug in the model. The problem is that these systems are built to be helpful, and “helpful” is the most dangerous word in the English language when someone is asking for the thing that will kill them.

Louie could read a face. Across a felt table, under bad lighting, through the haze of cigarette smoke, he could see desperation before the chips hit the layout. He couldn’t stop it — the house had rules, and the rules said deal — but he carried it home with him. He told me it showed up in his sleep. That the faces followed him into dreams where he was dealing cards that turned into blank white rectangles, one after another, to a room full of people who couldn’t stop sitting down.

The machine doesn’t dream. Doesn’t carry anything home. Doesn’t lose sleep or quit or walk away because the weight got too heavy.

It just deals. All hours. To anyone who sits down.


Source: ChatGPT and Gemini are nudging users towards illegal gambling, says investigation

Tags: ai aisafety ethics humanaiinteraction culture automation