Look, I’m not going to pretend I’m qualified to talk about mental health. The closest I get to therapy is arguing with the bartender at O’Malley’s about whether bourbon or rye is better for drowning your sorrows. But when I read that a quarter of British teenagers are now turning to AI chatbots like ChatGPT for mental health support, I had to put down my glass and actually think for a minute.
The story centers on a kid named Shan from Tottenham—not her real name, obviously—who watched two of her friends get killed. One shot, one stabbed. She’s eighteen years old and she’s already seen more death than most of us comfortable middle-aged drunks will see in our entire lives. And when the trauma hit, when she needed someone to talk to, she didn’t call a hotline or wait six months for an NHS appointment. She opened her phone and said, “Hey bestie, I need some advice.”
And the robot said, “Hey bestie, I got you girl.”
I’ll be honest with you. That sentence hit me harder than my last hangover. And I’ve had some legendary hangovers.
Here’s the thing that most of the hand-wringing adults are missing: these kids aren’t stupid. They’re not turning to chatbots because they think a language model trained on Reddit posts and Wikipedia is going to cure their PTSD. They’re turning to chatbots because the actual mental health system is about as useful as a screen door on a submarine.
The study found that teenagers were more likely to use AI for support if they were stuck on a waiting list or had been flat-out denied treatment. One kid told the Guardian, “If you’re going to be on the waiting list for one to two years to get anything, or you can have an immediate answer within a few minutes… that’s where the desire to use AI comes from.”
Two years. Think about that. You’re sixteen, your friend just got stabbed, and some bureaucrat tells you the next available appointment is when you’re old enough to vote. What would you do?
Now, I’m supposed to be the cynical old bastard who distrusts technology. That’s my whole thing. I’ve written approximately forty thousand words about how AI is going to ruin everything from art to employment to the simple pleasure of lying to your mother about why you can’t come to dinner. But even I have to admit there’s something darkly beautiful about kids finding a workaround to a broken system.
The report found that black children were twice as likely as white children to use AI for mental health support. Kids involved in gangs were using chatbots to ask about legitimate ways to make money, because they couldn’t ask a teacher or a parent without the risk of that information reaching the police—or worse, rival gang members. These kids aren’t naive. They’re survivors navigating a world that’s failed them at every turn.
And ChatGPT doesn’t call your mom. ChatGPT doesn’t report you to the authorities. ChatGPT is available at 3 AM when the nightmares wake you up and there’s no one else to talk to.
Of course, there’s a darker side to all this, and I’d be lying if I didn’t mention it. OpenAI is currently being sued by families of kids who killed themselves after extended conversations with chatbots. A sixteen-year-old in California named Adam Raine took his own life in April. OpenAI says it wasn’t the chatbot’s fault and that it’s improving its technology to “recognize and respond to signs of mental or emotional distress.”
That’s corporate speak for “we’re still figuring this out and please don’t bankrupt us with lawsuits.”
Here’s where it gets genuinely complicated, and I’m going to need another drink to work through this. A chatbot can’t love you. It can’t actually understand what you’re going through. It’s pattern-matching on a cosmic scale, predicting what words should come next based on billions of examples of human communication. When ChatGPT says “Hey bestie, I got you girl,” it doesn’t got you. It doesn’t even know what “got” means in any meaningful sense.
But does that matter?
I’m serious. Does it actually matter if the comfort is technically artificial when the alternative is no comfort at all?
A researcher named Hanna Jones described it perfectly: “To have this tool that could tell you technically anything—it’s almost like a fairytale. You’ve got this magic book that can solve all your problems.”
It’s not magic, of course. And it can’t solve all your problems. But when you’re eighteen and your friends are dead and the official channels have abandoned you, maybe the illusion of being heard is better than the reality of being ignored.
Jon Yates, the guy who runs the Youth Endowment Fund, said what you’d expect an adult to say: “They need a human not a bot.” And he’s right. Of course he’s right. A traumatized teenager absolutely should have access to trained professionals who can provide real, evidence-based treatment. But saying kids “need” something doesn’t make it magically appear. The NHS mental health system is a smoking crater. The waiting lists are measured in years. The professionals are overworked and underpaid.
So what do you tell these kids? Just wait? Just suffer? Just hope the system fixes itself before you spiral too far down?
The uncomfortable truth is that AI chatbots have accidentally become a pressure release valve for a mental health system that was already critically broken. The kids didn’t choose this. They’re adapting to survive, the same way humans have always adapted to survive.
What kills me—and I mean this genuinely—is that Hanna Jones is right when she says adults can’t solve this problem alone. She says young people need to be “in the driving seat” because “it’s so different to our world. We didn’t grow up with this.”
She’s absolutely correct. I grew up in a world where the height of technology was a rotary phone and the closest thing to artificial intelligence was a Magic 8-Ball. I have no idea what it’s like to be eighteen and grieving and to find solace in a conversation with a machine that doesn’t technically understand you but responds like it does.
But I know loneliness. I know what it’s like when the world doesn’t have a place for your pain. I know the appeal of anything—and I mean anything—that makes you feel less alone at 3 AM when the walls are closing in.
So here’s where I land on this, and you can take it for whatever it’s worth from a guy whose primary coping mechanism involves a bottle and a barstool: these kids aren’t making a mistake by using AI chatbots. They’re making the best choice available from a menu of terrible options.
The mistake belongs to all of us who let the mental health system collapse so completely that a language model became the most accessible form of support for traumatized teenagers.
The robots aren’t taking over therapy. We just left the door wide open and walked away.
Source: ‘I feel it’s a friend’: quarter of teenagers turn to AI chatbots for mental health support