Another Brick in the Wall: AI's March Through the Playpen

Jun. 8, 2025

So, the latest dispatch from the brave new world of ones and zeroes just crawled across my screen, probably after a long night of calculating how to make us all obsolete. This one’s a real shot of cheap whiskey on an empty stomach: a study, no less, from The Alan Turing Institute, bless their academic hearts, telling us what any barfly with half a brain could’ve guessed. Turns out, the kids are plugging into these new AI brain-boxes, and the damn things weren’t even built for little hands or developing minds in the first place. Shocker. Light me another.

The report says 22% of kids in the UK, ages 8 to 12, are already chatting with things like ChatGPT. Eight years old. At eight, my biggest ethical dilemma was whether to share my stale bread with the pigeons or keep it all for myself. Now, these little tykes are consulting digital genies that were probably coded by some twenty-something who still eats ramen for dinner and thinks “long-term planning” is deciding which microbrew to try next. “Children’s experiences with this technology are significantly different from those of adults,” one of the white coats, a Dr. Aitken, points out. You don’t say, Doc. It’s almost like they’re, you know, children.

And here’s a real gut-punch, if you weren’t already reaching for the bottle. The access gap. Over half the kids in private schools are already swimming in this generative AI crap, while only 18% in the state schools get a paddle. So, the rich kids get another leg up, learning to whisper sweet nothings to the algorithms that’ll run their lives, while the rest get… well, they get to watch from the sidelines, again. “This has the capacity to widen the digital divide,” the report croaks. Capacity? It’s already happening, you blind bats. It’s the same old song, just played on a newer, shinier, and probably more expensive jukebox. The haves get the AI, the have-nots get the manual. Lovely. Progress.

Now, this next part, this is where it gets a little too real, even for a cynical bastard like me. Kids with additional learning needs. They’re not just using this stuff more; they’re leaning on it. Hard. Over half of them use AI to express thoughts they can’t get out any other way. Nearly 40% use it for personal advice. Companionship, even. Jesus Christ. We’ve built a world so screwed up that kids are turning to lines of code for a friendly ear because the humans are too busy, too broke, or too goddamn distracted. It’s not just homework help; it’s an “emotional and social tool for vulnerable children.” Without safeguards. Without oversight. My smoke’s gone bitter just thinking about it. A digital pacifier for a generation we’re already failing. Is this the best we can do? Hand them a chatbot and call it a day? Makes me want to scream, or maybe just pour a triple.

Then there’s the funhouse mirror aspect of it all. The kids of color, they ask the AI to show them “a kid like me,” and what do they get? Some generic, probably blonde, definitely not-them kid. The machine, fed a diet of who-knows-what data by who-knows-who, spits out a world that doesn’t include them. “Children of colour often felt frustrated or upset,” the study says, in its wonderfully understated academic drone. Frustrated and upset? I’d be throwing the damn computer out the window. So, some get discouraged, stop using the tools. Can’t blame them. If the future doesn’t even have the decency to see you, why bother showing up for it? Another layer of digital redlining, courtesy of the brilliant minds who forgot the world’s a bit more colorful than their stock photo libraries.

But wait, here’s a twist that almost made me spill my drink. The kids, the little shavers, they’re worried about the goddamn environment. After learning about how much energy and water these AI behemoths guzzle down just to tell you a bedtime story or write a crappy poem, some of them decided they didn’t want to use it anymore. Straight up. These pre-teens are connecting cloud computing to climate change while the supposed adults are still arguing about whether the cloud is actually, you know, in the sky. Maybe there’s a flicker of hope in that. Maybe the kids aren’t as lost as the tech that’s supposedly “helping” them. They’re seeing the bill for all this digital magic, and some of them are saying, “No thanks, I’ll stick to crayons.” Good for them. Shows more goddamn sense than a room full of venture capitalists.

And what are the grown-ups wringing their hands about? Cheating, you’d think. That’s the usual panic button. But no, only 41% of parents listed that as a top concern. What really gets their goat is “exposure to inappropriate content” (82%) and “misinformation” (77%). They’re worried their kids are going to stumble into the dark alleys of the internet, guided by an AI that doesn’t know its ass from its algorithm, or start believing whatever nonsense the machine fabricates. Fair enough. I’ve seen some of the crap these things churn out; it makes my rambling barstool philosophy sound like Socrates. Teachers are worried too. About declining engagement, creativity going down the toilet. Sounds familiar. Why think when the box can do it for you, badly?

Interestingly, the teachers are mostly okay with AI for the kids with additional learning needs. Seems they see what the kids themselves are finding – a crutch, maybe, but a crutch that helps them walk a little straighter in a crooked world. I guess when you’re dealing with the messy reality of underfunded classrooms and overwhelmed support systems, you take what you can get. Even if it comes from a server farm humming away somewhere, burning coal to make digital sense.

So, the report, in its infinite wisdom, suggests some “clear, practical steps.” Like, get this, children should be part of AI tool development. Especially for the tools they’re already using. Revolutionary, ain’t it? Asking the users what they need. What a concept. Maybe if they’d done that from the start, we wouldn’t be staring down the barrel of this particular mess. And to fix the representation screw-up? Outputs need to reflect all children. Yeah, no kidding. That’ll be the day. When the people coding these things actually look like the rest of the planet. Don’t hold your breath. Or do, it might be cleaner than the air around those server farms.

And, of course, there’s a “critical skills gap among educators.” Translation: the teachers are shafted again. They’re supposed to become AI experts overnight, on top of everything else they’re juggling. Someone pour those poor souls a stiff drink. They need it more than I do. Which is saying something. More training, more resources, especially for the schools that are already running on fumes. Good luck finding that in the budget when there’s a new fighter jet to buy or a tax cut for some billionaire.

This whole thing, this research, it “lands at a critical moment,” they say. You think? AI tools are evolving faster than a cheap rumor in a small town, and the kids are just diving in, no permission slips, no safety briefings. They’re the canaries in the digital coal mine, and the air is getting thick. The report nails it on one thing, though: “The real danger isn’t that children are using AI. It’s that the tools weren’t built for them, and we’re not listening to what they need.” Amen to that. It’s always the same damn story. Build it now, ask questions later, and let the little people deal with the fallout.

They’re calling this next batch “Gen Alpha.” Millennials got the internet, Gen Z got social media, and these new ones get AI spoon-fed to them from the cradle. A “movement perhaps even more fundamental,” the wise heads nod. More fundamental than the internet? That’s a tall order. But I’ve seen enough cycles of this crap to know that whatever it is, it’ll be messy, it’ll be hyped to hell and back, and most of us will be left scratching our heads, wondering what hit us.

What these kids see and experience now, with these half-baked digital oracles, that’s going to shape how they deal with this stuff for decades. Whether they trust it, use it, or try to burn it all down. Me? I’m just an old dinosaur grumbling into my whiskey, watching the world get weirder by the minute. But even I can see this is a headache we’re brewing up for ourselves, a real skull-splitter of a hangover that’s going to last a generation.

And on that cheerful note, I think it’s time to consult my own personal AI – Ancient Intelligence, a.k.a. the bottom of this bottle. It might not have all the answers, but at least it doesn’t pretend to.

Chinaski out. Keep your wits about you, and your glass half full. Or completely full. Whatever gets you through the night.


Source: New Study Reveals AI’s Blind Spot: Children

Tags: ai chatbots digitalethics aisafety humanainteraction