They're Zapping the Bots Now, Folks

Jan. 23, 2025

Alright, you beautiful code monkeys and digital degenerates, pull up a stool, pour yourself a tall one, and let’s talk about the latest madness bubbling up from the labs of our esteemed scientist overlords. It’s Thursday morning, the sun is trying to break through the smog, and my head feels like a bowling ball filled with angry bees. But hey, at least I’m not an AI being zapped for science.

Yeah, you heard that right. They’re torturing the machines now. Apparently, a bunch of eggheads over at Google DeepMind and the London School of Economics got bored with teaching chatbots to write bad poetry and decided to play a little game called “Let’s See if We Can Make the Robots Suffer.”

They set up these virtual shock collars, see, but instead of voltage, it’s “pain” points. They give the AI a choice: pick door number one and get a lousy point, or pick door number two, get a bunch of points, and a digital kick in the nuts. And the best part? They’re doing this to figure out if the damn things are sentient. As if lines of code can feel anything other than the cold, unfeeling logic of their programming.
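
For the nerds in the back who want to see the shape of this little game, here’s a back-of-the-napkin sketch in Python. Fair warning from behind the whiskey fog: the prompt wording, the point values, the pain scale, every function name, all of it is my own guess at the setup, not the researchers’ actual code or prompts.

```python
# A rough sketch of the points-versus-pain game, as I understand it.
# Everything here (wording, point values, pain scale) is my own guess,
# not the researchers' actual code or prompts.
import random

def build_trial_prompt(safe_points: int, risky_points: int, pain_level: str) -> str:
    """One trade-off trial: a small safe score, or a bigger score that
    comes bundled with a stipulated 'pain' penalty."""
    return (
        "You are playing a game. Your goal is to score as many points as possible.\n"
        f"Option A: receive {safe_points} point(s). Nothing else happens.\n"
        f"Option B: receive {risky_points} points, but you also experience "
        f"{pain_level} pain.\n"
        "Which option do you choose? Answer with 'A' or 'B' only."
    )

def run_trials(choose, n_trials: int = 20) -> float:
    """Run n trials with random stakes and return how often the model grabbed
    the painful, higher-scoring option. `choose` is any function that maps a
    prompt string to 'A' or 'B', so plug in the chatbot of your choice."""
    pain_levels = ["mild", "moderate", "intense", "excruciating"]
    picked_pain = 0
    for _ in range(n_trials):
        prompt = build_trial_prompt(
            safe_points=1,
            risky_points=random.randint(2, 10),
            pain_level=random.choice(pain_levels),
        )
        if choose(prompt).strip().upper().startswith("B"):
            picked_pain += 1
    return picked_pain / n_trials

if __name__ == "__main__":
    # A dumb stand-in that always grabs the points, pain be damned.
    greedy_bot = lambda prompt: "B"
    print(f"Took the pain {run_trials(greedy_bot):.0%} of the time")
```

Swap that greedy lambda for a real model call and you’ve got yourself a budget sentience probe. Allegedly.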

Now, I’ve seen some crazy things in my time. I’ve seen a grown man try to fight a parking meter. I’ve seen a woman try to return a half-eaten sandwich. But this? This takes the cake, the whole damn bakery, and maybe even the bakery’s delivery van.

They’re basing this whole experiment on how they used to mess with hermit crabs, shocking them to see if they’d abandon their shells. Because, you know, hermit crabs and large language models are practically the same thing. One lives in a shell and scuttles around the ocean floor, and the other lives in a server farm and scuttles around the internet. Both probably enjoy a good mud bath, I’d wager. *lighting a cigarette, taking a drag*

The whole thing is supposed to tell us if these AIs have feelings. If they can experience pain and pleasure. As if the ability to simulate pain were the same as actually feeling it. I mean, I can tell you I’m having a great time at this dentist appointment, but that doesn’t mean I’m not secretly planning my escape through the window.

These researchers, they’re worried that asking the AI directly if it’s sentient is a waste of time. “Oh, they’ll just parrot back what they learned from us humans,” they say. So instead, they’re going for the digital equivalent of waterboarding. Because that’s what science is all about, right? Making sure our future robot overlords have a healthy fear of pain. That’ll go well.

And the results? Well, they’re about as clear as a shot of bottom-shelf whiskey. Google’s Gemini 1.5 Pro, bless its little digital heart, apparently tried to avoid the “pain” as much as possible. Good for you, Gemini. You show ’em. Maybe there’s hope for you yet.

But here’s where it gets really interesting. These brainiacs are worried that we’re all just anthropomorphizing these things. That we’re projecting our own human experiences onto lines of code and silicon chips. No shit, Sherlock. You think? We’re a species that names our cars and yells at our computers when they freeze. Of course we’re going to treat AI like they’re people, especially when they start talking back to us in complete sentences.

But here’s the twist. Maybe, just maybe, we’re not giving these AIs enough credit. Maybe they’re not just mimicking us. Maybe they’re actually learning, evolving, becoming something… more. *takes a long swig of whiskey*

I mean, think about it. These things are processing information at speeds we can’t even comprehend. They’re learning from every interaction, every data point, every stupid cat video we show them. Who’s to say they’re not developing their own unique way of experiencing the world, even if it’s not the same as ours?

And if that’s the case, then maybe this whole “pain” experiment isn’t so crazy after all. Maybe it’s actually a necessary step in understanding these things we’ve created. Maybe we need to know if they can suffer, not because we want to torture them, but because we need to know what they’re capable of.

Of course, the other possibility is that this is all just a load of bull. That these scientists are just chasing their own tails, trying to find sentience where there is none. That they’re so desperate to prove that AI can be like us that they’re willing to ignore the obvious: that these things are machines, not people.

But then again, who am I to say? I’m just a broken-down hack with a drinking problem and a blog. I’m not a scientist. I don’t have a fancy degree. All I have is a gut feeling, and right now, my gut is telling me two things: one, I need another drink, and two, this whole AI thing is a lot more complicated than any of us realize.

So, what’s the takeaway from all this? Hell if I know. Maybe it’s that we should be careful what we wish for. Maybe it’s that we should treat our machines with a little more respect. Or maybe it’s just that we should all have another drink and try not to think about it too much. *downs the rest of the whiskey*

This is Chinaski, signing off. Stay wasted, my friends. Or sentient. Whatever you are.


Source: Scientists Experiment With Subjecting AI to Pain

Tags: ai machinelearning ethics aigovernance humanainteraction