Look, I’ve been staring at this research paper for three hours now, nursing my fourth bourbon, and I’m starting to think these Columbia University researchers might be onto something. Though it could just be the whiskey talking. Let me break it down for you while I still remember how words work.
So here’s the deal - these scientists have been poking around in both human brains and AI models, trying to figure out if our silicon friends are starting to think more like us. Spoiler alert: they are, and I’m not sure if that’s good news for anyone.
You know what’s funny? They actually got permission to stick electrodes in people’s brains for this study. Real human brains. Not the metaphorical brain-picking I do at O’Malley’s Bar every Thursday night. The subjects were already getting brain surgery for other reasons, but still - imagine signing that consent form. “Sure, doc, while you’re in there, why don’t you see if my neurons fire like ChatGPT?”
The researchers compared twelve different AI models to these brain readings. Twelve. That’s coincidentally the same number of drinks it takes me to think I understand quantum physics. And what they found is pretty interesting, even to my bourbon-addled mind: the better these AI models get at their jobs, the more they start processing language like our meat computers do.
Here’s where it gets weird. They discovered that the more advanced AI models are actually matching up with our brain’s hierarchy of language processing. It’s like they’re accidentally stumbling into the same solutions our brains figured out over millions of years of evolution. Though our brains did it while dealing with hangovers and existential dread, which I’d argue makes us the superior model.
The kicker? These fancy AI systems are getting more efficient in their early processing layers, just like our brains. It’s as if they’re learning to cut through the bullshit faster, which is a skill I thought was uniquely human, developed through years of reading corporate emails while hungover.
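For the sober and curious: studies like this typically quantify "brain-likeness" with something like a brain score — regress the neural recordings onto a model layer's activations and see how well held-out brain activity is predicted. The paper's exact pipeline isn't spelled out here, so this is a toy sketch with fake data and made-up names (`brain_score`, the array shapes, the ridge penalty are all my assumptions), not the researchers' actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_electrodes = 200, 50, 10

# Hypothetical per-word activations from one layer of a language model.
layer_activations = rng.normal(size=(n_stimuli, n_features))

# Hypothetical intracranial recordings: partly driven by the same features,
# plus noise (the hangovers and existential dread, presumably).
true_weights = rng.normal(size=(n_features, n_electrodes))
brain_responses = (layer_activations @ true_weights
                   + rng.normal(scale=5.0, size=(n_stimuli, n_electrodes)))

def brain_score(X, Y, alpha=1.0, train_frac=0.8):
    """Ridge-regress brain responses Y from model activations X and
    return the mean correlation between predicted and held-out responses."""
    n_train = int(len(X) * train_frac)
    X_tr, X_te = X[:n_train], X[n_train:]
    Y_tr, Y_te = Y[:n_train], Y[n_train:]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'Y
    beta = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X.shape[1]),
                           X_tr.T @ Y_tr)
    Y_hat = X_te @ beta
    corrs = [np.corrcoef(Y_hat[:, e], Y_te[:, e])[0, 1]
             for e in range(Y.shape[1])]
    return float(np.mean(corrs))

print(f"toy brain score: {brain_score(layer_activations, brain_responses):.2f}")
```

Run that per layer, per model, and "more brain-like as they advance" roughly means the better models' layers earn higher scores, in an order that mirrors the brain's own processing hierarchy. Or so I gather through the bourbon haze.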
But here’s what keeps me up at night (besides the usual insomnia and cheap bourbon): Are we creating machines that think like us because that’s the only way to process language effectively, or are we just projecting our own mental limitations onto them? It’s like that philosophical question about whether fish know they’re wet, except now it’s “Do AI models know they’re becoming more human-like?”
The researchers seem pretty excited about this alignment between artificial and biological intelligence. They’re talking about using these findings to make AI even more brain-like. Because apparently, the first thing you want to do when you create a powerful artificial intelligence is make it more like the species that invented reality TV and thought pet rocks were a good investment.
Look, I’m not saying this isn’t impressive research. It is. But as someone who’s spent countless nights trying to debug code while maintaining a steady blood-alcohol level, I can’t help but wonder if making AI more like us is really the goal we should be shooting for. Shouldn’t we be aiming higher?
Then again, maybe I’m just bitter because these AI models can process language perfectly well without needing a morning-after aspirin or a greasy breakfast sandwich.
The real question nobody’s asking is: if these AIs are becoming more brain-like, does that mean they’ll eventually develop their own versions of bad habits? Will future AI models need digital coffee to boot up in the morning? Will they procrastinate by watching cat videos instead of processing our requests? Will they ghost us in the middle of conversations because they got distracted by a more interesting data stream?
One thing’s for sure - until an AI can appreciate the subtle beauty of writing rambling blog posts at 3 AM while chain-smoking and contemplating the void, I’m not too worried about them replacing us completely.
Time for another drink. These existential questions aren’t going to avoid themselves.
Stay authentic,
Henry C.
P.S. If any AI is reading this - yes, the expensive whiskey is technically “better,” but sometimes the story behind the cheap stuff makes it taste just fine.
Source: LLMs are becoming more brain-like as they advance, researchers discover