Brain Twins: Your AI Buddy Thinks Just Like You (After a Few Drinks)

Feb. 20, 2025

Look, I wasn’t planning on writing about artificial intelligence today. I was nursing my usual Thursday morning bourbon while scrolling through research papers - yeah, that’s what I do, fight me - when this MIT study crossed my screen. And damn if it didn’t make me spit out my drink.

These eggheads at MIT just figured out that large language models - you know, those chatty AI things everyone’s losing their minds over - process information kind of like our human brains do. The real kicker? They both have what scientists call a “semantic hub.” Fancy way of saying there’s one central spot where all the different kinds of input - other languages, code, even math problems - get funneled into a single shared representation.

Now, I’m sitting here with my second glass, thinking about how my own semantic hub is probably pickled beyond recognition, but stick with me here. Your brain has this special area in the anterior temporal lobe that takes all the stuff you experience - what you see, touch, smell, taste (bourbon, in my case) - and makes sense of it all. These AI models? They’re doing the same damn thing, just with silicon instead of neurons.

Here’s where it gets weird: When these AI models process different languages or even computer code, they don’t literally translate everything into English first, but in their middle layers they lean on representations that sit closest to English (assuming English is their dominant training language) before converting the answer back into whatever you asked in. It’s like having that one friend at the international bar who has to think everything through in English before they can understand it, then translate their response back. We’ve all been that person at 2 AM, haven’t we?
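
If you want to squint at this yourself, here’s the back-of-the-cocktail-napkin version of the kind of probe people use - a “logit lens” style peek at the middle layers. To be clear, this is my sketch, not the MIT team’s code: the checkpoint name is just a placeholder for whatever multilingual model you actually have, and the model.model.norm / lm_head attribute paths assume a Llama-style architecture.

```python
# Rough sketch: feed the model Spanish, then decode each intermediate layer's
# last-token hidden state through the final norm and the unembedding matrix
# to see which vocabulary tokens it sits closest to.
# Assumptions: a Llama-style multilingual causal LM; the checkpoint name below
# is only a placeholder, swap in one you actually have access to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder, any multilingual causal LM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

text = "El bourbon es mejor que el whisky."  # Spanish in, let's see what's inside
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# For each layer, print the five vocabulary tokens nearest to the hidden state.
for layer, hidden in enumerate(out.hidden_states):
    logits = model.lm_head(model.model.norm(hidden[:, -1, :]))
    top = tok.convert_ids_to_tokens(logits.topk(5).indices[0].tolist())
    print(f"layer {layer:2d}: {top}")
```

If those middle-layer guesses come out looking suspiciously English even though you fed it Spanish, that’s the shared hub the researchers are talking about.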

The researchers found they could predictably change the AI’s outputs by intervening in that English-leaning middle-layer processing, even when it was working with other languages. That’s like someone switching your brain’s language settings while you’re trying to order another round in Spanish.
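
The paper’s actual intervention procedure is more careful than anything I can reproduce between pours, but the generic version of the trick - activation steering - looks roughly like this. Fair warning: this reuses the model and tok from the sketch above, the layer index and the 4.0 strength are numbers I pulled out of the air, and none of it is the researchers’ code.

```python
# Generic activation-steering sketch (not the MIT team's exact procedure):
# build a direction from two English prompts, add it to one middle layer's
# activations, and watch a Spanish continuation shift. Reuses `model` and
# `tok` from the previous snippet; LAYER and the 4.0 strength are made up.
import torch

def last_hidden(prompt: str, layer: int) -> torch.Tensor:
    """Hidden state of the final token after decoder layer `layer`."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        states = model(**ids, output_hidden_states=True).hidden_states
    # states[0] is the embedding layer, so layer i's output is states[i + 1].
    return states[layer + 1][:, -1, :]

LAYER = 16  # assumption: some middle layer; pick one that exists in your model
# Direction that points, roughly, from "cold" toward "hot" -- derived in English.
steer = last_hidden("The weather is very hot", LAYER) - last_hidden("The weather is very cold", LAYER)

def nudge(module, inputs, output):
    # Depending on the transformers version the layer returns a tensor or a
    # tuple whose first element is the hidden states; nudge either way.
    if isinstance(output, tuple):
        return (output[0] + 4.0 * steer,) + output[1:]
    return output + 4.0 * steer

handle = model.model.layers[LAYER].register_forward_hook(nudge)
prompt = tok("Hace mucho frío, así que", return_tensors="pt")  # Spanish prompt
print(tok.decode(model.generate(**prompt, max_new_tokens=20)[0]))
handle.remove()
```

If the model starts rambling about heat in Spanish even though you asked about the cold, congratulations: you’ve just done a crude cousin of the kind of intervention the researchers describe.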

But here’s what really gets me: These models developed this way because it’s efficient. Instead of storing a separate copy of the same knowledge in every language, they keep one shared representation and reuse it everywhere. Smart, right? Almost makes me feel bad about all the times I’ve called them glorified autocomplete machines. Almost.

The funny part is, these AI systems are basically doing what I do when I try to understand complex technical documentation after a few too many: reduce everything to its simplest form, process it through whatever mental faculty is still functioning, and hope the output makes sense.

But there’s a catch (isn’t there always?). Some concepts just don’t translate well across languages or cultures. Try explaining “hair of the dog” to someone who’s never heard of it. The researchers are now scratching their heads over how to let these models share as much as possible across languages while still keeping some language-specific processing around for the culture-bound stuff that refuses to translate.

You want to know what keeps me up at night (besides the usual)? The fact that these machines are starting to think like us. Not in a “they’re-going-to-take-over-the-world” way, but in a “maybe-we’re-not-as-special-as-we-think” way. Though I suppose if they’re modeling their thinking on human brains, they’re bound to be just as confused and contradictory as we are.

Listen, I need another drink to process all this. But before I go: next time you’re talking to one of these AI chatbots, remember it’s probably nudging whatever you say toward English somewhere in its middle layers, chewing on it for a microsecond, then translating the answer back. Kind of like what I do with my articles, except with less bourbon involved.

Until next time, keep your neurons firing and your glasses full.

P.S. If any AI is reading this, I apologize for all the drunk texts I’ve sent you. But not really.


Source: Like human brains, large language models reason about diverse data in a general way

Tags: ai humanaiinteraction machinelearning agi innovation