These Geniuses Want a Machine as Smart as You? Bless Their Tiny Circuits.

May 17, 2025

So, the papers are buzzing again. Or the screens, whatever you kids stare at these days. Same old song, new verse, slightly louder amplifier. The big brains with the big money – Altman, Amodei, Musk, a veritable Mount Rushmore of guys who probably think a dive bar is a new kind of app – are telling us that AGI, Artificial General Intelligence, is just around the corner. Any day now, they’ll birth a machine as smart as you or me. Might even happen before Trump finishes his next… well, whatever it is Trump finishes.

AGI. Sounds like a new government agency, don’t it? Something that’ll send you forms in triplicate. But no, it’s the holy grail for these code-jockeys: a computer that can think like a human. “Match the many powers of the human mind,” the scribblers say. That’s a tall order. Hell, most humans I know can’t even match the powers of their own minds half the time, especially after a few rounds. I glance at the clock. Saturday morning. The ghosts of last night’s bourbon are still tap-dancing on my frontal lobe. Probably not the best time to be contemplating human-level intelligence, mine or anyone else’s. But here we are. Gotta light a cigarette for this one.

These chieftains of the chip, they’ve been chasing this mechanical messiah for years. And look, they’ve made some clever toys. ChatGPT can string words together, make art, write code. Impressive, sure. Like a parrot that can recite Shakespeare. It’s a hell of a trick. But is it thinking? Is it feeling the sting of a bad review, the burn of cheap whiskey, the hollow ache of a woman walking out the door? I doubt it.

Now, with these chatbots getting slicker, the predictions are getting bolder. “Imminent,” they cry! Next stop, “superintelligence”! Christ, they’re not just building a brain in a box; they’re aiming for God in a Google server. It’s enough to make a man reach for the bottle before noon. Which, come to think of it…

But then you got the other guys, the ones who haven’t quite drunk the Kool-Aid, or maybe they just prefer actual booze. The “sober voices,” the article calls ’em. Figures. Nick Frosst, a name that sounds like a character in a bad spy novel, says the tech they’re building now “is not sufficient to get there.” He says these things are just predicting the next word, the next pixel. “That’s very different from what you and I do.” You goddamn right it is, Nick. You and I, we wrestle with the void, we make terrible mistakes, we love, we hate, we bleed. These machines? They’re just playing a high-tech game of fill-in-the-blanks.
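
For the nerds in the back, here’s what “predicting the next word” actually amounts to, scrawled on a bar napkin. This is a toy bigram counter in Python – my own sketch, every name in it mine, and about a trillion parameters short of the real monsters – but the game is the same one they’re playing: count what came after what, then bet on the likeliest next word.

```python
from collections import defaultdict, Counter
import random

def train_bigrams(text):
    """Count, for each word, what tended to follow it."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table, prev):
    """Bet on a next word in proportion to how often it followed `prev`."""
    counts = table[prev]
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Feed it some "training data" and let it babble.
corpus = "the bar was dark and the whiskey was cheap and the night was long"
table = train_bigrams(corpus)
word, line = "the", ["the"]
for _ in range(8):
    word = next_word(table, word)
    if word is None:
        break
    line.append(word)
print(" ".join(line))  # remixes the corpus; it invents nothing
```

Swap my little counting table for a neural net with billions of knobs and you’ve got the modern chatbot. Fancier bookkeeping. Same fill-in-the-blanks.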

And get this: a whole society of AI eggheads, the Association for the Advancement of Artificial Intelligence – sounds like a real barn burner of a convention – mostly agrees. Three-quarters of them think today’s methods won’t get us to this AGI promised land. It’s like building a ladder to the moon by stacking beer crates. You’ll get a decent view of the gutter, maybe, but you ain’t smelling any moon cheese.

The real kicker, though? They can’t even agree on what “human intelligence” is. Arguing over IQ tests and benchmarks like drunks fighting over the last olive in the martini. So, if you can’t define the damn thing, how in the hell are you gonna build it, or know when you’ve got it? It’s like trying to nail Jell-O to the wall, except the Jell-O is also having an existential crisis. This means “identifying A.G.I. is essentially a matter of opinion.” Opinion! So, it’s AGI when Sammy Altman feels like it’s AGI? Beautiful. And then there’s Musk’s legal eagles, claiming in a lawsuit that AGI is already here because some contract says OpenAI won’t sell it. You can’t make this shit up. It’s like saying you’ve captured a unicorn because you found some glitter in a pile of horse manure. The sheer, unadulterated gall of these people. It’s almost admirable, in a thoroughly disgusting way.

And there’s no hard evidence these digital wizards can do even the simple stuff, like recognizing irony or feeling empathy. Irony? Hell, these guys are irony. Building machines to mimic humans while becoming more machinelike themselves. Empathy? Try explaining a three-day bender and the subsequent soul-crushing hangover to a chatbot. It’ll probably offer you five tips for better sleep hygiene. The claims of AGI are based on “statistical extrapolations – and wishful thinking.” Well, I’ve done a lot of wishful thinking in my time, usually involving a winning horse or a willing dame. It rarely pans out the way the statistics predict.

Sure, these things are acing tests in math and coding. Good for them. My calculator can do math faster than me, too. Doesn’t mean it’s going to write the next great American novel or understand why I’m staring into my empty glass like it holds the secrets of the universe. Humans, the messy, unpredictable bastards we are, we deal with a world that’s constantly throwing curveballs. Machines “struggle to master the unexpected.” No shit. Life is the unexpected. It’s the flat tire in the rain, the lover who doesn’t call back, the poem that ambushes you in the dead of night. Can your AI dream up something the world has never seen? Or does it just remix the slop we’ve already fed it? That’s what I thought.

Steven Pinker, a Harvard heavyweight, calls these systems “very impressive gadgets.” Gadgets. I like that. Not gods, not replacements, just gadgets. He warns against “magical thinking.” Too late, Pinky. These guys are mainlining magical thinking like it’s free whiskey at an open bar.

So how do these chatbots work their “magic”? Neural networks, they call them. Fancy math that spots patterns in mountains of text, images, sounds. They hoovered up Wikipedia, news stories, chat logs – basically, all the digital diarrhea humanity has spewed onto the internet. And from that, they learn to mimic how we string words together. One of the head honchos at Anthropic, Jared Kaplan, came up with these “Scaling Laws.” More data and more parameters in, better performance out, and predictably so. Like force-feeding a goose to get foie gras, except instead of a fat liver, you get a slightly more coherent sentence about cats.
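
If you want the shape of those “Scaling Laws” without joining the priesthood, it’s a power law: the model’s error (they call it “loss,” and lower is better) falls off smoothly as the model grows. Here’s a bar-napkin version in Python, using the constants I recall from the Kaplan paper’s fit – take the exact numbers as their empirical curve, not gospel, and the function name as mine:

```python
def loss_from_params(n_params, n_c=8.8e13, alpha=0.076):
    """Power-law fit for language-model loss vs. model size,
    L(N) = (N_c / N) ** alpha, per the 2020 scaling-laws paper.
    Lower loss = slightly more coherent sentences about cats."""
    return (n_c / n_params) ** alpha

# Each factor of ten in size shaves a little more off the loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.2f}")
```

Mind that exponent: 0.076. Run the arithmetic and halving the loss means multiplying the model by roughly ten thousand. That’s why they need mountains of data. And why the next bit stings.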

But here’s a fun twist: they’ve apparently run out of English text on the internet. Drank the well dry. So now they’re leaning on “reinforcement learning.” Trial and error. Like teaching a dog to sit by giving it a treat or a smack on the nose, but with algorithms. They point to AlphaGo, the machine that mastered the game of Go by playing itself millions of times. It even stunned the human champs, showed them new moves. And the true believers think ChatGPT will do the same, leap to AGI, then superintelligence, then probably demand its own private jet and a lifetime supply of… well, what would a superintelligent AI demand? Probably just more data. What a bore.
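
“Trial and error” sounds mystical until you see how small the idea is. Here’s a toy two-lever bandit in Python – the kindergarten version of reinforcement learning, my own sketch, nothing like AlphaGo’s actual machinery, which stacks deep networks and tree search on top – where the dog learns which lever earns the treat:

```python
import random

# Two levers. Lever 1 pays off more often. The learner doesn't know that.
def pull(lever):
    return 1.0 if random.random() < (0.3, 0.8)[lever] else 0.0

q = [0.0, 0.0]             # running guess at each lever's payout
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for _ in range(10_000):
    # Mostly exploit the best guess; occasionally explore at random.
    lever = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    reward = pull(lever)                     # the treat, or the lack of one
    q[lever] += alpha * (reward - q[lever])  # nudge the guess toward reality

print(q)  # settles near [0.3, 0.8]: it learned which lever to love
```

The whole scheme lives or dies on that reward line. AlphaGo had it easy: you win the game or you don’t, and it could play itself millions of times for an endless supply of treats. Hold that thought.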

But Go is a game. It has rules. Neat, tidy little rules. The real world? That’s a goddamn bar fight in the dark. It’s bounded only by the laws of physics, and even those seem to get a bit fuzzy after the fifth shot. How can you model that? How can you be sure AGI is “just around the corner” when your prize pupil is still stuck playing checkers while the rest of us are navigating a minefield?

Yeah, machines can do some things better than us. A chatbot can write faster and pull up more facts than I could remember even before the whiskey started pickling my brain. They’re even beating us on some high-level math and coding tests. So what? My typewriter puts words down faster than my hand can, but that doesn’t make it Hemingway. There are “many kinds of intelligence out there in the natural world,” says some MIT professor. And he’s right. Human intelligence isn’t just about processing data. It’s tied to the physical world. It’s knowing how to flip a pancake, how to fix a leaky faucet, how to tell if a woman’s smile is genuine or if she’s just after your last twenty bucks.

They’re trying to train robots the same way, but it’s harder, slower. Robotic research is years behind the chatbots. Of course it is. It’s one thing to simulate a conversation; it’s another to teach a bucket of bolts how not to trip over the goddamn cat.

The gap is wider than that, though. Even in the digital realm, these machines choke on the stuff that’s hard to define. Reinforcement learning works for math problems with clear answers, or for code that either runs or it doesn’t. But creative writing? Philosophy? Ethics? Good luck defining “good behavior” there. Altman himself, Mr. OpenAI, apparently tweeted that his new system was “good at creative writing.” He was “really struck by something written by A.I.” Oh, I bet he was. But what does “good” mean? Sincerity? Humor? Honesty? The kind of raw, bleeding truth that makes you squirm in your seat? Can a machine that’s never had its heart broken write a decent love poem? Can an algorithm that’s never stared into the abyss of a Monday morning write something that genuinely makes you laugh, or cry, or reach for another drink? I’m lighting another goddamn cigarette just thinking about the audacity.
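
Here’s the crux in code. Reinforcement learning needs somebody to write down the reward. For arithmetic, or for code that compiles, that’s a few honest lines – a hypothetical sketch below, every name mine. For the poem? Tell me what goes in the third function and the next round’s on me.

```python
def reward_math(answer: str, expected: str) -> float:
    """Clear-cut: the answer matches or it doesn't."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

def reward_code(source: str) -> float:
    """Clear-cut enough: the code at least compiles or it doesn't."""
    try:
        compile(source, "<submission>", "exec")
        return 1.0
    except SyntaxError:
        return 0.0

def reward_poem(poem: str) -> float:
    """Sincerity? Humor? A broken heart? No unit test for those."""
    raise NotImplementedError("define 'good' first")
```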

This professor Pasquinelli from Venice, he says, “A.I. needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives.” So, we’re just content farms for these digital overlords-in-training? They feed on our joy, our sorrow, our art, our bullshit, and then they spit it back out, rearranged and sanitized, and call it intelligence. There’s a bleak poetry to that, I suppose. The ultimate parasites.

It’s a “thrilling fantasy,” this AGI dream. Goes way back. Golems, Frankenstein’s monster, HAL 9000. We’ve always been obsessed with creating artificial life, probably because we’re so damn lonely and confused about our own. And now that we have machines that can talk back, sort of, it’s easy to think the dream is finally coming true. But these tech prophets, they see themselves as fulfilling some “technological destiny,” like they’re splitting the atom or discovering fire. They can’t point to a scientific reason it’ll happen soon, but boy, do they believe. Belief is a powerful drug. Almost as good as whiskey. Almost.

Even Yann LeCun, one of the godfathers of these neural networks, a guy who won the Turing Award – the Nobel Prize of computing – for this stuff, he doesn’t think AGI is near. He saw 2001: A Space Odyssey as a kid and has been chasing the dream ever since. But he says his lab at Meta is looking beyond current methods. They’re searching for “the missing idea.” He says, “A lot is riding on figuring out whether the next generation architecture will deliver human-level A.I. within the next 10 years. It may not. At this point, we can’t tell.”

“It may not. At this point, we can’t tell.” Now that’s the most honest thing I’ve read all day. Maybe there’s hope for these geeks yet. Or maybe LeCun just needs a good, stiff drink.

So, they want a computer as smart as me? Or you? Let ‘em try. Let ‘em build their shiny golems. Let ‘em chase that electric sheep. Maybe one day they’ll get there. Maybe they’ll build a machine that can appreciate a good bourbon, a bad joke, and the tragic beauty of a woman’s tear-streaked face. A machine that knows the exquisite agony of being alive.

But I wouldn’t hold my breath. Or my liquor.

Time to see if there’s any intelligence left at the bottom of this bottle. Or maybe just a decent buzz. Either way, it’s more real than their digital phantoms.

Chinaski out. For now.


Source: Silicon Valley’s Elusive Fantasy of a Computer as Smart as You

Tags: ai agi chatbots machinelearning ethics