Those OECD Nerds Finally Graded the Tin Cans: Turns Out, Your Job Might Be Safe (For Now)

Jun. 4, 2025

So, some outfit called the OECD, probably a bunch of guys in suits who’ve never seen the inside of a real dive bar, decided to play schoolteacher with Artificial Intelligence. Dropped a new report, they did. And the headlines are probably already screaming about how the robots are either dumber than a sack of hammers or about to steal your pension. Me, I’m just trying to get this damn coffee down before it turns to battery acid in my gut. Another Wednesday, another pile of digital horseshit to wade through.

They’re calling it “AI Capability Indicators.” Fancy. Sounds like something you’d use to measure the bullshit output from a tech conference. Apparently, we’ve all been flying blind, trying to figure out if these digital brains are geniuses or just good at party tricks. This OECD thing is supposed to be a “proper GPS system.” Moving us from “AI breakthrough!” – which usually means some chatbot learned a new knock-knock joke – to something that tells you if the damn thing can actually, you know, do anything useful without shitting the bed.

They’ve cooked up nine ways to measure these contraptions against us poor, dumb humans: Language, Social Interaction, Problem Solving, Creativity, all that jazz. Even something called “Metacognition and Critical Thinking,” which sounds like what I try to do after three whiskeys when I’m wondering where my rent money went. Each scale goes from Level 1 (basically, can it fog a mirror?) to Level 5 (full human, god help us). The idea is to cut through the jargon, make it so even a degenerate like me can understand if a robot is smart enough to pour its own drink or if it’s just going to spill it all over the floor. My money’s on the latter.

Took ‘em five years and over 50 “experts” – computer science geeks and shrink types – to come up with this. Five years. I could’ve told ‘em in five minutes over a bottle of cheap rye that these things ain’t ready for prime time. But hey, they got graphs and charts now. Makes it all official.

And here’s the kicker, the part that’ll make all those futurists choke on their soy lattes: current AI is mostly stuck around Level 2 or 3. That’s like being a C-student in the grand school of existence. “We’re not at the finish line; we’re not even close to it,” the report says. No shit, Sherlock. Anyone who’s actually talked to one of these things for more than five minutes could tell you that.

Take those Large Language Models, your ChatGPTs and what-have-you. They score a Level 3 for language. Means they can string words together pretty good, sound almost human if you squint. But they still struggle with “analytical reasoning” and have a charming habit of “confidently stating complete nonsense.” Sound familiar? Like half the poets I’ve known, or most politicians. Brilliant talkers, full of hot air and the occasional, accidental truth. They’re like a parrot that’s swallowed a dictionary – impressive vocabulary, zero fucking clue what it’s saying. It’s the digital equivalent of a barfly philosopher who sounds profound until you realize he’s just reciting cereal box slogans. I need another cigarette just thinking about it.

Social interaction? Barely Level 2. They can “combine simple movements to express emotions” and “learn from interactions.” Big deal. So can a goddamn puppy. They’re “sophisticated actors with no real understanding.” So, basically, they’re ready for Hollywood. Or a job in middle management. But don’t expect them to understand why you’re crying in your beer or why a well-timed insult can be a thing of beauty. They don’t get the grit, the messy, beautiful, fucked-up dance of being human.

Vision? Level 3. They can spot things in different lighting, handle “known data variations.” Great. So they can probably work security at a well-lit morgue. But that adaptable, learning-on-the-fly vision we humans have? The kind that lets you spot a familiar face in a crowded bar or see the lie in someone’s eyes? Miles away, pal. Miles. Fumbling for my lighter here. Where the hell did I put it? Ah, there.
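If you wanted the gist of their report card as code — and this is my own napkin sketch, nothing the OECD actually publishes, with made-up names and only the three scores this piece mentions — it'd be something like:

```python
# Napkin sketch of the OECD report card -- NOT official data or structure.
# Levels run 1 (can it fog a mirror?) to 5 (full human). The numbers below
# are just the scores quoted above; the other six scales get no figures here.
REPORTED_LEVELS = {
    "language": 3,            # fluent, but "confidently stating complete nonsense"
    "social_interaction": 2,  # "sophisticated actors with no real understanding"
    "vision": 3,              # handles "known data variations", not much more
}

def ready_for_the_job(capability: str, required_level: int) -> bool:
    """The question to ask the slick salesman: yeah, but what level is it?"""
    return REPORTED_LEVELS.get(capability, 1) >= required_level

# The report says real teaching needs Level 4 or 5 smarts:
print(ready_for_the_job("social_interaction", 4))  # False -- keep the warm bodies
```

That's the whole pitch of the framework in one line: compare the level you've got against the level the job needs, and stop buying glorified paperweights.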

Now, the suits are gonna love this. For them, this report is a “reality check.” When some slick salesman tries to sell them an AI that’ll “revolutionize their operations,” they can finally ask, “Yeah, buddy, but what level is it? Level 2? Get the fuck outta my office.” It helps them see if they’re buying a tool or a glorified paperweight. Stops them from thinking these things are replacements when they’re barely assistants. You want AI to handle customer service? Fine, let it answer the dumb questions. But when a customer’s really pissed, or needs some actual human empathy, you still need a warm body, preferably one that hasn’t had its soul replaced by algorithms. Otherwise, you’re just breeding more rage, and believe me, there’s enough of that to go around. This whiskey is starting to hit the spot. Or maybe it’s just dissolving the edges of this goddamn headache.

And education. Christ. The teachers are apparently “excited and terrified.” Join the club. The report says most real teaching, the kind that sticks, needs Level 4 or 5 smarts. Adapting to different kids, handling the chaos of a classroom – that ain’t for amateurs, digital or otherwise. So, the paradox, as they call it: AI might be able to drill multiplication tables into little Johnny’s head, but it can’t inspire him, can’t connect, can’t show him how to navigate the shithole this world can sometimes be. That’s still on us. So, they’re talking “hybrid model.” Sounds like another way to complicate things. Let the robots handle the boring stuff, the grading and the attendance. Fine by me. Frees up the actual humans to do the messy, important work of, you know, teaching. Maybe even teaching them how to think for themselves, a skill that seems to be going the way of the dodo, or a cheap bottle of bourbon on a Saturday night.

This whole framework thing also points out that you can’t just get one part of the AI smart and expect miracles. For “true robotic intelligence,” you need the whole damn package: vision, manipulation, social smarts, problem-solving. It’s like trying to build the perfect woman – you can’t just have a great pair of legs and a blank stare. Or maybe you can, what do I know? But for a robot to be truly useful, or truly dangerous, it needs all the pieces working together. Right now, it’s like they’ve got a bunch of spare parts and no clue how to assemble them into something coherent.

And here’s the part that gives me a little grim satisfaction: “Social interaction and creativity appear to have particularly steep curves.” Translation: they’re having a hell of a time teaching these machines to be genuinely social or creative. Good. Maybe there’s still room for us fuck-ups, us poets and painters and barroom bards who can conjure something out of nothing, even if that something is just a halfway decent line or a moment of shared, drunken understanding. That’s something their precious algorithms can’t quantify, can’t replicate. It’s too beautifully, terribly human. My glass is empty. That’s a problem easily solved.

So, the OECD made a report card for the AI age. Instead of breathless predictions about the robot apocalypse happening next Tuesday, we got a way to track their progress. Like watching a particularly slow horse race. For businesses, it means smarter bets. For policymakers, it means they can make rules based on reality, not science fiction. For teachers, it’s a map for figuring out how to not become obsolete. Small mercies, I guess.

This whole thing isn’t saying when the machines will finally match us, or surpass us. That’s still anybody’s guess, and frankly, I’ve got better things to wager on, like whether I can make it to the liquor store before it closes. What it does is give everyone a common language. So, when some tech messiah starts preaching about the coming of AGI, you can ask him to show you the damn report card.

It’s a measurement system. In a world moving so fast it makes your head spin and your stomach churn, maybe a little measurement ain’t so bad. Might even stop a few idiots from breaking things too badly. Or maybe not. People are pretty good at breaking things all on their own, no AI required.

Alright, this bottle ain’t gonna finish itself. And tomorrow’s another day of staring into the digital abyss.

Chinaski. Out. Pour me another.


Source: New Study Reveals True AI Capabilities And Job Replacement Risk

Tags: ai machinelearning futureofwork agi humanainteraction