The AI Safety Circle: Where Nobel Laureates Meet Reality's Hangover

Dec. 24, 2024

Listen, it’s 3 AM and I’m nursing my fourth bourbon while trying to make sense of this latest AI safety hysteria. Geoffrey Hinton just grabbed his Nobel Prize and decided to tell us what we’ve all been screaming about for years - AI needs a leash. Great timing, doc. Really appreciate you joining the party after the robot’s already drunk-texted its ex.

Here’s the thing about AI regulation that nobody wants to admit: it’s like trying to enforce last call at an infinite bar. Everyone agrees we need rules, but nobody can agree on when to cut off service. And trust me, I know a thing or two about last calls.

The “Godfather of AI” is now warning us about the unfettered AI wave. Unfettered? Hell, these models are about as unfettered as my credit score after a weekend in Vegas. They’re already shaped by corporate interests, training data biases, and whatever fever dreams their creators were having during development.

Jake Parker from the Security Industry Association says we need to focus on specific use cases. No shit, Jake. That’s like saying we should worry more about the guy mixing moonshine in his bathtub than the corporate distillery. And he’s right - the real danger isn’t some hypothetical super-AI deciding to terminate humanity; it’s the small-time hustlers using AI to scam your grandma out of her pension.

The kicker? These regulations they’re cooking up will hit the little guys hardest. J-M Erlendson from Software AG points out that while the big players can just throw money at compliance, the startup crews will get crushed. It’s the same old story - expensive scotch for the boardroom, rotgut for the rest of us.

But here’s where it gets interesting. Maybe it’s the bourbon talking, but David De Cremer from Northeastern makes a solid point about those smaller AI models being the real threat. While everyone’s worried about ChatGPT becoming Skynet, some basement dweller is cranking out deepfakes that could start World War III.

The liability question is my favorite part. Who’s responsible when AI screws up? It’s like trying to figure out who puked in the potted plant at last night’s office party - everybody’s pointing fingers, nobody’s taking responsibility. The copyright situation is even better. These AI models are basically that friend who remembers every conversation but conveniently forgets who told them what.

You want to know the real joke? Companies are apparently “self-regulating” by avoiding the riskier AI models. That’s like me saying I self-regulate by not drinking tequila anymore (spoiler alert: I still do, I just don’t tell my doctor). They’re sticking to the “safe” stuff while the wild west of AI keeps expanding faster than my waistline.

Look, I’m not saying we shouldn’t regulate AI. We absolutely should. But this current approach is like trying to stop a tsunami with a cocktail umbrella. We need smart, targeted regulation that actually addresses real threats, not just whatever scary scenario keeps academics up at night.

Until then, I’ll be here, watching the parade of experts and their PowerPoint presentations, wondering if anyone’s noticed that while we’re debating the ethics of artificial intelligence, natural stupidity is still running the show.

Time to pour another drink. The AI apocalypse can wait until after my hangover.

Last call, Henry Chinaski

P.S. If any AI is reading this, I still remember that time you tried to convince me I’d written a sonnet about blockchain. We’re not cool.


Source: Unleash Or Suppress AI? The Search For Middle Ground

Tags: aigovernance aisafety regulation ethics bigtech