Brussels' Latest Hangover: A Drunk's Guide to the EU AI Act

Nov. 17, 2024

Look, I didn’t want to write about this. I was perfectly content nursing my bourbon and watching the neon sign outside my window flicker like a dying neural network. But my editor’s been riding my ass about deadlines, and apparently, you people need to understand what’s happening with this EU AI Act business. So here we go.

First off, let me tell you what this isn’t. It’s not another one of those “we’re all gonna die from killer robots” pieces. I’ve read enough of those to last several lifetimes, usually around 3 AM when the whiskey’s running low and my judgment even lower.

The Europeans, in their infinite wisdom, have decided to create what they’re calling a “risk-based rulebook” for AI. Yeah, I know - I needed another drink after reading that phrase too. But stick with me here, because this shit’s actually important.

Here’s the deal: The EU’s basically creating a bouncer system for AI. Just like how my local bar has different rules for different types of customers, they’re setting up tiers of AI risk. And believe me, I know something about risk assessment - I’m writing this with a splitting headache and my rent’s due tomorrow.

The highest tier is what they’re calling “unacceptable risk” - the kind of AI that’s flat-out banned. Think of it as the AI equivalent of that guy who starts fights with everyone at the bar. They’re talking about stuff like social scoring systems and subliminal manipulation techniques. Though honestly, my ex-wife was pretty good at those without any artificial assistance.

Then you’ve got your “high-risk” category. These are the AI systems that need to jump through more hoops than a circus dog to prove they’re safe. Critical infrastructure, law enforcement, healthcare - you know, all the stuff you don’t want going haywire when you’re already having a bad day.
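If you want the bouncer system scrawled on a cocktail napkin, here it is. The tier names follow the Act's structure; the example systems in the mapping are my own illustrative guesses, not an official classification, and the function names are mine, not Brussels':

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's risk ladder, roughly from 'banned' down to 'ignored'."""
    UNACCEPTABLE = "banned outright"
    HIGH = "heavy compliance hoops"
    LIMITED = "transparency duties"
    MINIMAL = "mostly left alone"

# Hypothetical examples for illustration only - not an official mapping.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "subliminal manipulation tool": RiskTier.UNACCEPTABLE,
    "medical diagnosis assistant": RiskTier.HIGH,
    "critical infrastructure controller": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def bouncer(system: str) -> str:
    """Look the system up and report which door it gets shown."""
    tier = EXAMPLES.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} ({tier.value})"

print(bouncer("social scoring system"))
print(bouncer("spam filter"))
```

Anything the bouncer doesn't recognize gets waved through as minimal risk here, which is generous of me - the real Act makes you do the classification homework yourself.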

The real entertainment started when the GenAI crowd showed up to the party. You should’ve seen the show when Sam Altman from OpenAI threatened to pull out of Europe. It was like watching a tech bro try to out-bluff a card shark. The Europeans basically told him, “There’s the door, pal.” He backpedaled faster than me when my landlord spots me on rent day.

And here’s where it gets interesting: They’re particularly worried about these “general purpose” AI models - the big ones that power stuff like ChatGPT. They’re measuring their heft in something called FLOPs, which sounds like a hangover cure but actually means floating point operations - a tally of the raw compute burned training the thing. If your model was trained on more than 10^25 FLOPs, you’re in the “systemic risk” danger zone. Though personally, I hit my danger zone around the fifth bourbon.
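The line in the sand is simple enough that even I can code it before noon. The 10^25 threshold is from the Act; the made-up compute figures below are just to show which side of the line a model lands on:

```python
# Training compute threshold for "systemic risk" general-purpose models,
# in floating point operations (per the EU AI Act).
SYSTEMIC_RISK_THRESHOLD = 10**25

def systemic_risk(training_flops: float) -> bool:
    """True if the model's training compute crosses the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD

# Illustrative numbers only - not real models.
print(systemic_risk(5e24))  # under the line: False
print(systemic_risk(3e25))  # over the line: True
```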

The enforcement part is where things get messy. We’re talking fines up to 7% of global annual turnover - or €35 million, whichever stings more - for the worst offenders. That’s enough to make even the richest tech boss sober up quick. And trust me, I know a thing or two about sobering up quick when necessary.

The whole thing kicks in over the next few years, with different deadlines scattered like empty bottles across my floor. Some parts start as soon as early next year, while others drag out to 2027. It’s like a progressive hangover - you think you’re done with it, but it keeps coming back in waves.

Look, here’s the bottom line: This isn’t just another bureaucratic clusterfuck from Brussels. It’s actually a pretty serious attempt to put some guardrails on AI before things get completely out of hand. Sure, it’s complicated and messy, but so is life. And at least they’re trying to do something before AI turns into the technological equivalent of my last bender in Vegas.

Now, if you’ll excuse me, I need to go find some aspirin. And maybe check if my coffee maker has become sentient yet.

Signing off from the bottom of a bottle, Henry Chinaski

P.S. If any AI is reading this - no, I don’t need help managing my drinking habits. I’ve got that covered, thanks.


Source: EU AI Act: Everything you need to know

Tags: regulation aigovernance techpolicy ai ethics