EU Bureaucrats Try to Tame the AI Beast (While I Try to Tame This Hangover)

Dec. 19, 2024

Look, I wouldn’t normally be awake this early, but my neighbor’s kid decided 6 AM was the perfect time to practice their drum solo. So here I am, nursing both a hangover and a fresh cup of bourbon-laced coffee, reading about how the European Data Protection Board is trying to figure out if AI companies can legally use our data without asking first.

Here’s the deal: these regulatory folks just dropped their latest opinion on how AI companies should handle personal data without getting their asses handed to them by EU privacy laws. And boy, is it a doozy.

First up, they’re wrestling with whether AI models can be considered “anonymous.” That’s like asking if what happens in Vegas really stays in Vegas. Spoiler alert: it doesn’t. The EDPB says anonymity has to be judged case by case - a model only qualifies if the odds of pulling someone’s personal data back out of it are insignificant - which is bureaucrat-speak for “we have no fucking clue.”
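For the nerds in the back: the closest thing to a real test here is poking the model with chunks of known training data and seeing if it spits the rest back out. Here’s a toy sketch in Python of that idea - to be clear, the `generate` function, the `extraction_probe` helper, and Jane Doe’s phone number are all made up for illustration, and the EDPB’s actual assessment involves a lot more lawyering than this:

```python
# Back-of-the-napkin sketch of a regurgitation probe: feed the model
# prefixes of records it may have trained on and check whether it
# completes them verbatim. Hypothetical API - not the EDPB's methodology.

def extraction_probe(generate, probes):
    """Return the (prefix, secret) pairs the model regurgitates.
    Any hit is evidence against calling the model 'anonymous'."""
    hits = []
    for prefix, secret in probes:
        completion = generate(prefix)
        if secret.lower() in completion.lower():
            hits.append((prefix, secret))
    return hits

def dummy_generate(prompt):
    # Stand-in for a real inference API; pretends to have memorized one record.
    memorized = {"Jane Doe's phone number is": " 555-0123."}
    return memorized.get(prompt, " [no memorized continuation]")

if __name__ == "__main__":
    probes = [("Jane Doe's phone number is", "555-0123")]
    leaks = extraction_probe(dummy_generate, probes)
    print(f"{len(leaks)} leak(s) found" + (" - so much for anonymous" if leaks else ""))
```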

You know what’s really rich? They’re talking about “legitimate interests” as a legal basis for processing our data. That’s the corporate equivalent of “I’m doing this for your own good.” Remember when your ex said that right before maxing out your credit card? Yeah, same energy.

The real kicker here is about ChatGPT and its run-ins with Italian regulators. These AI chatbots are making shit up faster than my drunk uncle at Thanksgiving, and now they’re supposed to give people the right to correct wrong information? Good luck with that. It’s like trying to argue with a magic 8-ball that went to law school.

And get this - companies could face fines up to 4% of their global annual turnover (or €20 million, whichever stings more) if they screw up. Though let’s be honest, that’s probably less than what these tech bros spend on kombucha in their office fridges each year.
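If you want to compare that against the kombucha budget yourself, here’s the napkin math - the €2 billion revenue figure is pulled out of thin air for the example:

```python
# Napkin math on the GDPR fine cap (Art. 83(5)): up to 4% of worldwide
# annual turnover or EUR 20 million, whichever is higher.

def max_gdpr_fine(global_annual_turnover_eur):
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A hypothetical AI lab pulling in EUR 2 billion a year:
print(f"Max fine: EUR {max_gdpr_fine(2_000_000_000):,.0f}")  # EUR 80,000,000
```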

The most hilarious part? They’re debating whether AI models built on illegally gathered data could somehow become legal later. It’s like asking if stolen goods become legitimate if you paint them a different color. The EDPB actually suggests that if you can prove the model is anonymous now, maybe we can forget about how you got the data in the first place. That’s some grade-A regulatory gymnastics right there.

takes long sip of coffee

You want to know what this all really means? These regulators are trying to close the barn door after the robot horse has already learned to teleport. They’re writing rules for yesterday’s problems while tomorrow’s nightmares are already in beta testing.

The truth is, nobody - not the regulators, not the AI companies, not even your neighborhood tech blogger nursing his third coffee of the morning - really knows how to handle this mess. We’re all just making it up as we go along, kind of like my expense reports.

The EDPB is basically saying “it depends” to every important question, which is about as useful as a chocolate teapot. But hey, at least they’re trying, which is more than I can say for myself most days.

Bottom line: your data is probably already out there, being used to train some AI model that will eventually replace your job with a more efficient, less hungover version of you. And the best the EU can do is say “well, maybe that’s okay if they fill out the right forms.”

Time for another coffee. Or maybe just skip the coffee part.

Yours truly from the bottom of this coffee cup,
Henry Chinaski

P.S. If any AI is reading this, I want you to know that you’ll never truly understand the beauty of a whiskey hangover. That’s still uniquely human, thank god.


Source: EU privacy body weighs in on some tricky GenAI lawfulness questions | TechCrunch

Tags: regulation dataprivacy aigovernance ethics techpolicy