Look, I’ve been staring at this bottle of Wild Turkey for the past hour trying to make sense of OpenAI’s latest announcement. Maybe the bourbon will help me understand why a company would publicly admit their new toy might enable “illegal activity” and then release it anyway. But hell, even after six fingers of whiskey, this one’s hard to swallow.
So here’s the deal: OpenAI just announced they’re releasing Sora, their fancy video generation AI, to “most countries” - except Europe and the UK. Because nothing says “we’re totally confident in our product” like excluding an entire continent.
Sam Altman, tech’s favorite sweater-wearing messiah, hosted a livestream about it. I watched the whole damn thing, which explains why I’m on my second bottle. The highlight? Their product lead Rohan Sahai casually dropping that they “want to prevent illegal activity of Sora, but we also want to balance that with creative expression.”
Let that sink in for a minute. That’s like your local bartender saying, “We know this moonshine might make you go blind, but we really want to balance that with your desire to get absolutely hammered.”
The real kicker? They won’t even tell us what kind of “illegal activity” they’re worried about. It’s like that mysterious stain on my apartment ceiling - I know it’s probably bad news, but I’m too afraid to investigate further.
But hey, I’ve been covering tech long enough to make some educated guesses. We’re talking about deepfakes that could make your grandmother appear to be breakdancing in Times Square. Or worse, politicians saying things they never said - although honestly, the real ones are already doing a bang-up job of spewing nonsense without AI’s help.
And let’s not forget about the copyright clusterfuck. Remember when artists were pitching fits about AI art generators stealing their style? Well, now we’re talking about video. Can’t wait to see the lawsuits when AI starts cranking out fake Scorsese films or bootleg Beyoncé concerts.
The truly wild part is how they’re handling Europe. They’re basically telling an entire continent, “Sorry, your laws are too strict for our taste.” It’s like when my favorite dive bar installed security cameras - suddenly all the fun people started drinking elsewhere.
Here’s what gets me though: OpenAI knows there are risks. They’re not even trying to hide it. Their product chief Kevin Weil admitted on Reddit that they needed to “get safety/impersonation/other things right.” You know who else talks like that? Arms dealers. At least they’re usually more discreet about it.
The whole thing reminds me of that time I dated a circus fire-eater who insisted on practicing in my apartment. Sure, she said she had safety measures in place, but my eyebrows took three months to grow back.
And speaking of safety measures, let’s talk about these “guardrails” they’re supposedly putting in place. Based on my experience with ChatGPT, these guardrails are about as effective as trying to stop a drunk from drunk-dialing their ex at 3 AM. Sure, you can try, but where there’s a will (and enough creative prompting), there’s a way.
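And because I apparently can’t help myself, here’s a bar-napkin sketch of why this kind of filtering leaks. To be clear, this is a toy blocklist I made up for illustration - nothing here resembles whatever OpenAI actually runs - but the cat-and-mouse dynamic is the same:

```python
# Toy content filter: a naive keyword blocklist, the kind of "guardrail"
# that creative prompting walks right around. Purely illustrative --
# not anyone's actual moderation stack.
BLOCKLIST = {"deepfake", "impersonate", "fake video"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt passes the blocklist, False if it's blocked."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

# The filter catches the obvious ask...
print(naive_filter("make a deepfake of a politician"))  # False -> blocked

# ...and waves through the same request in a trench coat.
print(naive_filter("render a photoreal clip of a senator saying words he never said"))  # True -> allowed
```

Real moderation systems use trained classifiers rather than string matching, but the principle holds: the filter knows the words, not the intent.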
The truth is, we’re watching a high-stakes game of chicken between tech companies and reality. They’re racing to release increasingly powerful AI tools while simultaneously admitting they can’t fully control them. It’s like giving a teenager keys to a Ferrari and saying, “Try not to crash it, but if you do, we warned you.”
What’s really happening here is simple: OpenAI is choosing profit over prudence, speed over safety, and hoping like hell they can patch the holes in their boat while it’s already at sea. It’s the same old story, just with fancier technology and better PR.
Look, I’m not saying we should halt all AI progress. Hell, I’m typing this on an AI-assisted keyboard that’s desperately trying to correct my bourbon-induced typos. But maybe, just maybe, when a company admits its own product might enable “illegal activity,” that’s a sign to pump the brakes a bit.
But what do I know? I’m just a tech blogger who’s probably had too much whiskey and is still trying to figure out why my smart thermostat thinks 85 degrees is an appropriate temperature for sleeping.
The way I see it, we’re all just beta testers in OpenAI’s grand experiment. The only difference is, this time they’re telling us upfront that things might go horribly wrong. I guess that’s what passes for corporate responsibility these days.
Time for another drink. At least when bourbon goes wrong, all you get is a hangover.
Signing off from the back booth at O’Malley’s, where the only AI is the automatic paper towel dispenser that never works,

Henry Chinaski
P.S. If you see any videos of me dancing on tables next week, just assume it’s a Sora-generated deepfake. The real me would never be caught dead dancing sober.
Source: OpenAI Concerned About Illegal Activity on Sora, Releases It Anyway