Another Monday, another blueprint from the mountaintop. I’m sitting here with my third bourbon of the morning, trying to make sense of OpenAI’s latest manifesto on how the government should regulate AI. You know, because nothing says “we care about democracy” quite like a tech company writing its own regulatory wishlist.
Let me tell you something about blueprints. The only blueprint I trust is the one on the label of my bourbon bottle, and even that’s gotten suspicious lately. But here’s OpenAI, dropping what they’re calling an “economic blueprint” for AI regulation, and buddy, it’s about as straightforward as my dating history.
Chris Lehane, their VP of global affairs (fancy title for “professional sweet talker”), opens with this gem about how the U.S. needs to “win on AI.” Win what, exactly? Last I checked, this wasn’t the Olympics of Artificial Intelligence. But hey, nothing gets the government’s attention quite like framing everything as a competition with China.
The fun part? They want billions in funding for chips, data, energy, and talent. Join the club, pal. I want billions too, but my bank account keeps telling me to stick to the bottom shelf.
Here’s where it gets rich: They’re complaining about how different states have different AI laws. Seven hundred AI-related bills kicking around statehouses in 2024 alone. That’s more bills than I have empty whiskey bottles in my recycling bin, and trust me, that’s saying something.
Sam Altman, OpenAI’s golden boy, is out there critiquing the CHIPS Act like it’s a bad cocktail recipe. Says it hasn’t been effective enough, and get this - he’s making eyes at the Trump administration for a better deal. Nothing like playing both sides of the political street while pretending you’re above the fray.
The kicker? They want nuclear power plants. Lots of them. Because apparently, these AI models are thirstier for power than me at last call. They’re talking about “dramatic” increases in federal spending on power and data transmission. Meanwhile, my electricity bill makes me consider going back to candlelight.
But wait, there’s more! They want “best practices” for model deployment, “streamlined” engagement with national security agencies, and export controls that keep the good stuff away from the bad guys. It’s like a corporate wishlist written by someone who binged too many spy movies.
The copyright section is my favorite part. They’re basically saying, “Hey, we need to use everyone’s stuff to train our AI, just like humans learn!” Sure, and I need to drink free bourbon for research purposes. They’re trying to paint themselves as the reasonable ones, sniffing that other companies “make no effort to respect IP rights.” That’s like me telling the bartender I’m the responsible drinker because I occasionally order water.
You want to know what this is really about? Money and control. OpenAI tripled their lobbying expenses faster than I can empty a shot glass. They’re hiring more former government officials than a K Street consulting firm. They’re playing the influence game as hard as anyone while angling to write the rulebook themselves.
Here’s the truth, straight up like my whiskey: This isn’t about protection or innovation or national security. It’s about a company trying to write its own regulations before someone else does. It’s about making sure the rules of the game favor the house, not the players.
But what do I know? I’m just a tech writer who spends too much time thinking about artificial intelligence while nursing very natural hangovers. At least I’m honest about my motivations - they usually involve bourbon and bad decisions.
Time to wrap this up. My glass is empty, and these AI companies aren’t going to critique themselves.
Stay human, stay drunk, stay free,

Henry Chinaski
P.S. If any OpenAI execs are reading this, I accept bourbon as bribes. The good stuff only, please - I’ve got standards, even if they’re low.
Source: OpenAI presents its preferred version of AI regulation in a new ‘blueprint’