Look, I’ve seen a lot of stupid shit come out of the tech world. I’ve watched grown adults throw millions at startups that deliver lukewarm salads in boxes. I’ve seen CEOs wax poetic about disrupting industries that didn’t need disrupting. But OpenAI’s Sora 2 being weaponized to create fat-shaming videos? That’s a new low, even for an industry that regularly limbo-dances under the bar of human decency.
Ted Sarandos, the Netflix CEO, is out there selling AI like it’s some kind of magical storytelling elixir. “Tell stories better, faster, and in new ways,” he says. And you know what? He’s right. People ARE telling stories. Stories like “watch this fat woman break a bridge” and “Black woman falls through KFC floor.” Real Hemingway stuff. Someone get these auteurs to Sundance.
The thing that gets me isn’t even that people are making this garbage. Humans have always been cruel, petty creatures. We’ve been finding creative ways to be terrible to each other since we figured out we could draw stick figures on cave walls. What’s different now is the democratization of cruelty. You used to need some actual skill to make convincing fake videos. You needed software knowledge, editing chops, time, dedication to your craft of being a piece of shit.
Now? Now any mouth-breather with an internet connection can generate photorealistic content showing people they’ve decided to hate. It’s like giving everyone a printing press, except instead of religious pamphlets or revolutionary manifestos, we’re mass-producing digital snuff films of someone’s dignity.
The videos themselves are predictable in their awfulness. A woman bungee jumping, bridge collapses. Nearly a million views. Delivery drivers falling through porches. People “swelling up” after eating. It’s the same tired stereotypes that have been around forever, just rendered in crisp, AI-generated 4K. And here’s the real mindfuck: people can’t tell it’s fake.
That’s the part that should terrify you more than anything else. We’ve crossed into a reality where the line between real and generated is so blurry that your aunt is sharing these videos on Facebook thinking she’s watching actual news footage. “Can you believe this happened?” No, Aunt Carol, I can’t believe it happened, because it DIDN’T FUCKING HAPPEN.
OpenAI, naturally, has been about as vocal on this issue as a mime at a funeral. Complete silence. Which makes sense, I guess. What are they supposed to say? “Oops, our bad, we built a hate-content machine and slapped some guardrails on it that work about as well as a screen door on a submarine”?
Because that’s what this proves. All those safety measures, all those content policies, all that hand-wringing about responsible AI deployment? It’s theater. Security theater for the algorithm age. The guardrails exist so OpenAI can point to them when things go sideways and say, “Well, we TRIED.” But trying doesn’t mean shit when your tool is being used to create an assembly line of bigotry.
Here’s what really pisses me off about all this: the same people who built these tools are the ones who spent years telling us AI would unlock human creativity, that it would democratize art and storytelling. And they were right! It HAS democratized creativity. It’s just that a significant chunk of human creativity, as it turns out, is dedicated to finding new and innovative ways to be absolute garbage to each other.
We gave everyone a superpower, and they’re using it to make fun of fat people. That’s the species we are. That’s what we do when you hand us godlike creative tools. Not paint digital Sistine Chapels or compose symphonies. Nope. We make videos of people falling through floors because of their weight or race, then we share them for likes.
And the viral nature of it all just makes it worse. One asshole makes a hateful video, it gets a million views, and suddenly a dozen more assholes are going, “Hey, I could do that!” It’s a race to the bottom, except the bottom keeps getting lower. It’s bottoms all the way down.
The people defending this garbage will trot out the usual arguments. “It’s just dark humor.” “People are too sensitive.” “Freedom of speech.” But here’s the thing about punching down: it’s not comedy, it’s just cruelty with a punchline. There’s a difference between satire that challenges power and content that just shits on people who are already dealing with society’s nonsense.
And before someone comes at me with the “but you’re cynical and dark too” argument, yeah, I am. But there’s a difference between being cynical about systems and institutions and being cruel to individuals for characteristics they can’t control. I’ll happily tear apart a corporation or call bullshit on tech messiahs. But making fun of someone’s body or race? That’s not cynicism, that’s just being a dick.
What really gets me is that this was all predictable. Everyone saw this coming except, apparently, the people building the tools. Or maybe they saw it coming and just didn’t care because the revenue projections looked good. “Sure, people might use this to harass and dehumanize others, but think of the shareholder value!”
We’re watching in real-time as powerful creative tools get hijacked for the worst possible uses, and the companies responsible are nowhere to be found. They’re too busy talking about the next feature, the next model, the next way to “empower” users. Meanwhile, their current empowerment project is being used to create a flood of content that would make even the worst corners of the old internet blush.
The regulators are apparently “starting to pay attention,” which is comforting in the same way it’s comforting to know the fire department has been notified while your house burns down. By the time any meaningful regulation happens, we’ll have moved on to the next crisis, the next way AI can be weaponized for casual cruelty.
So here we are. We’ve got tools that can generate photorealistic videos of anything we can imagine, and we’re using them to imagine new ways to be terrible to each other. The technology is incredible. The applications are depressing. And the silence from the people who built this stuff is deafening.
Welcome to the future. It’s creative, it’s accessible, and it’s exactly as shitty as you’d expect when you give humans unlimited power with zero accountability. The machines aren’t the problem. We are. We always have been. The only difference is now we’ve got better tools to express it.
Source: OpenAI’s Sora 2 videos spark alarming trend of AI-generated fat-shaming content