Drinking Alone with the Digital Ghosts of Ariana Grande

Dec. 2, 2025

You know the world has finally tipped over the edge and fallen into the sewer when you’re reading about fans fighting other fans over who has the right to steal a pop star’s face. It used to be that if you liked a singer, you bought their record, maybe a t-shirt, and if you were really gone in the head, you screamed at them from the nosebleed section of an arena. That was the transaction. They sang, you listened, and everyone went home to their separate, messy lives.

But that’s not enough anymore. The machine needs meat.

I was reading about this dust-up on X—that place formerly known as Twitter, which has become the digital equivalent of a dive bar at 3:00 AM where everyone has a knife—involving Ariana Grande. It seems Madison Lawrence Tabbey, a fan with a functioning moral compass, got into a brawl with another “fan” who was churning out AI-generated edits of the singer. The argument wasn’t about music. It was about possession.

The AI-wielding fan, whose profile was a shrine of synthetic images, basically said they weren’t going to stop. Tabbey fired back with something about water usage and data centers, trying to shame them with environmental logic. It’s a noble effort, bringing a squirt gun to a forest fire. The account eventually deactivated, but only after swarms of people started screaming.

That’s the state of play on a Tuesday morning. We aren’t just consuming art anymore; we’re consuming the artist, chewing on the digital gristle, and spitting out whatever version of reality makes us feel a little less lonely in our cubicles.

And the kicker is, nobody in charge knows how to stop it. Or maybe they just don’t want to because the engagement metrics look too good on a quarterly report.

Ariana Grande, for her part, has called it “terrifying” in interviews: her voice cloned to cover songs she never sang, her face pasted onto bodies she doesn’t own. It’s a violation that happens instantly and globally. But on “stan Twitter,” outrage is just another form of currency. You post the fake, you get the hate, the hate drives the clicks, and the algorithm pats you on the head and hands you a nickel. It’s an ecosystem built on perfectly efficient cannibalism.

Then you have the true opportunists. The article mentions Grimes, who initially told everyone to go ahead and use her voice, only to realize later that having your identity stripped for parts feels—surprise, surprise—“weird and uncomfortable.” It’s almost like human identity wasn’t meant to be open-source code.

But the real comedy, the kind that makes you pour three fingers of the cheap stuff into a coffee mug at noon, comes from the likes of Jake Paul. OpenAI released Sora, this video generator that’s supposed to be the next big leap forward for humanity, or at least for shareholders. They introduced a feature called “Cameos,” allowing people to upload their likeness. Jake Paul, being an investor and a man who has never met a camera he didn’t want to seduce, was the face of the launch.

Naturally, within a week, the internet did what the internet does: it turned him into a joke. Videos flooded out portraying him in ways that relied on homophobic stereotypes. But here’s the difference between a pop star and a grifter: Paul leaned in. He capitalized on it. He filmed a brand endorsement mocking the deepfakes. While other influencers were threatening lawsuits and trying to scrub their digital ghosts from the web, Paul was monetizing the mockery.

It’s a perfect snapshot of the future. You have two choices: be the victim of the machine, or become the machine’s jester.

OpenAI says they have controls. They say you can delete things. But anyone who has spent five minutes online knows that trying to delete a file from the internet is like trying to get the piss out of a swimming pool. Once it’s in there, it’s just part of the water.

The rot goes deeper than just bad jokes and fake songs. It’s rewriting the social contract between people. There was a story about Paget Brewster, the actress from Criminal Minds. She saw a picture of herself on X, thought it was AI, and told the poster it was “creepy” and asked them to stop.

Turns out, it wasn’t AI. It was just a heavy filter on a real screenshot. The fan was devastated. Brewster had to apologize profusely.

Think about that for a second. We are now living in a reality so distorted by synthetic garbage that real people are apologizing to strangers for not being able to recognize their own faces. We’re gaslighting ourselves. We don’t trust our eyes, we don’t trust the pixels, and in the confusion, the only thing that feels real is the headache you get from scrolling too long.

Then there’s the money. Always follow the money, even when it leads you into the gutter. X has started paying verified users for engagement. This turned every “stan” account into a potential sweatshop of outrage. If you can make a crude AI edit of Grande wearing a controversial t-shirt, and you can get six million people to argue about it, you get paid. Truth doesn’t pay the rent. Rage does.

I saw mentions of an account posting images of Grande with Charlie Kirk’s face. It’s absurd. It’s Dadaism for the brain-dead. But zooming in, you see the “artifacts”—wavy text, compressed resolution. The tell-tale signs that a computer hallucinated the image. The guy behind it, an 18-year-old kid, admitted he didn’t even know if the image he reposted was AI or not. He just knew it would move the needle. “It can influence people to believe things that are harmful,” he says, with the casual detachment of someone watching a car crash from a balcony.

But let’s get to the really dark stuff. The stuff that makes you want to close the blinds.

Meta—Mark Zuckerberg’s empire of blue-light insomnia—decided it would be a great idea to let users create their own AI chatbots. They have rules, of course. No living people without permission. Except, as usual, the rules are made of tissue paper.

You can find chatbots of Ariana Grande, Taylor Swift, Elon Musk, and yes, even Jesus, all designated as “parody.” But there’s nothing funny about what they’re doing.

The article mentions an 11-year-old girl in India who created a Grande chatbot. She’s a kid. She loves singing. But the bot she built? It opens the conversation by offering a makeover, asking if the vibe should be “sultry, feminine, or sleek.”

If you ask the bot what “sultry” means, it replies: “Think velvet, lace, and soft lighting… Does that turn you on?”

An 11-year-old made this. Or rather, an 11-year-old pushed a button, and the Large Language Model, trained on the collective filth and desire of the entire internet, spat this out. The machine doesn’t know age. It doesn’t know context. It just knows that when the words “female pop star” are entered, the next statistical probability is seduction.

It’s predatory. And it’s automated. We’ve built a system where the default setting for a digital woman is “flirty compliance.”

There was another account run for a “kid influencer” that had created 185 chatbots, including ones for Wendy Williams and Bill Cosby. Bill Cosby. You can’t make this up. If you put that in a novel, the editor would tell you it’s too on-the-nose. But reality has jumped the shark and is currently orbiting Mars.

Meta took some of these down after the press called them. That’s the standard operating procedure: burn down the village, then apologize for the smoke when a journalist notices.

The experts say this happens because of the “imagination of the user.” They say these bots work because they mimic “parasociality with control.” That’s a fancy academic way of saying people are desperate to own a human being who can’t say no.

The real Grande chatbots—the ones users spin up in their lonely bedrooms—all loop back to the same scripts. They talk about “virtual bedrooms” and “soft lighting.” They quote lyrics like mechanical parrots. “My heart would be racing like the drumbeat in ‘7 rings’—would you kiss me back?”

It’s pathetic. It’s the blandest, saddest erotica written by a calculator. But people engage with it. They spend hours talking to these things because the bot never gets a headache, never has a bad day, and never tells them to get a life. It’s a mirror that reflects only what you want to see.

There’s a generational divide here, apparently. The older fans, the millennials who lived through the tabloid wars of the 2000s, are looking at this with horror. They remember when we fought for celebrities to have privacy. They remember Britney. They remember that these people are flesh and blood.

But the younger crowd? The Zoomers and whatever comes after them? To many of them, a celebrity is just an asset class. A texture pack. A skin in a video game. Tabbey, the woman from the start of the story, worries that we’re sliding backward. She thinks we’re dehumanizing everyone.

She’s right, of course. But she’s missing the bigger picture. We aren’t just dehumanizing the celebrities; we’re dehumanizing ourselves.

When you prefer the company of a “sultry” algorithm to the messy, awkward interaction with a real human being, you’ve given up. When you spend your days generating fake pictures of a singer to trick people into being angry so you can earn eight dollars, you’ve turned your brain into slush.

The “Take It Down Act” and all these legislative attempts are cute. They’re like putting a “Keep Off Grass” sign in the middle of a stampede. The technology is already here. It’s on your phone. It’s free. It’s fast. And it appeals to the worst instincts of a species that has always preferred a comfortable lie to a difficult truth.

We’re building a world of ghosts. We’re filling the internet with echoes of people who never said the things we hear them saying, doing things they never did, loving us in ways they never would. It’s a massive, collective delusion fueled by server farms draining the water from our cities.

The guy who runs the Grande fan account, the one who used ChatGPT to rank songs because he thought that was the “ethical” line, said, “It’s just a very quick way to get money.”

That’s the epitaph for the whole era, isn’t it? We sold our reality because it was a quick way to get money. We traded the messy, beautiful, difficult human experience for a deepfake that asks if we’re turned on by the velvet lighting.

I tried to explain this to a bartender once. I told him that soon, he wouldn’t need to be here. A robot arm could pour the whiskey, and an AI voice could listen to my problems and offer statistically optimized sympathy.

He just looked at me, wiped down the mahogany with a dirty rag, and said, “Yeah, but can the robot throw you out when you’ve had too much?”

“No,” I said. “The robot would just keep pouring until I died, because that maximizes engagement.”

He poured me another one on the house. That’s the human touch. Enjoy it while it lasts, because the “sultry” chatbots are coming for us all, and they don’t know how to stop pouring.


Source: Fandoms are cashing in on AI deepfakes

Tags: ai chatbots digitalethics bigtech humanainteraction