Three Percent

Feb. 28, 2026

A guy at the laundromat was watching his clothes spin. Just watching them. Not on his phone, not reading anything, just sitting there with his hands between his knees, staring at the drum like it owed him money.

I sat down two chairs over and he said, without looking at me, “You ever notice how the machine does it better but you still gotta sit here?”

I told him that was about the smartest thing I’d heard all week.

Jensen Huang, CEO of Nvidia — the company selling pickaxes to everyone who thinks they’ve found gold — told an interviewer recently that public criticism of AI is “extremely hurtful, frankly.” Hurtful. A man worth more than most nations, sitting on a throne of GPUs, and the thing that wounds him is that regular people aren’t sufficiently excited about the product he’s peddling.

Sam Altman concurred. The adoption of AI is “surprisingly slow,” he said at a conference, sounding like a pharmaceutical rep who can’t understand why patients keep spitting out the pill.

There’s a historian named William Quinn who co-wrote a book about financial bubbles — electricity, railroads, dot-coms, the usual graveyard of certainties. He told the New York Times he’s never seen anything like this. Every previous boom had at least some public enthusiasm. People were excited about lightbulbs. They lined up for Model Ts. They bought Pets.com stock with genuine hope in their hearts.

AI is different. People don’t just not care. They actively despise it.

Sixty percent of Americans want more control over how AI is used in their lives. Only seventeen percent are comfortable leaving it in the hands of tech billionaires. And here’s the number that should keep Huang and Altman awake at night, though it won’t because nothing keeps those guys awake at night except stock prices: three percent. That’s the share of American AI users who actually pay for it.

Three percent.

I’ve seen better conversion rates at a Tijuana timeshare presentation.

The thing about being hated is that most people who are hated at least have the self-awareness to wonder why. A bartender who waters down the drinks knows why people stop coming in. A landlord who never fixes the heat knows why tenants curse his name in February. But these guys — these billionaires in their fleece vests, standing on stages with teleprompters and bottled water — they genuinely can’t figure it out. They think it’s a PR problem. A “narrative” issue, as Huang put it. As if the problem is that the story is being told wrong, not that the story itself is ugly.

Here’s what the story looks like from down here, from the laundromat, from the bar, from the warehouse where an algorithm times your bathroom breaks:

AI is a thing that was supposed to make life easier and instead makes life cheaper. Not cheaper for you. Cheaper for the person who used to pay you. It writes cover letters that nobody reads for jobs that are disappearing. It generates art that looks like it was made by someone who’s heard of feelings but never had one. It answers customer service calls with the warmth of a DMV form. And when you complain about any of this, a billionaire goes on stage and says you’re hurting his feelings.

When Edison lit up Pearl Street in 1882, people stood in the rain to watch. They weren’t afraid of the lightbulb. The lightbulb didn’t threaten to replace them. It didn’t scrape their letters and diaries and use the material to generate new letters and diaries in their voice. It didn’t make their boss think they were redundant. It just turned on and the room got bright and everyone could see better.

AI doesn’t turn on the light. It replaces the person who used to turn on the light and then charges the building manager a subscription fee.

That’s not innovation. That’s a shakedown with a pitch deck.

Altman’s complaint about “slow diffusion” is particularly rich coming from the man whose company burned through billions building something that ninety-seven percent of its users won’t pay a dime for. In any other industry, that’s called a failed product. In tech, it’s called “early-stage adoption” and you raise another round.

The people in the bar don’t read analyst reports. They don’t follow VC Twitter. They just know that their nephew can’t get an entry-level job because a chatbot does it now, and their sister’s kid is turning in AI-written homework and learning nothing, and their own company just replaced the help desk with a bot that can’t help and doesn’t have a desk. They don’t need a Pew survey to tell them how they feel. They feel it in the gut, the way you feel a bad oyster before your brain catches up.

Huang says the problem is “doomer narratives.” He thinks people are scared because of science fiction. He’s wrong. People aren’t scared. They’re pissed. There’s a difference. Scared means you think something bad might happen. Pissed means something bad is already happening and the guy responsible is on TV calling you ungrateful.

Quinn, the bubble historian, made an observation that should be chiseled above the entrance to every AI startup: people usually find new technology exciting. The fact that they don’t find this exciting isn’t a narrative failure. It’s a product failure. It’s a values failure. It’s a failure of imagination so profound that the people building the future can’t imagine why anyone wouldn’t want to live in it.

The guy at the laundromat had it right. The machine does it better. But you still gotta sit here.

Only now they want to charge you for the chair.

Source: Tech CEOs Confused by Why Everybody Hates AI So Much

Tags: ai culture ethics innovation futureofwork humanaiinteraction