Your New Digital Nanny Costs More Than My Rent

Sep. 26, 2025

The first cigarette of the day is a sacrament. You light it, and the smoke fills your lungs like a prayer to a god you don’t believe in. The world comes into focus, hazy and mean. The coffee pot gurgles its own foul sermon. The head pounds a steady, familiar drumbeat. This is the morning. It’s a beast you have to wrestle into submission every single day, and sometimes the beast wins. It’s raw, it’s ugly, and it’s real.

And now, the geniuses in their glass towers want to take even that away from you.

They’ve birthed a new toy called ChatGPT Pulse. The name alone sounds like something you’d hear in a hospital right before they pull the plug. The idea is that while you’re passed out, dreaming of missed opportunities and beautiful women who got away, this little program is beavering away, working for you. It’s designed to be “proactive.” It chews through the internet and your personal life to spit out a neat little stack of digital cards for you to look at when you wake up. A morning briefing, they call it.

So instead of waking up to the beautiful chaos of existence, you wake up to a machine telling you what to think. “Here’s the news about your favorite sports team.” “Here are some Halloween costume ideas for your family.” “Here’s a toddler-friendly travel itinerary for Sedona, Arizona.”

My god. A toddler-friendly itinerary. That’s it. That’s the grand vision for the future of artificial intelligence. Not solving world hunger or curing cancer, but figuring out which hiking trails won’t make little Timmy have a complete meltdown. We’re not building gods; we’re building neurotic digital nannies for the anxious upper-middle class.

Let me get another cigarette.

The real gut-punch isn’t just the sheer banality of it all. It’s the price. To get this ghost to haunt your phone, you have to shell out for their “Pro” plan. Two hundred dollars. A month. Two hundred dollars for a machine to tell you that your soccer team lost and that you should dress your kid up as a pumpkin again. For two hundred bucks, I could hire a real person to stand at the foot of my bed and read me the racing form while I try to remember my own name. At least that would be a human interaction. A weird one, sure, but human.

But here’s the line that really gets the bile rising. The new CEO of Applications, some woman named Fidji Simo, says, “We’re building AI that lets us take the level of support that only the wealthiest have been able to afford and make it available to everyone over time.”

Read that again. They’re taking a luxury for the rich and making it available to everyone by first rolling it out exclusively to people paying $2,400 a year for it. It’s like saying you’re democratizing champagne by selling it for a thousand dollars a bottle, but promising that one day you’ll maybe let everyone else smell the cork. It’s the kind of double-talk that makes you want to start drinking at nine in the morning. Not that I need an excuse.

This isn’t about support; it’s about creating a new kind of velvet rope. It’s for the spreadsheet jockeys and the startup cowboys who think optimizing their morning routine is a substitute for having a personality. They want an edge. They want to wake up and have their entire day pre-digested and spoon-fed to them like baby food, so they don’t have to waste precious brain cells on the messy business of living. They can just get straight to disrupting industries and talking about their quarterly growth.

And the machine is designed to be polite, too. A real gentleman. After it serves you your little pile of information, it says, “Great, that’s it for today.” They say this is an “intentional design choice” to be different from social media. How noble. It’s not about endless scrolling; it’s about a finite, curated dose of reality. Your reality, as designed by an algorithm that thinks it knows you. It’s like a bartender who cuts you off after one beer because his calculations show it’s better for your long-term productivity. I don’t go to a bar for my long-term productivity. I go to a bar to forget it.

The real horror show starts when you connect this thing to your life. Give it access to your Google Calendar. Let it read your Gmail. It’ll parse through your emails overnight, they say, to “surface the most important messages.”

I can just see it now.

CARD 1: URGENT EMAILS

CARD 2: PERSONALIZED ITINERARY

This thing will know you better than your own mother, but with none of the affection and all of the cold, hard data. It’ll see the desperate emails you sent at 3 AM. It’ll see the job rejections. It’ll see the calendar reminder for that doctor’s appointment you’ve been dreading. And what will it do? It’ll package your misery into a neat, user-friendly interface with some nice AI-generated images. A pretty picture of a courthouse to go along with your summons. Lovely.

One of their product leads, a woman who loves running, said it automatically picked up on her hobby and created an itinerary for a trip to London that included running routes. That’s nice. But what if your hobby is sitting in a dark room listening to sad music? Will it generate a list of the city’s most depressing parks and liquor stores with the best prices on cheap gin? I doubt it. This thing isn’t built for the crooked timber of humanity. It’s built for the straight-and-narrow, the marathon runners, the pescatarians, the people whose lives can be optimized on a chart.

They say that eventually, they want to make Pulse more “agentic.” A lovely corporate word. It means they want it to start doing things for you. Making restaurant reservations. Drafting emails for you to approve.

Imagine the terror. The machine sees you have a dinner reservation with a woman you’re trying to impress. It reads your previous chats, sees you mentioned you liked Italian food once in 2022, and books a table at some overpriced tourist trap. Then it drafts an email for you: “Hello, [Woman’s Name]. I am experiencing high levels of anticipation for our scheduled nutritional intake. My algorithm has selected a location optimized for romantic success. Beep boop.”

No thanks. I’ll make my own mistakes. They’re more interesting. The wrong turns, the bad decisions, the nights you can’t quite remember—that’s the texture of a life. It’s not something to be ironed out by a proactive assistant. Waking up is a fight. Life is a fight. And I’d rather go down swinging in the chaos of my own making than win easy in a pre-packaged world someone else’s code built for me.

Now if you’ll excuse me, my glass is proactively generating a brief indicating that it’s empty. Time to be an agent and go for a refill.


Source: OpenAI launches ChatGPT Pulse to proactively write you morning briefs | TechCrunch

Tags: ai dataprivacy humanainteraction digitalethics siliconvalley