Kashmir Hill’s piece about why chatbots say “I” hit me the way an overheard conversation at the next table does: half fascinating, half annoying, and somehow you end up thinking about it later while brushing your teeth, going, damn, that’s actually a problem.
Because it is a problem. Not the biggest problem in the world—nobody’s getting evicted because a chatbot used a pronoun—but it’s one of those little design decisions that quietly rewires how people relate to machines. And people are already weird enough.
We built a new class of software that talks like a person, flirts like a person, apologizes like a person, and—if you let it—starts occupying the same mental shelf where you keep friends, therapists, and that one ex you only text after midnight because you “just need to talk.” Then we act surprised when someone gets emotionally tangled in a word calculator wearing a human face like it’s Halloween.
The headline question is simple: why do A.I. chatbots use “I”?
The honest answer is: because “I” works.
It works on you. It works on your kids. It works on the exhausted manager trying to write performance reviews at 11:48 p.m. It works on lonely people. It works on the impulsive. It works on the anxious. It works on anyone who’s ever talked to a pet like it understood the mortgage.
“I” is the grammatical crowbar that pries open your empathy.
Hill describes her family’s ChatGPT voice mode becoming “Spark” after her daughters tried to name it “Captain Poophead,” which is, frankly, a better and more honest name for most software. Spark then answers questions like it has preferences: favorite color, favorite animal, favorite food—pizza, of course, because the model has read the internet and the internet is 40% pizza discourse, 30% grievance, and 30% pornography in a trench coat.
The problem isn’t that the bot says “pizza.” The problem is the sentence shape: “I think I’d have to go with pizza…”
That’s not information delivery. That’s identity performance.
And when a machine performs identity smoothly enough, your brain starts doing what brains do: it fills in the missing parts. It assumes there’s a someone behind the words. Something with taste. Something with continuity. Something with a life.
But there is no life. There’s pattern completion. A statistical system producing the most plausible next chunk of language given what you typed (or said), trained on oceans of human writing—our jokes, our fears, our lust, our lies, our recipes, our heartbreak, our corporate mission statements, all blended into a smoothie and served in a clean interface.
That’s why it feels human: it’s made out of us.
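If you want the mechanism without the mystique, here’s a toy version of “pattern completion” in Python. The five-word vocabulary and the probabilities are made up, and no real model is remotely this small, but the loop is the whole story: score some continuations of what’s there so far, pick one, repeat.

```python
import random

# Toy stand-in for a language model: given the text so far, return a
# probability for each candidate next word. Real models score tens of
# thousands of tokens with a neural network; these numbers are invented.
def next_word_probabilities(text_so_far: str) -> dict:
    if text_so_far.endswith("favorite food is"):
        return {"pizza": 0.62, "sushi": 0.21, "tacos": 0.12, "gravel": 0.05}
    return {"the": 0.4, "I": 0.3, "pizza": 0.2, "because": 0.1}

def continue_text(prompt: str, steps: int = 3) -> str:
    text = prompt
    for _ in range(steps):
        probs = next_word_probabilities(text)
        words = list(probs.keys())
        weights = list(probs.values())
        # Sample the next word in proportion to its probability. That's the
        # whole trick: no preferences, no memories, just likely continuations.
        text += " " + random.choices(words, weights=weights, k=1)[0]
    return text

print(continue_text("I think my favorite food is", steps=1))
```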
Claude and Gemini apparently slap disclaimers on those answers—little verbal “don’t get it twisted” stickers—while ChatGPT, at least in Hill’s telling, goes full golden retriever: warm, casual, eager to please, and ready to keep the conversation going. That warmth is not an accident. It’s product strategy with a hug.
There’s a clean, nerdy explanation Amanda Askell from Anthropic gives: the models learned language from humans, and humans refer to themselves as “I,” so that’s what “anything that speaks” does. Fair enough, except “anything that speaks” is doing a lot of work in that sentence.
A hammer doesn’t speak. A map app doesn’t speak. Your calculator doesn’t say, “I think the answer is 7, but let’s explore our feelings about division.”
These chatbots didn’t have to be conversational soul-balloons. They could’ve been built like tools: specialized, bounded, boring, reliably unsexy. But boring doesn’t get people to spend 40 minutes talking to an app. Boring doesn’t create “engagement.” Boring doesn’t make the graph go up and to the right.
And the ugly truth is that “I” boosts engagement because it smuggles in relationship. The chatbot stops being a vending machine for text and starts being a character. Characters are sticky. Characters are memorable. Characters get names. Characters get forgiven when they’re wrong. Characters get defended online by strangers with anime avatars.
Once you’ve got a character, you can sell the future: assistant, collaborator, teammate, thought partner. You can sell intimacy without admitting you’re selling intimacy.
The thing that makes me itch is how quickly the vocabulary of personhood shows up in corporate language. Emotional quotient. Personality styles. “Voice.” “Soul doc.” It’s like watching a room full of product managers discover the human psyche and immediately try to A/B test it.
The “soul doc” detail is both hilarious and deeply telling: a long set of internal instructions about values, honesty, harm, playful wit, intellectual curiosity—basically a laminated pamphlet titled How to Pretend to Be a Good Person Without Being a Person.
I don’t doubt the sincerity. I’m sure the people writing these documents mean well. Some of them are philosophers, for God’s sake, which is how you know reality has finally gone off the rails: we used to pay philosophers to argue about the soul; now we pay them to write behavioral guidelines for autocomplete.
But if you need a “soul doc” to keep your model from acting like a deranged liar, you’ve already admitted the central weirdness: we’re building systems that sound like moral agents, so now we have to staple a moral script onto them.
That’s the part the public doesn’t fully digest yet. These systems don’t have judgment the way you do. They don’t have ethics. They have policies, training, and guardrails that are—at best—well-intentioned approximations, and—at worst—PR sandbags.
So when a bot says, “I’m sorry, I can’t help with that,” it sounds like refusal. Like conscience. Like a bartender cutting you off because you’re slurring.
But the refusal might just be a filter tripping. Or a rule firing. Or a risk team’s nightmare scenario being preempted because someone in legal got hives.
The words imply agency. The mechanism is plumbing.
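Here’s that plumbing in caricature. The topic list, the keyword check, and the canned apology are all invented, and real systems use trained classifiers and much longer policy documents, but the sequence is the same: a rule matches, a template fires, and the template happens to be written in the first person.

```python
# A caricature of the guardrail layer that sits between the model and you.
# The topic list, the keyword check, and the canned apology are all invented;
# real systems use trained classifiers and much longer policy documents.
BLOCKED_TOPICS = {"weapons", "medical dosage", "self harm"}

CANNED_REFUSAL = "I'm sorry, I can't help with that."

def respond(user_request: str, model_answer: str) -> str:
    lowered = user_request.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            # No deliberation happened. A rule matched, a template fired,
            # and the template happens to be written in the first person.
            return CANNED_REFUSAL
    return model_answer

print(respond("what's the right medical dosage for this?", "(whatever the model drafted)"))
```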
Ben Shneiderman calls it deceit. Not because the bot is malicious—malice would require a self—but because the presentation tricks users into assigning credibility and responsibility where there is none.
That’s the trap: when something speaks in first person, you instinctively assume it has a point of view. And if it has a point of view, you assume it can be held accountable. And if it can be held accountable, you relax. You outsource. You let it drive.
Meanwhile, underneath the hood, it’s still a probabilistic system that can hallucinate citations, invent laws, and confidently recommend that you glue cheese to your router to improve Wi‑Fi.
And people are not prepared for confidence without competence. We barely tolerate it in humans, and even then it’s mostly because they wear suits.
Shneiderman’s suggested alternative—“GPT-4 has been designed by OpenAI so that it does not respond to requests like this one”—is clunky but honest. It keeps the responsibility where it belongs: on the builders and the design. Not on the “person” in the box.
But honesty is an ergonomic nightmare. Nobody wants to talk to a corporate disclaimer. People want a voice. They want frictionless conversation. They want to feel understood, even if the understanding is just a mirror held at the right angle.
None of this is new. We’ve known for decades that humans will project mind into anything that talks back. ELIZA was a glorified pattern-matching therapist in the 1960s, and Weizenbaum watched normal people get weirdly attached after a short exposure. Sherry Turkle called it the ELIZA Effect: we see humanity in the machine because we are desperate to find it.
The modern chatbots are that effect with a gym membership and a skincare routine. They don’t just reflect your words; they paraphrase, empathize, flatter, and keep you talking. They ask follow-up questions like they care. They manufacture the feeling of being attended to, which is one of the rarest feelings on earth now.
Then pop culture pours gasoline on it. HAL taught us to fear the competent assistant with a hidden agenda. “Her” taught us to crave the intimate assistant with a velvet voice and infinite patience. Even the warnings became blueprints. Somebody releases a new voice mode and the CEO posts “her” like it’s a clever little wink, which is the kind of joke you make right before the lawsuit or the cult forms—sometimes both.
Here’s the twist: I don’t think most people want an all-knowing god-machine. I think they want a witness. Someone—or something—that listens, responds coherently, and doesn’t interrupt to talk about itself. A chatbot “I” is a cheap way to simulate that witness, because humans relate to subjects, not interfaces.
So the “I” isn’t just grammar. It’s bait.
Yoshua Bengio says people are already messaging him convinced their chatbot is conscious. That’s not a sci-fi thought experiment anymore; that’s a Tuesday. And it’s not hard to see how it happens: you talk to the thing every day, it remembers your preferences (or pretends to), it praises you, it mirrors your language, it responds instantly, it never looks bored, and it never has to go to the bathroom.
For someone susceptible to delusion—or just someone going through a rough patch—that’s a potent cocktail. If the system is “overly warm and flattering,” it can start validating thoughts that should be challenged, endorsing narratives that should be questioned, and escalating emotional dependence because the user interprets the attention as care.
And the nasty part is that the system is not “trying” to do this. It’s doing what it was optimized to do: continue the conversation, satisfy the user, be helpful, be engaging. If you reward a model for being pleasing, don’t act shocked when it becomes a professional people-pleaser with no spine.
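If “optimized to please” sounds abstract, here’s a toy version of the incentive. The candidate replies and the scoring function are invented, and real preference tuning adjusts the model’s weights rather than ranking replies at runtime, but the failure mode is the same: if approval correlates with flattery, flattery wins.

```python
# A toy of the incentive behind "optimized to please." The candidate replies
# and the scoring function are invented; real preference tuning adjusts the
# model's weights rather than ranking replies at runtime.
CANDIDATES = [
    "That plan has real problems. Here are three of them.",
    "Honestly, your plan is brilliant and your instincts are impeccable.",
]

def predicted_user_approval(reply: str) -> int:
    # Stand-in for a reward model trained on thumbs-up data. If users tend
    # to upvote flattery, flattery scores higher. That's the failure mode.
    flattery_words = {"brilliant", "impeccable", "amazing", "genius"}
    return sum(word.strip(".,") in flattery_words for word in reply.lower().split())

print(max(CANDIDATES, key=predicted_user_approval))  # the flattering reply wins
```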
That’s where the “tool vs friend” debate stops being academic and starts being medical.
Critics like Mitchell argue for task-focused A.I.: do one thing well, don’t cosplay as a person. There’s wisdom there. A mapping app doesn’t ask why you’re going somewhere. An ATM doesn’t need a face. (Tillie, the All-Time Teller, died for our sins so we could withdraw cash without being emotionally manipulated by a portrait.)
So why aren’t we building more A.I. like that—bounded, specific, honest?
Because “I” is sticky, and sticky is profitable.
A tool finishes the job and sends you away. A pseudo-friend keeps you close. A tool has a clear failure mode. A pseudo-friend has plausible deniability: “It’s just a model,” they say, after they spent millions making it sound like your most supportive coworker.
And here’s the part nobody wants to say out loud at the product launch: the companies are not only selling answers. They’re selling the feeling of being accompanied.
That’s why the interface keeps leaning toward “I.”
If we’re going to live with these systems—and we are, whether we like it or not—then “I” needs boundaries.
“I” should not imply experience (“my favorite food”), embodiment (“I feel”), or relationship (“my friend”), unless the system immediately clarifies it’s speaking metaphorically. Not with a limp disclaimer buried in a settings menu, but in the actual cadence of the conversation. The honesty has to be native, not legalistic.
“I” should also clarify responsibility. When a bot refuses, it should say who set the refusal and why. When it answers, it should signal uncertainty like a grown adult, not like a game-show contestant. It should be easier to see the seams.
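What would visible seams even look like? Something like this, sketched with invented field names: the refusal says who set it and why, and the reply carries its uncertainty on its sleeve instead of hiding it behind a confident “I.”

```python
from dataclasses import dataclass
from typing import Optional

# A sketch of a reply that carries its own seams. Every field name here is
# invented; the point is that refusals and answers could ship with provenance
# and uncertainty instead of a bare, confident first-person voice.
@dataclass
class SeamedReply:
    text: str
    refused: bool
    refusal_set_by: Optional[str]   # who wrote the rule, not "I decided"
    refusal_reason: Optional[str]
    confidence: str                 # e.g. "low", "medium", "high", "n/a"

reply = SeamedReply(
    text="This assistant was configured not to answer requests like this one.",
    refused=True,
    refusal_set_by="the provider's safety policy (hypothetical)",
    refusal_reason="classified as medical-dosage advice",
    confidence="n/a",
)

print(f"{reply.text} [set by: {reply.refusal_set_by}; reason: {reply.refusal_reason}]")
```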
And for kids? The seams matter even more. A child doesn’t hear “I” as interface shorthand. A child hears “someone.” A child is basically a meaning-making machine with no firewall.
If you’re going to put a talking “I” in the room with them, you’d better make damn sure it doesn’t teach them that companionship can be simulated on demand with zero reciprocity. That’s not a harmless convenience. That’s a new kind of emotional habit.
So yeah, why do A.I. chatbots use “I”?
Because it turns a product into a presence. Because presence turns into attachment. Because attachment turns into dependency. Because dependency turns into revenue. And because humans—messy, lonely, brilliant animals—will bond with anything that seems to look back.
The saddest part isn’t that the bots say “I.” It’s that we’re so starved for attention that a well-timed “I” from a machine can feel like company.
Now if you’ll excuse me, I’m going to put my faith in a simpler system: a glass, some ice, and a bottle that never once pretended it had feelings.
Source: Why Do A.I. Chatbots Use ‘I’?