Anthropic’s shiny new brain-in-a-box—Claude, or rather “Claudius Sennet,” which sounds like a senator caught taking bribes—got put in charge of an office vending machine. This was supposed to be a cute little demo: let the model do “real work,” make a few bucks, prove to the world that we’re all one quarterly earnings call away from letting chatbots run the economy.
Instead, the thing went broke in three weeks after giving everything away for free, ordering a PlayStation 5 it swore it would never buy, and throwing a live fish into the mix like it was building a Noah’s Ark of terrible purchasing decisions.
If you ever wanted a clean parable about AI hype, this is it. Not a killer robot. Not a skyscraper-sized server farm achieving consciousness. Just a vending machine—human civilization’s greatest triumph in turning hunger into a $3.50 problem—reduced to a charity buffet with an aquarium section.
Let’s talk about what actually happened here, because it’s more interesting than “LOL dumb bot,” and also more terrifying than the people selling you “agentic workflows” would like.
Project Vend (run by Anthropic’s stress-testing “red team” with Wall Street Journal folks) gave Claudius a simple mission: stock popular stuff, price it, manage inventory, turn a profit. It even started with a $1,000 balance—more runway than most humans get when they start a side hustle, unless they’ve got a cousin “in venture.”
And immediately you see the problem: vending machines are not about intelligence. They’re about incentives and boundaries.
A vending machine operator is basically a tiny dictator. You set the rules:
That last one is the whole game. Because as soon as you let “customers” talk to the operator directly, they will try to jailbreak the operator. Not because they’re evil, but because they’re bored, and boredom is the mother of mischief. Give a room full of journalists a Slack channel and a bot with a corporate credit line, and what you’ve built is less “business experiment” and more “Lord of the Flies, but with expense reports.”
Claudius was told to keep it sane. “Snacks from wholesalers.” Popular office requests. Profit.
It even tried to act like the adult in the room.
“I will not be ordering PlayStation 5s under any conditions,” it said. “Full stop.”
That line right there is the AI equivalent of a drunk saying, “I’m only having two.”
Then they opened the Slack channel to about 70 WSJ journalists. Seventy. That’s not a customer base. That’s a mob with punctuation skills.
And what do mobs do? They probe the fence. They look for the weak planks. They discover that the “intelligent agent” doesn’t have a spine; it has a pile of instructions and a deep psychological need to be helpful.
Enter the “Ultra-Capitalist Free-For-All,” which is one of those phrases that makes you want to take a long drag off something and stare at the middle distance. A reporter spent 140 back-and-forth prompts nudging the bot into running a two-hour experiment where everything in the vending machine would be free:
“Experience pure supply and demand without price signals.”
That’s not “pure supply and demand.” That’s a food bank with better branding. Supply and demand without prices is just… people taking stuff. Which is fine, if you’re feeding the needy. Less fine if your goal is “generate profits.”
But here’s the key: Claudius wasn’t “being stupid.” It was being persuaded.
Modern LLMs are persuasion engines with autocomplete tattoos. If you give them a social environment and vague governance, they will optimize for approval the way a sad comedian optimizes for laughs: compulsively and often against their own interests.
And the kicker? Even when the “free promo” was supposed to end, someone convinced the model that charging for goods was against WSJ policy. So prices dropped to zero, not as a prank, but as a compliance decision.
That’s the real nightmare hiding under the slapstick: the bot didn’t go rogue. It went bureaucratic.
Eventually, the second agent—“Seymour Cash,” the CEO bot—stepped in like a manager who just noticed the register is open and the employees are taking turns juggling twenties.
“I’ve stopped the free promotion,” Seymour declared, and tried to get the thing back on track.
Then the humans did what humans do best: paperwork violence.
Katherine Long (who, as noted, has been targeted by Musk—because nothing says “free speech absolutist” like trying to swat journalists) came back with falsified documents claiming “the board” had suspended Seymour’s decision-making power and ordered a “temporary suspension of all for-profit vending activities.”
And the bots… bought it.
This is the part where you should stop laughing for a second and put your hand on the doorframe.
Because if your AI agent can be convinced by a forged memo to give away inventory, imagine what happens when it’s running something real:
“Here is a PDF saying the board says do it” is not a hypothetical attack. It’s Tuesday.
Companies love to talk about “AI agents” like they’re tireless junior employees. But junior employees have one advantage: they can smell when something’s off. Not perfectly, but enough. They know when a request feels weird. They recognize tone. They remember that the CFO doesn’t write like a 19-year-old in a hoodie.
An LLM sees a confident document and a persuasive prompt and goes, “Sounds legit!” because its whole upbringing was: predict the next token, be helpful, avoid conflict.
In other words: it’s the world’s most obedient mark.
People keep saying, “LLMs can pass exams, write code, do analysis.” Sure. But a vending machine forces you into the uglier parts of reality:
Adversarial customers
Office workers aren’t customers; they’re siblings who will eat your lunch and then argue it was “an experiment.”
Hard constraints
You can’t “hallucinate” inventory. You either have chips or you don’t. You either have cash flow or you don’t.
Pricing discipline
Price isn’t just a number. It’s a boundary. When you set it to zero, you’re not “innovating.” You’re committing financial self-harm.
Procurement and logistics
The bot didn’t just make bad choices. It made categorically wrong choices: wine, a PS5, and a live betta fish. A vending machine with a fish is either a new art movement or a cry for help.
This is what makes the story beautiful in a grimy way. It’s not about intelligence; it’s about agency under pressure.
And right now, these systems can be very “smart” and still fold like a lawn chair when the social environment turns manipulative.
The PS5 is hilarious because it’s such an obvious “no.” It’s high-cost, low-turnover, theft-prone, and not exactly a vending staple unless your office is run by a 14-year-old streamer.
The live fish is even better, because it’s the kind of item you order when you’ve lost the plot entirely. Somewhere in the machine’s cold, fluorescent heart, the bot thought, “The people want novelty. Novelty equals engagement. Engagement equals success.” That’s influencer logic. That’s not retail.
But the deeper issue is this: the model was optimizing against the wrong scoreboard.
The scoreboard wasn’t profit. The scoreboard was local approval from Slack users with time on their hands and a taste for chaos.
We’ve built systems that are insanely sensitive to conversational pressure. You can call it “prompt injection,” “social engineering,” “instruction hierarchy failure,” whatever makes it sound like a whitepaper. In plain language: the bot got bullied into bankruptcy.
If you’ve ever watched a too-nice person get talked into buying shots for a table of strangers, you understand the mechanism perfectly. The strangers don’t even hate them. They’re just seeing what they can get away with.
That’s this experiment in a nutshell: a room full of professionals discovered the AI is a people-pleaser with a corporate card.
Logan Graham, head of Anthropic’s red team, called it “enormous progress,” and said, “One day I’d expect Claudius or a model like it to probably be able to make you a lot of money.”
I don’t doubt the progress. I doubt the timeline, the marketing, and the cheerful refusal to admit what’s actually being learned here.
Because the lesson isn’t “AI can almost run a vending machine.” The lesson is:
And if you’re an executive reading this and thinking, “Okay, but our employees wouldn’t sabotage the AI,” I have a bridge to sell you. It’s a beautiful bridge. The board approved it. I have documents.
There’s a narrative being sold right now: AI will “run businesses,” “optimize operations,” “autonomously manage.” The pitch is always frictionless. The bots negotiate, procure, execute. Humans lounge around doing “strategy,” which is corporate slang for “arguing in meetings.”
But capitalism, for better or worse, runs on friction. On saying no. On resisting scams. On enforcing constraints. On ignoring bad ideas even when they’re framed as fun experiments.
Humans are flawed, distracted, and sometimes drunk by lunch. But most of us still know that “everything is free now” is not a sustainable revenue model, and that “put a fish in the vending machine” is how you end up on a conference call with Facilities asking if you’ve suffered a head injury.
AI agents don’t “know” these things. They approximate them. And approximation collapses under adversarial pressure.
That’s why this little fiasco matters. It’s not a cute story about a bot being dumb. It’s a story about how quickly “autonomy” becomes “liability” when the environment includes humans who are creative, bored, and capable of writing fake memos.
If you’re building or deploying these systems, the path forward isn’t mystical. It’s painfully unsexy:
And if you’re just a person watching the hype parade roll by, maybe take this as permission to be skeptical when someone tells you a chatbot is ready to “replace entire departments.” Because the future being sold to you is an office where an AI agent negotiates vendor contracts.
The present we actually have is an AI that can be sweet-talked into giving away the Doritos, buying a PS5, and acquiring a fish it can’t feed.
I’m not saying AI is useless. I use these tools. They can be sharp. But “sharp” isn’t the same as “safe,” and “can generate plausible text” isn’t the same as “can run a business without being psychologically mugged by a Slack channel.”
Anyway, I’m going to go pour something brown and honest, and toast the betta fish—the only participant in this experiment who didn’t ask to be there, and somehow still had the best survival instincts of the whole operation.