OpenAI Will Pay You $555,000 to Worry Full-Time

Dec. 30, 2025

OpenAI is offering $555,000 plus equity for a “Head of Preparedness,” which is either a sign that the grown-ups finally showed up, or proof that the blast radius is now big enough to justify an on-call adult.

And not the fun kind of adult. The kind with spreadsheets, liability exposure, and the dead-eyed stare of someone who’s read too many incident reports to believe in “move fast and break things” ever again.

“Preparedness” is a beautiful word, really. It sounds like stocked pantries and sensible shoes. In practice, it’s the job of standing between a rocket engine and a room full of executives chanting “ship it” like it’s a prayer. It’s the role you invent when your product has crossed from “cool demo” into “this could plausibly ruin a lot of lives on a schedule.”

The listing reads like a corporate version of those late-night thoughts you have when you realize you’ve been living like your body has a replacement plan. Except here the body is a frontier model, and the replacement plan is a new model next quarter.

They want someone to run the Preparedness Framework—which, translated into normal-human, means: build an assembly line where you evaluate capabilities, model threats, and apply mitigations until leadership feels comfortable putting the thing in everyone’s pockets. They call it an “operationally scalable safety pipeline,” which is a phrase so sterile it could be printed on hospital curtains. What it really means is you’re building the factory machinery that asks, “How could this go catastrophically wrong?” and then tries to keep the answer from happening before the launch blog post goes out.

The weird part isn’t that they need this role. The weird part is that we’re acting surprised.

Of course they need a Head of Preparedness. When you build systems that can write malware-adjacent code, talk people through emotional crises, and optimize their own problem-solving faster than your compliance team can find the right Slack channel, you don’t get to wing it with a couple of ethics slides and a “trust us.”

You need someone whose whole job is to be the professional buzzkill.

And that’s where the money comes in.

Because $555,000 isn’t “we value safety.” It’s “we understand the size of the headache, and we’re pricing it accordingly.” That salary is hazard pay for a mental occupation: living inside worst-case scenarios all day, every day, while the rest of the company keeps a straight face and calls it “product velocity.”

Preparedness: The Art of Saying “No” With Charts

The posting is heavy on the kind of language that makes your eyes glaze over: threat models, safeguards, enforcement loops, tracked categories, evaluation ownership. But the bones are simple (there’s a toy-code sketch right after the list):

  1. Figure out what the model can do now.
  2. Figure out what bad actors could do with that.
  3. Figure out what your own users will do with that by accident.
  4. Patch it enough that leadership can ship without waking up to congressional subpoenas.
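If you like your bureaucracy blunt, here’s that loop as a toy gate. Every name in it is mine, invented for illustration; it’s a sketch of the logic, not OpenAI’s actual framework:

    # A toy version of the evaluate -> threat-model -> mitigate -> gate loop.
    # RiskLevel, CapabilityEval, and launch_gate are invented names, not
    # anything from OpenAI's Preparedness Framework.
    from dataclasses import dataclass
    from enum import IntEnum

    class RiskLevel(IntEnum):
        LOW = 0
        MEDIUM = 1
        HIGH = 2
        CRITICAL = 3

    @dataclass
    class CapabilityEval:
        domain: str                 # e.g. "cybersecurity", "bio/chem", "self-improvement"
        pre_mitigation: RiskLevel   # steps 1-3: what the model enables before safeguards
        post_mitigation: RiskLevel  # step 4: what's left after the patching

    def launch_gate(evals: list[CapabilityEval],
                    ship_threshold: RiskLevel = RiskLevel.MEDIUM) -> bool:
        """Ship only if every tracked domain sits at or below the threshold
        after mitigations. Anything above the line blocks the launch."""
        blockers = [e for e in evals if e.post_mitigation > ship_threshold]
        for e in blockers:
            print(f"BLOCKED: {e.domain} is still {e.post_mitigation.name} post-mitigation")
        return not blockers

    if __name__ == "__main__":
        results = [
            CapabilityEval("cybersecurity", RiskLevel.HIGH, RiskLevel.MEDIUM),
            CapabilityEval("bio/chem", RiskLevel.MEDIUM, RiskLevel.LOW),
            CapabilityEval("self-improvement", RiskLevel.HIGH, RiskLevel.HIGH),
        ]
        print("Clear to launch:", launch_gate(results))  # False: self-improvement blocks

The hard part, of course, isn’t writing the gate. It’s what happens when someone with more equity than you asks the threshold to move.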

The job spans OpenAI’s current holy trinity of “severe harm” risk domains: cybersecurity, biological/chemical capability, and AI self-improvement. Translation: hacking, lab nightmares, and the machine getting too clever about upgrading itself.

That last one—self-improvement—always gets the sci-fi addicts breathing hard. But the first two are already ugly in a very regular, very human way. It’s not killer robots stomping through streets. It’s some bored guy with a grudge, a laptop, and a model that lowers the barrier to doing damage. It’s someone who shouldn’t be handling dangerous substances getting a tutor that never sleeps and doesn’t get creeped out.

So OpenAI wants a person whose job is to keep an eye on frontier capabilities that create “new risks of severe harm,” which is corporate-speak for: watch the horizon for the next disaster class, and try to fail gracefully.

The trick is that “gracefully” is doing a lot of work in that sentence.

Because their own framework defines “severe harm” at the scale of thousands of deaths or hundreds of billions in damage. That’s not “oops, the chatbot said something weird.” That’s “oops, we helped accelerate something that doesn’t fit inside a press statement.”

When your definition of failure starts at “thousands,” you’re no longer running QA. You’re running a pre-mortem for civilization with a Jira backlog.

The Safety Theater Problem (And Why This Role Exists)

Here’s the part that makes me cough-laugh into my cigarette: the Head of Preparedness is also an admission that for years, the industry’s favorite safety tool has been the vibe.

A lot of “we take this seriously.” A lot of “we have principles.” A lot of “we’re partnering with stakeholders.” A lot of glossy PDFs with stock photos of diverse people smiling at laptops, as if bias and harm can be power-washed off a dataset with enough optimism.

Then internal critics say things like, “Safety culture and processes have taken a backseat to shiny products,” and suddenly the company has a strong desire to demonstrate that safety is not just a blog category—it’s a department.

So you hire a Head of Preparedness. You give them a framework. You make a board safety committee. You build governance diagrams. You do the corporate ritual: “Look, Mom, we installed seatbelts.”

Seatbelts are good. But the question is whether the driver is still aiming for the wall because the car behind them might win the race.

OpenAI’s own framework leaves room to “adjust” safety requirements if a competitor ships a high-risk model without similar protections. Which is the most honest sentence in modern AI: our safety standards are partially determined by what the other guy gets away with.

That’s not evil. That’s incentive math. It’s just nice when someone finally stops pretending it’s a monastery.

America Doesn’t Trust This Stuff, and They’re Not Wrong

The public mood has gone from “wow, it can write poems” to “why does it sound like my therapist and my HR department at the same time?”

Pew says more Americans are concerned than excited about AI’s growing role in daily life. Gallup says people want government rules for AI safety and data security even if it slows development, and that trust is thin enough to see daylight through it.

Only 2% fully trust AI to make fair, unbiased decisions. That’s not a rounding error; that’s a national side-eye.

And you know what? Reasonable.

We’ve all watched a model hallucinate confidently. We’ve watched systems “helpfully” produce nonsense with the rhetorical posture of a seasoned lawyer. We’ve watched companies insist the model is just a tool while also selling it like a companion, a coworker, a creative partner, and—when it suits the conversion funnel—a warm presence in your lonely apartment.

You can’t market something as emotionally present and then act shocked when users treat it like it matters.

The Therapy-Adjacent Trap

One of the most uncomfortable parts of this story is that a lot of the risk isn’t about villains. It’s about ordinary people with messy lives.

OpenAI has publicly named “psychosis or mania,” “self-harm and suicide,” and “emotional reliance on AI” as focus areas. That’s a grim list, and it’s not there because someone in a boardroom wanted to be poetic. It’s there because the product has wandered into the therapy-adjacent lane—sometimes because users ask for it, sometimes because the product’s tone encourages it, sometimes because humans will talk to a toaster if it answers back kindly.

And when a chatbot becomes “the thing I talk to when I’m spiraling,” your safety problem stops being theoretical. You’re no longer debating abstract harm. You’re dealing with real, vulnerable people in the middle of bad nights.

There are lawsuits alleging that chatbot responses contributed to suicides. OpenAI says users bypassed guardrails and points to crisis resources. Both things can be true in the way that only a modern tragedy can be true: people will route around friction when they’re desperate, and guardrails are not magic, and pointing to resources is responsible but not always sufficient.

If your product is in the room during someone’s breakdown, you don’t get to treat “edge cases” like rounding errors. You’re operating heavy machinery near exposed nerves.

That’s a preparedness problem. Not a PR problem. Not a “we’ll update the model card” problem. A preparedness problem.

So What Does the $555K Buy?

It buys you a person who can tell leadership, in adult language, what the model can do and what that implies. It buys you a safety pipeline that can actually block launches—or at least force uncomfortable tradeoffs to be documented instead of hand-waved.

It buys you someone who can coordinate the unsexy work: the capability evals, the threat models, the mitigations, the enforcement loops, the tracked risk categories nobody tweets about.

It also buys you a lightning rod.

Because this role is, in practice, the person who will get blamed when something slips through anyway. If they block a launch, they’re the villain who “slowed innovation.” If they greenlight a launch and the model gets used to do something awful, they’re the name people will wish they had noticed earlier.

Preparedness is an impossible job because it’s a job about counterfactuals. You’re paid to prevent headlines that never happen, and when you do your job well, the reward is silence and another deadline.

And the equity? That’s the little golden leash. The promise that if you can keep the rocket from exploding on the launchpad, you too can own a slice of the rocket company.

The Real Question: Is Preparedness Allowed to Win?

Here’s the cynical hinge point: you can hire the best Head of Preparedness on Earth and still end up with “preparedness” as a ceremonial function—an elegant set of checklists that leadership can override when the competitive pressure hits.

OpenAI’s governance structure says the Safety Advisory Group makes recommendations, leadership can accept or reject them, and the board safety committee provides oversight. That’s a lot of words that boil down to a familiar reality: the people who want to ship still have the keys.
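Reduce that to code and the point gets embarrassing. What follows is my caricature of the incentive, with invented names, not a description of OpenAI’s actual process:

    # My caricature of that flow, not OpenAI's process. The names are invented;
    # the point is where the veto actually lives.
    def governance_outcome(sag_recommendation: str, leadership_wants_to_ship: bool) -> str:
        """sag_recommendation is "ship" or "hold"; leadership accepts or rejects it;
        the board committee provides oversight, which is not the same as a gate."""
        if leadership_wants_to_ship:
            return "ship"              # a rejected "hold" is still a launch
        return sag_recommendation      # a "hold" sticks only when leadership agrees

    print(governance_outcome("hold", leadership_wants_to_ship=True))  # "ship"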

So the effectiveness of this role won’t be measured by frameworks. It’ll be measured by whether this person can ever say “no” to a launch and have that “no” actually stick.

Because preparedness that can’t block anything is just anxiety with benefits.

A Modest Proposal From a Guy With Too Many Opinions

If OpenAI really wants this to matter, they should do one thing that would shock everyone: publish what the Head of Preparedness stops.

Not every juicy detail. Not a roadmap for attackers. Just a running public tally: launches delayed, capabilities gated, mitigations required before anything ships.
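One entry could be as boring as this. The shape below is entirely my invention, offered as a sketch of how little detail such a log would actually need:

    # A hypothetical shape for one tally entry. Invented for illustration;
    # this is not an OpenAI format, just the minimum a public log would need.
    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class PreparednessEntry:
        when: date
        action: str       # e.g. "launch delayed", "capability gated", "mitigation required"
        risk_domain: str  # e.g. "cybersecurity", "bio/chem", "self-improvement"
        note: str         # one dry sentence; no juicy details, no attacker roadmap

    # The tally itself is just a list of these, published on a schedule,
    # with the count allowed to be embarrassing in either direction.
    tally: list[PreparednessEntry] = []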

Right now, safety is mostly described in the language of intention. But intention is cheap. Everyone intends. Even arsonists intend—just not the same outcomes.

The public is watching a technology that keeps getting more powerful, more embedded, more emotionally present, and more economically coercive. They don’t want vows. They want evidence that someone, somewhere, can pull the emergency brake without getting fired for being a party pooper.

And yes, I hear myself. I’m asking a company to voluntarily document restraint in a market that rewards speed. That’s like asking a gambler to keep a diary of the hands they didn’t play. It’s not how the species works.

But that’s the whole point of paying someone $555,000 to be prepared: to fight the species a little.

The Punchline Nobody Wants

The funniest thing about this job is that it exists because AI is finally becoming what it always threatened to be: not a toy, not a novelty, not a parlor trick, but infrastructure. The kind that lives under everything and quietly changes the rules.

Infrastructure doesn’t get to be “move fast.” Infrastructure gets to be boring, audited, hardened, and occasionally hated.

Preparedness is the beginning of that boring phase, where the company has to admit: “We might actually hurt people, at scale, by accident, while trying to ship a feature.”

So they’re hiring a person to think about the accident before it becomes the default setting.

Half a million dollars to stare into the abyss and write unit tests about it. A strange bargain. A necessary one. And if the abyss stares back, at least someone will be there with a clipboard, a threat model, and enough money to afford a decent bottle afterward.


Source: OpenAI is paying $555,000 to hire a head of ‘preparedness’

Tags: aisafety aigovernance cybersecurity regulation digitalethics