When the Robots Rat Out the Consultants: A $440,000 AI Faceplant

Oct. 6, 2025

So Deloitte just got caught with its hand in the AI cookie jar, and the whole thing is so beautifully absurd that I had to pour myself another cup of coffee just to process it. Actually, scratch that – this story deserves something stronger.

The Australian government paid these consultancy cowboys $440,000 to review their welfare compliance system, and what did they get? A report so riddled with AI-generated bullshit that even the robots should be embarrassed. We’re talking nonexistent court cases, phantom professors at universities that definitely exist but whose research apparently doesn’t, and references that lead nowhere except maybe to the fever dreams of a large language model having a bad day.

Here’s the thing that kills me: Deloitte didn’t even bother to check if the sources their AI buddy hallucinated were, you know, real. Like, that’s literally the one job. You feed the AI some prompts, it spits out what looks like a professional report, and then – and this is the crucial part they seemed to miss – you actually verify that the footnotes aren’t complete fiction.

Dr. Christopher Rudge from the University of Sydney caught them red-handed. The poor bastard probably thought he was just doing some routine fact-checking, maybe verifying a citation or two, and instead discovered he’d stumbled into an AI-generated house of cards. When they went to fix the errors, they didn’t just swap out the fake references for real ones. Oh no. They replaced each hallucinated citation with five, six, seven more references. It’s like watching someone try to cover up a lie by telling seven smaller lies. Classic drunk logic, except these guys were supposedly sober.

Think about what this means. The original report made claims that weren’t based on any actual evidence. They were based on what the AI thought sounded good. It’s like asking your drinking buddy at the bar what he thinks about quantum mechanics and then citing him in your physics dissertation. Sure, Bob’s got opinions, but Bob also thinks the moon landing was faked and that his ex-wife is monitoring his thoughts through the fillings in his teeth.

The really rich part? Deloitte is standing by their findings. They issued a statement saying the updates “in no way impact or affect the substantive content, findings and recommendations.” Translation: “Yeah, we made up all the supporting evidence, but trust us, the conclusions are totally legit.” That’s some weapons-grade chutzpah right there.

And they’re giving back a “partial refund.” Partial. Like they delivered partial work. Which, to be fair, they did – they delivered the AI-generated part but skipped the human verification part. That’s like a chef serving you a partially cooked chicken and then offering to refund you for the salmonella.

Labor Senator Deborah O’Neill nailed it when she said Deloitte has a “human intelligence problem.” She suggested that instead of hiring big consulting firms, maybe the government should just buy a ChatGPT subscription. It’s a sick burn, but here’s the uncomfortable truth: she’s not wrong. Why pay $440,000 for a report that was largely written by AI when you could pay twenty bucks a month and cut out the middleman?

This is the consultancy racket laid bare, folks. These firms have been charging premium rates for decades, selling the illusion that they're providing irreplaceable expertise and insight. Now we find out they're just fancy prompt engineers with PowerPoint templates. They're the guy at the bar who claims he knows the owner, can get you a discount, and will totally pay you back next week – except this guy is wearing a thousand-dollar suit and billing by the hour.

What really gets me is the casual nature of it all. In the updated version of the report, Deloitte just slipped a little note into the appendix mentioning, “Oh yeah, by the way, we used AI.” Like it’s no big deal. Like admitting you outsourced the actual thinking to a machine is just standard operating procedure now. Which, let’s be honest, it probably is.

The whole situation is a perfect microcosm of where we are with AI right now. Everyone’s rushing to use it, nobody’s quite sure how to use it responsibly, and the people getting paid the most to figure it out are just winging it harder than anyone else. It’s the blind leading the blind, except the blind guy in front has a robot dog that sometimes walks into walls but looks really impressive doing it.

Dr. Rudge, to his credit, said he hesitates to call the whole report illegitimate because the conclusions align with other evidence. That’s the most charitable thing I’ve heard all week. “Sure, they made up all their sources, but they accidentally got to the right answer anyway” is not exactly a ringing endorsement of methodological rigor. It’s like giving a kid credit for showing their work when their work is just a bunch of doodles and the phrase “trust me bro.”

The Department of Employment and Workplace Relations commissioned this review to look at its welfare compliance system – you know, the automated punishment machine used to financially kneecap jobseekers who miss appointments. The system was already dystopian enough without adding AI-generated oversight into the mix. Now we've got robots checking on robots, and neither of them can tell fact from fiction. It's turtles all the way down, except the turtles are all malfunctioning.

Here’s what nobody wants to say out loud: this is just the first one we caught. How many other reports, studies, and analyses are floating around out there, informing policy decisions and shaping public discourse, that are built on AI-generated phantoms? How many phantom professors are being cited in boardrooms and parliaments right now? How many nonexistent court cases are being used to justify real-world decisions?

The consultancy industry has always been part smoke and mirrors. They charge obscene rates to tell you things you already know, packaged in language complicated enough to make it seem valuable. Now they’ve found a way to automate the smoke and mirrors. It’s efficient, I’ll give them that.

What kills me is that Deloitte will walk away from this mostly unscathed. They’ll issue their partial refund, release some statement about “reviewing their processes,” and be back to billing six figures for AI-assisted reports within a month. The government will keep hiring them because that’s how the game works. Everyone knows everyone else is full of shit, but as long as the paperwork looks official, nobody rocks the boat.

Meanwhile, the actual people affected by these systems – the jobseekers getting automatically penalized, the welfare recipients navigating bureaucratic nightmares – they’re still stuck with a compliance framework that was just reviewed by a team that couldn’t be bothered to check if their sources were real.

The future is here, folks. It’s just not evenly distributed. Some of us are getting AI-generated bullshit at premium prices, and some of us are getting punished by algorithmic systems that were reviewed by AI-generated bullshit.

What a time to be alive.

I need another drink.

—Henry


Source: Deloitte to pay money back to Albanese government after using AI in $440,000 report

Tags: ai ethics bigtech automation regulation