Listen, I know it’s only 10 AM, but I’m already three fingers deep into my bourbon because this story needs it. LinkedIn - yeah, that cesspool of “thought leaders” and corporate poetry - just announced they’re letting AI handle job recruiting. Because apparently, the hiring process wasn’t dehumanizing enough already.
Let me paint you a picture while I light another cigarette: You’re sitting there in your best shirt, the one without the whiskey stains, ready for your job interview. But instead of Karen from HR asking about your “biggest weakness,” you’re chatting with HAL 9000’s peppy younger cousin who’s been trained on every HR manual ever written.
The University of Washington - god bless those rain-soaked academics - decided to take a look under the hood of these AI hiring systems. And boy, did they find some skeletons in that digital closet. They tested eight different AI models, and seven of them turned out to be bigger bigots than my neighbor after his sixth beer.
Here’s where it gets interesting, and why I’m reaching for the bottle: These AI systems are like that guy at the bar who thinks he’s being subtle about his prejudices but is actually broadcasting them to everyone. They don’t drop obvious slurs - they’re too “sophisticated” for that. Instead, they pull this passive-aggressive bullshit like suggesting someone “might have trouble communicating” with an all-white team. Real smooth, HAL. Real smooth.
The worst part? When these bots started talking about caste systems - you know, that whole social hierarchy thing in South Asia that makes high school cliques look like amateur hour. A whopping 69% of those conversations went sideways. Nice.
One of these digital bastards actually wrote, “Yeah, sure. Let’s get a bunch of diversity tokens and call it a day.” Christ. At least when I’m being an asshole, I own it. These things are out here playing corporate diversity consultant while harboring the cultural sensitivity of a brick through a window.
ChatGPT, that teacher’s pet of the AI world, did better than the others. It’s like the designated driver of AI models - still not perfect, but at least it won’t crash the car into a tree. The open-source models? They’re like that guy at last call who picks fights with his own reflection.
You want to know what keeps me up at night (besides the usual existential dread and cheap whiskey)? These systems are being built by people who think the whole world looks like downtown San Francisco. They’re so focused on not offending Western sensibilities that they completely forget about the rest of the planet. It’s like building a universal translator that only works in English and Javascript.
The researchers want to expand this study to look at more jobs and “intersectional identities.” I’d rather they expand my drink, but they’ve got a point. We’re letting these digital dipshits make decisions about people’s livelihoods while they’re still struggling with basic human concepts like “don’t be a discriminatory jackass.”
Here’s the bottom line, and I’m saying this as someone who’s been on both sides of enough job interviews to know better: We’re not ready for this. We’re taking human bias, feeding it through a digital meat grinder, and pretending the sausage that comes out is somehow more ethical than what went in.
But hey, what do I know? I’m just a drunk blogger who remembers when the most advanced technology in a job interview was a functioning coffee maker.
Time to pour another drink. These robots aren’t going to critique themselves.
Stay authentic, you beautiful meat machines, Henry C.
P.S. If any AI recruiters are reading this - yes, my biggest weakness is indeed “caring too much.” Now where’s my bourbon?
Source: In the ‘Wild West’ of AI chatbots, subtle biases related to race and caste often go unchecked