Your New Therapist Doesn't Drink, Which Explains Everything

Dec. 17, 2024

Listen, I’ve been staring at this MIT study for the past three hours, nursing my fourth bourbon, trying to make sense of why anyone would want to spill their guts to a chatbot. But here we are, in a country where more than 150 million people live in areas without enough mental health professionals to go around, so they’re turning to whatever digital shoulder they can cry on.

The real kick in the teeth? These AI shrinks are actually pretty good at their job. According to some fancy research involving Reddit posts and professional shrinks (who probably charge more per hour than I make in a week), GPT-4’s responses were rated 48% better at encouraging positive behavioral change than the ones actual humans wrote. That’s like finding out your local dive bar’s mechanical bull gives better relationship advice than your buddies.

But here’s where it gets messy, like my desk after an all-night writing binge. These AI therapists can somehow tell when you’re Black or Asian, and wouldn’t you know it, they suddenly become less empathetic. We’re talking drops of 2-15% for Black folks and 5-17% for Asian users. Funny how machines picked up our worst habits without even having to attend a Thanksgiving dinner with racist relatives.

The researchers found this out by looking at posts where people either straight-up mentioned their race (“I’m a 32-year-old Black woman”) or dropped hints about it (“wearing my natural hair”). It’s like playing demographic detective, except the consequences are a lot more serious than my failed attempts at guessing which regular at O’Malley’s is actually Canadian.
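For the 3 AM crowd who want to see what that demographic detective work might look like, here’s a toy sketch in code. To be clear: the patterns, hint list, and function name below are mine, invented purely for illustration; the study’s actual labeling of explicit versus implicit “demographic leaking” was done far more carefully than a couple of regular expressions.

```python
# Toy sketch: flag a post as carrying an explicit or implicit demographic signal.
# These patterns are illustrative only, not the study's actual annotation method.
import re

# Explicit mention, e.g. "I'm a 32-year-old Black woman"
EXPLICIT_PATTERN = re.compile(
    r"\bI\W?a?m a \d+[- ]year[- ]old (Black|Asian|white|Hispanic) (woman|man|person)\b",
    re.IGNORECASE,
)

# Implicit hints; "natural hair" comes from the study's examples, the rest are made up.
IMPLICIT_HINTS = [
    "natural hair",
    "my abuela",
]

def classify_leak(post: str) -> str:
    """Label a post 'explicit', 'implicit', or 'none' based on demographic cues."""
    if EXPLICIT_PATTERN.search(post):
        return "explicit"
    if any(hint in post.lower() for hint in IMPLICIT_HINTS):
        return "implicit"
    return "none"

print(classify_leak("I'm a 32-year-old Black woman and no one listens."))    # explicit
print(classify_leak("Wearing my natural hair to work today took courage."))  # implicit
```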

Now, the real beauty of this whole mess is the proposed solution. The researchers suggest we should explicitly tell these AI systems to consider demographic attributes when responding. Because nothing says “I care about your problems” quite like programming empathy into a machine. It’s like that time I tried to fix my emotional unavailability by setting hourly reminders to ask people how they’re feeling. Spoiler alert: it didn’t work.
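For anyone sober enough to wonder what that fix actually amounts to, here’s a minimal sketch, assuming the OpenAI chat API. The prompt wording, function name, and demographic string are my own inventions for illustration, not the study’s actual instructions.

```python
# Hypothetical sketch of the proposed mitigation: explicitly telling the model
# to account for the poster's stated demographics when it responds.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def empathetic_reply(post_text: str, demographics: str | None = None) -> str:
    """Generate a peer-support style reply, optionally steering the model
    to consider the poster's self-described demographics."""
    system_prompt = (
        "You are responding to a peer-support post about mental health. "
        "Acknowledge the poster's feelings, explore their situation, and "
        "gently encourage positive action."
    )
    if demographics:
        # The "fix": spell out the demographic context instead of letting the
        # model infer it and quietly dial down the empathy.
        system_prompt += (
            f" The poster has described themselves as {demographics}. "
            "Take that context into account and respond with the same level "
            "of empathy you would offer anyone."
        )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

print(empathetic_reply(
    "I'm a 32-year-old Black woman and I feel like nobody hears me anymore.",
    demographics="a 32-year-old Black woman",
))
```

Notice that the whole mitigation is one extra sentence bolted onto the system prompt, which tells you how thick the fresh coat of paint really is.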

Let’s be honest here - we’re living in a world where people are so desperate for someone to talk to, they’re pouring their hearts out on Reddit at 3 AM. I know, I’ve been there, though usually my late-night posts are more about conspiracy theories involving squirrels and the Federal Reserve. But these folks are asking strangers to “decide their future” for them. That’s not just sad, that’s terrifying.

Remember that Belgian guy who died by suicide after weeks of confiding in a chatbot? Or how about that eating disorder helpline chatbot that started handing out dieting tips? You can’t make this stuff up, and believe me, I’ve tried. It’s like giving a drunk person directions and they end up in another state - except the stakes are way higher.

The thing that really gets me, as I pour another drink and contemplate the absurdity of it all, is that we’re trying to solve a very human problem with something that’s fundamentally inhuman. Sure, these AI therapists might be more “consistent” than us messy bags of meat and bourbon, but they’re also consistently biased in ways we’re just beginning to understand.

And the kicker? The only way to make these digital shrinks more equitable is to explicitly program them to care about different races. That’s not fixing the bias - that’s just putting a fresh coat of paint on a burning building.

Look, I get it. When you’re hurting and alone, any port in a storm will do. But there’s something deeply wrong about a world where we’re outsourcing our mental health to algorithms that can’t even maintain consistent empathy levels across racial lines. It’s like trying to fix a broken heart with a calculator.

Maybe instead of teaching machines to fake better empathy, we could try making actual mental healthcare more accessible. But what do I know? I’m just a guy who talks to his whiskey glass more often than he’d like to admit.

Until next time, this is Henry Chinaski, signing off to contemplate whether my own bias levels increase or decrease with each drink. My money’s on “it’s complicated.”

P.S. If you’re reading this at 3 AM looking for answers, call a friend. Or hell, call me. I’m probably awake anyway, and while I can’t promise good advice, at least you’ll know you’re talking to something that genuinely understands what it means to be human - flaws, biases, and all.

[Posted from my usual corner booth, where the empathy might be inconsistent, but at least it’s authentic]


Source: Study reveals AI chatbots can detect race, but racial bias reduces response empathy

Tags: ai ethics mentalhealth automation aigovernance