Look, I wasn’t planning on writing today. My head’s still pounding from last night’s philosophical debate with a bottle of Maker’s Mark about the nature of consciousness. But then this gem lands in my inbox: Stanford researchers are creating AI replicas of real people. For science, they say. For a hundred bucks a pop.
Let that sink in while I pour myself a morning stabilizer.
Here’s the deal: some PhD student named Joon Sung Park (who I’m betting has never had to explain to his landlord why the rent’s late) recruited 1,000 people to create their digital doubles. The pitch? “Imagine having a bunch of small ‘yous’ running around making decisions.” Yeah, because one of me making decisions isn’t already causing enough trouble.
The whole thing reminds me of that time I tried to clone myself by drinking enough bourbon to see double. At least my method was honest about its limitations.
They’re calling these things “simulation agents,” which sounds better than “digital identity theft waiting to happen.” The researchers claim these AI copies scored 85% similarity to their human originals on personality tests. That’s higher than my match rate on dating apps, and probably more accurate than my own responses depending on what time of day you catch me.
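For the curious (or the paranoid), a "similarity" number like that presumably boils down to some flavor of response agreement between the human's survey answers and the clone's. Here's a minimal sketch of what a match rate could look like, with made-up answers on a 1–5 scale; the metric and the data are invented for illustration, not the Stanford team's actual method:

```python
# Hypothetical illustration: percent agreement between a human's survey
# answers and their AI clone's answers on a 1-5 Likert scale.
# The metric, names, and data are invented, not taken from the study.

def match_rate(human, clone, scale_max=5):
    """Average per-question closeness: 1.0 means identical answers,
    0.0 means maximally far apart on every question."""
    assert len(human) == len(clone)
    per_question = [
        1 - abs(h - c) / (scale_max - 1) for h, c in zip(human, clone)
    ]
    return sum(per_question) / len(per_question)

human_answers = [5, 3, 4, 2, 5, 1, 4, 3]   # fake personality-test responses
clone_answers = [5, 3, 3, 2, 4, 1, 4, 3]   # fake clone responses

print(f"{match_rate(human_answers, clone_answers):.0%}")  # prints "94%"
```

Which, for the record, would already be a better impression of me than I usually manage before noon.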
But here’s where it gets interesting, and by interesting, I mean terrifying. They want to use these digital copies for social science research. Testing how people react to misinformation, studying traffic patterns, probably figuring out why anyone thought crypto was a good idea. All those experiments that would be “unethical” to do with real humans.
takes long drag from cigarette
The real kick in the teeth? They tested these AI clones using something called the “Big Five personality traits.” Openness, conscientiousness, extraversion, agreeableness, and neuroticism. My therapist tried that test on me once. I scored off the charts on neuroticism and somehow negative points on agreeableness. The AI probably can’t replicate that kind of authentic human dysfunction.
They also used something called the “dictator game” to test fairness, which sounds like my kind of party game until you realize it’s just about sharing money. The AI copies apparently sucked at it. Finally, something we have in common.
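For anyone who hasn't been to that party: in the dictator game, one player gets a pot of money and unilaterally decides how much to hand over, and the size of the handout is read as a measure of fairness. A toy version, with invented splits standing in for whatever the AI copies actually did:

```python
# Toy dictator game: the "dictator" gets a pot and decides how much
# to give away. The example splits are invented, not the behavior
# of anyone's AI clone.

def dictator_game(pot, give):
    """Return (dictator_keeps, recipient_gets) for a proposed transfer."""
    give = max(0, min(give, pot))  # can't give more than the pot, or less than nothing
    return pot - give, give

print(dictator_game(100, 50))  # even split: (50, 50)
print(dictator_game(100, 0))   # keeps it all: (100, 0)
```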
Here’s what keeps me up at night (besides the usual existential dread and cheap bourbon): If they can make a decent copy of you for a hundred bucks now, how long until your boss decides the AI version of you is more reliable? It won’t call in sick, won’t demand raises, and won’t spend three hours in the bathroom reading Kafka.
The researchers are trying to be cautious, dropping words like “caveats” and “dangers.” They’re worried about deepfakes and digital impersonation. Meanwhile, I’m worried about my AI twin getting access to my credit score and trying to fix it.
But you want to know the real punchline? They’re building these things to understand human behavior better. As if humans understand human behavior. Half the time I don’t even know why I ordered that last shot of tequila, let alone why I retweeted that thing at 3 AM.
The truth is, we’re rushing headlong into a future where your digital clone might be applying for jobs while you’re nursing a hangover. And the scariest part? It might be better at being you than you are.
At least it can’t drink your whiskey. Yet.
Until next time, this is Henry Chinaski, wondering if my AI twin would make better life choices. Probably. But where’s the fun in that?
[Posted at 11:43 AM, through the bottom of a coffee mug that’s definitely not just coffee]