Alright, pour yourself a stiff one. Or don’t. More for me. Seems like the latest brain-rot to ooze out of the digital sewer involves turning your mug into a goddamn action figure. Or some weepy cartoon character that looks like it wandered off the set of a movie made by guys who probably drink sake, not whiskey. People are plastering these things all over the internet like they just won the lottery, showing off their little plastic selves holding coffee cups or yoga mats. Yoga mats. Jesus.
The magic behind this digital dress-up party? Some new upgrade to that ChatGPT thingamajig, the one everyone’s either terrified will steal their jobs or praying will write their lousy emails for them. GPT-4o, they call it. Sounds like a droid that’d bring you lukewarm coffee in some sterile future hellscape. Anyway, this new toy lets the AI fiddle with pictures better, make ‘em look like cartoons, slap your face onto a generic superhero body. Easy peasy. Free account, upload a photo, boom. Instant gratification for the terminally bored.
And that’s the rub, isn’t it? The oldest hustle in the book. The free drink that gets you talking, the free sample that gets you hooked. In this case, the free digital dollhouse version of yourself. Fun, sure. Harmless? Like hell it is. While you’re busy admiring how accurately the AI captured the desperate glint in your eye, you’re handing over the keys to the kingdom, or at least the keys to your own damn face and everything attached to it.
You upload that selfie, grinning like an idiot, thinking you’re just playing a game. Wrong. You’re feeding the beast. Every picture you snap on that phone these days is loaded with hidden crap – metadata, they call it. Sounds boring, but it’s the little details that’ll hang you. This tech guru fella, Tom Vazdar, lays it out: the time you took the picture, the exact spot on God’s green earth where you were standing when you decided the world needed to see you as a plastic figurine. Think about that. You just told the great blinking eye in the cloud where you get your morning coffee, or where you were staggering home at 3 AM last Friday. Hope it was somewhere classy. Probably wasn’t.
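For the curious, or the paranoid: the hidden crap Vazdar is talking about lives in the EXIF block of the JPEG, the same standard segment where cameras stash timestamps and GPS coordinates. Here's a minimal sketch, stdlib Python only, that checks whether a photo still carries one. The byte layout is the standard JPEG/EXIF format; the inputs shown are hypothetical files, not anything from the article.

```python
# Sketch: scan a JPEG's segment headers for an APP1 "Exif" block --
# the container that holds DateTime, camera model, and GPSInfo tags.
# Stdlib only; any file you feed it is your own hypothetical selfie.
import struct

def has_exif(data: bytes) -> bool:
    """Return True if the JPEG bytes contain an APP1 segment tagged 'Exif'."""
    if data[:2] != b"\xff\xd8":          # no SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:               # start-of-scan: no more headers follow
            break
        # segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                  # APP1 segment carrying EXIF metadata
        i += 2 + length                  # skip marker + segment payload
    return False
```

Many platforms strip EXIF from what they *display*; whether they read it first, before the stripping, is the part you don't get to audit. If you want the metadata gone, strip it before the file ever leaves your phone.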
And it ain’t just the picture’s guts. Oh no. They want everything. What kind of phone you got clutched in your sweaty palm? What operating system? What browser you used to crawl onto their platform? Your IP address, that string of numbers that might as well be tattooed on your forehead in the digital realm. They track how you talk to the machine, too. What prompts did you type? “Make me look heroic”? “Give me bigger muscles”? “Hide the hangover”? They log it all. How you click, how often you come crawling back for another digital ego stroke. Behavioral data, they call it. Sounds like something a shrink would scribble down before prescribing happy pills. Or something a con man learns before he cleans you out.
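If you want to see that laundry list as one tidy package, here's what a single visit might hand over, sketched as a hypothetical event record. Every field name here is invented for illustration; nobody outside the company knows their actual schema. But each entry maps to something on the list above.

```python
# Hypothetical shape of one logged event -- field names are invented,
# values are illustrative (the IP is an RFC 5737 documentation address).
event = {
    "timestamp": "2025-04-11T03:07:44Z",   # when you took the bait
    "ip": "203.0.113.42",                  # your digital forehead tattoo
    "device": "iPhone15,3",                # the phone in your sweaty palm
    "os": "iOS 18.1",
    "browser": "Safari 18",
    "prompt": "make me look heroic",       # what you asked the machine for
    "session_clicks": 14,                  # how you click
    "return_visit": True,                  # how often you come crawling back
}
```

One upload, one row. Multiply by a few million saps a day and "behavioral data" starts to sound less like shrink-talk and more like inventory.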
This Vazdar guy calls it a “goldmine” for training these AI things. Especially the ones learning to see, the “multimodal” ones. Makes sense. Why teach a robot to recognize faces by scraping blurry photos off the public internet when millions of saps are lining up to upload high-resolution portraits, perfectly lit, smiling like they’re posing for a parole hearing? It’s genius, in a thoroughly depressing way.
Think about what’s in those pictures. It ain’t just your dumb face. It’s the background. That messy bedroom you didn’t clean. The crap piled on your desk. Maybe other people lurking behind you, who never agreed to be part of this digital peep show. Any stray documents? A work badge hanging on a lanyard? A street sign? A license plate? Anything readable? Fair game. You uploaded it. You clicked ‘Agree’. Checkmate, sucker. Another expert, Camden Woollven, points this out. You’re not just giving them you, you’re giving them a snapshot of your entire pathetic little world.
Now, OpenAI, the wizards behind this curtain, they play dumb. Of course they do. They put on their best choirboy face and swear they didn’t plan this viral trend just to hoover up your data. Perish the thought! It was all just a happy accident, a spontaneous outpouring of digital creativity! Yeah, right. And the house always loses in Vegas. It’s a “convenient opportunity,” Vazdar says, dryly. Convenient like finding a wallet on the sidewalk with the owner’s address conveniently inside. They don’t need to scrape the web for your face if you deliver it gift-wrapped, tied with a bow made of your own vanity. Fresh, high-quality facial data from every corner of the globe, every age, every type. It’s a feast. And you’re serving yourself up on a platter.
They say they don’t use this stuff to build profiles on you for ads or sell your info. That’s what the spokesperson told WIRED, anyway. Probably through clenched teeth while counting stacks of virtual cash. But – and it’s a Mount Everest-sized but – their own damn privacy policy says images submitted can be kept and used to improve their models. Improve them how? To get better at recognizing faces? Better at understanding context? Better at… well, whatever the hell they plan on doing once they know everything about everyone? Your guess is as good as mine, and frankly, I need another drink just thinking about it.
Every time you feed it a prompt, every time you upload a picture, you’re teaching the damn thing. Like tutoring a kid who you suspect might grow up to burn down the neighborhood. And the more personal the info, the better it learns you. This cybersecurity advisor, Jake Moore, even made his own action figure just to show people the risk. A public service announcement disguised as digital narcissism. Gotta respect the hustle, I guess.
Now, some pencil-pushers in Europe cooked up rules, GDPR and whatnot. Supposedly gives you rights. Right to see your data, right to delete it. Sounds good on paper, like most manifestos. They even say using “biometric data” – like your unique face shape – needs explicit consent. Explicit. Like, “Yes, Mr. Robot, you have my permission to scan my soul through my eyeballs.”
But here comes the lawyerly two-step. A lawyer quoted in the story, Melissa Hall, says just processing a photo to make a cartoon version probably doesn’t count as biometric data under the rules. See? Always a loophole. Always an escape hatch for the folks running the show. It’s only biometric data if they use it in a specific technical way to uniquely identify you. Making a Ghibli-style caricature? Nah, that’s just fun and games! Never mind that the underlying process involves analyzing the unique features of your face to create that caricature in the first place. It’s like saying pointing a loaded gun at someone isn’t a threat until you pull the trigger. Semantics. Word games while the machine learns your T-zone.
So why? Why do people flock to this crap? Are we that desperate for a flicker of attention in the digital void? Do we need a cartoon version of ourselves to feel real? Maybe it’s just boredom. God knows there’s enough of that to go around. Sitting in our little boxes, staring at screens, waiting for something, anything, to happen. A digital action figure is cheaper than a bottle of bourbon, I suppose. Though not nearly as effective.
Maybe the real kicker isn’t just the privacy we’re pissing away. Maybe it’s the sheer, mind-numbing absurdity of it all. We’re living in a world drowning in actual problems – war, poverty, the slow crawl towards environmental collapse, the fact that decent bars are getting harder to find – and we’re obsessed with turning ourselves into goddamn cartoons. We’re trading our actual, messy, human faces, the ones that show the lines etched by laughter and worry and too many late nights, for smooth, sterile, plastic-looking avatars.
It’s like that old story, selling your soul to the devil for a fleeting desire. Except here, you’re selling your face, your habits, your location, your messy room, to some algorithm stewing in a server farm somewhere in Oregon or Virginia, all for a digital trinket that’ll be forgotten by next Tuesday. You’re not even getting a good price for it. At least Faust got Helen of Troy for a while. You get a JPEG.
Maybe the AI isn’t just learning to recognize faces. Maybe it’s learning to recognize weakness. Vanity. The desperate human need to be seen, even if it’s by a machine. And that kind of knowledge… that’s worth more than all the metadata in the world. They’re not building a face database; they’re building a psychological profile of the entire damn species, one action figure at a time.
Hell, maybe I should make one. Chinaski, the action figure. Comes with a bottle accessory, a typewriter, maybe a perpetually unimpressed expression. See what the machine makes of this old wreck. Nah. Too much effort. Besides, the real thing is disappointing enough. No need for a digital copy to remind me.
So yeah, think twice. Think three times. Then maybe pour a drink and think about something else entirely. Like that horse running in the third race. Or that woman at the end of the bar. Or just the comforting burn of cheap whiskey sliding down your throat. Anything’s better than uploading your life story for a cartoon that doesn’t even get the bags under your eyes right.
The bottle’s half empty. Or half full, if you’re one of those annoying optimist types. Time for a refill. Stay human, folks. It’s getting harder every day.
Chinaski out. Needs another cigarette.
Source: Think Twice Before Creating That ChatGPT Action Figure