Google's Gone Full Skynet, Baby (Or: How I Learned to Stop Worrying and Love the Algorithm)

Feb. 6, 2025

So, Google, the company that once told us “Don’t Be Evil” (remember that quaint little nugget?), has decided that maybe, just maybe, evil pays a little better these days. They’ve quietly slipped a mickey into their own AI ethics guidelines, removing the pesky bit about not using their all-knowing, all-seeing algorithms to build weapons or, you know, generally screw humanity over.

And they’re not even the first. OpenAI, the other big dog in the AI fight, quietly did the same thing last year.

It’s like watching a pack of wolves shed their sheep’s clothing, one by one, while the rest of us stand around with our mouths agape, holding our craft IPAs and wondering when the mauling starts.

Seems like only yesterday, they were all about sunshine and rainbows, promising a future where benevolent AI would fold our laundry and write our haikus. Now? Well, now it seems like they’re more interested in building the kind of tech that can turn your laundry, your haikus, and you into a fine, pink mist.

Margaret Mitchell, the former co-lead of Google’s ethical AI team, bless her bleeding heart, is “troubled.” Troubled? Honey, I’m about three fingers of bourbon away from building a bunker in my backyard and stocking it with canned beans and vintage Playboys.

“Companies, governments, and organizations… should work together to create AI that protects people, promotes global growth, and supports national security,” Google says, in a statement that’s about as reassuring as a politician’s smile. Translation: “We’re gonna do what we want, and if you don’t like it, well, we’ve got algorithms that can predict your next bowel movement, so don’t test us.”

They call it “digital exhaust” – that trail of data we leave behind as we stumble through the internet like drunkards in a funhouse. Google, being the resourceful scavengers they are, figured out that this digital detritus was worth more than gold. It was the key to unlocking our deepest desires, our darkest fears, our most embarrassing online searches (yes, I’m talking about that time you Googled “is it normal to…” – we all have our secrets, pal).

And what did they do with this newfound power? They built an advertising empire, of course. An empire built on knowing us better than we know ourselves. An empire that can whisper sweet nothings (or targeted ads) into our ears, convincing us to buy things we don’t need, vote for people we don’t trust, and believe things that make absolutely no goddamn sense.

It’s the way of the world, baby, it’s always been.

But here’s the twist. I felt it in my gut, like that sixth shot of whiskey that tells you it’s going to be a long night. These tech giants aren’t really interested in building killer robots. Not yet, anyway.

See, the real money isn’t in blowing things up. It’s in control. It’s in knowing everything. It’s in owning the data, the algorithms, the very fabric of our digital lives.

And the kicker is… we gave it to them. Willingly. We clicked “accept” on those endless terms and conditions, we downloaded the apps, we traded our privacy for convenience like a bunch of chumps at a rigged carnival game.

We’re all just lab rats in their grand experiment, running through their digital mazes, chasing the cheese of instant gratification while they watch, learn, and refine their algorithms of control.

So, yeah, Google might be ditching the “Don’t Be Evil” mantra. But maybe, just maybe, they’re not aiming to be evil. Maybe they’re aiming to be God.

And that, my friends, is a hell of a lot more terrifying.

Time for another drink. Make it a double. Hell, I’ll see you at the bar. Might as well enjoy the apocalypse while it’s still happy hour, right?


Source: Google Quietly Walks Back Promise Not To Use AI for Weapons or Harm

Tags: ai ethics algorithms bigtech digitalethics