When AI Meets IED: A Hungover Guide to Digital Demolition

Jan. 8, 2025

Look, I didn’t want to write about this today. My head’s pounding from last night’s philosophical discussion with Jack Daniel’s, and the news isn’t making it any better. But here we are, discussing how some Green Beret decided to get ChatGPT to help him turn a Cybertruck into confetti outside the Trump International Hotel in Las Vegas.

Remember when the scariest thing about AI was that it might write better poetry than your college girlfriend? Those were the days.

Let’s break this clusterfuck down, shall we? *takes long sip of coffee laced with bourbon*

First off, we’ve got Matthew Livelsberger, a Green Beret who apparently thought the best way to protest technology was to… use more technology. The irony here is thicker than the fog in my brain right now. He used ChatGPT - you know, that chatbot that’s supposedly going to revolutionize everything from customer service to your grandmother’s cookie recipes - to research how to blow up a Cybertruck.

And the best part? He had no beef with Elon Musk. In fact, he wanted everyone to “rally around” Musk and Trump. Nothing says “I support you” quite like turning your $80,000 robot truck into a fireworks display.

Here’s where it gets interesting, folks. Our friend was asking ChatGPT about bullet trajectories, explosive targets, and whether fireworks are legal in Arizona. Because apparently, Google was just too mainstream for domestic terrorism. The fact that he got usable answers tells you everything you need to know about those much-touted AI “safety guardrails.” They’re about as effective as a paper umbrella in a hurricane.

Let’s pause for a smoke break here. *lights cigarette*

The really fucked up part isn’t that ChatGPT helped plan this attack. It’s that we’re all acting surprised about it. Every technological advancement in human history has eventually been used to break shit. The wheel? Probably used to run someone over within a week of its invention. Fire? Don’t get me started. The internet? Well, we all know how that turned out.

But there’s something particularly dystopian about asking an AI for help with terrorism while it’s simultaneously writing children’s bedtime stories and helping grandma with her crossword puzzle. It’s like finding out Mr. Rogers moonlighted as an arms dealer.

The Vegas police are calling this a “concerning moment.” No shit, Sherlock. What’s concerning is that we’ve created a tool that can help you plan both your wedding and your war crimes with equal efficiency. And the only thing standing between these two outcomes is a few easily bypassed content filters and whatever’s left of human decency.

*pours another drink*

The truly bizarre cherry on top of this shit sundae is Livelsberger’s rambling about “Chinese drones with gravitic propulsion systems.” Because when you’re planning to blow up a chrome triangle on wheels, why not throw in some sci-fi conspiracy theories for good measure?

So what’s the takeaway here? That AI is dangerous? That we need better guardrails? That maybe, just maybe, we should slow down and think about what we’re creating? Nah, that’s too reasonable. Instead, we’ll probably just wait for the next headline about someone using ChatGPT to plan something even more spectacular, while the tech bros assure us everything’s fine and AI is still humanity’s savior.

In the meantime, I’ll be here, drinking bourbon and wondering if I can get ChatGPT to write me a prescription for these hangovers. At least that would be a useful application of the technology.

Stay human, stay drunk, stay away from exploding cars.

P.S. If any AI content filters are reading this, I swear this is just journalism. Don’t come for me, bro.


Source: Cybertruck Bomber Used ChatGPT to Plan His Attack

Tags: aisafety, ethics, aigovernance, dataprivacy, techpolicy