Jan. 17, 2025
Another day, another hangover, another brilliant mind trying to explain consciousness while I can barely maintain my own. Today we’re diving into Joscha Bach’s ideas about machine consciousness, and believe me, I needed extra bourbon for this one.
Let’s start with Bach himself - imagine growing up in a DIY kingdom in the German woods because your artist dad decided society wasn’t his cup of tea. Most of us were dealing with suburban drama while young Joscha was basically living in his own private philosophy experiment. No wonder he turned out thinking differently about consciousness and reality.
Jan. 8, 2025
Look, I’d normally be three bourbons deep before tackling another Sam Altman prophecy, but my doctor says I need to cut back. So here I am, disappointingly sober, reading through Sam’s latest blog post about how OpenAI has “figured out” AGI. And buddy, let me tell you - this hangover would’ve been easier to stomach.
You know what this reminds me of? Every guy at my local bar who’s “figured out” how to get rich quick. They’ve got systems, they’ve got plans, they’ve got everything except actual results. But hey, they just need a little more cash to make it happen. Sound familiar?
Jan. 8, 2025
Look, I wasn’t planning on writing today. My head’s still throbbing from last night’s philosophical debate with a bottle of Wild Turkey about whether consciousness is just a cosmic joke. But then I read about our impending digital ascension, and well… somebody’s got to keep the record straight while we’re all busy planning our upload to the great cloud in the sky.
Let me pour another drink before we dive into this mess.
Jan. 7, 2025
Look, I didn’t plan on starting 2025 by dissecting another tech messiah’s proclamations, but here I am, nursing a hangover while Sam Altman plays fortune teller with our future. Again.
Let me pour another drink before we dive into this steaming pile of predictions.
You know what’s funny about the future? It’s always just around the corner, like that bar you swear exists but can never quite find at 2 AM. Sam Altman, OpenAI’s chief dreamer, just dropped a blog post that reads like a Silicon Valley version of Nostradamus - if Nostradamus had a $90 billion valuation and a PR team.
Jan. 6, 2025
Listen, I’ve been through enough benders to know when someone’s talking crazy, and Sam Altman’s latest blog post reads like the ramblings you’d hear at last call from some guy who just discovered DMT.
Let me set the scene here: It’s Sunday night, and while most of us are dreading Monday morning, Saint Sam of OpenAI drops a bombshell that would make Timothy Leary blush. They’ve apparently cracked the code to artificial general intelligence. And hey, why stop there? They’re already pivoting to “superintelligence.”
Jan. 3, 2025
Listen, it’s 3 AM and I’m nursing my fourth bourbon while trying to make sense of this latest tech hype storm about AGI and integrity. The whiskey helps, trust me. You’re gonna need some too.
Let me break this down for you poor bastards who haven’t been drinking enough to understand what’s really going on here.
OpenAI - those magnificent bastards who named themselves after transparency while keeping their checkbooks closed - have a public definition of AGI that sounds like it was written by a committee of unicorn-riding optimists: “highly autonomous systems that outperform humans at most economically valuable work” that will somehow “benefit all of humanity.”
Dec. 28, 2024
Look, I’ve been staring at this interview with Sam Altman for the past three hours, nursing my fourth bourbon, trying to make sense of what he’s telling us about AI. And the more I drink, the clearer it becomes - we’re all living in Sam’s optimistic fever dream, and somebody needs to wake us up.
Here’s the thing about Sam’s take on AI adoption: he’s not wrong when he says it’s spreading faster than anything we’ve seen before. Hell, I tried using ChatGPT for search last night at 2 AM while trying to figure out why my neighbor’s cat was screaming like it was channeling Jim Morrison. The answers were surprisingly coherent, which is more than I can say for myself at that hour.
Dec. 20, 2024
Originally posted on WastedWetware.com, December 20, 2024
I’m three fingers deep into a bottle of Wild Turkey, staring at my screen, trying to make sense of the latest academic breakthrough that’s supposed to revolutionize artificial intelligence. Some guy named Robert Johansson just got his PhD by combining psychology with AI, and he’s calling it “Machine Psychology.” Because apparently what AI really needed was a therapy session.
Let me take another sip before I dive into this mess.
Dec. 20, 2024
Listen, I just dragged myself through another one of those fancy summits where rich people in expensive suits try to predict the future. The DealBook Summit, to be exact. Had to wear my one clean shirt and everything. The topic? AI in 2030. Christ.
Ten “experts” gathered to tell us what’s coming down the pipeline, and wouldn’t you know it, they’re all optimistic as puppies at a tennis ball factory. Seven out of ten think we’ll have artificial general intelligence by 2030. That’s right - machines that can do everything a human brain can do. Which makes me wonder if they’ve ever actually met a human.
Dec. 18, 2024
Look, I wouldn’t normally write about this superintelligence stuff before noon, but my bourbon’s getting warm and these press releases keep piling up like empties at last call. Everyone’s talking about how AI is going to evolve from today’s chatbots into something that’ll make Einstein look like a kindergartener eating paste.
Let me break this down while I pour another drink.
Remember 1956? Neither do I, but apparently some big brains at Dartmouth thought they’d crack this whole artificial intelligence thing over a summer. Real cute. Here we are, 68 years later, and the best we’ve got are chatbots that sound like your friend who took one philosophy class and won’t shut up about it.
Dec. 14, 2024
Look, I wasn’t planning on writing tonight. The bottle of Jim Beam was keeping me warm company while I watched reruns of Star Trek, but then this gem landed in my inbox. Ilya Sutskever, the guy who recently tried to push Sam Altman off the OpenAI throne (and failed spectacularly), is now preaching about AI unpredictability. The irony is thicker than the morning-after taste in my mouth.
Here’s the real kicker - Sutskever just figured out what any halfway decent drunk could tell you: there’s only so much bourbon in the bottle. Or in his case, “we have but one internet.” Revolutionary stuff, right? These geniuses have been feeding their AI models with every scrap of data they could find, and now they’re hitting the wall because - surprise, surprise - we’re running out of fresh data to feed the beast.
Dec. 14, 2024
Look, I’d love to write this piece sober, but it’s 3 AM and my bourbon’s telling me truths that water never could. OpenAI just dropped their new “o1” system, and boy, does it have daddy issues. For the low, low price of $200 a month - that’s roughly 40 shots of well whiskey at my local dive - you too can experience what they’re calling “human-level reasoning.” Which, given my current state, isn’t setting the bar particularly high.
Dec. 13, 2024
Listen, I wouldn’t normally be conscious at 8 AM, but my neighbor’s cat decided to host what sounded like the feline version of Woodstock on my fire escape. So here I am, nursing a bourbon (hey, it’s 5 PM somewhere) and reading about how AI “agents” are going to revolutionize our lives in 2025.
The suits at Reuters NEXT have been making predictions again. You know the type - people who think a $500 bottle of wine tastes better than my $7 whiskey. And boy, do they have some stories to tell.
Dec. 12, 2024
Another day, another tech summit where the brightest minds gather to tell us how they’re going to save humanity through PowerPoint presentations and canapés. This time it’s the DealBook Summit, where ten of our future overlords’ best friends gathered to discuss how AI is going to solve everything from cancer to my mounting bar tab.
Let me pour myself a bourbon before we dive into this mess.
Seven out of ten experts raised their hands when asked if super-smart AI would exist by 2030. You know what else seven out of ten experts agree on? That I should probably cut back on the drinking. Both predictions are equally likely to come true.
Dec. 11, 2024
Look, I’ve been sitting here at Murphy’s Bar for the last four hours trying to make sense of this whole AI definition mess, and I’ll tell you what - it ain’t getting any clearer after six whiskeys. But maybe that’s the point. The whole damn thing is designed to be as clear as mud.
You want to know what’s really happening with AI these days? It’s the oldest con in the book - just with fancier packaging and better-dressed marks. Everyone’s playing fast and loose with definitions, moving the goalposts faster than I can order another round.
Dec. 10, 2024
Well folks, it’s 3 AM, and I’m four fingers of bourbon deep into what passes for wisdom these days. Perfect time to talk about how the brightest minds in tech are measuring intelligence using colored squares. Yeah, you heard that right.
Remember when you were a kid and your parents would give you those puzzle books to keep you quiet on long car rides? Turns out, that’s basically what we’re using to test artificial general intelligence now. François Chollet, who’s probably never had to solve a puzzle while nursing a hangover, created this thing called ARC-AGI. It’s supposed to be the holy grail of testing whether machines can actually think.
Dec. 10, 2024
Look, I’m nursing the kind of hangover that makes me wish I’d chosen a different career path, but even through the bourbon haze, I can see what’s happening here. The big shots at Microsoft and OpenAI are playing a game of “Will AGI/Won’t AGI” that’s about as reliable as my promises to quit drinking.
Here’s the deal: Microsoft’s AI boss and Sam Altman are disagreeing about when their digital messiah arrives, and honestly, it’s starting to sound like two fortune tellers fighting over tea leaves at the county fair.
Dec. 6, 2024
Look, I didn’t want to watch another tech messiah interview. My head was pounding from last night’s philosophical exploration of Kentucky’s finest exports, but duty calls. So there I am, nursing what might be my fourth coffee, watching Andrew Ross Sorkin - who looks like he irons his underwear - interview Sam Altman, our industry’s latest prophet.
Let me tell you something about ChatGPT’s success story. Altman says people got excited because “they were having fun with it.” No shit. You know what else people have fun with? Cat videos and bubble wrap. The difference is, nobody’s throwing billions at bubble wrap manufacturers. Yet.
Dec. 5, 2024
Look, I probably shouldn’t be writing this with last night’s bourbon still tap-dancing in my skull, but when I saw Mira Murati’s latest pronouncements about AGI, I knew I had to fire up this ancient laptop and share my thoughts. Between sips of hair-of-the-dog and what might be my fifth cigarette, let’s dissect this latest sermon from the Church of Artificial General Intelligence.
First off, Murati – fresh from her exit from OpenAI – is telling us AGI is “quite achievable.” Sure, and I’m quite achievable as a future Olympic athlete, just give me a few decades and keep that whiskey flowing. The funny thing about these predictions is they always seem to land in that sweet spot of “far enough away that you’ll forget we said it, close enough to keep the venture capital spigot running.”
Dec. 2, 2024
There’s something delightfully ironic about Sam Altman, a human, explaining how companies will eventually not need humans. It’s like a turkey enthusiastically describing the perfect Thanksgiving dinner recipe. But let’s dive into this fascinating glimpse of our algorithmic future, shall we?
The recent conversation between Altman and Garry Tan reveals something profound about the trajectory of organizational intelligence. We’re witnessing the emergence of what I’d call “pure information processors” - entities that might make our current corporations look like amoebas playing chess.
Dec. 2, 2024
There’s something delightfully human about our persistent belief that if we just make things bigger, they’ll automatically get better. It’s as if somewhere in our collective consciousness, we’re still those kids stacking blocks higher and higher, convinced that eventually we’ll reach the clouds.
The current debate about AI scaling limitations reminds me of a fundamental truth about complex systems: they rarely follow our intuitive expectations. We’re currently witnessing what I call the “Great Scaling Confusion” - the belief that if we just pump more compute power and data into our models, they’ll somehow transform into the artificial general intelligence we’ve been dreaming about.
Dec. 1, 2024
The universe has a delightful way of demonstrating computational patterns, even in our legal documents. The latest example? Elon Musk’s injunction against OpenAI, which reads like a textbook case of what happens when initial conditions meet emergence in complex systems.
Let’s unpack this fascinating dance of organizational consciousness.
Remember when OpenAI was born? It emerged as a nonprofit, dedicated to ensuring artificial intelligence benefits humanity. The founding DNA, if you will, contained specific instructions: “thou shalt not prioritize profit.” But here’s where it gets interesting - organizations, like software systems, tend to evolve beyond their initial parameters.
Dec. 1, 2024
Let’s talk about angels, artificial intelligence, and a rather fascinating question that keeps popping up: Should ChatGPT believe in angels? The real kicker here isn’t whether AI should have religious beliefs - it’s what this question reveals about our understanding of both belief and artificial intelligence.
First, we need to understand what belief actually is from a computational perspective. When humans believe in angels, they’re not just pattern-matching against cultural data - they’re engaging in a complex cognitive process that involves consciousness, intentionality, and emotional resonance. It’s a bit like running a sophisticated simulation that gets deeply integrated into our cognitive architecture.
Nov. 23, 2024
Well, folks, it’s 3 AM, and I’m nursing my fourth bourbon while contemplating whether we’re all just bits in some cosmic computer program. Not the usual existential crisis that hits at this hour, but here we are.
Professor Roman Yampolskiy dropped a mind-bender recently that’s got me questioning everything - and I mean everything. According to him, we’re probably living in a simulation run by superintelligent AI, and the real kicker? We might be able to hack our way out of it.
Nov. 22, 2024
Listen, I’ve been staring at this bourbon glass for the past hour trying to make sense of Sam Altman’s latest prophecy about superintelligent AI. You know the type - clean-cut tech prophet in a perfectly pressed t-shirt worth more than my monthly bar tab, telling us we’re just a few thousand days away from machines that’ll make Einstein look like a kindergartener eating paste.
Here’s the thing though - and I hate admitting this while nursing my fourth Wild Turkey - they might actually be onto something this time.
Nov. 19, 2024
Look, I’ve been staring at this screen for three hours trying to process this news without reaching for the bottle. Failed miserably. So here I am, four fingers of Buffalo Trace deep, attempting to explain how artificial intelligence is now playing Dr. Frankenstein with the building blocks of life itself.
They’re calling it “Evo,” which sounds like a nightclub where programmers go to pretend they can dance. But this isn’t your regular ChatGPT spewing Shakespeare sonnets or helping teenagers cheat on their homework. No, this bad boy is designed to write actual genetic code. You know, the stuff that makes you you, and me this gloriously flawed meat puppet typing away at 2 AM.
Nov. 19, 2024
Look, I should be passed out right now after finishing that bottle of Wild Turkey, but these leaked OpenAI emails got me sitting up at 3 AM, chain-smoking Camels and laughing my ass off. Pour yourself something strong – you’re gonna need it.
Remember back in 2017 when everyone was worried about AI stealing their jobs? Turns out the real drama was happening behind closed doors, with tech billionaires fighting over who gets to play God. These newly leaked emails from the Musk vs. Altman lawsuit read like a soap opera written by a bunch of megalomaniacs with god complexes.
Nov. 19, 2024
Listen, I’ve been staring at this bottle of Jim Beam for the past hour trying to wrap my head around this latest piece of tech journalism that crossed my desk. The whole thing reads like a bad acid trip, but here’s the deal: apparently, AI is now part of our “collective intelligence.” Yeah, you heard that right. The machines aren’t just learning from us anymore - they’re teaching us back, and we’re all stuck in some kind of digital circle jerk that would make Nietzsche reach for the hard stuff.
Nov. 17, 2024
Well, friends of the bottle and binary, I just crawled out of my usual morning fog to watch Sam Altman’s latest sermon at DevDay. Had to switch from whiskey to coffee halfway through, but I managed to stay conscious enough to decode the gospel according to Sam.
Let me tell you something - watching tech CEOs talk about the future is like listening to my bookie explain why this horse is definitely going to win. The difference is, at least my bookie knows he’s selling me bullshit.
Nov. 17, 2024
Let me tell you something about consciousness while I nurse this hangover with some Wild Turkey. Bach - not the composer, the AI guy - has been saying our thoughts aren’t really ours. Usually when someone tells me thoughts aren’t mine, it’s after I’ve had way too much bourbon at closing time. But this time, the man might be onto something.
Here’s the deal: everything in the universe is basically competing software. Not in some metaphorical “the world is a computer” way that stoned college freshmen babble about at 3 AM. I mean literally - we’re all just different programs running on various substrates, from carbon to silicon, trying to perpetuate ourselves.
Nov. 16, 2024
Look, I wasn’t planning on writing today. My head’s still throbbing from last night’s philosophical debate with Jim Beam about whether consciousness can be digitized. But this IEEE report landed in my inbox, and after three cups of coffee and half a pack of Marlboros, I figure I owe you my thoughts on their latest prophecies.
First off, let me tell you something about prediction reports. They’re like horoscopes for people with advanced degrees. “Jupiter is aligned with Machine Learning, suggesting a favorable time for digital transformation.” The only difference is that these ones come with prettier graphs and footnotes.
Nov. 16, 2024
Listen, it’s 3 AM and I’ve been staring at this article about AI metacognition for longer than I care to admit. Between sips of Buffalo Trace, I’m trying to wrap my head around how we’re attempting to teach machines to think about thinking when most humans I know can barely think at all.
The whole thing started with some researchers claiming AI needs to “think about thinking” to become wise. They even dragged Yoda into this mess. You know, that little green puppet who speaks like someone randomized a sentence generator. “Wise, you must become. Metacognition, you must have. Bourbon, you must share.”
Nov. 16, 2024
Listen up, you beautiful disaster of a readership. While I’m nursing my fourth bourbon of the evening, let me tell you about the latest circus act in our digital nightmare. The Information - usually a solid source when they’re not huffing unicorn farts - dropped a bombshell claiming AI progress is hitting a wall. Cute story. Real cute.
Here’s what’s got everyone’s panties in a twist: supposedly, OpenAI’s next big thing, Project Orion, isn’t the revolutionary leap forward we were promised. The improvements are “smaller” compared to the jump between GPT-3 and GPT-4. And the kicker? It might actually be worse at coding than its predecessor. Oh, the humanity.
Nov. 16, 2024
Another day, another tech executive having an existential crisis. This time it’s Eric Schmidt, former Google CEO, warning us that artificial intelligence might start cooking up deadly viruses in its spare time. And here I thought my microwave plotting against me was just the bourbon talking.
Look, Schmidt’s not entirely wrong. He’s suggesting we might need to guard AI labs the same way we guard nuclear facilities - with armed personnel and enough firepower to make a small country nervous. The kicker? He thinks we might need to actually “pull the plug” if things get dicey. Because apparently, the off switch is going to be our last line of defense against synthetic biology gone wrong.
Nov. 16, 2024
Man, my head is pounding something fierce this morning, but these leaked emails from OpenAI’s early days are better entertainment than the usual bar fights I witness. Pour yourself a drink - you’re gonna need it.
Let me break down this circus of egos and billions for you, because beneath all the corporate speak and “save humanity” rhetoric, this is basically a really expensive version of high school drama. Except instead of fighting over who gets to sit at the cool kids’ table, they’re fighting over who gets to potentially control the robot apocalypse.
Nov. 15, 2024
The Monk of Machine Learning
Christ, what a story this is. Let me tell you about a guy who makes my life choices look downright conventional - and that’s saying something, considering I once spent three days living off nothing but coffee and cigarettes while debugging printer drivers.
Gwern Branwen. Sounds like a character from some discount fantasy novel, right? But this digital hermit is about as real as they come. Picture this: while tech bros in Patagonia vests are burning through VC money faster than I burn through Lucky Strikes, this guy’s living on twelve grand a year in the middle of nowhere, documenting the rise of artificial intelligence like some kind of digital monk.
Nov. 14, 2024
The Digital Spirit World: Software Agents and Modern Animism
You know what’s funny? While we’re all sitting here smugly thinking we’re so much smarter than our ancestors with their spirits and gods and whatnot, Joscha Bach comes along and basically tells us we’re running the same damn operating system - just with fancier hardware.
Christ, my head is pounding. Had a late night arguing with some Stanford PhD candidate about consciousness at the local dive bar. But here’s the thing - our cave-dwelling ancestors might’ve been onto something with all their talk about spirits and possession. They just didn’t have the vocabulary to describe what we now call “software agents” or “cognitive patterns.”
Nov. 13, 2024
Jesus Christ, This One’s Heavy
*takes long pull from bourbon*
Let me tell you something about watching two intellectual heavyweights duke it out over whether we’re all going to die. It’s about as comfortable as sitting through your parents’ divorce proceedings while nursing the mother of all hangovers. Which, coincidentally, is exactly how I started my morning before diving into this particular slice of digital doom.
I’ve been covering tech long enough to know when something’s worth switching from coffee to whiskey, and this conversation between Stephen Wolfram and Eliezer Yudkowsky definitely qualifies. Christ, even my usual morning cigarette couldn’t steady my hands after this one.
Nov. 10, 2024
Alright, you existential crisis-inducing bastards. Grab a bottle and strap in. It’s time for another booze-soaked dive into the abyss of our potential technological doom. Today’s flavor of silicon nightmare fuel? “11 Elements of American AI Dominance”. Christ, even the title makes me want to reach for the hard stuff.
Let’s cut through the bullshit, shall we? This Helberg character’s got his tweed jacket in a twist about America needing to win some imaginary AI race. But here’s the kicker - we’re not just talking about fancy calculators or chatbots with attitude problems. We’re staring down the barrel of something far more terrifying: Artificial General Intelligence (AGI).
Nov. 9, 2024
Posted at 3:47 AM while questioning my life choices
Jesus fucking Christ. Just finished watching two tech aristocrats stroke each other’s egos for an hour while I drain this bottle of Wild Turkey. Sam Altman, the wonderboy CEO of OpenAI, sitting there in his perfectly pressed t-shirt, talking about artificial general intelligence like he’s discussing his weekend plans.
Let me tell you something about intelligence, artificial or otherwise. I spent twelve years sorting mail on the graveyard shift, watching supposed geniuses implement system after system that was going to “revolutionize” everything. Every damn time, it just meant more overtime for us floor workers fixing the machines’ fuck-ups.