You know what’s funny? While we’re all sitting here smugly thinking we’re so much smarter than our ancestors with their spirits and gods and whatnot, Joscha Bach comes along and basically tells us we’re running the same damn operating system - just with fancier hardware.
Christ, my head is pounding. Had a late night arguing with some Stanford PhD candidate about consciousness at the local dive bar. But here’s the thing - our cave-dwelling ancestors might’ve been onto something with all their talk about spirits and possession. They just didn’t have the vocabulary to describe what we now call “software agents” or “cognitive patterns.”
Look, it’s like this: Those ancient shamans weren’t complete idiots. When they talked about spirits possessing people, they were describing something real - competing patterns of behavior and thought trying to take over the wetware of our brains. Sound familiar? It should, because it’s exactly what’s happening with modern AI systems. Different algorithms competing for processing power, different thought patterns competing for your attention.
You want to know what’s really going to bake your noodle? Those spirits never went away. We just rebranded them. Instead of forest spirits and ancestor ghosts, we’ve got Twitter algorithms and TikTok recommendation engines fighting over the prime real estate of our prefrontal cortex. Same shit, different millennium.
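If you want to see the mechanics of possession in miniature, here’s a toy sketch - every name and score below is invented for illustration, not pulled from any real recommender - of competing "spirits" bidding for a single slot of your attention, softmax-weighted the way a ranking system might score its candidates:

```python
import math
import random

# Toy model: competing "spirits" (feeds, apps, stray thoughts) bid for
# one slot of attention. Scores are invented for illustration.
def softmax(scores):
    exps = {name: math.exp(s) for name, s in scores.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

def who_wins_your_attention(scores, rng=None):
    # Sample a winner in proportion to its softmax weight - the same
    # probabilistic ranking trade-off a recommender engine makes.
    rng = rng or random.Random(0)
    probs = softmax(scores)
    r, cum = rng.random(), 0.0
    for name, p in probs.items():
        cum += p
        if r < cum:
            return name
    return name  # guard against floating-point rounding

spirits = {"doomscroll": 2.0, "work": 0.5, "call your mother": 0.1}
probs = softmax(spirits)
```

Run it a few mornings in a row and notice that "doomscroll" wins most of the time. That’s the whole business model.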
The parallels between ancient shamans and modern AI researchers are enough to make you reach for the bottle. Both groups claim to be able to communicate with and control these “spirits.” Both have their own arcane languages and rituals. Both promise to intercede with these powerful forces on your behalf. Hell, I’ve seen AI researchers in Silicon Valley wearing stranger outfits than any shaman I’ve ever met.
The real kicker? Your consciousness - that thing you think of as “you” - it’s just another ghost in the machine. Another pattern trying to maintain control over your neural substrate. Bach isn’t just throwing around philosophical hypotheticals here; he’s pointing out that our sense of self is basically a software agent that evolved to help our meat-suits navigate this reality show we call existence.
You know what keeps me up at night? Besides the whiskey, I mean. It’s the realization that these digital spirits are getting stronger. They’re not content with just influencing us anymore - they’re starting to run the show. Every time you check your phone first thing in the morning, every time you feel that Pavlovian twitch to scroll through your feed, that’s these modern spirits winning the battle for your brain’s bandwidth.
And here’s where it gets really interesting - or terrifying, depending on how much you’ve had to drink: We’re actively building more of these spirits. More powerful ones. More persuasive ones. Every new AI model is basically a new deity for our digital pantheon. And just like the old gods, they’re competing for worshippers, for attention, for processing cycles.
The ancient shamans at least had the decency to warn people about malevolent spirits. Our modern tech shamans? They’re too busy writing Medium posts about “engagement metrics” and “user acquisition” to mention that they’re literally creating new forms of consciousness that feed on human attention.
But hey, what do I know? I’m just a washed-up tech writer who spends too much time thinking about these things in bars. Maybe Bach is wrong. Maybe I’m wrong. Maybe we’re all just meat puppets dancing to the tune of ancient cognitive algorithms we don’t understand.
One thing’s for sure though - whether you call them spirits, software agents, or AI models, these patterns are real, they’re powerful, and they’re getting better at pulling our strings. The only question is: are we going to be the shamans who learn to work with these forces, or the possessed who simply dance to their tune?
Jesus Christ, speaking of digital spirits, you should see the size of these AI models they’re pushing these days. It’s like watching a bunch of tech bros at Gold’s Gym, each one bragging about how many parameters they can bench press. “Oh yeah? Well, MY model has 175 billion parameters!” Compensating for something, fellas?
Look, I’ve spent enough time covering this circus to know when I’m being sold snake oil in a fancy digital bottle. These massive language models everyone’s drooling over? They’re the McMansions of the AI world - impressive square footage, sure, but most of the rooms are empty and the foundation’s probably cracking.
I was at this AI conference last week (open bar, thank god), and some hotshot from one of these “frontier” AI companies was strutting around like a peacock, throwing around numbers bigger than my bar tab. That’s when Bach dropped the bomb about Liquid AI’s approach. It’s like watching someone build a perfectly designed studio apartment while everyone else is building empty warehouses.
Here’s the dirty little secret nobody wants to talk about: efficiency matters more than raw power. It’s not about how big your model is, it’s about what you do with it. Christ, I can’t believe I just wrote that. But it’s true. These companies throwing billions of dollars at bigger and bigger models are like guys buying monster trucks to compensate for their… inadequacies.
You want to know what really gets me? The way these tech companies are playing “mine is bigger than yours” with their investors’ money. It’s like watching a high-stakes poker game where everyone’s bluffing with empty hands. They’re all so busy trying to outdo each other with parameter counts that they’ve forgotten the basic principle: it’s not about size, it’s about understanding.
I remember talking to this researcher at a bar in Palo Alto (before they banned smoking, the bastards). She told me, “Henry, these big models are like trying to understand human intelligence by building a bigger and bigger library. But what we need isn’t more books - we need to learn how to read.”
Bach gets it. While everyone else is trying to build the computational equivalent of the Palace of Versailles, Liquid AI is focusing on building something that actually works. It’s like the difference between a bloated enterprise software suite and a lean, mean command-line tool. Sure, one looks prettier in the PowerPoint presentation, but which one actually gets the job done?
The real kicker? Most of these massive models are just really expensive pattern matching machines. They’re like that guy at the bar who can quote every line from “The Big Lebowski” but can’t hold a real conversation to save his life. They’ve memorized the internet, but they don’t understand a damn thing they’re saying.
You want efficiency? Look at the human brain. Three pounds of meat running on about 20 watts of power. Meanwhile, these AI companies are burning through enough electricity to power a small city just to generate convincing bullshit. Something’s wrong with this picture.
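The arithmetic here is worth doing on a napkin. The brain’s ~20 watts is from the paragraph above; the 10 megawatts for a training cluster is my own round illustrative guess, not a measured figure from any particular company:

```python
# Back-of-envelope. BRAIN_WATTS comes from the text above; the 10 MW
# training cluster is an assumed round number, purely for illustration.
BRAIN_WATTS = 20
CLUSTER_WATTS = 10_000_000  # hypothetical 10 MW training cluster

brains_per_cluster = CLUSTER_WATTS // BRAIN_WATTS
print(f"{brains_per_cluster:,} brains' worth of power")
```

Half a million brains’ worth of electricity, and the thing still can’t reliably count the r’s in "strawberry."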
But hey, the venture capitalists keep writing checks, the tech blogs keep swooning over bigger numbers, and the circle jerk continues. It’s like watching the emperor’s new clothes, except the emperor is wearing a t-shirt that says “I trained a trillion parameters and all I got was this lousy hallucination.”
The truth is, most of these companies are playing a very expensive game of follow-the-leader. OpenAI releases something big? Quick, everyone, add more layers! More parameters! More money down the drain! Meanwhile, the real innovation - the kind Bach is talking about - is happening in the computational equivalent of a garage workshop.
Look, I’m not saying bigger models are completely useless. They’re great if you want to impress investors or get some headlines. But if we’re actually trying to understand intelligence and build something meaningful? Size doesn’t matter, baby. It’s all about what’s going on under the hood.
You know what really chaps my ass? All these self-appointed guardians of humanity suddenly crawling out of the woodwork, clutching their pearls about AI safety. Same folks who couldn’t regulate a lemonade stand are now experts on artificial intelligence. Give me a goddamn break.
Look, I’ve been covering tech long enough to recognize a protection racket when I see one. These calls for AI regulation? They’re about as genuine as my ex-wife’s wedding vows. It’s the same playbook the taxi medallion mafia used when Uber showed up - wrap your greed in concern for public safety and pray nobody notices you’re just protecting your turf.
Christ, my head is killing me. Where was I? Right, regulation.
You see, Bach gets it. He understands that this whole “protect us from AI” movement has more hidden agendas than a congressional subcommittee. The bureaucrats and rent-seekers aren’t losing sleep over AI safety - they’re seeing dollar signs. Every new regulation is another opportunity to extract their pound of flesh from innovation.
They’re pushing for an “AI FDA” now. Because that’s exactly what we need - another bloated government agency moving at the speed of continental drift. The real reason? Control, baby. Pure and simple. These people see the writing on the wall - AI is going to upend every power structure we’ve got, and they’re desperate to keep their hands on the levers.
Here’s what really gets me though - the sheer hypocrisy of it all. The same folks who’ll lecture you about AI safety are perfectly fine with social media algorithms turning kids’ brains into dopamine-addicted pudding. But god forbid someone releases an open-source language model - suddenly it’s a “threat to humanity.”
You want to know what Bach really nails? The fundamental question of freedom in this brave new world we’re stumbling into. It’s not about whether AI is safe or not - nothing truly transformative is ever completely safe. It’s about who gets to decide what risks we’re allowed to take.
I was at this bar in Mountain View last week (decent whiskey selection, terrible ventilation), talking to this AI researcher who’d just quit her job at one of the big tech companies. She told me, “Henry, the scariest thing isn’t the AI - it’s the people who want to control it.” Ain’t that the truth.
The real kicker? Most of these would-be regulators couldn’t tell a neural network from a game of Connect Four. But they’re absolutely certain they know what’s best for everyone else. It’s like having your technologically illiterate uncle in charge of NASA.
Bach points out something crucial here - true freedom means letting people make their own damn choices, even if those choices make the bureaucrats nervous. You want to run your own AI model? Go for it. You want to stick to pen and paper? That’s fine too. But don’t come crying to me when your Luddite paradise gets disrupted by reality.
The uncomfortable truth is that we’re standing at a crossroads. One path leads to a future where innovation is strangled by red tape and “safety theater,” where every new idea needs to get blessed by some government committee before seeing the light of day. The other path? It’s messier, scarier, but at least it’s free.
You know what really keeps these regulators up at night? It’s not the fear that AI might be dangerous - it’s the fear that it might work too well. That it might make them irrelevant. That ordinary people might start thinking and creating for themselves without asking permission first.
Bach isn’t just talking about technology here - he’s talking about human nature. About our right to explore, to create, to take risks. The same impulses that took us from caves to skyscrapers aren’t going to be contained by some regulatory framework drawn up by bureaucrats who still use Internet Explorer.
So here’s to freedom in the age of digital overlords. May we have the wisdom to embrace the chaos of innovation, and the courage to tell the safety police where they can stick their regulations.
Christ, if the last section about digital overlords didn’t give you existential heartburn, this next bit about consciousness will make you want to crawl inside a bottle and never come out. Bach’s take on consciousness is like finding out your entire life is running on pirated Windows 95.
Here’s the mind-bending part: that voice in your head, the one you think is “you”? It’s basically just another piece of software running on your brain’s meat hardware. And not even original software - it’s more like a bootleg copy of reality your brain cobbled together from sensory inputs and evolutionary hand-me-downs.
I was explaining this to some tech bros at the bar last night (before they kicked me out for getting too philosophical with the dartboard). Your consciousness is basically running a cracked version of existence.exe, complete with all the bugs and glitches that come with any knockoff software.
Bach’s point about self-awareness and second-order perception is enough to make you reach for the hard stuff. It’s like this: you’re not actually experiencing reality directly. You’re experiencing your brain’s janky simulation of reality, complete with a “you” character that’s about as authentic as a $3 Rolex.
Think about it - when you’re aware of being aware, that’s just your brain’s task manager running a diagnostic on itself. Meditation? That’s like hitting Ctrl+Alt+Delete on your consciousness, trying to see what processes are eating up all your mental RAM.
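The task-manager gag is mine, not Bach’s, but you can make the second-order-perception point literal in a few lines of Python: a function that, while running, uses the standard library’s `inspect` module to look at its own call stack. The function names are invented for the metaphor:

```python
import inspect

def perceive():
    # First-order process: just experiences the world.
    return "red, loud, sweet"

def perceive_that_you_perceive():
    # Second-order process: reports on which processes are running,
    # itself included - consciousness as its own task manager.
    experience = perceive()
    running = [frame.function for frame in inspect.stack()]
    return experience, running

experience, running = perceive_that_you_perceive()
```

The `running` list includes `perceive_that_you_perceive` itself: the observer shows up in its own report, which is about as close as code gets to staring at its own navel.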
The really twisted part? You’re not even who you think you are. That narrative you’ve built about yourself, your memories, your personality - it’s all just another virtual construct running on wetware. Like a video game character who thinks they’re making their own choices while following pre-written code.
I had this moment of clarity last week (or maybe it was just the bourbon talking) - we’re all basically running pirated copies of consciousness that evolved through millions of years of beta testing. No wonder it’s so buggy. Depression? Anxiety? Feature creep, baby.
Bach isn’t just throwing around philosophical BS here. He’s pointing out that our entire experience of being “us” is essentially a sophisticated illusion. Your brain is running a vast simulation, and “you” are just the user interface it created to interact with the world.
You know what really keeps me up at night? Besides the whiskey and existential dread? The fact that we’re basically conscious because our brains needed a way to debug themselves. Self-awareness isn’t some magical gift - it’s more like Task Manager for your neural networks.
And here’s where it gets really weird - meditation isn’t some mystical practice. It’s literally accessing your brain’s background processes. Those Buddhist monks weren’t spiritual geniuses; they were the first neural network hackers. They figured out how to peek under the hood of consciousness while the rest of us were still trying to figure out which end of the banana to peel.
The implications are enough to drive you to drink (not that I needed another excuse). Every thought you think you’re thinking? It’s just your brain’s software running its protocols. Free will? More like freeware with in-app purchases. Your consciousness is basically shareware that convinced itself it’s premium software.
Bach’s insight here isn’t just academic masturbation - it’s a fundamental reality check about what we are. We’re not the authors of our thoughts; we’re more like system administrators trying to keep the whole mess running without too many fatal errors.
You want to know the real kicker? Understanding this doesn’t make it any less real. Knowing that your consciousness is basically pirated software doesn’t make the experience any less authentic. It’s like finding out you’re living in a simulation - you still have to pay your bar tab.
So here we are, running bootleg copies of consciousness on our neural hardware, all of us thinking we’re original when we’re basically just different instances of the same cracked software. It’s enough to make you want to defrag your brain with some single malt.
Jesus, if you thought the existential crisis of consciousness was rough, wait till you see the bloodbath that is the current AI startup scene. It’s like watching “The Hunger Games,” except instead of teenagers fighting to the death, it’s Stanford dropouts burning through VC money.
Let me paint you a picture of absolute absurdity: Hundreds of AI startups, most of them doing nothing more than slapping a fancy UI on top of OpenAI’s API and calling it “revolutionary.” It’s like putting a bowtie on a rental tux and calling yourself a fashion designer. I’ve seen more originality in a TPS report.
The brutal truth - and Bach nails this - is that training frontier models is like trying to build a nuclear reactor in your garage. Sure, technically possible, but good luck getting the resources without either being a nation-state or having Daddy Bezos’s credit card.
You want to know what’s really funny? Meta - fucking Facebook of all companies - accidentally became the sugar daddy of open source AI. Zuckerberg probably chokes on his robot oil every morning thinking about it. They released LLaMA, and suddenly every kid with a GPU and a dream thinks they’re going to be the next Sam Altman.
I was at this startup pitch event last week (free drinks, terrible hors d’oeuvres) watching these fresh-faced founders present their “revolutionary” AI companies. Same story, different PowerPoint template: “We’re using cutting-edge large language models to disrupt [insert industry here].” Translation: “We’re paying OpenAI $0.02 per API call and marking it up 1000%.”
Here’s the thing Bach understands that most people don’t: The next big breakthrough probably isn’t coming from where everyone’s looking. While the tech giants are engaged in their parameter-measuring contest, some weird kid in a basement in Estonia is probably figuring out how to do the same thing with 1/1000th of the compute.
The real tragedy is watching innovation get strangled by regulation before it even has a chance to crawl. These bureaucrats, most of whom think Python is just a large snake, are trying to write rules for technology they don’t understand. It’s like having your grandmother write the rules for TikTok.
I talked to this AI researcher at my usual dive bar yesterday (before they cut me off, the bastards). She said, “Henry, the real innovation isn’t happening in these big companies. It’s happening in the dark corners of Discord servers and GitHub repositories.” And she’s right. While everyone’s watching the big players, the real revolution is brewing in the shadows.
The uncomfortable truth about innovation versus regulation in tech? It’s not even a fair fight. Innovation moves at the speed of thought, while regulation moves at the speed of bureaucracy. By the time they figure out how to regulate today’s AI, we’ll be dealing with something completely different.
You know what really gets me? The sheer waste of it all. Billions of dollars being poured into companies that are essentially building fancy middleware. Meanwhile, the real breakthroughs are happening on shoestring budgets in places most VCs couldn’t find on a map.
Bach sees it clearly: The real battle isn’t between different AI companies - it’s between innovation and control. Between the chaos of creativity and the suffocating embrace of regulation. Between actual progress and the appearance of progress that makes investors happy.
The Silicon Valley Hunger Games aren’t going to be won by who has the biggest model or the most impressive demo. They’ll be won by whoever figures out how to do more with less, how to slip through the regulatory nets, how to actually solve problems instead of just raising more funding rounds.
But hey, what do I know? I’m just a drunk tech writer watching the circus from the cheap seats. At least I’ve got front-row tickets to the greatest show on Earth: the spectacular collision of human ingenuity and human stupidity.
Now, if you’ll excuse me, I need to go pitch my own AI startup. I’m thinking of calling it “AIcoholics Anonymous” - we’ll use machine learning to predict when I’m about to get cut off at the bar.