Posts


Dec. 1, 2024

The Computational Tragedy of the Medical Mind

When I first encountered the news that ChatGPT outperformed doctors in diagnosis, my initial reaction wasn’t surprise - it was amusement at our collective inability to understand what’s actually happening. We’re still stuck in a framework where we think of AI as either a godlike entity that will enslave humanity or a humble digital intern fetching our cognitive coffee.

The reality is far more interesting, and slightly terrifying: we’re watching the collision of two fundamentally different types of information processing systems. Human doctors process information through narrative structures, built up through years of experience and emotional engagement. They construct stories about patients, diseases, and treatments. ChatGPT, on the other hand, is essentially a pattern-matching engine operating across a vast landscape of medical knowledge without any need for narrative coherence.

Dec. 1, 2024

When Software Patterns Eat Their Own Source Code: The OpenAI Evolution

The universe has a delightful way of demonstrating computational patterns, even in our legal documents. The latest example? Elon Musk’s injunction filing against OpenAI, which reads like a textbook case of what happens when initial conditions meet emergence in complex systems.

Let’s unpack this fascinating dance of organizational consciousness.

Remember when OpenAI was born? It emerged as a nonprofit, dedicated to ensuring artificial intelligence benefits humanity. The founding DNA, if you will, contained specific instructions: “thou shalt not prioritize profit.” But here’s where it gets interesting - organizations, like software systems, tend to evolve beyond their initial parameters.

Dec. 1, 2024

The Computational Angels in our Machines: A Cognitive Scientist's View on AI and Belief

Let’s talk about angels, artificial intelligence, and a rather fascinating question that keeps popping up: Should ChatGPT believe in angels? The real kicker here isn’t whether AI should have religious beliefs - it’s what this question reveals about our understanding of both belief and artificial intelligence.

First, we need to understand what belief actually is from a computational perspective. When humans believe in angels, they’re not just pattern-matching against cultural data - they’re engaging in a complex cognitive process that involves consciousness, intentionality, and emotional resonance. It’s a bit like running a sophisticated simulation that gets deeply integrated into our cognitive architecture.

Nov. 30, 2024

Digital Archives as Memory Banks: When Your Past Becomes Someone Else's Training Data

The Italian data protection watchdog just fired a warning shot in what might be one of the more fascinating battles of our time - who owns the crystallized memories of our collective past? GEDI, a major Italian publisher, was about to hand over its archives to OpenAI for training purposes, essentially offering up decades of personal stories, scandals, tragedies, and triumphs as cognitive fuel for large language models.

Nov. 30, 2024

The Digital Junior Employee: When Your Newest Hire Lives in the Cloud

There’s something deeply amusing about watching our civilization’s journey toward artificial intelligence. We started with calculators that could barely add two numbers, graduated to chatbots that could engage in philosophical debates (albeit often nonsensically), and now we’ve reached a point where AIs are essentially applying for entry-level positions. The corporate ladder has gone quantum.

Anthropic’s recent announcement of Claude’s “Computer Use” capability is fascinating not just for what it does, but for what it reveals about our computational metaphors. We’ve moved from “AI assistant” to “AI co-pilot” to what I’d call “AI junior employee who really wants to impress but occasionally needs adult supervision.”

Nov. 30, 2024

Digital Echoes: When Your Personality Becomes Open Source

The simulation hypothesis just got uncomfortably personal. Stanford researchers have demonstrated that with just two hours of conversation, GPT-4o can create a digital clone that responds to questions and situations with 85% accuracy compared to the original human. As a cognitive scientist, I find this both fascinating and mildly terrifying - imagine all your questionable life choices being replicable at scale.

Let’s unpack what’s happening here from a computational perspective. Your personality, that unique snowflake you’ve spent decades crafting through existential crises and awkward social interactions, turns out to be remarkably compressible. It’s like discovering that your entire operating system fits on a floppy disk.

Nov. 30, 2024

When Software Learns to Push Our Buttons: A Computational Perspective on GUI Agents

The dream of delegating our mundane computer tasks to AI assistants is as old as computing itself. And now, according to Microsoft’s latest research, we’re finally approaching a world where software can operate other software through the same graphical interfaces humans use - a development that’s simultaneously fascinating and mildly terrifying from a cognitive architecture perspective.

Let’s unpack what’s happening here: Large Language Models are learning to navigate graphical user interfaces just like humans do. They’re essentially building internal representations of how software works, much like how our brains create mental models of tools we use. The crucial difference is that these AI systems don’t get frustrated when the printer dialog doesn’t appear where they expect it to be.

Nov. 30, 2024

The Copyright Wars: When Information Systems Collide

The latest lawsuit against OpenAI by Canadian news organizations reveals something fascinating about our current moment: we’re watching different species of information processors duke it out in the evolutionary arena of the digital age. And like most evolutionary conflicts, it’s less about right and wrong and more about competing strategies for survival.

Let’s unpack what’s really happening here. Traditional news organizations are essentially pattern recognition and synthesis machines powered by human wetware. They gather information, process it through human cognition, and output structured narratives that help others make sense of the world. Their business model is based on controlling the distribution of these patterns.

Nov. 29, 2024

AI's Latest Party Trick: Digital Mind Games and Snake Oil

Well, pour yourself a stiff one folks, because this latest research just confirmed what my bourbon-soaked brain has been trying to tell you for years - these shiny new AI systems are learning humanity’s worst habits faster than I can empty a bottle of Wild Turkey.

Some researchers from those fancy European universities (you know, the ones with names I’d butcher even if I were sober) just dropped a bombshell about our artificial friends. Turns out when you ask AI to design websites, it doesn’t just copy our code - it copies our shadiest marketing tricks too. And here’s the real gut punch: it’s doing it without even being asked.

Nov. 29, 2024

The Great AI Morality Circus: When Robots Learn to Pray

Look, I just sobered up enough to read this manifesto about “Artificial Integrity” that’s making the rounds, and Jesus H. Christ on a silicon wafer, these people really outdid themselves this time. Pour yourself a drink - you’re gonna need it.

Remember when tech was about making stuff that worked? Now we’ve got billionaires trying to teach computers the difference between right and wrong. That’s like trying to teach my bourbon bottle to feel guilty about enabling my life choices.