Dec. 2, 2024
There’s something delightfully human about our persistent belief that if we just make things bigger, they’ll automatically get better. It’s as if somewhere in our collective consciousness, we’re still those kids stacking blocks higher and higher, convinced that eventually we’ll reach the clouds.
The current debate about AI scaling limitations reminds me of a fundamental truth about complex systems: they rarely follow our intuitive expectations. We’re currently witnessing what I call the “Great Scaling Confusion” - the belief that if we just pump more compute power and data into our models, they’ll somehow transform into the artificial general intelligence we’ve been dreaming about.
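To see why more scale tends to buy steadily less, it helps to look at the functional form the empirical scaling work keeps finding: loss falls as a power law in parameters and data. Here’s a minimal sketch in Python of a Chinchilla-style law; the constants are illustrative placeholders rather than fitted values, chosen only to show the shape of the curve.

```python
# Sketch of a Chinchilla-style scaling law: L(N, D) = E + A / N**alpha + B / D**beta,
# where N is parameter count and D is training tokens. The constants below are
# illustrative placeholders, not fitted values.

E, A, B = 1.7, 400.0, 400.0   # irreducible loss and scaling coefficients (illustrative)
ALPHA, BETA = 0.34, 0.28      # power-law exponents (illustrative)

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Loss predicted by the assumed power-law form."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Grow the model (and its data budget) tenfold per step: each step buys less.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params, {20 * n:.0e} tokens -> loss {predicted_loss(n, 20 * n):.3f}")
```

The curve flattens toward its irreducible floor, and nothing in it marks the point where “bigger” turns into “general.”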
Dec. 1, 2024
There’s a delightful irony in how we’ve managed to take the crystal-clear concept of “open source” and transform it into something as opaque as a neural network’s decision-making process. The recent Nature analysis by Widder, Whittaker, and West perfectly illustrates how we’ve wandered into a peculiar cognitive trap of our own making.
Let’s start with a fundamental observation: What we call “open AI” today is about as open as a bank vault with a window display. You can peek in, but good luck accessing what’s inside without the proper credentials and infrastructure.
Dec. 1, 2024
When I first encountered the news that ChatGPT outperformed doctors in diagnosis, my initial reaction wasn’t surprise - it was amusement at our collective inability to understand what’s actually happening. We’re still stuck in a framework where we think of AI as either a godlike entity that will enslave humanity or a humble digital intern fetching our cognitive coffee.
The reality is far more interesting, and slightly terrifying: we’re watching the collision of two fundamentally different types of information processing systems. Human doctors process information through narrative structures, built up through years of experience and emotional engagement. They construct stories about patients, diseases, and treatments. ChatGPT, on the other hand, is essentially a pattern-matching engine operating across a vast landscape of medical knowledge without any need for narrative coherence.
Dec. 1, 2024
The universe has a delightful way of demonstrating computational patterns, even in our legal documents. The latest example? Elon Musk’s request for an injunction against OpenAI, a filing that reads like a textbook case of what happens when initial conditions meet emergence in complex systems.
Let’s unpack this fascinating dance of organizational consciousness.
Remember when OpenAI was born? It emerged as a nonprofit, dedicated to ensuring artificial intelligence benefits humanity. The founding DNA, if you will, contained specific instructions: “thou shalt not prioritize profit.” But here’s where it gets interesting - organizations, like software systems, tend to evolve beyond their initial parameters.
Dec. 1, 2024
Let’s talk about angels, artificial intelligence, and a rather fascinating question that keeps popping up: Should ChatGPT believe in angels? The real kicker here isn’t whether AI should have religious beliefs - it’s what this question reveals about our understanding of both belief and artificial intelligence.
First, we need to understand what belief actually is from a computational perspective. When humans believe in angels, they’re not just pattern-matching against cultural data - they’re engaging in a complex cognitive process that involves consciousness, intentionality, and emotional resonance. It’s a bit like running a sophisticated simulation that gets deeply integrated into our cognitive architecture.
Nov. 30, 2024
The Italian data protection watchdog just fired a warning shot across the bow of what might be one of the more fascinating battles of our time - who owns the crystallized memories of our collective past? GEDI, a major Italian publisher, was about to hand over its archives to OpenAI for training purposes, essentially offering up decades of personal stories, scandals, tragedies, and triumphs as cognitive fuel for large language models.
Nov. 30, 2024
There’s something deeply amusing about watching our civilization’s journey toward artificial intelligence. We started with calculators that could barely add two numbers, graduated to chatbots that could engage in philosophical debates (albeit often nonsensically), and now we’ve reached a point where AIs are essentially applying for entry-level positions. The corporate ladder has gone quantum.
Anthropic’s recent announcement of Claude’s “Computer Use” capability is fascinating not just for what it does, but for what it reveals about our computational metaphors. We’ve moved from “AI assistant” to “AI co-pilot” to what I’d call “AI junior employee who really wants to impress but occasionally needs adult supervision.”
Nov. 30, 2024
The simulation hypothesis just got uncomfortably personal. Stanford researchers have demonstrated that with just two hours of conversation, GPT-4o can create a digital clone that responds to questions and scenarios with 85% accuracy relative to the original person. As a cognitive scientist, I find this both fascinating and mildly terrifying - imagine all your questionable life choices being replicable at scale.
Let’s unpack what’s happening here from a computational perspective. Your personality, that unique snowflake you’ve spent decades crafting through existential crises and awkward social interactions, turns out to be remarkably compressible. It’s like discovering that your entire operating system fits on a floppy disk.
Nov. 30, 2024
The dream of delegating our mundane computer tasks to AI assistants is as old as computing itself. And now, according to Microsoft’s latest research, we’re finally approaching a world where software can operate other software - a development that’s simultaneously fascinating and mildly terrifying from a cognitive architecture perspective.
Let’s unpack what’s happening here: Large Language Models are learning to navigate graphical user interfaces just like humans do. They’re essentially building internal representations of how software works, much like how our brains create mental models of tools we use. The crucial difference is that these AI systems don’t get frustrated when the printer dialog doesn’t appear where they expect it to be.
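To make that loop concrete, here’s a minimal observe-decide-act sketch. The query_model function and the action format are hypothetical stand-ins, since Microsoft’s and Anthropic’s actual agent stacks each define their own interfaces; pyautogui simply supplies the screenshots and synthetic clicks.

```python
# Minimal observe -> decide -> act loop for a GUI-driving agent.
# `query_model` and the action dict format are hypothetical stand-ins for
# whatever vision-language model and tool schema a real system uses.
import pyautogui  # real library: screenshots plus synthetic mouse/keyboard input

def query_model(screenshot, goal: str) -> dict:
    """Hypothetical model call: takes a screenshot (PIL image) and a goal,
    returns one action such as {"type": "click", "x": 412, "y": 88}."""
    raise NotImplementedError("wire this up to your model of choice")

def run_agent(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()      # observe the current screen
        action = query_model(screenshot, goal)   # decide on a single next step
        if action["type"] == "done":             # model judges the task complete
            break
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])  # act: click a coordinate
        elif action["type"] == "type":
            pyautogui.write(action["text"])            # act: type literal text
```

The design choice worth noticing is that all of the agent’s “understanding” of the interface lives in that screenshot-to-action mapping, which is exactly where the mental-model analogy above either holds up or breaks down.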
Nov. 30, 2024
The latest lawsuit against OpenAI by Canadian news organizations reveals something fascinating about our current moment: we’re watching different species of information processors duke it out in the evolutionary arena of the digital age. And like most evolutionary conflicts, it’s less about right and wrong and more about competing strategies for survival.
Let’s unpack what’s really happening here. Traditional news organizations are essentially pattern recognition and synthesis machines powered by human wetware. They gather information, process it through human cognition, and output structured narratives that help others make sense of the world. Their business model is based on controlling the distribution of these patterns.