Table of Contents
The Humble Hyperlink (April 2, 2026)
The Hare Is Running Away With Your Data But the Tortoise Has a Shell (March 23, 2026)
Language == (Code && 💕) (March 20, 2026)
Notes on Cognitive Liberty (March 20, 2026)
From Tulips to Transformers: A Brief History of Expensive Mistakes (March 18, 2026)
Doomtubers (March 18, 2026)
The Private Mind (March 16, 2026)
The Fluency Illusion (March 16, 2026)
The Map Is Not the Territory (March 16, 2026)
The Modular Mind (March 16, 2026)
Who Named These Animals? (March 4, 2026)
Align AI? Try Aligning Humans (February 22, 2026)
This AI Cannot Be Empire (February 14, 2026)
Unsubscribe & Resist (January 26, 2026)
The Contentkeeper (April 11, 2025)
Review: The Idea Factory (March 26, 2025)
Whither Apple Intelligence? (March 19, 2025)
Let’s Call Them Answer Engines (March 11, 2025)
Doomtubers (March 4, 2025)
“It’s not A.I., It’s the Non-Economy of Content, Stupid.” (January 25, 2025)
The Lasting Creak of Legacy Code (January 26, 2005)
Weaponizing Bits

When I was studying AI back in graduate school, and even in my undergrad days, there was a clear tendency in the field of artificial intelligence to move the goalposts. AI is always just around the corner, and then it isn’t. What it actually is remains ill-defined. Now a new wave of excitement about AI is upon us, and the reality is starting to dawn that perhaps AI is once again too broad a term for what are really just savvy answer engines.[^1]
John Gruber, over at [Daringfireball], has some great thoughts on the state of Apple Intelligence, which, if you are an iPhone owner, [you already know] is pretty unintelligent. This reminds me once again that I dislike the jump to the term “artificial intelligence” when we are still dealing with machine learning algorithms that speak well but don’t actually have a solid underlying framework for the real world. [Paul Kafasis] (creator of my favorite [audio software, Piezo]) gives an excellent demonstration of this by asking these “AIs” pretty simple questions about the Super Bowl, like, uh, who won? Patiently asking Siri about Super Bowls 1 through 58, he found that [Siri did really bad]: it got only 34% of them right.
All of this got me thinking about a training session I had with an AI. I’m under an NDA and can’t go into specifics, but suffice it to say, what this AI got wrong is similar to Siri’s mistakes: it just didn’t know what a Super Bowl really is.[^2] These large language models (LLMs) don’t have a conceptual framework of the real world beyond a probability matrix and the corpus they’ve been given. The AI I was giving feedback to was simply off on basic facts that, as a human, I could spot pretty quickly. ChatGPT has an interesting (if coached) notion of the probability of seeing a purple dog. It knows that the probability is low, but when it “invents” instances of seeing one, it says, “Someone used pet-safe dye on their dog.” Why pet-safe? Is it not aware of the cruel creeps and idiots in the world? I pursued the question for fun, and ChatGPT got a little nosy!
Fair point—unfortunately, some people do irresponsible things to animals. While pet-safe dye is designed to be non-toxic and gentle, there are definitely cases where people use harmful substances or subject animals to unsafe conditions for the sake of novelty or social media attention.
If you ever see a dyed animal and suspect cruelty, it’s always good to check if it’s safe and ethical. Some groomers and pet owners use safe dyes for fun or artistic expression, but if an animal seems distressed or harmed, that’s a whole different issue.
Did something specific prompt this question?
Woah. Investigate much?
Me: No. I was just curious about how much you knew about purple dogs.
ChatGPT: Got it! Well, I’d say I know a fair bit about purple dogs—at least as much as anyone who hasn’t actually seen one in the wild. If you’re ever looking for one, your best bet is probably an art installation, a pet grooming contest, or a really dedicated Photoshop user.
But if you do ever see a naturally occurring purple dog, take a picture—it could be a scientific breakthrough!
Could it? I mean, ChatGPT’s not wrong, right?
The long and the short of this post, though, is just this: if we persist in calling these systems artificially intelligent, people will give them more credit than they deserve. We really should be calling them what John Gruber and others are calling them: answer engines. Everyone knows about search engines and the precautions necessary when using them. The moniker answer engine correctly implies their capabilities. They know a lot. They can spell out what they know in nice human-readable chunks. But they don’t have a good overarching context or framework for the world, and they should be double-checked now and again.
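To make that “probability matrix” point concrete, here is a toy sketch in Python. It is nothing like a production LLM (those use neural networks trained on billions of documents, not a literal lookup table, and the three-sentence corpus below is made up purely for illustration), but the principle is the same: predict the next token from statistics of a corpus.

```python
from collections import Counter, defaultdict
import random

# A toy "language model": count which word follows which in a tiny,
# made-up corpus. Real LLMs learn these conditional probabilities with
# a neural network, but the idea is the same.
corpus = (
    "the chiefs won the super bowl . "
    "the eagles won the super bowl . "
    "the dog wore a purple coat . "
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate "fluent" text by repeatedly asking: what usually comes next?
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
# Plausible-sounding output, yet there is no concept of a Super Bowl
# (or a dog) anywhere in here: only counts of what followed what.
```

Scale that idea up by a few hundred billion parameters and you get something that can answer in beautiful paragraphs, and still whiff on who won Super Bowl 58.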
[^1]: A good indication that what we have is not AI is that the AI companies needed a new term, AGI (Artificial General Intelligence), for what they originally meant by AI. That’s called protecting your investments.
[^2]: I personally could never decide between the two. How would you eat soup without a bowl? (Let’s see an AI make sense of that joke.)