April 23, 2026
Weaponizing Bits
A neural network that can, for instance, classify lung X-rays near perfectly, and keep getting better at it, is Narrow AI because it cannot classify skin lesions or translate languages without being entirely retrained on a new domain of data. Narrow AI, sometimes pejoratively called “Weak AI,” is different from the AI slop generators we’ve come to love and hate. But Narrow AI has been around longer, and it is the only AI trustworthy enough to be incorporated into industrial processes.
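As a toy illustration of what “narrow” means, here is a minimal sketch in plain NumPy, with synthetic clusters standing in for X-ray features. The classifier does exactly one task; hand it a new domain and its weights are useless without retraining. Everything here (the data, the feature count) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, lr=0.1, epochs=200):
    """Tiny logistic-regression 'narrow' classifier: one task, one weight vector."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad = p - y                              # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Synthetic stand-in for "lung X-ray features": two separable clusters.
X = np.vstack([rng.normal(-1, 0.5, (100, 5)), rng.normal(1, 0.5, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

w, b = train_logistic(X, y)
acc = (predict(w, b, X) == y).mean()
print(f"accuracy on its one task: {acc:.2f}")
```

The point of the sketch: the learned `w` encodes nothing transferable. A skin-lesion dataset would need a fresh `train_logistic` run from scratch, which is precisely the “narrow” trade-off.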
The species of AI that the general public is upset about, the AI slop, is generative AI (this includes ChatGPT and Sora, Claude, Gemini and Nano Banana, Perplexity, etc.), and the press incorrectly believes that generative AI will replace humans in the labor force, because that is what executives of generative AI companies keep telling them: these systems can “reason.” Funny that. They can reason, but they can’t learn. As I have argued elsewhere, generative AI can assist people, but so far it can’t be put in control. It turns out that Narrow AI is what you want if you want real gains in benefits, efficiency, and profits, because by comparison it is far more cost-effective, local, and affordable.
Green cans, blue cans, brown cans; regardless, most of our reusable waste still ends up in the ocean and in landfills. Unfortunately, recycling is a marginal business; making a profit is hard. The US throws $6.5 billion worth of valuable, reusable materials into landfills every year. Despite this loss of value, we haven’t made much progress in recycling for over a decade: the US rate has stayed flat at 32%, nowhere close to the EPA’s national goal of 50% by 2030. That’s the economics, and there is plenty of room for improvement.
But look at the nitty-gritty: the work itself is hazardous. Talk about a job you don’t want: before the recycling process can even begin, a massive amount of sorting must be done by humans, putting them in front of broken glass, scrap metal, sharp objects, syringes, tetanus, and hepatitis! There are also chronic illnesses associated with the work, like hearing loss and lung disorders, and injury rates in the recycling industry are reported to run about four times higher than average. See? AI can help us improve recycling at a job that, really, we don’t want humans to have to do.
There are multiple efforts to use Narrow AI to improve the recycling industry, but one company, Amp Robotics Corporation, is tackling the sorting problem with a cascading series of AI systems that do the sorting 100% automatically (at least at their prototype facility in Cleveland, OH, US). They use cameras and deep learning to identify different textures and jams in the process, detectors to determine material content for removal by robot arms, and their systems are adaptive. Have a new stream of waste with new materials? No problem. The system can learn.
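The cascading idea is worth sketching: each stage is a narrow model, and an item falls through until some stage claims it. The stages, item fields, and bin names below are all hypothetical stand-ins for the vision and material detectors described above, not Amp’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Item:
    texture: str      # stand-in for a camera/deep-learning texture label
    material: str     # stand-in for a material-detector reading

def stage_jam_detector(item):
    # Stage 1: the vision model flags jams and anomalies first.
    return "reject_line" if item.texture == "tangled" else None

def stage_material_router(item):
    # Stage 2: recognized materials are routed to a robot arm's bin.
    bins = {"PET": "plastics_bin", "aluminum": "metals_bin", "cardboard": "fiber_bin"}
    return bins.get(item.material)

def sort_item(item, stages=(stage_jam_detector, stage_material_router)):
    for stage in stages:
        decision = stage(item)
        if decision is not None:
            return decision
    return "residue_bin"   # unrecognized: exactly the data you retrain on

print(sort_item(Item("smooth", "PET")))      # plastics_bin
print(sort_item(Item("tangled", "film")))    # reject_line
print(sort_item(Item("smooth", "ceramic")))  # residue_bin
```

The design note is the last line: everything that lands in `residue_bin` becomes labeled training data, which is how a system like this adapts to a new waste stream.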
If digging through trash while avoiding needles sounds like a bad day at the office, consider the high-stakes gamble of drug development. Historically, bringing a new drug to market has been a billion-dollar coin flip with a dismal 10-15% success rate. It’s a slow, grueling process of trial and error that usually takes a decade. But while the world was panic-buying toilet paper in 2020, Moderna was proving that Narrow AI could do for biotech what it’s doing for the junkyard: making the “impossible” industrial.
Moderna didn’t just stumble into a COVID-19 vaccine; they spent a decade building a digital-first “ecosystem” that treats biology like software. Because an mRNA vaccine is essentially just a biological instruction manual—a code that tells your cells how to fight a virus—Moderna built an AI-driven drug-design application to write that code.
By using integrated data science to predict the best protein sequences, they swapped out “hunches” for high-throughput automation. Their Narrow AI models don’t write poetry or generate “slop” images of doctors; they focus exclusively on mRNA constructs, and the results followed.
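To make “predicting the best sequence” concrete, here is a toy sketch of sequence optimization in that spirit: for a fixed protein, search the synonymous codon choices for the one a scoring model likes best. The tiny codon table is a real subset of the genetic code, but the “expression score” (preferring GC content) is an invented stand-in for a trained model, and exhaustive search only works at toy sizes.

```python
from itertools import product

CODONS = {              # amino acid -> synonymous codons (small real subset)
    "M": ["ATG"],
    "F": ["TTT", "TTC"],
    "K": ["AAA", "AAG"],
}

def score(mrna):
    # Hypothetical stand-in for a learned model: reward GC content.
    return sum(base in "GC" for base in mrna) / len(mrna)

def optimize(protein):
    """Exhaustively search synonymous codon choices for the top-scoring mRNA."""
    return max(
        ("".join(codons) for codons in product(*(CODONS[aa] for aa in protein))),
        key=score,
    )

seq = optimize("MFK")
print(seq, f"GC={score(seq):.2f}")   # ATGTTCAAG GC=0.33
```

Real pipelines replace both pieces, swapping the brute-force search for learned search and the toy score for models trained on assay data, but the shape of the problem is the same: a narrow model scoring candidates in one domain.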
Again, this is the “Weak AI” that the media seems to ignore in the debate over AI. It can’t tell you a joke or help you cheat on a history essay, but it can parse a massive library of genomic data to find the exact sequence needed to stop a pandemic. When speed is the difference between a global lockdown and a return to normalcy, you don’t need a chatbot to muddle through its own rationalized stream of consciousness; you need a highly specialized, industrial-grade algorithm that is built to beat the odds.
If you take a stroll through the massive tomato greenhouses of the Costa Group in New South Wales, Australia, you’ll notice something missing: the hum of bees. In fact, using bumblebees for indoor farming is illegal in many parts of Australia, and importing non-native species is a major biosecurity risk. For years, this left farmers with the grueling, low-tech task of manual pollination—workers walking rows of a million plants with vibrating wands to shake the pollen loose.

Enter the “Polly” robot, a prime example of Narrow AI solving a geographical and regulatory deadlock. Developed by Arugga AI Farming, these robots use computer vision and deep learning to do what humans find tedious and bees aren’t allowed to do.
The process is a masterclass in specialized AI.
The “Weak AI” win here is undeniable. Early results show a 15% higher yield than manual labor and even a 7% boost over traditional bumblebees. Beyond the numbers, the bots provide a biosecurity shield, reducing the spread of plant viruses because, unlike human workers, they don’t need to touch the plants to get the job done. It’s a cleaner, faster, and more scalable way to grow food. And all thanks to an AI that doesn’t know how to give advice (something chatbots are terrible at anyway) but knows exactly how to shake a tomato.
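The pollination loop itself is simple to sketch: a vision model classifies each flower, and only the ones judged ready get a contactless pulse. The flower features, the readiness rule, and the numbers below are invented for illustration; Arugga’s actual classifier is a trained deep-learning model, not a threshold.

```python
def classify_flower(openness, anthers_visible):
    # Stand-in for the deep-learning classifier's verdict on one flower.
    if openness > 0.7 and anthers_visible:
        return "ready"
    return "not_ready"

def pollination_pass(flowers):
    """One pass down a row: pulse only the flowers classified as ready."""
    pulses = 0
    for openness, anthers_visible in flowers:
        if classify_flower(openness, anthers_visible) == "ready":
            pulses += 1   # fire the air/vibration pulse; no plant contact
    return pulses

row = [(0.9, True), (0.4, True), (0.8, False), (0.75, True)]
print(pollination_pass(row))   # 2
```

Note what makes this narrow: the model answers one yes/no question about one crop, which is exactly why it can run on a robot rolling down a greenhouse row instead of in a data center.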
Given these few examples out of hundreds more, what strikes me as odd is how little the media talks about this kind of AI: the Narrow kind versus the chatty kind, the AI that is productive instead of often being quite destructive. Part of the problem is the name itself, which has haunted the academic pursuit of Artificial Intelligence for a long time. Coined in the 1950s, the term is pretty meaningless. It’s as insightful as saying you are more intelligent than a dog. But are you more intelligent at being a dog than a dog?
When your car reroutes you around traffic and saves you ten minutes, or when Netflix or Instagram makes you a recommendation, that’s AI! It’s been here all along, but we keep moving the goalposts on what even defines it. Right now we are all caught up talking about generative AI while ignoring the leaps and bounds that Narrow AI has made. There’d be less fervor and fewer doomsday predictions if we just called it “machine learning” or “stochastic information processing,” and stopped falling for the Fluency Illusion.
Narrow AI creates systems that can actually learn. ChatGPT and Claude do not learn. They are trained on massive datasets, and once trained, they are frozen in place, amnesiacs between your conversations with them. Everything that bridges that gap, the “memory” and the “thinking,” is just typical software.
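The frozen-versus-learning distinction fits in a few lines. Here a running-mean “model” stands in for both kinds: one updates with every observation, the other is trained once and never moves. The numbers are invented; the point is the shape of the two `observe` methods.

```python
class OnlineMean:
    """Updates its estimate with each new sample: it keeps learning."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def observe(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean update

class FrozenMean:
    """Trained once, then deployed frozen: new data changes nothing."""
    def __init__(self, training_data):
        self.mean = sum(training_data) / len(training_data)
    def observe(self, x):
        pass   # deployment mode: the weights never move

online, frozen = OnlineMean(), FrozenMean([1.0, 1.0, 1.0])
for x in [5.0, 5.0, 5.0]:   # the world drifts after training day
    online.observe(x)
    frozen.observe(x)
print(online.mean, frozen.mean)   # 5.0 1.0
```

When the world drifts, the online learner tracks it and the frozen model quietly goes stale, which is why the frozen kind needs periodic, expensive retraining runs back at the data center.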
Narrow AI learns and yet doesn’t need data centers. Fancy that, ChatGPT.
Usually the models are small enough to run locally, either on a robotic platform or a server. No cloud needed, which means no data centers needed. Narrow AI is nifty that way. It is the only AI I can point to today that has measurably created a net benefit for humanity: doing jobs we don’t want people doing, helping us recycle and manage waste, improving health care, and making agriculture more sustainable. I don’t know, maybe the true bonus is that the narrow brand of AI doesn’t tend to talk back.