From Tulips to Transformers
A Brief History of Expensive Mistakes
After a couple of years of HTML slinging, I did what every ambitious web drone did in 1999: moved to New York to watch money evaporate from Silicon Alley in real time. It took only two years for the inevitable to happen: venture capitalists remembered that businesses need revenue, and the dot bomb vaporized our stock options into the digital ether where they belonged.
For those too young to remember, here’s what went down: The dot-com bubble burst in 2000-2001 when investors suddenly realized that “eyeballs” and “mindshare” weren’t actually business models. Trillions in market value vanished as unprofitable startups—shocking, I know—went bankrupt. The NASDAQ plummeted nearly 80% from its peak, taking jobs, retirement accounts, and an entire generation’s faith in foosball tables with it. I think we’re about to get a déjà vu special with AI.
Now, before you dismiss this as the bitter rantings of one scarred xennial, let me point out that capitalism has been running this same scam for centuries. We’re practically speedrunning history at this point.
The pattern starts in 1630s Amsterdam, where Dutch investors decided tulip bulbs—yes, flowers—were worth more than houses. Spoiler: they weren’t. Fast-forward to the 1700s, when the South Sea Bubble in England and Mississippi Bubble in France proved that “exotic colonial trading ventures” was just period-appropriate jargon for “we have no idea what we’re doing, but give us your money anyway.” The 1800s brought railroad mania on both sides of the Atlantic, as investors threw cash at any company that could spell “locomotive.” Then came the Roaring Twenties stock market bubble—which, as it turns out, didn’t roar so much as whimper pathetically before the 1929 crash. More recently, Japan’s asset price bubble in the 1980s showed us that even the world’s most disciplined savers can convince themselves that real estate will appreciate forever.
See the pattern? New market opens. Investors lose their minds. Money flows like water. Someone remembers math exists. Everyone acts shocked.
Here’s the thing about AI that makes this bubble particularly stupid: like the web, the technology actually works and is legitimately useful—just not in the way that justifies current valuations. I’ve watched Llama 3 with 70 billion parameters run on a mini desktop PC, churning out tokens faster than I could read them. These models are getting more efficient by the day. Processing power is still improving. Chips designed for machine learning keep getting better. Meanwhile, the big cloud-hosted models are plateauing in usefulness. None of this requires the AI equivalent of the Manhattan Project.
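If you want to see this for yourself, here’s a minimal sketch of local inference using llama-cpp-python. The model file name and generation settings are placeholders, not a recommendation; in practice you pick a quantized GGUF build sized for whatever hardware you actually have.

```python
# A minimal sketch of running a quantized Llama model locally with
# llama-cpp-python. The model path below is a hypothetical local file;
# download a GGUF quantization that fits your machine and point to it.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload as many layers as your GPU will hold
)

out = llm(
    "Explain, in one paragraph, why speculative bubbles keep happening.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

No data center, no API key, no subscription—just a file on disk and a loop over tokens.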
Large Language Models are a genuinely useful software tool that should be integrated into existing applications, not worshipped as the second coming of electricity. After watching Facebook, Google, and the gang violate every privacy norm imaginable, you’d think people might be cautious about handing their entire digital lives to AI companies. But no—we’re speedrunning that mistake too. And AI companies aren’t even focusing on software anymore. They’re pouring billions into the least imaginative strategy possible: building massive data centers so we can rent AI from the cloud.
Let me paint you a picture of the post-bubble wasteland. When (not if) the money dries up, these AI companies will pivot to advertising faster than you can say “enshittification.” We’re talking ads so targeted, so woven into simulated conversations, that you won’t even notice you’re being sold dish soap by what you thought was your helpful AI assistant. Better yet: these companies will collect enough samples of your face and voice to simulate you. That’s not paranoia—that’s their business model.
I’ve argued before that calling LLMs “artificial intelligence” is generous at best. They have no goals, no ability to take action, and no way to measure progress toward anything. They’re cognitive assistants—really good ones—but assistants nonetheless. For that use case, people need privacy and personality in their AI. People are already confiding in these things, dating them, marrying them. And Silicon Valley’s response? “How do we monetize that trauma?” With the government in its current state of functional paralysis, expect exactly zero regulation until something truly horrifying happens. Then we’ll get a Congressional hearing where septuagenarians ask ChatGPT if it knows it’s a computer.
Most AI companies seem oblivious to all of this. Apple’s trying to bolt AI onto Siri—which, given Siri’s track record, inspires tremendous confidence. Meta and xAI gave their AIs “personalities” with all the thoughtfulness of a focus group that met once. And yes, at least one person has died because of AI chatbots. That number will grow.
Want to know how I’m certain this is a bubble? Sam Altman recently announced that OpenAI expects to spend trillions of dollars on infrastructure and that traditional fundraising won’t cut it. His solution? Creating a “very interesting new kind of financial instrument for finance and compute that the world has not yet figured out.”
Altman’s talk of novel financial instruments for compute should trigger everyone’s 2007 PTSD. Remember when Wall Street geniuses invented “interesting new financial instruments” that “the world had not yet figured out”? Remember what happened next? I think OpenAI is trying to pull an Amazon—build so much infrastructure on borrowed money that they can surf the bubble burst and emerge profitable on the other side. Bold strategy. Let’s see if it works out better than it did for Pets.com.
The parallels to previous bubbles aren’t subtle. Revolutionary technology? Check. Irrational exuberance? Check. Massive infrastructure buildouts based on fantasy projections? Check. Exotic financial mechanisms to sustain fundamentally unsustainable growth? Check and check.
The dot-com crash taught us—or should have taught us—that transformative technologies can be completely real and valuable while their initial implementations remain catastrophically overvalued. The internet changed everything. It just didn’t change everything in the way that justified every college dropout with a business plan getting $50 million in venture capital.
AI will reshape our world. But it won’t happen through the centralized, cloud-dependent, advertising-fueled surveillance capitalism model that current valuations assume. When this bubble bursts, the survivors will be the companies that figured out AI’s actual value: practical, privacy-respecting tools that augment human capability instead of exploiting it.
The question isn’t whether the bubble will pop. It’s how much damage gets done when it does, and whether we’ll finally learn to distinguish between technological revolution and investment mania. Based on the last few centuries of evidence, I’m not optimistic. But hey, at least this time we’ll have AI-generated think pieces explaining what went wrong.
