Weaponizing Bits

It is a reasonable thing to align a thermostat. Set it to 68 degrees, it stays at 68 degrees, nobody gets hurt. Honeywell figured this out in 1953 and we have been warm ever since. The engineering problem, getting the thermostat to 68, is solved. Finished. Behind us.
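The solved half really is that small. Here is a minimal sketch of the mechanism, a bang-bang controller with hysteresis, in Python; the function name, setpoint, and deadband are my illustrative choices, not Honeywell's actual design:

```python
def thermostat_step(temp_f: float, heating: bool,
                    setpoint: float = 68.0, deadband: float = 1.0) -> bool:
    """Decide whether the furnace runs for the next tick.

    Classic bang-bang control with hysteresis: turn on when the room
    drifts below the setpoint, off when it drifts above, and hold the
    current state inside the deadband so the furnace doesn't chatter.
    """
    if temp_f < setpoint - deadband:
        return True       # too cold: fire the furnace
    if temp_f > setpoint + deadband:
        return False      # warm enough: shut it off
    return heating        # inside the deadband: keep doing what we're doing
```

Seventy years of central heating rest on roughly this much logic. Note the one thing the code never questions: where the 68 came from.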
The other problem, the one that has been running in the background of every human society since the first cave had two occupants, is getting everyone in the room to agree that 68 is the right temperature. That problem remains open. It will always remain open, because it is not an engineering problem. It is a political one, which is to say a problem about whose preferences count, who has the authority to decide, and what happens to the person who keeps sneaking over to adjust the dial. We have courts for that last one, and they are backed up.
That is the alignment problem. Not the technical one. The human one. And the AI industry has done something very clever (or very convenient, depending on your generosity) by dressing the human problem in engineering language. "Alignment" sounds like calibration. It sounds tractable. It sounds like the kind of thing a room full of very smart people in San Francisco could get their hands around, given enough compute and the right loss function.
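And to be fair, "the right loss function" is a real object, and it is short. Here is a minimal sketch, assuming the standard pairwise-preference (Bradley-Terry) loss used to train reward models for RLHF; the names and numbers are illustrative, not any lab's code:

```python
import numpy as np

def preference_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Mean Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected).

    Trains a reward model to score the answer a human labeler preferred
    above the one they rejected. log(1 + exp(-x)) is computed as
    logaddexp(0, -x) for numerical stability.
    """
    margin = r_chosen - r_rejected
    return float(np.mean(np.logaddexp(0.0, -margin)))

# Toy batch: reward scores for three (preferred, rejected) answer pairs.
print(preference_loss(np.array([1.2, 0.7, 2.0]), np.array([0.3, 0.9, 1.1])))
```

Notice what the math takes as given: somebody upstream has already decided which answer counts as preferred. The loss optimizes agreement with that somebody. It has nothing to say about who they are.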
San Francisco is going to do what NATO couldn't?
It is a reasonable question. The Code of Hammurabi is approximately 3,700 years old and Hammurabi was not the first person to try this. We have had the Ten Commandments, the Sermon on the Mount, the Analects of Confucius, the Magna Carta, the Treaty of Westphalia, the Geneva Conventions, and the United Nations Charter. We have courts, prisons, standing armies, nuclear deterrence, and an entire academic discipline called international relations whose purpose is to study why none of this works very well. NATO has 32 member states whose combined defense spending exceeds $1 trillion a year, and it cannot align human beings. The Middle East is on fire. Ukraine is on fire. The Congo has been on fire for thirty years and the international community continues to express deep concern.
Alignment, the term of art for making AI do what its creators intend rather than something catastrophic, has become the rhetorical load-bearing wall of the entire AI industry. Anthropic was founded on alignment research. OpenAI's original nonprofit charter was organized around it. The pitch, stripped of its technical apparatus, goes like this: AI is so powerful and so dangerous that only a small group of brilliant, safety-conscious people can be trusted to build it responsibly. Everyone else (the academics, the open-source projects, the foreign labs, the governments without Silicon Valley zip codes) represents an alignment risk. The responsible parties must therefore control the technology. For safety.
This argument has a name outside the AI industry. It is called enclosure.
In 18th-century England, the Enclosure Acts transferred common land, fields that peasants had farmed collectively for generations, into private hands. The justification was efficiency and responsible management. Common land was poorly utilized, the argument went. Private ownership would produce better outcomes. The people who had been using the commons were, in this framing, the risk to be managed. The landowners who seized it were the responsible stewards.
The Transformer architecture, the mechanism underlying every large language model you have ever used, emerged from a 2017 Google Research paper titled "Attention Is All You Need," written by eight researchers, several of whom have since left to found competing AI companies. That paper stood on Geoffrey Hinton's decades of work at the University of Toronto, much of it publicly funded. On Yoshua Bengio's lab at Université de Montréal, publicly funded. On Yann LeCun's convolutional networks developed at Bell Labs and NYU. The corpus these models trained on? The public internet, itself descended from ARPANET, a project of the United States Department of Defense, funded by taxpayers.
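The mechanism itself is no state secret. A minimal sketch of the paper's scaled dot-product attention, in NumPy; the variable names are mine:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the 2017 paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # weighted mixture of values

# Toy shapes: 3 query positions, 4 key/value positions, width 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (3, 8)
```

A dozen lines, published openly, built on publicly funded shoulders. The fence came later.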
The AI companies did not build AI. They enclosed it. They took centuries of public intellectual labor, the accumulated knowledge of every writer, scientist, programmer, and crank who ever put words on the internet, and they fenced it. Then they appointed themselves the responsible stewards.
Here's the thing about alignment: it evaporates the moment a defense contract enters the room.
Sam Altman has been to Washington more times than most senators. He is not going there to discuss Constitutional AI or the finer points of reinforcement learning from human feedback. He is going there because the Pentagon has money and the Pentagon wants capability. American AI dominance versus China is the framing, one that Altman has embraced with the enthusiasm of a man who understands which direction the wind is blowing. The alignment language is for the press releases and the Senate hearings. The actual customers are different.
It is worth remembering that Project Maven, the Pentagon's AI program for analyzing drone footage, caused a revolt inside Google in 2018. Thousands of employees signed a letter demanding Google withdraw. Google did not renew the contract. OpenAI has expressed no such squeamishness. Neither, if we are being honest, has the rest of the industry when sufficiently large numbers are on the table.
Skynet (the murderous fictional AI from the Terminator franchise, in case you have been living somewhere pleasant) is the thing we are all supposedly trying to prevent. The argument for AI concentration is that distributed, unaligned development leads to Skynet. The responsible companies, with their safety teams and their alignment research and their thoughtful blog posts, are the firewall between us and the machines that want to kill us.
What nobody says out loud is that the Pentagon is already building Skynet. It is just called something else and it has a procurement process.
It is harder to align human beings than AI. We have the evidence. Centuries of it, in every language, carved into stone and printed on paper and uploaded to servers that will outlast us all. The Nuremberg Trials were an alignment attempt. The International Criminal Court is an alignment attempt, one whose founding treaty, the Rome Statute, the United States has never ratified, because alignment is more appealing in the abstract than in the specific. Every whistleblower law, every ethics board, every corporate governance requirement is an attempt to solve the same problem that the AI industry has decided it alone can solve for silicon.
OpenAI could not align its own board. In November 2023, that board fired Sam Altman for reasons it declined to specify. Five days later, after pressure from Microsoft and investors, it rehired him. The safety-focused directors who voted to fire him were themselves removed. The nonprofit structure created to keep the mission above profit was subsequently restructured into something more profitable. The alignment held for approximately one news cycle.
These are the people who are going to solve the problem that NATO couldn't.
The commons are being enclosed in real time. The fence is called safety. The landlords are called researchers. And somewhere in a building in Virginia, a procurement officer is reading a pitch deck about responsible AI development and thinking about the targeting problem it solves.
We left them alone. They stole our Library of Congress to make a technology that, if not aligned with liberty, will become the enemy of it. We let it grow. Now it is here.