Language == (Code && š)
March 20, 2026
Weaponizing Bits
"Our family are an alternate stratification of poetry and mathematics." –Ada Lovelace, in a letter to Andrew Crosse
For those non-programmer types out there, that `==` in the title is a special sign in coding that means to check that two variables are equal; you can't use `=` because that's what *sets* the variable in the first place. The first programmer was a writer. Ada Lovelace wrote the first algorithm for Babbage's Analytical Engine in 1843, in prose, essentially: annotations to a translation of an Italian paper, extended into something that described what a machine could do if someone told it what to do. She was writing a specification. She was writing natural language instructions for a machine that didn't fully exist yet. She had never touched a compiler. There were no compilers. There was only her language, precise and visionary, reaching toward a machine that would take another century to arrive. Lovelace was Lord Byron's daughter, and her mother, Anne Isabella Milbanke, deliberately steered Ada away from poetry and Byron's chaotic antics, and toward mathematics. Still, Ada didn't separate them. She called it poetical science. The first programmer refused to accept that poetry and mathematics were different languages. She was right.
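The distinction the title leans on can be shown in three lines of Python:

```python
x = 5          # `=` assigns: x now holds the value 5
print(x == 5)  # `==` asks a question: is x equal to 5? Prints True
print(x == 6)  # Prints False; x itself is unchanged
```

One sets a fact; the other checks one.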
Then writing for machines got mechanical: holes punched in heavy stock paper (punch cards) fed into machines that cared about sequence and nothing else. You fed them into computers to make the computer perform something. They were the cartridge, the disk, the CD of the day. My mother worked as a technical writer in that era, and she recalls with chagrin watching someone drop a box of punch cards. The sequence, the program: gone, scattered across a linoleum floor or sidewalk. Someone had to get on their hands and knees and reconstruct the program, the order of the universe, card by card. That's not programming. That's liturgy. The machine cared only about the sequence; it didn't care about your intentions. The languages of that era (assembly, FORTRAN, COBOL) required the programmer to think like the machine: registers, opcodes, memory addresses. A small priesthood, initiated through suffering. How do you even error-correct in a medium like that?

Compilers arrived thanks to Grace Hopper, and the conversation got a little more civilized with "high-level" programming, closer to English. She built a layer of abstraction between human intention and machine execution, and the priesthood exhaled. I remember coming home from college, stoked that I had learned how to manipulate Unix (the operating system at the heart of Macs, iPhones, and most of the web). My father, ever whimsical, said he worked down the hall from the guy who created Unix. He wasn't joking. He worked down the hall from Dennis Ritchie at Bell Labs. I couldn't believe it: someone invented Unix? (Let us not forget Ken Thompson.) B came out of Bell Labs. Then came C. Then C++. Then along came Java in the '90s! The dream, barely concealed in each iteration, was always the same: what if we could just say what we meant for the machine to do?
COBOL made a serious attempt in 1959: business logic in plain English sentences, readable by non-programmers, in theory. That didn't quite work out, but the dream persisted. Ruby arrived and you could write code that read like English, or at least like English with very strict grammar and no patience for ambiguity. Python read like pseudocode made real. JavaScript spread everywhere, flawed and beloved, the Latin of the web.
Here's the thing I never found strange about any of this: I only ever thought of language as language. Adjusting how grammatical to be depending on the subject was always obvious to me; even when I was speaking, I was thinking about my audience. Who is listening? Who is reading? The grammar adjusts accordingly. (That's a skill worth learning.)
Writing code requires very strict grammatical rules because computers are just bad at inference; we're not, we fill in blanks. I write English considerably looser than I write code. As the linguist Noam Chomsky, professor emeritus at MIT, noted, "Colorless green ideas sleep furiously" is a grammatically correct sentence that literally has no semantic meaning. But we inferential machines imbue it with meaning; you felt it before you analyzed it. Code doesn't allow for this. A compiler rejects it without ceremony. Language holds it, turns it over, finds the light inside it. Poetry lives in exactly that gap between grammatical and meaningful, between what the rules permit and what the mind receives.
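A small Python sketch of that asymmetry: the interpreter happily runs a grammatical sentence whose meaning is left to the reader, but it rejects ungrammatical source before a single line of it executes.

```python
# Grammatical but semantically empty: Python parses and runs it anyway.
# Meaning is the reader's problem, not the parser's.
ideas = "colorless green ideas"
print(ideas, "sleep furiously")

# Ungrammatical: one missing colon, and the parser rejects the whole
# program without ceremony, before any of it runs.
broken = "if True\n    print('hi')"
try:
    compile(broken, "<demo>", "exec")
except SyntaxError as err:
    print("rejected:", err.msg)
```

The exact error message varies by Python version, but the rejection is absolute either way.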
The entire history of programming languages is the history of closing that gap: not by loosening code's grammar, but by making the machine climb toward our language. We are nearing that summit.
Natural language interfaces for coding agents aren't the end of that climb. You describe what you want in plain language. The agent writes the code, runs the tests, catches the errors, refactors the output. And the tools are now starting to formalize this. Spec Kit, GitHub's specification toolkit, treats the English-language spec as the primary artifact: the thing from which software is derived. Not a document stapled to the back of a pull request. The spec is the software, upstream of every line of code that follows.
This development has been coming for a long time. Test-driven development (TDD), the practice of writing tests before writing code, was already pointing in this direction. You describe the desired behavior first, precisely, in terms the machine can verify. Then the code follows. The description precedes the implementation. The intention leads, and execution catches up.
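A minimal sketch of that rhythm, using a hypothetical `slugify` helper: the test states the desired behavior first, in terms the machine can verify, and only then does the implementation follow.

```python
import re

# Step 1: describe the desired behavior, precisely, as a verifiable test.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("Ada  Lovelace") == "ada-lovelace"

# Step 2: write just enough code to satisfy the description.
def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # the description led; the execution caught up
```

The test is the spec in miniature: a sentence about behavior, written in the strictest grammar available.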
Git, the version control system every developer lives inside, is a writing discipline in disguise. A good commit message is a sentence that says exactly what changed and why. A pull request is an argument: here is what I did, here is why it is correct, here is what you should check. Code review is editing. The whole apparatus is language, structured and purposeful, carrying intention from one mind to another across time.
The developer curriculum of the future is a writing curriculum: how to write a product requirements document; how to write a specification. Maybe even how to write pseudocode. Take a course in symbolic logic. Definitely know math (the strictest grammar there is!), design patterns, design systems. Know how to write a test before you write the code; how to write a commit message that your collaborator, human or machine, can act on. How to write a prompt that gets you what you actually meant. These are not soft skills orbiting the technical core. They are the technical core.
Writing for machines means being specific. It doesn't mean typing faster or producing more. It means developing the capacity to judge what's in front of you.
Because here's what working with a coding agent actually feels like: you are not making the software. You are a director, pushing these stochastic masses we call models. You describe the shot. You review the take. You say no, that's not it, and you redirect. A great director can walk onto a set and know immediately when the light is wrong without being able to operate the equipment. But the good ones got there by looking at thousands of frames. By developing an eye.
The same is true here. A developer working with an agent needs taste: the capacity to recognize when the output is technically correct but still wrong, or simply not elegant. When it works but it's brittle. When it passes the tests but will be a nightmare to maintain. When it solves the stated problem but missed the actual one.
And taste is downstream of skepticism. Skepticism is the root. You have to walk up to the output and say, I don't trust this yet. You have to resist the pull of the plausible. Jack Clark of Anthropic says they have watched their own developers turn more and more work over to agents, interrupting them to approve steps less and less. LLM output is very good at being plausible; that's structurally what it is: the most statistically reasonable next token, the smoothest path through the probability space. It won't likely surprise you with a solution the way a real solution surprises you, because surprise requires a willingness to be wrong, to go somewhere uncomfortable, to follow a problem past the point where it's safe. Large language models aren't really known for that.
The output defaults to anodyne: smooth, competent, inoffensive, forgettable. It sands off the edges. A developer with taste walks in and says no, and knows why. Skepticism is teachable. Not through being fooled, but through practice: close reading, Socratic questioning, finding patterns, the discipline of asking what is this actually doing before accepting what it appears to be doing. Every good writing teacher and every good code reviewer has always done this. It just got more urgent.
In the sixth grade, my inventive English teacher, Mrs. Ryan, set out to teach us expository writing: just the kind needed for dumdum machines. She asked us all to write instructions for making a peanut butter and jelly sandwich. After we completed the assignment, she stood behind the counter and followed students' instructions literally, robotically, for effect. If you didn't tell her to open the jar, she would bang the knife on the lid. If you didn't tell her how much spread to use, she'd use too little or too much. It was a brilliant demonstration that stuck with me for years, and it's exactly the kind of skill new developers need.
So, before there were compilers, before there were punch cards, before there were programming languages at all, there were human computers: people who took natural language instructions from scientists and mathematicians and produced numerical output by hand. That was the interface. That was the original API. The machine eventually replaced them, and for eighty years we taught ourselves to speak machine. Now the machine is learning to speak human again. We have come full circle: back to Ada Lovelace's prose, back to natural language as the primary interface, back to the beginning.
My bio says writer of story, poetry, and code. I used to think that was a description of range: look how many modes I can work in. I understand now that it was never three things. It was always one thing, with the semantics and grammar dialed to different levels depending on who or what was listening. A story for humans with time. Poetry for humans with feeling. Code for machines with no patience for ambiguity whatsoever. But the machine is developing patience. The grammar is loosening at that end of the dial.
Which means the person who knows how to write (precisely, structurally, ironically?) with intention and taste and earned skepticism is now the person who can drive the machine at every level. Specification. Prompt. Test. Commit. Story. The skill that was always called soft is now becoming the peak technical skill. The thing that was always central is now, finally, legible as such. Well, semi-legible. Excuse me while I go use a pencil to scribble some pseudocode to scan into an LLM for it to turn into a program.