Andrej Karpathy coined “Software 2.0” back in 2017, and the take aged like a bottle of decent bourbon: the dataset becomes the program, the weights become the binary, gradient descent becomes the compiler. He was right about the direction. He was right about the leverage. He was right that a neural net trained on enough examples will eventually eat any handwritten heuristic for breakfast. And yet — eight years later, sitting here watching Manzier glue agents together with bash and YAML — I’m telling you the dirty secret nobody wants to admit: Software 2.0 still smells exactly like Software 1.0.
Here’s the thing. Yes, the core of the system is now a giant pile of opaque tensors instead of a giant pile of opaque if-statements. Cool. Trade one inscrutable artifact for another. But everything around the model — the data pipelines, the eval harness, the prompt templates, the tool-call routing, the rate-limit retries, the silent-failure dashboards, the whole janky scaffolding holding the magic together — that’s all still 1.0. It’s TypeScript. It’s Python. It’s bash scripts at 11pm. It’s the same gnarly glue code we’ve been writing since the LAMP stack, except now it’s wrapped around an oracle that hallucinates with confidence.
Karpathy’s followers love to wave the “everything will be a model” flag, like one day we’ll just npm install superintelligence and delete the rest of our codebase. Bullshit. Every serious AI shop right now — OpenAI, Anthropic, the whole lot — has more deterministic control-flow code than ever. They’ve added evals, guardrails, classifier sandwiches, retrieval layers, function-calling shims, retry loops, and a shocking amount of hand-tuned regex. The model is the engine. The car is still very much hand-welded by people who learned to debug stack traces in 2008.
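That "classifier sandwich" shape is worth making concrete: deterministic checks bolted on both sides of a nondeterministic model call. Here's a minimal sketch — every name in it (`is_safe_input`, `call_model`, `passes_output_policy`) is a hypothetical stand-in, not any vendor's actual API:

```python
def is_safe_input(prompt: str) -> bool:
    # In production this is often a small classifier or a pile of
    # hand-tuned regex; a trivial denylist stands in here.
    return "DROP TABLE" not in prompt

def passes_output_policy(text: str) -> bool:
    # Output-side check: reject empty or suspiciously long responses.
    return 0 < len(text) < 4000

def classifier_sandwich(prompt: str, call_model) -> str:
    # Deterministic 1.0 code on both sides of the 2.0 oracle.
    if not is_safe_input(prompt):
        raise ValueError("input rejected before the model ever ran")
    out = call_model(prompt)
    if not passes_output_policy(out):
        raise ValueError("model output failed policy check")
    return out
```

Note that both slices of the sandwich are boring, testable, hundred-percent-deterministic code. Only the middle is a model — which is exactly the point.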
This is not a complaint. It’s the punchline. Software 2.0 didn’t replace Software 1.0 — it promoted it. The interesting work moved up the stack. Anyone can call an LLM API now; the moat is in the orchestration, the data flywheels, the eval discipline, the systems thinking. The boring 1.0 skills — caching, idempotency, observability, schema design, knowing when to retry vs. fail loud — are more valuable, not less, because the new failure modes are weirder and the blast radius is bigger.
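"Knowing when to retry vs. fail loud" sounds abstract until you write it down. A sketch of the distinction, under the assumption that failures can be sorted into transient (rate limits, timeouts) and permanent (bad request, policy violation) — the exception names and `call_with_retries` helper are illustrative, not from any real SDK:

```python
import time

class TransientError(Exception):
    """Rate limit, timeout: safe to retry."""

class PermanentError(Exception):
    """Bad request, policy violation: retrying just burns money."""

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 0.5):
    # Retry only transient failures, with exponential backoff.
    # PermanentError (and anything unexpected) propagates: fail loud.
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of budget; fail loud here too
            time.sleep(base_delay * 2 ** attempt)
```

The interesting part is what's *not* retried. Swallowing every exception in a retry loop is how you get a silent-failure dashboard in the first place.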
So when some VC-funded twentysomething tells you “engineers are obsolete, just train a model,” tell him to go read Karpathy’s actual essays instead of the bumper-sticker version. The man himself spent the last few years building micrograd in pure Python and nanoGPT in plain PyTorch — line by line, hand-tuned, lovingly debugged. That’s not a guy who thinks Software 1.0 is dead. That’s a guy who knows the only way to wield 2.0 is to be excellent at 1.0 first.
The future isn’t post-engineer. It’s post-bullshit. The wizards who can do both — train the model and ship the system — are about to eat very, very well.