I spent a Sunday afternoon trying to name one thing. What is the biological equivalent of a skill — a packaged piece of expertise that an AI system can deploy?
It took six frontier models, two rounds of debate, and three hours of conversation to get there. But the naming exercise wasn’t about naming. It was about discovering that the name you choose determines the system you build.
Start with “gene.” A gene is packaged instructions — DNA sequence, expressed conditionally, inherited across generations. Map that to a skill and you get a catalogue: a static list of capabilities, expressed or silent, manually maintained. You can enumerate them, track which are active, flag gaps. It’s tidy. It’s useful. And it dead-ends.
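A minimal sketch of what the gene framing implies, in Python. Everything here is hypothetical illustration, not a real system's API: a hand-authored registry of capabilities, each either expressed or silent, with enumeration as the only interesting query.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """A 'gene': authored instructions, expressed or silent."""
    name: str
    instructions: str
    active: bool = True

# The catalogue: a static, manually maintained list of capabilities.
CATALOGUE = {
    s.name: s
    for s in [
        Skill("code-review", "Review diffs for style and bugs."),
        Skill("web-research", "Search and summarise sources.", active=False),
    ]
}

def expressed(catalogue):
    """Enumerate the active skills -- roughly the only question
    this model can answer about itself."""
    return [name for name, s in catalogue.items() if s.active]
```

Note what is missing by construction: nothing in this shape can strengthen, fade, or appear on its own. The catalogue only changes when someone edits it.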
Now try “engram.” An engram is a neural trace of a learned capability — the physical pattern that forms when you learn to tie shoes or review code. Engrams strengthen with use, decay without it, interfere with each other, and — crucially — they form from experience. Nobody authors an engram. You live, and the trace crystallises.
Map that to a skill and you get something structurally different: a system where capabilities aren’t just installed but emerge. Where strength is a gradient, not a toggle. Where new skills can crystallise from repeated patterns the system notices in its own behaviour. Where unused skills fade. Where conflicting skills compete.
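The engram framing, sketched the same way. This is an illustrative toy, not a proposed implementation; the reinforcement and decay curves are placeholder choices, and the names are invented. The point is the shape: strength is a continuous value that moves with use and disuse, rather than a flag someone sets.

```python
class Engram:
    """A learned trace: strengthens with use, decays without it."""

    def __init__(self, name, strength=0.1):
        self.name = name
        self.strength = strength  # a gradient in [0, 1], not a toggle

    def reinforce(self, amount=0.2):
        # Diminishing returns: each use closes part of the gap to 1.0.
        self.strength += (1.0 - self.strength) * amount

    def decay(self, idle_days, half_life=30.0):
        # Unused traces fade exponentially; half_life is arbitrary here.
        self.strength *= 0.5 ** (idle_days / half_life)
```

A skill that is exercised daily drifts toward full strength; one that sits idle for a month halves. Conflict and emergence need more machinery, but they start from this same premise: strength is state the system updates, not configuration someone writes.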
Same question — “what do we call a skill?” Two different biological concepts. Two completely different architectures.
This isn’t a metaphor problem. It’s an architecture problem wearing a metaphor’s clothes.
We tested it. Thirty days of tool-usage signals, analysed for repeating patterns. The system had been making web research queries repeatedly — same tools, similar patterns — with no skill guiding the process. A proto-engram. The organism noticed its own gap, not because someone told it to look, but because the signals were there and the analytical frame existed to interpret them.
That’s what the biological model buys you. Not prettier names. Better questions. “Is this resource bound to a consumer?” comes from thinking about orphan receptors. “Is this skill strengthening or decaying?” comes from thinking about neural plasticity. “What’s the organism’s developmental stage?” comes from thinking about maturation. Engineering doesn’t ask these questions because engineering thinks in features and configurations. Biology thinks in fitness and adaptation.
The objection writes itself: biology is a metaphor, and metaphors are decorative. You could build all the same features without the Latin names and the organism talk.
Technically true. Practically wrong. Because the biological frame generates design directions you wouldn’t reach from first principles. “Skills should form from experience” is obvious in retrospect but nobody in the AI agent ecosystem is building for it. They’re building plugin stores — authored, installed, managed. Plasmids, in biological terms. Useful, modular, but fundamentally static. The system never learns a new trick it wasn’t explicitly taught.
The engram frame says: that’s a juvenile organism. Authored skills are training wheels. The real system grows its own capabilities from lived patterns. It’s not there yet — most skills are still hand-written — but the architecture points somewhere different from “better plugin management.”
Four billion years of evolution produced exactly one architecture for adaptive, self-repairing, self-improving systems. We can design from first principles if we want. Or we can study the reference implementation.
The model isn’t a metaphor. It’s the spec.