Workflows, Not Containers

Every AI coding tool gives you containers. Claude Code has tools, skills, agents, and teams. Cursor has rules and notepads. Codex has agents. The pitch is always the same: here are the boxes, now organise your work into them.

I spent a morning doing exactly that, mapping each container to a biological analogy, because that is how I think about my AI system. Tools are enzymes. Skills are behaviours. Agents are cells. Teams are tissues. It was elegant. It was satisfying. It was the wrong question.

The breakthrough came when I stopped asking “what box does this go in?” and started asking “what is the lifecycle of this work?”

A piece of content enters my system. It gets digested into a structured note. That note might contain a spark for a consulting asset, a pattern worth scanning against peers, or a claim sharp enough to tweet. In biology, a macrophage does not just engulf — it presents antigens, signals nearby cells, triggers cascading responses. The cell is not a container. It is a workflow with connections.

When I mapped my actual workflows as cell lifecycles, the gaps became obvious. My content intake was a dead end — articles went into the vault and sat there. No signalling to the publishing pipeline that a new spark had arrived. My coaching drills ran the same way every time, with no memory of previous performance. My publishing pipeline was fire-and-forget, with no feedback loop to learn what resonated.

None of these gaps were visible through the container lens. “Is this a skill or an agent?” never surfaces “these two workflows cannot talk to each other.” The container question is about taxonomy. The workflow question is about architecture.

The AI tool industry is converging on containers as the organising principle. The Agent Skills open standard defines a folder structure and two metadata fields. Every tool implements the same box differently — Claude Code adds model switching and forked execution, Gemini reads only the name and description, Cursor does something else entirely. The standard is a filing convention, not a protocol. MCP defines behaviour. Agent Skills defines layout.
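For concreteness, the box the standard defines is roughly this: a folder holding a `SKILL.md` file whose frontmatter carries the two metadata fields, with free-form instructions below it. (Sketched from memory, not a normative excerpt; check the spec for exact field rules.)

```yaml
# my-skill/SKILL.md — frontmatter; the markdown body below it is free-form
name: my-skill
description: When and why an agent should load this skill
```

Everything else — model switching, forked execution, which supporting files get loaded — is tool-specific behaviour layered on top of this layout.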

This means the containers are not fundamental. They are routing heuristics — practical decisions about where to put code, not truths about the nature of work. The useful question is never “tool or skill or agent?” The useful question is “what triggers this, what steps does it take, what does it produce, and what should happen next?”
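The four workflow questions can be made concrete as a small sketch. All names here are hypothetical illustrations, not any tool's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """A unit of work defined by its lifecycle, not its container."""
    name: str
    trigger: str                # what triggers this?
    steps: list[str]            # what steps does it take?
    produces: str               # what does it produce?
    signals: list[str] = field(default_factory=list)  # what should happen next?

# Each field answers one of the four lifecycle questions; the container
# question ("tool or skill or agent?") never appears in the design unit.
intake = Workflow(
    name="content-intake",
    trigger="new article saved to the vault",
    steps=["digest into structured note", "extract sparks and claims"],
    produces="structured note",
    signals=["spark-available", "claim-available"],
)
```

Note that `signals` is the field the container lens never asks for — it is exactly what was missing from the dead-end intake pipeline.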

Biology does not organise by container. A macrophage is not filed under “immune cell” in some cellular org chart. It is defined by its lifecycle: what it senses, how it responds, what it signals, how it adapts. The cell type is the workflow pattern, not the box it lives in.

The practical implication is that your AI system’s architecture should be a graph of workflows with signals between them, not a taxonomy of containers with work sorted into them. Agents are useful as workers inside a workflow. Skills are useful as steps in a lifecycle. But the design unit is the workflow, and the critical infrastructure is the signals between workflows.
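As a sketch of that graph-of-workflows idea (hypothetical names again, not a real framework), the critical infrastructure is a routing layer that lets one workflow's output signal trigger another workflow's lifecycle:

```python
from collections import defaultdict

class SignalBus:
    """Routes signals emitted by one workflow to every workflow listening for them."""

    def __init__(self):
        self.listeners = defaultdict(list)

    def subscribe(self, signal, handler):
        self.listeners[signal].append(handler)

    def emit(self, signal, payload):
        # Each listener reacts independently; the emitter knows nothing about them.
        return [handler(payload) for handler in self.listeners[signal]]

bus = SignalBus()

# Publishing listens for sparks, so content intake is no longer a dead end.
bus.subscribe("spark-available", lambda note: f"queued for publishing: {note}")
# A feedback listener closes the fire-and-forget loop on the publishing side.
bus.subscribe("post-published", lambda post: f"tracking engagement: {post}")

results = bus.emit("spark-available", "note-042")
```

The design unit stays the workflow; the bus only carries the edges between them. Whether each handler is backed by a tool, a skill, or an agent is a detail deferred to the end.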

My system still uses Claude Code’s containers. Tools, skills, agents — they are all there. But I no longer design around them. I design the lifecycle first, then map steps to whatever container fits. The container is the last decision, not the first.

P.S. I arrived at this by trying to prove that Anthropic’s four containers (tool, skill, agent, team) map perfectly to biological layers (enzyme, behaviour, cell, tissue). They do map, but only because you can always find a biological analogy for any N categories if you try hard enough. The real insight came from abandoning the mapping exercise and asking what biology actually prioritises. Cells do not care what container they are. They care about the signals they receive and the signals they send.