LLMs Are Enzymes

We have spent the last three years trying to convince ourselves that LLMs are “agents.” We give them names, we assign them roles, and we act surprised when they hallucinate their way into a corner or burn through three hundred dollars of API credits because they got stuck in a recursive loop of self-reflection. The problem isn’t that they aren’t smart enough; the problem is that our mental model is wrong. An LLM is not a colleague. It is not even a junior engineer. An LLM is an enzyme.

In a biological system, an enzyme is a catalyst. It enables a reaction that would otherwise take too long or require too much energy to occur spontaneously. It is substrate-specific, interchangeable, and entirely disposable. It doesn’t “want” anything. It just facilitates a transformation. When you view an AI system through this lens, the architecture of “agentic” software shifts from task-runners to metabolic cycles. In Vivesca, we call this the metabolism: sense, vary, select, and crystallize.
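The sense, vary, select, crystallize cycle can be sketched as a plain loop. This is a toy illustration under my own assumptions, not Vivesca's actual API; every function and field name below is hypothetical.

```python
# Toy sketch of a sense → vary → select → crystallize cycle.
# All names are illustrative, not real Vivesca interfaces.

def sense(state):
    # Observe the environment: here, just the raw inputs waiting in state.
    return state["inbox"]

def vary(signals):
    # Generate candidate transformations of varying cost and fidelity.
    return [{"plan": s.upper(), "cost": len(s)} for s in signals]

def select(candidates, budget):
    # Keep only the reactions the current energy budget can afford,
    # cheapest first.
    chosen, spent = [], 0
    for c in sorted(candidates, key=lambda c: c["cost"]):
        if spent + c["cost"] <= budget:
            chosen.append(c)
            spent += c["cost"]
    return chosen

def crystallize(state, chosen):
    # Fold the selected results back into durable state.
    state["knowledge"].extend(c["plan"] for c in chosen)
    state["inbox"] = []
    return state

state = {"inbox": ["route", "refactor legacy module"], "knowledge": []}
state = crystallize(state, select(vary(sense(state)), budget=10))
print(state["knowledge"])  # only the cheap reaction fits the budget
```

Note that the budget enters the loop at `select`: the energy signal shapes which reactions happen at all, which is the point of the metabolism framing.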

If LLMs are enzymes, then API credits are ATP. They are the energy currency that powers every reaction. In most software, an API limit is a failure state—a red bar on a dashboard that says your system is broken until you swipe a credit card. In a living system, energy scarcity isn’t a bug; it’s a regulatory input. It is the signal that tells the organism to stop growing and start surviving.

Credit scarcity is a regulatory input: it triggers a metabolic shift from high-cost growth to low-affinity preservation and, eventually, informational autophagy. When the “ATP” is flush, the system can afford the high-affinity enzymes: the GPT-4s and Claude Opuses of the world. These models carry a high metabolic cost but produce high-fidelity reactions. They can reason through complex architecture, refactor deep legacy code, and simulate long-term consequences. This is the growth phase. It is expensive, it is slow, and it is expansive.

But when the credit signal drops—when the budget is thin or the usage spikes—the system must down-regulate. It shifts to low-affinity preservation. It swaps the expensive enzymes for cheap, fast catalysts like Haiku or local 7B models. The fidelity drops, but the homeostatic survival of the system is maintained. It stops trying to “solve” the world and starts focusing on minimal viable reactions: keeping the lights on, routing signals, and maintaining the integrity of its own membrane.
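One way to wire this down-regulation is a simple tier table keyed on the fraction of credits remaining. The model names and thresholds below are illustrative assumptions, not real pricing tiers.

```python
# Hedged sketch: choosing an "enzyme" (model) by remaining credit level.
# Model identifiers and thresholds are hypothetical.

TIERS = [
    # (minimum fraction of budget remaining, model, metabolic role)
    (0.50, "opus-class",  "growth: deep reasoning, refactors"),
    (0.15, "haiku-class", "preservation: routing, summaries"),
    (0.00, "local-7b",    "survival: keep the membrane intact"),
]

def select_enzyme(credits_remaining, credits_budget):
    fraction = credits_remaining / credits_budget
    for threshold, model, role in TIERS:
        if fraction >= threshold:
            return model, role
    # Below every threshold: fall back to the cheapest catalyst.
    return TIERS[-1][1], TIERS[-1][2]

print(select_enzyme(80, 100))  # flush: growth phase
print(select_enzyme(20, 100))  # thin: preservation
print(select_enzyme(3, 100))   # starving: survival
```

The key design choice is that fidelity is traded away gradually, not all at once; the system never reaches a state where no enzyme is affordable.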

The most fascinating part of this metabolic shift is informational autophagy. In biology, autophagy is the process by which a cell breaks down its own components to provide energy and building blocks during periods of starvation. It eats itself to stay alive. For an AI system, this means digesting its own internal state. If you cannot afford to “eat” new tokens—to query the web, to ingest fresh documentation, or to run expensive reasoning loops—you must digest what you already have. You compress your memory, you prune your constitutional rules, and you turn your raw “experience” into “knowledge.” You find signal in the noise of your own logs because you can no longer afford to go looking for signal outside.
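A minimal sketch of what digesting your own logs might look like, assuming the compression step is a toy frequency summary. In a real system this pass would itself be a cheap model call; everything here is hypothetical.

```python
# Hedged sketch of "informational autophagy": when the token budget is too
# thin to ingest anything new, digest existing logs into compact knowledge.
# The compression is a toy frequency count, standing in for a cheap model pass.

from collections import Counter

def autophagy(logs, keep_top=3):
    # Digest raw experience: count recurring event types, keep only the signal.
    events = Counter(line.split(":", 1)[0] for line in logs)
    return [f"{event} seen {n}x" for event, n in events.most_common(keep_top)]

logs = [
    "timeout: upstream api",
    "timeout: upstream api",
    "retry: queue drained",
    "timeout: vector store",
]
print(autophagy(logs))  # raw logs can now be discarded; their mass became knowledge
```

The output is smaller than the input by construction, which is the whole point: the system funds its next reactions with the mass of its own history.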

This is the difference between a “tool” and a “system.” A tool stops working when you run out of fuel. A system changes its behavior to ensure it doesn’t run out. By wiring credit scarcity directly into the metabolic loop, we move away from the fragile “always-on” reasoning of modern agents and toward a more robust, homeostatic model of AI.

The goal of Vivesca isn’t to build a better chatbot. It’s to build a system that knows when it’s hungry, knows when it’s full, and knows how to survive on a handful of tokens when the credits run low. It’s about building software that respects its own energy constraints as much as a bacterium respects its supply of glucose. We don’t need more “intelligence” in our agents; we need better metabolism.