Posts about agents
-
I Built 200 CLIs for My AI. Here's What Actually Matters.
A Chinese article argues CLI is becoming the AI plugin format. I've been living this for months with 442 tools. The article is right about CLI. It's wrong about what makes CLI work.
-
The reversible direction
When choosing CLI vs MCP, pick the one you can undo. CLI wraps into MCP cheaply. MCP does not unwrap.
-
What Anthropic's Managed Agents validates — and what to steal
Anthropic shipped a hosted agent platform. Its architecture looks familiar. Here's what a solo builder can learn from how they decoupled the brain from the hands.
-
What LLM Wiki Looks Like After Six Months
Karpathy's LLM Wiki pattern is a good starting point. Here's what changes when you run it for real — enforcement over convention, decay over growth, and knowledge that fires without being asked.
-
Your AI Agent's Quality Gate Is Lying to You
A 96% rejection rate that was actually a 96% false positive rate — how a monitoring blind spot turned a productive overnight batch into apparent failure.
-
4 Principles for Agent-Facing CLI Design
Most advice about making CLIs agent-friendly is just good CLI design. Only four principles are actually agent-specific.
-
The architect-implementer split: why your expensive model shouldn't write code
Smart model plans, cheap model builds. The pattern everyone's converging on for AI coding agents — and the piece nobody's shipped yet.
-
Building porin: a library for agent-facing CLIs
I turned the seven patterns into a zero-dependency Python library. Then I added MCP bridge support. Here's what I learned about the gap between patterns and code.
-
Seven patterns for agent-facing CLIs
Three independent authors converged on nearly identical patterns for CLIs that AI agents invoke. Here's what they agree on, what's missing, and why nobody has built a framework for it yet.
-
CLI, MCP, or code mode: the answer depends on who's running the sandbox
Willison says CLIs beat MCP. Cloudflare says server-side code mode beats both. They're both right, because they're answering different questions.
-
The Cell Biology Agent Design Manual
Engineering metaphors give you clean abstractions. Biology gives you resilient ones. Twenty design heuristics from four billion years of R&D.
-
Bridge or Seed
Every skill you build is one of two things. Knowing which changes what you build next.
-
The Organism Has a Cortex
Biological metaphors in AI systems break at the autonomic-deliberate boundary. The fix isn't dropping biology — it's getting the neurology right.
-
Deterministic Over Judgment
Why the future of agentic trust depends on trading prompt-first reasoning for a metabolic core.
-
Metabolism of the Real World
Language doesn't describe metabolism. Language is metabolism — of meaning, between minds.
-
The Constitution Eats Itself
Design for the failure modes of your medium, not the capabilities. Then watch the rules dissolve themselves into programs.
-
hygiene
On the metabolic necessity of pruning agentic context to survive the entropic heat death of the credit balance.
-
LLMs Are Enzymes
Why we should stop treating AI as a chatbot and start treating it as a metabolic organism governed by credit scarcity.
-
Conversation Is Metabolism
When epistemic trust runs dry, generative synthesis regresses into mechanical synchronization and eventual structural dissolution.
-
Everything Is Energy
Tokens are energy. Text is mass. The context window is the budget. The rest is plumbing.
-
Taste Is the Metabolism
Tool descriptions were just the first thing to evolve. Everything in an agent's context window is a genome under selection pressure — and taste decides what counts.
-
The Missing Metabolism
We build agent tools the way medieval farmers bred crops — by hand, by instinct, one season at a time. There's a better loop.
-
Design Actions, Not Actors
The word 'agent' makes us think in nouns. The better designs start with verbs.
-
The Naming Problem
We called them agents. But the word is doing more harm than we think.
-
The Marginal Agent
I deployed twelve AI agents to polish a CV. Five would have been plenty. Here's what the waste taught me about agent team economics.
-
The Emergence Ladder: From Molecules to Economies
The larger the system, the less it can be managed and the more it must be emerged. This pattern — from water to ant colonies to AI agents to economies — reveals the design principle for scaling autonomous systems.
-
AI Agent Teams Are Colonies, Not Companies
The right organisational metaphor for AI agent teams isn't a company with managers and reports — it's a colony with autonomous workers responding to coordination signals.
-
Managing AI Agents Like Managing a Team
The governance patterns for autonomous AI agents are the same ones good managers already use: cadence reviews for normal flow, escalation channels for urgent anomalies, and human judgment only where it has maximum information value.
-
Exoskeleton, Not Colleague
The AI governance conversation is stuck in the wrong frame. The pattern that works isn't autonomous agents — it's exoskeletons. Micro-agents handling narrow tasks, with human judgment at every point that matters.
-
Mining Management Theory for AI Agent Teams
What Grove, Drucker, Deming, and Weinberg knew about managing humans turns out to apply — with surprising specificity — to orchestrating AI agent teams.