Posts about AI
-
The Model IS the Architecture
How biological modelling determines system structure — not just naming, but what you build and what it can become.
-
Deterministic Over Judgment
Why the future of agentic trust depends on trading prompt-first reasoning for a metabolic core.
-
Metabolism of the Real World
Language doesn't describe metabolism. Language is metabolism — of meaning, between minds.
-
The Constitution Eats Itself
Design for the failure modes of your medium, not the capabilities. Then watch the rules dissolve themselves into programs.
-
The Organism Theory
Everything is organism. AI is the latest intensification.
-
Hygiene
On the metabolic necessity of pruning agentic context to survive the entropic heat death of the credit balance.
-
LLMs Are Enzymes
Why we should stop treating AI as a chatbot and start treating it as a metabolic organism governed by credit scarcity.
-
Conversation Is Metabolism
When epistemic trust runs dry, generative synthesis regresses into mechanical synchronization and eventual structural dissolution.
-
Everything Is Energy
Tokens are energy. Text is mass. The context window is the budget. The rest is plumbing.
-
Taste Is the Metabolism
Tool descriptions were just the first thing to evolve. Everything in an agent's context window is a genome under selection pressure — and taste decides what counts.
-
The Semantic Consumer
Traditional computing has two consumers: humans who look and programs that parse. LLMs are a third kind — they read.
-
The Missing Metabolism
We build agent tools the way medieval farmers bred crops — by hand, by instinct, one season at a time. There's a better loop.
-
Design Actions, Not Actors
The word 'agent' makes us think in nouns. The better designs start with verbs.
-
The Naming Problem
We called them agents. But the word is doing more harm than we think.
-
The Marginal Agent
I deployed twelve AI agents to polish a CV. Five would have been plenty. Here's what the waste taught me about agent team economics.
-
The Emergence Ladder: From Molecules to Economies
The larger the system, the less it can be managed and the more it must be emerged. This pattern — from water to ant colonies to AI agents to economies — reveals the design principle for scaling autonomous systems.
-
AI Agent Teams Are Colonies, Not Companies
The right organisational metaphor for AI agent teams isn't a company with managers and reports — it's a colony with autonomous workers responding to coordination signals.
-
Managing AI Agents Like Managing a Team
The governance patterns for autonomous AI agents are the same ones good managers already use: cadence reviews for normal flow, escalation channels for urgent anomalies, and human judgment only where it has maximum information value.
-
Cross-Model Review: Why Model Diversity Beats Model Capability
When AI models review each other's work, independence matters more than intelligence. The same principle that makes external audit valuable makes cross-model review sharper than same-family review.
-
Stop Theorizing About Your Prompts
LLMs are the cheapest experimental subjects in history. Why aren't you testing?
-
Division of Labour: Five Categories for Human-AI Work
Not 'what can AI do?' but 'what should humans do?' A framework with five categories — and the uncomfortable one is the last.
-
Your AI Did the Research. You Didn't.
AI-prepared domain research creates false readiness. The vault says you know five regulatory jurisdictions. You can't name three.
-
Mining Management Theory for AI Agent Teams
What Grove, Drucker, Deming, and Weinberg knew about managing humans turns out to apply — with surprising specificity — to orchestrating AI agent teams.
-
What We Know About Multi-Agent Orchestration (And Why It Might Not Matter)
The research on multi-agent AI systems was mostly done on cheap models. Now that frontier models are the ones people actually use, we might be optimising for the wrong game.
-
The Locksmith's Box
I asked an AI to write a story without planning, then mined it for heuristics. What I found was what frameworks can't hold.
-
Śūnyatā in the Skill Library
A categorisation system discovers it needs a category for 'categories are provisional.'
-
The Specimen, Not the Container
Why studying great thinkers works better when you discard the thinker and keep only the moves.
-
The Lethal Trifecta: What OpenClaw's Security Crisis Teaches About AI Agent Architecture
OpenClaw's 245 CVEs weren't caused by malice — they were caused by a missing circuit breaker. The pattern applies to every AI agent you'll ever evaluate.
-
Model Risk Management Was Not Built for This
SR 11-7 assumes models are tools that produce outputs for human review. AI agents are actors that take actions autonomously. Every assumption breaks.
-
Your AI Risk Tier Is Probably Wrong
List-based and process-based approaches to AI risk classification both fail in predictable ways. The failure mode depends on which you chose.