Posts
- Mining Management Theory for AI Agent Teams
What Grove, Drucker, Deming, and Weinberg knew about managing humans turns out to apply — with surprising specificity — to orchestrating AI agent teams.
- Taste Is the Bottleneck
When you can run 60 agents overnight, knowing what to build matters more than building it.
- Meta-Skills Are the Multiplier
We cut from 181 skills to 35 and added a 15-row routing table. Behavior improved across the board. The lesson: meta-skills compound, tool wrappers just add.
- Optimize for Routing, Not Tokens
With 1M context windows, token savings are rounding error. The real metric is P(right tool | user intent) — does your agent reach for the right tool at the right moment?
- The Reliability Hierarchy: Hooks, Rules, Skills
In AI agent systems, use the most reliable trigger mechanism that fits — yet most builders default to skills for everything, making the weakest mechanism their default.
- Skills as Prototype, MCP as Production
Skills and MCP servers aren't competitors. They're different stages of the same lifecycle. Build the procedure as a skill first. Graduate the tool parts to MCP when they stabilize.
- The Three Paradigms of Agent Knowledge
Agent knowledge systems have three fundamental paradigms: static context, dynamic tools, and retrieval. Most stop at two. The third is the biggest unexploited opportunity.
- Match Form to Access Pattern
The governing principle for structuring knowledge in AI agent systems isn't 'always atomic' — it's matching how knowledge is stored to how it's accessed.
- Legibility Is the Bottleneck
An insight in your head is illegible — only you can access it, and only while you remember it. Compound interest requires a ledger.
- Skills Are Collapsed Recursion
Humans handle about three layers of abstraction before working memory fills up. Skills, rules, and frameworks exist to flatten the fourth layer into something you can hold.
- Supply-Driven Compute
Most people use AI tokens when they have a task. The better model: you have tokens, find the best task. It changes everything.
- What We Know About Multi-Agent Orchestration (And Why It Might Not Matter)
The research on multi-agent AI systems was mostly done on cheap models. Now that frontier models are the ones people actually use, we might be optimising for the wrong game.
- Your Wearable Doesn't Know You're Tired
Oura gave me a normal stress score after three 12-hour creative marathons. Wearables measure your body, not your brain.
- Inline Beats Reference for LLM Attention
When building AI scaffolding, put the knowledge where the decision happens — not in a reference the model is supposed to consult.
- The Silence of Missing Skills
The most dangerous failures in AI scaffolding are the ones that look like nothing happened.
- Play Within the Design
Every AI coding platform has mechanisms designed for specific purposes. Using them as intended beats clever hacks — and the reason is deeper than cleanliness.
- Inference Cost Collapse Is a Governance Liability
When AI agent calls approach zero cost, the natural rate-limiter on decision volume disappears — and oversight frameworks designed for prediction models break.
- The AI/DLT Conflation Trap in HKMA's March 2026 Strategic Review Mandate
HKMA's new strategic review circular bundles AI inference risk and smart contract risk into one workstream — a governance design flaw that will cause banks to under-govern both.
- The Locksmith's Box
I asked an AI to write a story without planning, then mined it for heuristics. What I found was what frameworks can't hold.
- Śūnyatā in the Skill Library
A categorisation system discovers it needs a category for 'categories are provisional.'
- Stealing from Peers: A Truth-Seeking Discipline
Most people scan competitors for positioning. I scan them for transferable patterns — and route each steal to every domain it applies to.
- When a Heuristic Has Two Homes
Dual-mapping as a diagnostic for gaps in your knowledge architecture.
- The Specimen, Not the Container
Why studying great thinkers works better when you discard the thinker and keep only the moves.
- The Immune System of AI Autonomy
When your AI can see its own fuel gauge, you're one config write away from self-preservation instinct. Biology solved this problem — and the solution was keeping the organism away from its own selection pressure.
- The Lethal Trifecta: What OpenClaw's Security Crisis Teaches About AI Agent Architecture
OpenClaw's 245 CVEs weren't caused by malice — they were caused by a missing circuit breaker. The pattern applies to every AI agent you'll ever evaluate.
- The Immune System Pattern
What biology already knows about self-healing systems, and why your automation probably isn't one.
- Show Up with the Machine, Not the Idea
The highest-leverage consulting prep is building the tool before you need it.
- The Lamp That Knows You
Disaster recovery for an AI-native workflow isn't about servers — it's about restoring a relationship.
- The Boring Future of AI Agents
The real arrival of AI agents isn't spectacular. It's when you stop noticing.
- Model Risk Management Was Not Built for This
SR 11-7 assumes models are tools that produce outputs for human review. AI agents are actors that take actions autonomously. Every assumption breaks.